Is there an example that makes union-find without union by rank take Omega((m+n) log n)?

I recently read this and was surprised that the time complexity of the union-find algorithm with path compression only is O((m+n) log n), where m is the number of "find" operations and n is the number of "union" operations.

This complexity interested me (I usually implement this algorithm without ranks, and even when I applied union by rank upside down, the performance was not bad!), so I tried to construct an example that actually exhibits this time complexity. But path compression is so powerful that it was very difficult ...
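For reference, this is a minimal sketch of the rank-free variant I mean (a plain parent-pointer dictionary with path compression; the names are just illustrative):

 parent = {}  # maps each non-root element to its parent; roots are absent

 def find(x):
     # Walk up to the root.
     root = x
     while root in parent:
         root = parent[root]
     # Path compression: repoint every node on the path directly at the root.
     while x in parent:
         nxt = parent[x]
         parent[x] = root
         x = nxt
     return root

 def union(x, y):
     rx, ry = find(x), find(y)
     if rx != ry:
         parent[rx] = ry  # no rank heuristic: always hang the first root under the second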

Is there an example that achieves Omega((m+n) log n), or is this complexity just theoretical, not practical?

+8
algorithm disjoint-sets
2 answers

Yes, there is a matching lower bound due to Michael J. Fischer from 1972 (see section 3). His example uses a binomial tree of depth log n, where a binomial tree of depth 0 is a single node and a binomial tree of depth k is two binomial trees of depth k - 1, with the root of one pointing to the root of the other. By repeatedly unioning the binomial tree's root onto a singleton and then performing a find on the deepest node, each find is an expensive (logarithmic) operation, and the compression it performs leaves behind another embedded binomial tree to reuse.

Python demo: this prints approximately (k+1) * 2**k, where k is the argument to example; count is an approximate operation counter for O(2**k) operations on O(2**k) keys.

 p = {}  # parent pointers: maps each non-root node to its parent

 def find(a):
     global count
     # First pass: walk to the root, counting each parent-pointer traversal.
     b = a
     while b in p:
         count += 1
         b = p[b]
     # Second pass: path compression, repointing the path directly at the root.
     while a in p:
         pa = p[a]
         p[a] = b
         a = pa
     return b

 def union(a, b):
     p[find(a)] = find(b)

 def deepest():
     # Return a node of maximum depth in the current forest.
     maxd = (0, None)
     for (a, pa) in p.items():
         d = 1
         while pa in p:
             d += 1
             pa = p[pa]
         maxd = max(maxd, (d, a))
     return maxd[1]

 def example(k):
     global count, p
     count = 0
     p.clear()
     # Build a binomial tree of depth k on the keys 0 .. 2**k - 1
     # (each a points to a with its lowest set bit cleared).
     for a in range(2 ** k - 1, 0, -1):
         union(a, a & (a - 1))
     # Repeatedly union the current root onto a fresh singleton, then find the
     # deepest node, forcing an expensive (logarithmic) find each time.
     r = 0
     for a in range(2 ** k, 2 ** (k + 1)):
         union(r, a)
         find(deepest())
         r = a

 example(9)
 print(count)
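As a rough check (reusing the example and count definitions above), a small loop over several values of k prints the measured count next to (k+1) * 2**k so you can watch the two grow together:

 # Compare the measured operation count against (k+1) * 2**k for several k.
 for k in range(5, 11):
     example(k)  # resets p and count, then runs the adversarial sequence
     print(k, count, (k + 1) * 2 ** k)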
+6

or is this complexity just theoretical, not practical?

Yes. An algorithm's complexity is a purely theoretical construct that describes how well the algorithm scales across input sizes (in this case, the numbers of finds and unions).

It gives no guarantee, of course, about the number of steps required for a particular input instance (say: 5 finds and 3 unions). In fact, big-O notation allows an arbitrarily large multiplicative constant, which prevents us from computing the exact running time, but it is enough to sort algorithms into complexity classes.

+2
