Can someone give me an intuitive explanation of why the Ackermann function (http://en.wikipedia.org/wiki/Ackermann_function) is related to the amortized complexity of the union-find algorithm used for disjoint sets (http://en.wikipedia.org/wiki/Disjoint-set_data_structure)?
Tarjan's analysis of the data structure is not very intuitive.
I also looked at the treatment in Introduction to Algorithms, but it too is rigorous without being intuitive.
Thank you for your help!
From Wikipedia:
(on union and find) These two techniques complement each other; applied together, the amortized time per operation is only O(α(n)), where α(n) is the inverse of the function f(n) = A(n, n), and A is the extremely fast-growing Ackermann function. Since α(n) is the inverse of this function, α(n) is less than 5 for all remotely practical values of n. Thus, the amortized running time per operation is effectively a small constant.
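The two techniques the quote refers to are union by rank and path compression. A minimal sketch (my own illustration, not code from the question) showing both:

```python
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))  # each element starts as its own root
        self.rank = [0] * n           # upper bound on tree height

    def find(self, x):
        # Path compression: repoint every node on the path at the root,
        # so future finds on these nodes take one step.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, a, b):
        # Union by rank: attach the shallower tree under the deeper one,
        # keeping trees flat.
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1


ds = DisjointSet(10)
ds.union(1, 2)
ds.union(2, 3)
print(ds.find(1) == ds.find(3))  # True
print(ds.find(1) == ds.find(4))  # False
```

Either heuristic alone already gives O(log n) per operation; it is the combination that pushes the amortized bound down to O(α(n)).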
From a discussion of Kruskal's algorithm:
The function lg* n (the iterated logarithm) grows much more slowly than lg n, and even more slowly than lg lg n. It counts how many times lg must be applied to n before the result drops to 1 or below. Its inverse is the power tower f(n) = 2^2^2^...^2 (n twos); already for n >= 5, f(n) is astronomically large. Consequently lg* n <= 5 for every input of practical size, and the union-find overhead in Kruskal's algorithm is effectively O(E). A sharper analysis replaces lg* n with the inverse of the Ackermann function A(p, n), which grows even more slowly than lg* n, but the conclusion is the same: the per-operation cost is effectively constant.
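To see concretely how slowly lg* grows, here is a small sketch (my own illustration) of the iterated logarithm and its inverse, the power tower of twos:

```python
import math


def lg_star(n):
    """Iterated logarithm: how many times lg must be applied until n <= 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count


def tower(k):
    """Power tower of k twos: 2^2^...^2, the inverse of lg*."""
    r = 1
    for _ in range(k):
        r = 2 ** r
    return r


print(tower(4))             # 65536
print(lg_star(2 ** 65536))  # 5 -- and tower(5) already has ~20000 digits
```

So lg* n only reaches 5 at n = 2^65536, a number far beyond any physically possible input size; the inverse Ackermann function α(n) grows more slowly still.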