Gabriel Lamé's theorem bounds the number of division steps by log(sqrt(5) * (a + 1/2)) - 2, where the base of the log is (1 + sqrt(5)) / 2. This is the worst case for the algorithm, and it occurs when the inputs are consecutive Fibonacci numbers.
A slightly looser bound, implied in Koblitz: log a, with the log taken to base sqrt(2), i.e. 2 * log2(a) steps.
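Both bounds are easy to check numerically. The sketch below (plain Python; the helper names are my own) counts division steps on consecutive Fibonacci pairs, the worst case, and verifies they stay under both bounds:

```python
import math

PHI = (1 + math.sqrt(5)) / 2

def euclid_steps(a, b):
    """Count the division steps the Euclidean algorithm takes on (a, b)."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

def lame_bound(a):
    """Lame's bound on the step count: log_phi(sqrt(5) * (a + 1/2)) - 2."""
    return math.log(math.sqrt(5) * (a + 0.5), PHI) - 2

def loose_bound(a):
    """The looser bound log_sqrt(2)(a) = 2 * log2(a)."""
    return math.log(a, math.sqrt(2))

# Consecutive Fibonacci numbers realize the worst case.
fib = [1, 2]
while len(fib) < 30:
    fib.append(fib[-1] + fib[-2])

for n in range(1, len(fib) - 1):
    a, b = fib[n + 1], fib[n]
    s = euclid_steps(a, b)
    assert s <= lame_bound(a) and s <= loose_bound(a)
```

For example, gcd(13, 8) takes 5 division steps, while Lamé's bound gives log_phi(sqrt(5) * 13.5) - 2, which is about 5.08: the Fibonacci pairs sit right at the edge of the bound.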
For cryptographic purposes we usually consider the bit complexity of algorithms, noting that the bit size of a is approximately k = log2(a).
Here is a detailed analysis of the bit complexity of the Euclidean algorithm:
Although most references give the bit complexity of the Euclidean algorithm as O((log a)^3), there is a sharper estimate of O((log a)^2).
Consider the division chain:
r_0 = a, r_1 = b
r_0 = q_1 * r_1 + r_2
...
r_{i-1} = q_i * r_i + r_{i+1}
...
r_{m-2} = q_{m-1} * r_{m-1} + r_m
r_{m-1} = q_m * r_m
Note that a = r_0 >= b = r_1 > r_2 > r_3 > ... > r_{m-1} > r_m > 0 ... (1)
and r_m is the greatest common divisor of a and b.
As stated in Koblitz (A Course in Number Theory and Cryptography), one can prove that r_{i+1} < r_{i-1} / 2 ... (2)
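Properties (1), (2), and the gcd claim can all be checked on the remainder chain directly; a small sketch (the function name is mine):

```python
import math

def remainder_chain(a, b):
    """Return the remainder sequence r_0 = a, r_1 = b, r_2, ..., r_m."""
    r = [a, b]
    while r[-1] != 0:
        r.append(r[-2] % r[-1])
    return r[:-1]  # drop the trailing zero; r[-1] is the last nonzero remainder

chain = remainder_chain(1071, 462)  # [1071, 462, 147, 21]
# (1): a = r_0 >= b = r_1 > r_2 > ... > r_m > 0
assert all(chain[i] > chain[i + 1] > 0 for i in range(1, len(chain) - 1))
# r_m is the greatest common divisor of a and b
assert chain[-1] == math.gcd(1071, 462)
# (2): r_{i+1} < r_{i-1} / 2
assert all(2 * chain[i + 1] < chain[i - 1] for i in range(1, len(chain) - 1))
```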
Again from Koblitz, the number of bit operations needed to divide a k-bit positive integer by an l-bit positive integer (assuming k >= l) is (k - l + 1) * l ... (3)
By (1) and (2), the number of divisions is O(log a); each division costs at most O((log a)^2) by (3), so the total complexity is O((log a)^3).
A remark in Koblitz shows how this can be sharpened to O((log a)^2).
Let k_i = floor(log2 r_i) + 1 be the bit length of r_i.
By (1) and (2) we have k_{i+1} <= k_i for i = 0, 1, ..., m-1, and k_{i+2} <= k_i - 1 for i = 0, 1, ..., m-2.
By (3), the total cost of the m divisions is bounded by SUM (k_{i-1} - (k_i - 1)) * k_i, summed over i = 1, ..., m.
Rearranging, and using k_i <= k_0, the telescoping sum SUM (k_{i-1} - k_i) = k_0 - k_m, and the fact that k_{i+2} <= k_i - 1 forces m <= 2 * k_0:
SUM (k_{i-1} - k_i + 1) * k_i <= k_0 * ((k_0 - k_m) + m) <= k_0 * (k_0 + 2 * k_0) <= 4 * k_0^2
Thus, the bit complexity of the Euclidean algorithm is O((log a)^2).
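The bound SUM (k_{i-1} - k_i + 1) * k_i <= 4 * k_0^2 can also be checked empirically. The sketch below (the names are my own) accumulates the cost model (3) over every division the algorithm performs:

```python
import random

def euclid_bit_cost(a, b):
    """Sum the model cost (k_{i-1} - k_i + 1) * k_i over all divisions,
    where k_i is the bit length of the remainder r_i."""
    total = 0
    while b:
        k_prev, k = a.bit_length(), b.bit_length()
        total += (k_prev - k + 1) * k
        a, b = b, a % b
    return total

random.seed(1)
for _ in range(10_000):
    a = random.randrange(2, 1 << 64)
    b = random.randrange(1, a)
    # total bit cost never exceeds 4 * k_0^2, with k_0 the bit length of a
    assert euclid_bit_cost(a, b) <= 4 * a.bit_length() ** 2
```

For instance, for (a, b) = (1071, 462) the chain costs 27 + 16 + 20 = 63 bit operations under model (3), well under 4 * 11^2 = 484.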