Nope. Consider that time complexity describes time asymptotically: we absorb lower time complexities into higher ones.
O(n²) really means k × n² + c, where we take it that c is low enough that we don't care about it.
That is, there is a constant amount of overhead for the whole thing (c) and a constant cost attached to each unit of our complexity (k). One n² algorithm will beat another n² algorithm if it has a lower k or, those being equal, a lower c. Also, an O(n²) algorithm can meet an O(1) algorithm at sufficiently low values of n (we usually don't care, but either algorithm's k can be massive, and if we run each of them m times, then at small n the comparison between O(n²m) and O(m) is driven by m and the constants rather than by what was nominally being compared).
This is in any case a deliberate over-simplification, because k and c aren't truly constant, just near enough to constants. So if something were really O(n² + log(n)), we would still call it O(n²), because who cares about that little log(n) when we're worrying about n²?
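To make that concrete, here is a small sketch (k, c and the extra log(n) term are made-up values, purely for illustration) showing why the constants and the log(n) get dropped: the "real" cost and the bare n² grow at the same rate, with the ratio settling near k.

```python
import math

k, c = 3.0, 1000.0                           # made-up constants for illustration

for n in (10, 100, 1_000, 10_000, 100_000):
    real_cost = k * n**2 + c + math.log(n)   # the "real" cost: k*n^2 + c + log(n)
    bare = n**2                              # the part O(n^2) keeps
    print(n, real_cost / bare)               # ratio settles towards k (3.0 here)
```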
So. Looking at your code: we run an outer loop n times. For each of those, we run the inner loop n - 1 times. For each of those inner iterations, we do the first print (whatever its cost is, it doesn't depend on n in any way, so it's essentially constant) and a test. The test passes in about half the cases, and when it does it incurs the cost of a second print.
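Your actual code isn't shown here, so as a minimal sketch only, the structure being described is assumed to look roughly like this (the function name, the items argument and the (i + j) % 2 test are placeholders; any test that passes about half the time behaves the same way):

```python
def example(items):
    n = len(items)
    for i in range(n):                 # outer loop: n passes
        for j in range(n - 1):         # inner loop: n - 1 passes per outer pass
            print("first", i, j)       # first print: constant cost, unrelated to n
            if (i + j) % 2 == 0:       # the test: passes in roughly half the cases
                print("second", i, j)  # second print: only paid when the test passes
```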
Thus, the total cost is:
the cost of setting everything up + n × the cost of the outer loop + (n - 1) × the cost of the inner loop + (n × (n - 1)) × the cost of the first print + (n × (n - 1)) × the cost of the test + (n × (n - 1)) / 2 × the cost of the second print.
Assigning letters to those constant costs, we get:
k + n × a + (n - 1) × b + (n × (n - 1)) × c + (n × (n - 1)) × d + (n × (n - 1)) / 2 × e = k + n × a + (n - 1) × b + (n × (n - 1)) × (c + d + e / 2)
Now, since c + d + e / 2 is itself just a constant, we can relabel it as c and this becomes:
n × a + (n - 1) × b + (n × (n - 1)) × c + k
Or, reordered with the highest-order term first:
(n × (n - 1)) × c + (n - 1) × b + n × a + k
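If you want to convince yourself that the regrouping above is nothing but algebra, a short check with arbitrary made-up values for the constants:

```python
# Arbitrary made-up constants standing in for the per-operation costs.
k, a, b, c, d, e = 5.0, 1.1, 0.7, 2.0, 0.3, 1.9

for n in (10, 1_000, 100_000):
    term_by_term = (k + n * a + (n - 1) * b
                    + n * (n - 1) * c
                    + n * (n - 1) * d
                    + n * (n - 1) / 2 * e)
    regrouped = n * (n - 1) * (c + d + e / 2) + (n - 1) * b + n * a + k
    assert abs(term_by_term - regrouped) < 1e-6 * term_by_term
```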
If n is big enough for us to give a damn, then n is proportionally so close to n - 1 that we might as well treat them as the same (another aspect of time complexity describing things asymptotically, i.e. as n approaches ∞: the relative difference between n² and n × (n - 1) approaches 0, even though the absolute difference keeps growing; a quick numeric check of this follows a few steps below). Consequently:
n² × c + n × b + n × a + k = n² × c + n × (b + a) + k
Again, b + a is just a constant, so, relabeling the constants once more, this is equivalent to:
n² × k + n × c + a
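Here is the promised quick check that n × (n - 1) becomes interchangeable with n² as n grows: the ratio heads to 1 even though the absolute gap, which is exactly n, keeps growing.

```python
for n in (10, 100, 10_000, 1_000_000):
    ratio = (n * (n - 1)) / n**2   # relative agreement between n(n-1) and n^2
    gap = n**2 - n * (n - 1)       # absolute gap is exactly n
    print(n, ratio, gap)
```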
Now we do what was mentioned at the start about absorbing lower orders: who cares about the n × c term, let alone a? If n is high enough for us to care at all, it's really all about n². By comparison, we might as well treat the rest, together with the overall overhead, as noise and consider the whole thing as:
n² × k + c
Or, in other words, as:
O(n²)
So yes, you had it right: the if statement does not affect the complexity.
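You can also see this empirically by counting operations instead of printing; this reuses the placeholder test from the sketch above.

```python
def count_ops(n):
    first_prints = tests = second_prints = 0
    for i in range(n):
        for j in range(n - 1):
            first_prints += 1
            tests += 1
            if (i + j) % 2 == 0:        # same placeholder test as before
                second_prints += 1
    return first_prints, tests, second_prints

for n in (10, 100, 1_000):
    f, t, s = count_ops(n)
    print(n, f, t, s, round(s / f, 3))  # f and t are exactly n*(n-1); s is about half
```

The guarded print runs roughly n × (n - 1) / 2 times, and halving a count never changes which power of n it grows with, which is the whole argument in miniature.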
With all of this, note that time complexity can hide what we really care about. If, for example, we had an O(n²) algorithm for which this sort of analysis gave a time of n² × k + n × c, with k at 200 μs and c at 15 s, then until n exceeds 75,000 it's actually the per-n part that costs, not the per-n² part. At lower n, its behaviour is closer to what we'd expect of O(n) than of O(n²).
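A quick way to see where that crossover sits, using those illustrative figures (everything converted to microseconds):

```python
k = 200             # cost per n^2 unit: 200 microseconds
c = 15_000_000      # cost per n unit: 15 s, expressed in microseconds

print(c / k)        # crossover: 75000.0, where n^2 * k catches up with n * c

for n in (1_000, 75_000, 1_000_000):
    per_n2, per_n = n * n * k, n * c
    print(n, per_n2, per_n,
          "n^2 term dominates" if per_n2 > per_n else "n term dominates")
```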
The reason time complexity remains useful is that mismatches like that between the constants are rare, and that we care about time, and hence about time complexity, more as n gets larger (you could hide some terrible O(n!) monster in your code that you only call once in a blue moon, with three items, and simply not care). So to get real improvements we ideally want to reduce the time complexity or, failing that, reduce the constant k (in other words: if you can manage to do something n times instead of n² times, do that; otherwise, reduce the cost of the thing you do n² times rather than of something you only do n times).
In other words, it helps us focus on what usually matters.
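As an illustration of that last point (a generic example, not your code): testing membership against a list inside a loop scans the list each time, roughly n² work in total, while building a set gives constant-time lookups, so the same job becomes roughly n work. Dropping n² to n is the "reduce the complexity" option; speeding up each individual list scan would only be shaving k.

```python
def has_duplicate_quadratic(items):
    seen = []
    for x in items:        # n iterations...
        if x in seen:      # ...each one scanning a list: about n^2 steps in total
            return True
        seen.append(x)
    return False

def has_duplicate_linear(items):
    seen = set()
    for x in items:        # n iterations...
        if x in seen:      # ...each a constant-time set lookup: about n steps in total
            return True
        seen.add(x)
    return False
```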