I will try to provide a more or less realistic example of legitimate, meaningful, and useful testing of floats for exact equality.
#include <stdio.h>
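/* What follows is a minimal sketch of the kind of bisection routine
 * discussed below, reconstructed purely for illustration: the helper func,
 * the example in main, and the exact control flow are my assumptions,
 * not the answer's original code. Precondition: f changes sign between
 * range_start and range_end, so the root is bracketed. */

float solve(float (*f)(float), float range_start, float range_end)
{
    float a = range_start;               /* current approximation of the root */
    float d = range_end - range_start;   /* current accuracy (bracket width)  */

    while (a + d != a) {                 /* stop once a+d rounds back to a    */
        float m = a + d / 2.0f;          /* midpoint of the bracket [a, a+d]  */
        if (f(a) * f(m) > 0.0f)
            a = m;                       /* no sign change on [a, m]:
                                            the root is in the right half     */
        d /= 2.0f;                       /* halve the bracket width           */
    }
    return a;
}

/* Example: the root of x*x - 2 on [1, 2], an approximation of sqrt(2). */
static float func(float x) { return x * x - 2.0f; }

int main(void)
{
    printf("%.9g\n", solve(func, 1.0f, 2.0f));
    return 0;
}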
I would prefer not to explain the bisection method itself, but to emphasize the stopping condition. It has exactly the form under discussion: (a == a+d), where both sides are floats: a is our current approximation of the root of the equation, and d is our current accuracy. Given the precondition of the algorithm, namely that the root must lie between range_start and range_end, we guarantee at each iteration that the root remains between a and a+d, while d is cut in half at each step, tightening the bracket.
And then, after several iterations, d becomes so small that adding it to a rounds to nothing: a+d is closer to a than to any other float, and therefore the FPU rounds it to the nearest representable value, which is a itself. This is easy to illustrate with a calculation on a hypothetical computer: let it have a 4-digit decimal mantissa and some large exponent range. What result should the machine give for 2.131e+02 + 7.000e-03? The exact answer is 213.107, but our machine cannot represent such a number; it has to round it. And 213.107 is much closer to 213.1 than to 213.2, so the rounded result is 2.131e+02: the small addend vanished, rounded away entirely. Exactly the same is guaranteed to happen at some iteration of our algorithm, and at that moment we can no longer continue. We have found the root to the greatest possible accuracy.
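The same absorption is easy to observe with real IEEE 754 single-precision floats instead of the hypothetical 4-digit decimal machine; here is a small self-contained check (the particular constants are mine, chosen only as an illustration):

#include <stdio.h>

int main(void)
{
    float a = 2.131e+02f;   /* 213.1: adjacent floats here differ by about 1.5e-5 */
    float d = 1.000e-06f;   /* smaller than half of that spacing                  */

    /* d is rounded away completely, so the comparison prints 1 (true). */
    printf("a + d == a ? %d\n", a + d == a);
    return 0;
}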
The sobering conclusion, apparently, is that floats are tricky. They look just like real numbers, so every programmer is tempted to think of them as real numbers. But they are not. They have their own behavior, somewhat reminiscent of the reals, yet not quite the same. You must be very careful with them, especially when comparing for equality.
Update
Revisiting this answer after a while, I also noticed an interesting fact: in the algorithm above you cannot actually substitute any fixed "small number" into the stopping condition. For any choice of such a number, there will be inputs for which your choice is too large, leading to a loss of accuracy, and there will be inputs for which it is too small, causing excessive iterations or even an infinite loop. A detailed discussion follows.
You may already know that in calculus there is no such thing as a "small number": for any real number, you can easily find infinitely many even smaller ones. The problem is that one of those "even smaller" ones may be exactly what we are looking for; it may be the root of our equation. Worse, different equations have different roots (for example, 2.51e-8 and 1.38e-8), both of which would be approximated by the same number if our stopping condition looked like d < 1e-6. Whatever "small number" you choose, many roots that would have been found correctly, to the maximum precision, with the stopping condition a == a+d will be spoiled by an "epsilon" that is too large.
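To make this concrete, here is a small self-contained demo along the lines of the sketch above (the two equations, x - 2.51e-8 = 0 and x - 1.38e-8 = 0, and the interval [0, 1] are my own illustrative choices). With the stopping condition a == a+d the two roots come out distinct and accurate; with d < 1e-6 both searches stop while the bracket still starts at 0, so the two different roots become indistinguishable:

#include <stdio.h>

/* Bisection with a selectable stopping condition:
 *   use_eps == 0 : stop when a + d == a  (maximum attainable precision)
 *   use_eps == 1 : stop when d < 1e-6    (a fixed "small number")       */
static float bisect(float (*f)(float), float range_start, float range_end,
                    int use_eps)
{
    float a = range_start;
    float d = range_end - range_start;

    while (use_eps ? (d >= 1e-6f) : (a + d != a)) {
        float m = a + d / 2.0f;
        if (f(a) * f(m) > 0.0f)
            a = m;               /* no sign change on [a, m]: root is to the right */
        d /= 2.0f;
    }
    return a;
}

static float f1(float x) { return x - 2.51e-8f; }
static float f2(float x) { return x - 1.38e-8f; }

int main(void)
{
    printf("a == a+d : %g  %g\n",
           bisect(f1, 0.0f, 1.0f, 0), bisect(f2, 0.0f, 1.0f, 0));
    printf("d < 1e-6 : %g  %g\n",
           bisect(f1, 0.0f, 1.0f, 1), bisect(f2, 0.0f, 1.0f, 1));
    return 0;
}

The first line should print values close to 2.51e-08 and 1.38e-08, while the second should print 0 for both equations.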
It is true, however, that the exponent has a limited range in floating-point numbers, so you can in fact find the smallest nonzero positive FP number (about 1.4e-45, a denormal, for IEEE 754 single precision). But it is useless! A loop whose stopping condition is d < 1.4e-45 will run forever, since no positive nonzero single-precision d is ever smaller than the smallest denormal.
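For reference, this constant is available in C11 as FLT_TRUE_MIN; a tiny check (nothing here is from the original answer):

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* FLT_TRUE_MIN (C11) is the smallest positive denormal float, roughly
     * 1.4e-45. No positive nonzero float is smaller than it, which is why
     * it cannot serve as an "epsilon" in the stopping condition. */
    printf("smallest positive float: %g\n", (double)FLT_TRUE_MIN);
    return 0;
}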
Leaving aside these pathological edge cases, any choice of a "small number" for the stopping condition d < eps will be too small for some equations. For equations whose root has a high enough exponent, the result of subtracting two mantissas that differ only in the least significant digit will easily exceed our epsilon. For example, with a 6-digit mantissa, 7.00023e+8 - 7.00022e+8 = 0.00001e+8 = 1.00000e+3 = 1000, which means that the smallest possible nonzero difference between numbers with exponent +8 in this format is... 1000! That will never fit under, say, 1e-4. For these numbers with a (relatively) high exponent, we simply do not have enough precision to ever observe a difference of 1e-4.
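The same granularity is easy to check with real single-precision floats; nextafterf from <math.h> (C99) gives the neighboring representable value, and 7.0e8f is just an arbitrary number with a large exponent (link with -lm if needed):

#include <math.h>
#include <stdio.h>

int main(void)
{
    float x    = 7.0e8f;                   /* a value with a large exponent          */
    float next = nextafterf(x, INFINITY);  /* the closest representable float above  */

    /* Adjacent floats near 7e8 are 64 apart, so a difference of 1e-4
     * between two approximations at this magnitude can never be seen.  */
    printf("gap near 7e8 = %g\n", (double)(next - x));
    return 0;
}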
My implementation above also takes this last issue into account: you can see that d is cut in half at each step instead of being recomputed as the difference of a and b (whose exponents may be huge). So if we change the stopping condition to d < eps, the algorithm will not get stuck in an infinite loop on huge roots (as it very well could with (b-a) < eps), but it will still perform unnecessary iterations while shrinking d below the accuracy actually attainable at a.
Such reasoning may seem overly theoretical and needlessly deep, but its point is, once again, to illustrate how tricky floats are. One must be very careful about their finite precision when placing arithmetic operators around them.