The question is rather confusingly worded, so let me break it down into a series of smaller questions:
Why is one-tenth plus two-tenths not always equal to three-tenths in floating point arithmetic?
Let me give you an analogy. Suppose we have a mathematical system where all numbers are rounded to five decimal places. Suppose you say:
x = 1.00000 / 3.00000;
You would expect x to be 0.33333, right? Because that is the closest number in our system to the true answer. Now suppose you said
y = 2.00000 / 3.00000;
You would expect y to be 0.66667, right? Because again, that is the closest number in our system to the true answer: 0.66666 is farther from two-thirds than 0.66667 is.
Note that in the first case we rounded down, and in the second case we rounded up.
Now when we say
q = x + x + x + x; r = y + x + x; s = y + y;
what do we get? If we could do exact arithmetic, each of these would obviously be four-thirds, and they would all be equal. But they are not equal. Even though 1.33333 is the closest number in our system to four-thirds, only r has that value.
q is 1.33332, too small: because x was a little bit too small, each addition accumulated that error, and the end result is a bit too small. Similarly, s is too big; it is 1.33334, because y was a little too big. r gets the right answer because the too-big-ness of y is cancelled out by the too-small-ness of the x's, and the result ends up correct.
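The same accumulation of representation error shows up with real binary doubles. Here is a small C# sketch; the values in the comments are what you typically see on an IEEE 754 implementation when printing with the "G17" round-trip format, which shows enough digits to reveal the value actually stored:

double a = 0.1;   // the nearest double is slightly larger than one-tenth
double b = 0.2;   // the nearest double is slightly larger than two-tenths
double c = 0.3;   // the nearest double is slightly smaller than three-tenths

Console.WriteLine(a.ToString("G17"));        // typically 0.10000000000000001
Console.WriteLine(b.ToString("G17"));        // typically 0.20000000000000001
Console.WriteLine(c.ToString("G17"));        // typically 0.29999999999999999
Console.WriteLine((a + b).ToString("G17"));  // typically 0.30000000000000004
Console.WriteLine(a + b == c);               // typically False

Two values that were rounded up are added together, so their sum lands above the value of 0.3, which was rounded down; the errors reinforce rather than cancel.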
Does the number of digits of precision affect the magnitude and direction of the error?
Yes. More precision makes the magnitude of each error smaller, but it can change whether a given calculation accrues a loss or a gain due to the error. For example:
b = 4.00000 / 7.00000;
b will be 0.57143, which rounds up from the true value of 0.571428571... Had we gone to eight places, b would be 0.57142857, which has a much, much smaller error, but in the opposite direction: it rounds down.
Since changing the precision can change whether each individual calculation errs on the side of gain or loss, it can change whether the errors in an aggregate calculation reinforce each other or cancel each other out. The net result is that sometimes a lower-precision computation is closer to the "true" result than a higher-precision one, because in the lower-precision computation you get lucky and the errors go in different directions.
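For instance, here is a hedged C# sketch of the same sum accumulated in single and in double precision. On a typical IEEE 754 implementation the two accumulations err in opposite directions, though, as discussed further below, the runtime is permitted to use extra precision that would change these results:

float floatSum = 0f;
double doubleSum = 0d;
for (int i = 0; i < 10; i++)
{
    floatSum += 0.1f;    // each addition rounds in single precision
    doubleSum += 0.1;    // each addition rounds in double precision
}

// Typically the float total comes out a little above 1 and the double total
// a little below 1: the same computation, with errors in opposite directions.
Console.WriteLine(floatSum.ToString("G9"));    // e.g. 1.00000012
Console.WriteLine(doubleSum.ToString("G17"));  // e.g. 0.99999999999999989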
We would expect that doing a calculation in higher precision always gives an answer closer to the true answer, but this argument shows otherwise. And this explains why sometimes a computation in floats gives the "right" answer but the same computation in doubles, which have twice the precision, gives the "wrong" answer, correct?
Yes, that is exactly what is happening in your examples, except that instead of five decimal digits we have some fixed number of binary digits. Just as one-third cannot be represented exactly in five, or any finite number of, decimal digits, 0.1, 0.2 and 0.3 cannot be represented exactly in any finite number of binary digits. Some of those values will be rounded up, some will be rounded down, and whether the errors add up or cancel out depends on the specific details of how many binary digits are used in each system. That is, a change in precision can change the answer for better or for worse. Generally the higher the precision, the closer the answer is to the true answer, but not always.
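Concretely, here is a small C# sketch of that situation; the results in the comments are what you typically see, though, as we are about to discuss, they are not guaranteed:

float fx = 0.1f, fy = 0.2f;
double dx = 0.1, dy = 0.2;

// In single precision the sum happens to round to exactly the same float
// as the literal 0.3f, so the lower-precision comparison comes out "right"...
Console.WriteLine(fx + fy == 0.3f);   // typically True

// ...while in double precision the sum lands on a different double than 0.3,
// so the higher-precision comparison comes out "wrong".
Console.WriteLine(dx + dy == 0.3);    // typically False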
Floats and doubles use binary digits, so how can I get exact decimal arithmetic?
If you need accurate decimal math, use the decimal type; it uses decimal fractions, not binary fractions. The price you pay is that it is considerably larger and slower. And of course, as we have just seen, fractions like one-third or four-sevenths are still not going to be represented exactly. Any fraction that actually is a decimal fraction, however, will be represented with zero error, up to about 29 significant digits.
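For instance, a small sketch using C#'s decimal literals:

decimal a = 0.1m, b = 0.2m;

// 0.1, 0.2 and 0.3 are all exact decimal fractions, so there is no
// representation error and the comparison is exact.
Console.WriteLine(a + b == 0.3m);        // True

// One-third is still not a decimal fraction, so it is still rounded.
Console.WriteLine(1m / 3m * 3m == 1m);   // False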
OK, I accept that all floating-point schemes introduce inaccuracies due to representation error, and that those inaccuracies can sometimes accumulate or cancel each other out based on the number of bits of precision used in the calculation. Do we at least have a guarantee that those inaccuracies will be consistent?
No, you have no such guarantee for floats or doubles. The compiler and the runtime are permitted to perform floating point calculations in higher precision than is required by the specification. In particular, the compiler and the runtime are permitted to do single-precision (32-bit) arithmetic in 64 bits or 80 bits or 128 bits, or any bitness greater than 32 that they like.
The compiler and the runtime are permitted to do so however they feel like at the time. They need not be consistent from machine to machine, from run to run, and so on. Since this can only make the calculations more accurate, it is not considered a bug. It's a feature. A feature that makes it incredibly difficult to write programs that behave predictably, but a feature nonetheless.
So that means that calculations performed at compile time, such as on the literals 0.1 + 0.2, can give different results than the same calculation performed at run time with variables?
Yes.
What about comparing the results of 0.1 + 0.2 == 0.3 with (0.1 + 0.2).Equals(0.3)?
Since the first one is computed by the compiler and the second one is computed by the runtime, and I just said that they are permitted to use arbitrarily more precision than the specification requires, at their whim, yes, those can give different results. Maybe one of them chooses to do the calculation only in 64-bit precision while the other picks 80-bit or 128-bit precision for part or all of the calculation, and they get different answers.
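Here is a sketch of the two expressions side by side; whether they print the same value depends on your particular compiler, runtime and platform, which is exactly the point:

// Built entirely from constants, so this comparison is evaluated by the
// compiler at compile time, possibly using more than 64 bits of precision.
Console.WriteLine(0.1 + 0.2 == 0.3);

// The Equals call cannot be folded into a constant, so this comparison is
// performed by the runtime, which may use a different precision and give a
// different answer.
Console.WriteLine((0.1 + 0.2).Equals(0.3));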
So hold on a minute here. You are not just saying that 0.1 + 0.2 == 0.3 can differ from (0.1 + 0.2).Equals(0.3). You are saying that 0.1 + 0.2 == 0.3 can be computed as true or false entirely at the whim of the compiler. It could be true on Tuesdays and false on Thursdays, it could be true on one machine and false on another, it could be both true and false if the expression appeared twice in the same program. This expression can have either value for any reason whatsoever; the compiler is permitted to be completely unreliable here.
Right.
The way this usually gets reported to the C# compiler team is that someone has an expression that produces true when compiled in debug mode and false when compiled in release mode. That is the most common situation in which this comes up, because debug and release code generation change the register allocation schemes. But the compiler is permitted to do anything it likes with this expression, as long as it chooses true or false. (It cannot, say, produce a compile-time error.)
This is madness.
Right.
So who can I blame for this mess?
Not me, that's for darn sure.
Blame Intel, which made a floating point math chip on which producing consistent results is far more expensive. Small choices in the compiler about which operations to keep in registers and which to keep on the stack can add up to big differences in the results.
How can I ensure consistent results?
Use the decimal type, as I said. Or do all your math with integers.
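For example, here is a sketch of the integer approach for money; the cent-based names are just illustrative. Keep quantities as whole numbers of the smallest unit, so every operation is exact and perfectly repeatable:

// Hypothetical example: track money as whole cents in a long, not as
// fractional dollars in a double.
long priceInCents = 10;    // $0.10
long taxInCents = 20;      // $0.20

// Integer arithmetic has no representation error, so this is always 30,
// on every machine, in every build configuration.
long totalInCents = priceInCents + taxInCents;
Console.WriteLine(totalInCents == 30);   // True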
I need to use doubles or floats; can I do anything to encourage consistent results?
Yes. If you store a result into any static field, any instance field of a class, or any array element of type float or double, then it is guaranteed to be truncated back to 32-bit or 64-bit precision. (This guarantee is expressly not made for stores to locals or formal parameters.) Also, if you do a runtime cast to (float) or (double) on an expression that is already of that type, then the compiler will emit special code that forces the result to truncate as though it had been assigned to a field or array element. (Casts that are executed at compile time, that is, casts on constant expressions, are not guaranteed to do so.)
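A minimal sketch of those two techniques; the array is used here simply because stores to array elements, like stores to fields, are guaranteed to truncate, and the variable names are just illustrative:

double x = 0.1, y = 0.2;

// Storing into an array element (or field) of type double guarantees the
// value is truncated back to 64-bit precision.
double[] truncatedStore = new double[1];
truncatedStore[0] = x + y;

// A runtime cast to (double) on an expression that is already of type double
// also forces the compiler to emit code that truncates the result.
double viaCast = (double)(x + y);

Console.WriteLine(truncatedStore[0].ToString("G17"));
Console.WriteLine(viaCast.ToString("G17"));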
To clarify that last point: does the C# language specification provide these guarantees?
No. The runtime guarantees that stores into an array or field truncate. The C# specification does not guarantee that an identity cast truncates, but the Microsoft implementation has regression tests to ensure that every new version of the compiler has this behavior.
All the language specification has to say on the subject is that floating point operations may be performed in higher precision at the discretion of the implementation.