Decimal inaccuracy in .NET.

Something strange happened to me during debugging yesterday, and I can’t explain it:

[Screenshot: Decimal calculation]

[Screenshot: Parenthesized decimal]
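Roughly, the two watch expressions looked something like the sketch below (the operands 22 and 24 are taken from the answers and are an assumption; the actual screenshots may have used different values):

    using System;

    decimal a = 24m;
    decimal b = 22m;
    decimal c = 24m;

    decimal withoutParens = a * b / c;   // multiply first: 24 * 22 = 528, then 528 / 24 = 22 exactly
    decimal withParens    = a * (b / c); // divide first: 22 / 24 has to be rounded before multiplying

    Console.WriteLine(withoutParens);    // 22
    Console.WriteLine(withParens);       // not exactly 22 - the last representable digit is off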

So maybe I'm missing the obvious here, or I've misunderstood something about decimal values in .NET, but shouldn't the results be the same?

+57
c# decimal floating-accuracy
Aug 25 '15 at 7:34
4 answers

decimal is not a magic do-all-the-maths-for-me type. It is still a floating point number - the main difference from float is that it is a decimal floating point number rather than a binary one. So you can easily represent 0.3 as a decimal (which is impossible as a finite binary number), but you do not have infinite precision.

This makes it work much closer to a human doing the same calculations, but you still have to imagine someone carrying out each operation individually. It is specifically designed for financial calculations, where you don't do the kind of thing you do in maths - you simply go step by step, rounding each result according to fairly specific rules.

In fact, for many cases decimal can behave much worse than float (or better, double). This is because decimal doesn't do any automatic rounding at all. Doing the same with double gives you 22 as expected, because it silently assumes the difference doesn't matter - in decimal it does, and that is one of the important points about decimal. You can emulate this by inserting manual Math.Round calls, of course, but it doesn't make much sense.
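For illustration, a minimal sketch assuming the 22-and-24 expression from the question: double's own binary rounding happens to land back on exactly 22 here, while decimal faithfully keeps the error of the rounded quotient unless you round it away yourself:

    using System;

    decimal dec = 24m * (22m / 24m);     // decimal keeps the quotient's rounding error
    double  dbl = 24.0 * (22.0 / 24.0);  // double's rounding happens to give exactly 22 here

    Console.WriteLine(dec == 22m);       // False - the last digit differs
    Console.WriteLine(dbl == 22.0);      // True

    // Emulating the "close enough" behaviour with an explicit, manual rounding step:
    decimal rounded = Math.Round(dec, 10);
    Console.WriteLine(rounded == 22m);   // True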

+65
Aug 25 '15 at 7:37

Decimal can only store values that are exactly representable in decimal form. Here, 22/24 = 0.91666666666666666666666..., which would need infinite precision (or a rational type) to store, and it is no longer equal to 22/24 once rounded. If you do the multiplication first, all the values are exactly representable, which is why you see the result you expect.
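A short sketch of that rounding step (same assumed 22 and 24 as above):

    using System;

    decimal quotient = 22m / 24m;
    Console.WriteLine(quotient);         // 0.9166666666666666666666666667 - already rounded
    Console.WriteLine(quotient * 24m);   // not exactly 22: the rounding error gets scaled back up

    Console.WriteLine(22m * 24m);        // 528 - exact
    Console.WriteLine(22m * 24m / 24m);  // 22  - exact, every intermediate value is representable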

+29
Aug 25 '15 at 7:48

By adding the parentheses, you make the division happen before the multiplication. That is enough of a change to introduce floating point imprecision into the calculation.

Since computers cannot represent every possible number exactly, you have to make sure you take this into account in your calculations.
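One common way of taking it into account (a generic sketch, not from the original question) is to compare results within a small tolerance rather than demanding exact equality:

    using System;

    static bool AboutEqual(decimal x, decimal y, decimal tolerance = 0.000000001m)
        => Math.Abs(x - y) <= tolerance;

    decimal result = 24m * (22m / 24m);
    Console.WriteLine(result == 22m);           // False
    Console.WriteLine(AboutEqual(result, 22m)); // True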

+14
Aug 25 '15 at 7:40

Although Decimal has higher precision than Double, its main useful feature is that every value exactly matches its human-readable representation. The fixed-point decimal types available in some languages can guarantee that neither adding nor subtracting two fixed-point values, nor multiplying a fixed-point value by an integer, will ever cause a rounding error, and "big decimal" types like the one found in Java can guarantee that no multiplication will ever cause a rounding error. Floating-point decimal types like .NET's Decimal offer no such guarantees, and no decimal type can guarantee that a division will complete without rounding error (Java's has the option of throwing an exception when rounding would be needed).

Those who decided to make Decimal a floating-point type probably expected it to be used in situations requiring more digits to the right of the decimal point, or more to the left, than a fixed-point type allows. But floating point, whether base-10 or base-2, makes rounding issues unavoidable for all operations.
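For example (a small C# sketch; the Java behaviour mentioned above refers to java.math.BigDecimal and is not shown): even when both operands are exactly representable, the result of a division may not be:

    using System;

    decimal third = 1m / 3m;             // 0.3333333333333333333333333333 - rounded
    Console.WriteLine(third * 3m);       // 0.9999999999999999999999999999 - not 1

    // Multiplying exactly representable values, by contrast, only rounds when the
    // true result needs more significant digits than decimal can hold.
    Console.WriteLine(0.1m * 3m);        // 0.3 - exact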

+1
Aug 25 '15 at 16:02


