decimal is not a magical "do all the maths for me" type. It is still a floating-point number - the main difference from float is that it is a decimal floating-point number rather than a binary one. So you can easily represent 0.3 exactly as a decimal (that is impossible as a finite binary fraction), but you do not get infinite precision.
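A quick sketch of that difference in C# (the 0.1 + 0.2 example is mine, purely illustrative):

    using System;

    double d = 0.1 + 0.2;     // binary floating point: 0.30000000000000004
    decimal m = 0.1m + 0.2m;  // decimal floating point: exactly 0.3

    Console.WriteLine(d == 0.3);   // False
    Console.WriteLine(m == 0.3m);  // True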
This makes it behave much more like a person doing the same calculations by hand, but you should still picture someone carrying out each operation individually. It is specifically designed for financial calculations, where you do not do the kind of thing you do in maths - you simply go step by step, rounding each result according to fairly specific rules.
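As a sketch of that step-by-step style (the price, tax rate and "round to cents at every step" rule below are assumptions made up for illustration, not rules from any particular regulation):

    using System;

    decimal price   = 19.99m;
    decimal taxRate = 0.0825m;

    // Round the intermediate result, not just the final one.
    decimal tax   = Math.Round(price * taxRate, 2, MidpointRounding.AwayFromZero);
    decimal total = price + tax;

    Console.WriteLine(tax);    // 1.65
    Console.WriteLine(total);  // 21.64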
In fact, in many cases decimal can behave much "worse" than float (or rather, double). This is because decimal does not do any automatic rounding at all. Doing the same calculation with double gives you 22, as expected, because double automatically assumes that the tiny difference does not matter - with decimal, it does, and that is one of the important points about decimal. You can emulate this by sprinkling in Math.Round calls, of course, but that does not make much sense.
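The calculation the "22" refers to is not shown here, but the effect can be sketched with a simpler, illustrative one (my numbers, not the original code):

    using System;

    // decimal keeps the leftover from the division step:
    Console.WriteLine(1m / 3 * 3);    // 0.9999999999999999999999999999

    // double quietly rounds the same tiny difference away:
    Console.WriteLine(1.0 / 3 * 3);   // 1

    // Emulating that forgiveness by rounding explicitly:
    Console.WriteLine(Math.Round(1m / 3 * 3, 6));   // 1.000000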
Luaan Aug 25 '15 at 7:37