Converting Double to Decimal without rounding after 15 digits

When converting a high-precision Double to Decimal, I lose accuracy with Convert.ToDecimal or a cast to (decimal) because of rounding.

Example:

    double d = -0.99999999999999956d;
    decimal result1 = Convert.ToDecimal(d); // Result = -1
    decimal result2 = (decimal)d;           // Result = -1

The Decimal value returned by Convert.ToDecimal(double) contains a maximum of 15 significant digits. If the value parameter contains more than 15 significant digits, it is rounded using rounding to nearest.

So, in order to keep my precision, I have to convert my double to a string first and then call Convert.ToDecimal(String):

    decimal result = System.Convert.ToDecimal(d.ToString("G20")); // Result = -0.99999999999999956

This approach works, but is there a way to convert a Double to a Decimal without rounding after 15 digits and without going through a String?

Tags: double, decimal, c#, precision
2 answers

One possible solution is to decompose d as the exact sum of n doubles: the last one small, carrying all the trailing significant digits you want to preserve when converting to decimal, and the first n-1 each converting exactly to decimal.

For a source double d between -1.0 and 1.0:

    decimal t = 0M;
    bool b = d < 0;          // remember the sign and work with |d|
    if (b) d = -d;
    // Peel off the top few binary digits; these constants convert exactly to decimal.
    if (d >= 0.5)    { d -= 0.5;    t = 0.5M; }
    if (d >= 0.25)   { d -= 0.25;   t += 0.25M; }
    if (d >= 0.125)  { d -= 0.125;  t += 0.125M; }
    if (d >= 0.0625) { d -= 0.0625; t += 0.0625M; }
    t += Convert.ToDecimal(d);   // the remainder is small enough to keep its trailing digits
    if (b) t = -t;

Test it at ideone.com.

Note that the d -= operations are exact, even if C# computes binary floating-point operations with more precision than double (which it allows itself to do).

This is cheaper than converting from double to string, and it gains a few extra digits of accuracy in the result (four bits of precision for the four ifs above).

Note: if C# did not allow itself to compute floating-point operations with excess precision, a good trick would be to use Dekker splitting to split d into two values d1 and d2 that each convert exactly to decimal. Alas, Dekker splitting only works with a strict interpretation of IEEE 754 multiplication and addition.
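For reference, a minimal sketch of what that Dekker (Veltkamp) split looks like, assuming strict IEEE 754 double arithmetic with no excess precision, which is exactly the caveat above (the helper name is mine):

    // Dekker/Veltkamp split: x == hi + lo exactly, with hi and lo each carrying
    // roughly half of the 53 significand bits. Requires strict double arithmetic.
    static void DekkerSplit(double x, out double hi, out double lo)
    {
        const double splitter = 134217729.0;  // 2^27 + 1
        double c = splitter * x;              // can overflow for very large |x|
        hi = c - (c - x);
        lo = x - hi;
    }

Each half could then be converted to decimal separately, as the note describes.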


Another idea is to use a C# version of frexp to obtain the significand s and the exponent e of d, and to compute (Decimal)((long)(s * 4503599627370496.0d)) * <however one computes 2^e in Decimal> .
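.NET has no built-in frexp, so here is a rough sketch of the same idea that pulls the significand and exponent straight out of the bits with BitConverter instead; the helper name is mine, and this is only an illustration of the idea, not exact code from the answer. It assumes d is finite and within decimal's range:

    static decimal ToDecimalViaBits(double d)
    {
        long bits = BitConverter.DoubleToInt64Bits(d);
        bool negative = bits < 0;
        int biasedExp = (int)((bits >> 52) & 0x7FF);
        long mantissa = bits & 0x000FFFFFFFFFFFFFL;
        if (biasedExp == 0 && mantissa == 0) return 0m;   // +/- zero
        if (biasedExp == 0) biasedExp = 1;                // subnormal
        else mantissa |= 1L << 52;                        // restore the implicit bit
        int e = biasedExp - 1075;                         // now |d| == mantissa * 2^e

        decimal result = mantissa;                        // a 53-bit integer converts exactly
        for (int i = 0; i < -e; i++) result /= 2m;        // scale by 2^e; each decimal step
        for (int i = 0; i <  e; i++) result *= 2m;        // may round only far beyond 15 digits
        return negative ? -result : result;
    }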


There are two approaches: one will work for values below 2^63, and the other will work for values above 2^53.

Split smaller values into integer and fractional parts. The integer part can be precisely cast to long, and then to Decimal [note that a direct cast to Decimal may not be precise!]. The fractional part can be precisely multiplied by 9007199254740992.0 (2^53), converted to long, then to Decimal, and then divided by 9007199254740992.0m. Adding the result of that division to the integer part should yield a Decimal value that is within one least-significant digit of being correct [it may not be precisely rounded, but it will still be much better than the built-in conversion!].
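A sketch of what that might look like in code (the method name is mine, and it assumes a finite d with magnitude below 2^63, as described):

    static decimal FromDoubleSmall(double d)
    {
        const double two53 = 9007199254740992.0;  // 2^53
        double intPart = Math.Truncate(d);
        double fracPart = d - intPart;            // exact: both parts come from d's own bits
        decimal result = (decimal)(long)intPart;  // integer part converts exactly via long
        result += (decimal)(long)(fracPart * two53) / 9007199254740992.0m;
        return result;
    }

On the question's example, FromDoubleSmall(-0.99999999999999956d) gives roughly -0.9999999999999995559..., keeping the digits that Convert.ToDecimal drops.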

For larger values, multiply by (1.0 / 281474976710656.0) (2^-48), take the integer part of that result, multiply it by 281474976710656.0, and subtract it from the original value. Convert the integer results of the division and the subtraction to Decimal (they should convert exactly), multiply the former by 281474976710656m, and add the latter.
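And a corresponding sketch for the larger-value case (again the method name is mine; it assumes a finite d in a range where the casts below do not overflow):

    static decimal FromDoubleLarge(double d)
    {
        const double two48 = 281474976710656.0;           // 2^48
        double upper = Math.Truncate(d * (1.0 / two48));   // high part, in units of 2^48
        double lower = d - upper * two48;                  // exact remainder, below 2^48
        return (decimal)(long)upper * 281474976710656m + (decimal)(long)lower;
    }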

