Isn't the precision of the double type 15 digits in C#?

I tested this code from Brainteasers:

        double d1 = 1.000001;
        double d2 = 0.000001;
        Console.WriteLine((d1 - d2) == 1.0);

And the result is False. When I change the data type:

        decimal d1 = 1.000001M;
        decimal d2 = 0.000001M;
        decimal d3 = d1 - d2;
        Console.WriteLine(d3 == 1);

The program prints the correct answer: True.

This example only uses 6 digits after the decimal point. What happened to the 15 digits of precision?

+5
6 answers

This has nothing to do with precision - it is due to representational rounding errors.

System.Decimal stores values as base-10 digits, so it can represent a value like 1.000001 exactly. System.Single and System.Double store values in binary, so they can only approximate it.

System.Decimal is therefore exact for decimal fractions within its precision, whereas System.Single and System.Double are not.

From the documentation of System.Double:

Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately it approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. The precision of a floating-point number has several consequences:

  • Two floating-point values that appear equal for a particular precision might not compare equal, because their least significant digits are different.

  • A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used, because the floating-point number might not exactly approximate the decimal number.
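
To see the error for yourself (this snippet is an added illustration, not part of the quoted documentation), print the values with the round-trip format "G17", which shows the digits that the default formatting hides:

        using System;

        double d1 = 1.000001;
        double d2 = 0.000001;

        // "G17" prints enough digits to round-trip a double, exposing
        // the digits that the default formatting rounds away.
        Console.WriteLine(d1.ToString("G17"));          // not exactly 1.000001
        Console.WriteLine(d2.ToString("G17"));          // not exactly 0.000001
        Console.WriteLine((d1 - d2).ToString("G17"));   // a hair less than 1
        Console.WriteLine((d1 - d2) == 1.0);            // False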

+28

When comparing floating-point values for equality, you should check whether the difference between them is smaller than some small tolerance (often called epsilon), rather than comparing them directly. For example,

        if (Math.Abs(d1 - d2) <= Double.Epsilon) ...

This says: if d1 and d2 differ by only the tiniest possible amount, treat them as equal.

(Update, 2015.)

A better approach is to make the comparison relative to the magnitude of the values:

        // Assumes that d1 and d2 are not both zero
        if (Math.Abs(d1 - d2) / Math.Max(Math.Abs(d1), Math.Abs(d2)) <= Double.Epsilon) ...

This measures the difference between d1 and d2 relative to the magnitude of d1 and d2 and compares it to Epsilon.


See:
http://msdn.microsoft.com/en-us/library/system.double.epsilon.aspx
http://msdn.microsoft.com/en-us/library/system.double.aspx#Precision
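
As an illustration (not part of the original answer), the relative comparison can be wrapped in a small helper; the name NearlyEqual and the 1e-15 tolerance are arbitrary choices for this example:

        using System;

        // Relative comparison as in the snippet above: scale the difference by
        // the larger magnitude, then check it against the caller's tolerance.
        // Assumes a and b are not both zero.
        static bool NearlyEqual(double a, double b, double epsilon) =>
            Math.Abs(a - b) / Math.Max(Math.Abs(a), Math.Abs(b)) <= epsilon;

        double d1 = 1.000001;
        double d2 = 0.000001;

        // 1e-15 is an arbitrary tolerance chosen for this example.
        Console.WriteLine(NearlyEqual(d1 - d2, 1.0, 1e-15)); // True
        Console.WriteLine((d1 - d2) == 1.0);                 // False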

+8

The difference is in the representation: decimal is a base-10 type, while double is a base-2 (binary) floating-point type.

A decimal can exactly represent any value that can be written with a finite number of decimal digits, up to its precision; anything longer is cut off. For example, 1.0M/3.0M gives 0.333333... with a finite number of threes, not exactly one third.

Binary floating point, on the other hand, can exactly represent only values whose fractional part is a sum of negative powers of two; everything else, including 0.000001, is stored as an approximation.

This is easy to miss, because .NET rounds a double when formatting it for display, so you usually do not see the approximation until you compare floating-point values directly.

Also note that 15 digits of precision means 15 significant digits, not 15 digits after the decimal point. d1 uses 7 significant digits (not 6) and d2 uses 1, so both fit comfortably; the issue is not the amount of precision but the binary representation.
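
A short illustration of both points (added here, not part of the answer): decimal handles the question's values exactly, yet still truncates a value like 1/3 that has no finite decimal expansion:

        using System;

        // Base-10 storage: both literals are exact, so the difference is exactly 1.
        Console.WriteLine(1.000001M - 0.000001M == 1M);   // True

        // decimal is still finite precision: 1/3 has no finite decimal expansion,
        // so the quotient is cut off after about 28 significant digits...
        decimal third = 1.0M / 3.0M;
        Console.WriteLine(third);                  // 0.3333333333333333333333333333
        // ...and multiplying back does not recover exactly 1.
        Console.WriteLine(third * 3.0M == 1.0M);   // False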

+4

Binary floating point simply cannot represent those values exactly. If you need exact decimal arithmetic, use decimal.

+3

The issue is the binary representation of the values.

0.1 has no finite representation in binary; it is the repeating binary fraction 0.000110011001100110011.... It can therefore only be stored approximately, and the same is true of values such as 0.000001.
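
As a quick check (an added illustration), you can print the double closest to 0.1 with full round-trip precision and inspect its raw bit pattern:

        using System;

        double d = 0.1;

        // "G17" prints enough digits to round-trip the stored double.
        Console.WriteLine(d.ToString("G17"));   // 0.10000000000000001

        // Raw IEEE 754 bits (Convert drops leading zero bits); the repeating
        // 0011 pattern of binary 0.1 is visible in the mantissa.
        long bits = BitConverter.DoubleToInt64Bits(d);
        Console.WriteLine(Convert.ToString(bits, 2));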

+3

.

+2
