Double Multiplication in C#

I have a problem with a simple multiplication that I cannot understand. I work with .NET Framework 4 and build for x86. I am executing the following code:

    double x = 348333.673899683;
    double y = 4521014.98461396;
    double aux = x * y;

The expected value for aux is 1574821759346.09949827752137468 (computed with a simple calculator). However, the value I get in aux is 1574821822464. Note that this is not a mere accuracy error; even the integer part has changed.

If I put a breakpoint on the multiplication and hover over the * operator, I see x * y = 1574821759346.0994, which is correct. But if I hover over the aux variable, I see aux = 1574821822464.

To clarify the last paragraph, you can see two pictures below:

[Screenshot: debugger tooltip over the * operator]

[Screenshot: debugger tooltip over the aux variable]

At first, I thought it might be because the build is x86, but after reading the following question I ruled that option out:

Double byte size in 32-bit and 64-bit OS

I cannot understand what is happening here. Any help would be appreciated.

--- EDIT MORE INFO ---

I am using VS2015. I added three more lines to debug it:

    log.Info(x);
    log.Info(y);
    log.Info(aux);

To show the logs, I use the log4net library. Output:

    23322 [8] INFO Art.Model.Scenarios (null) - 348333,673899683
    24745 [8] INFO Art.Model.Scenarios (null) - 4521014,98461396
    26274 [8] INFO Art.Model.Scenarios (null) - 1574821822464

So this is not a bug in the debugger. If I create a completely new project and solution, it works fine, but I do not understand why it does not work in this solution.

--- SECOND PICTURE ---

Thanks to the comments, I tried something new:

    double x = 348333.673899683;
    double y = 4521014.98461396;
    double aux = x * y;

    decimal xx = 348333.673899683m;
    decimal yy = 4521014.98461396m;
    decimal auxx = xx * yy;

    log.Info(x);
    log.Info(y);
    log.Info(aux);
    log.Info(xx);
    log.Info(yy);
    log.Info(auxx);

And the result:

    16129 [8] INFO Art.Model.Scenarios (null) - 348333,673899683
    16145 [8] INFO Art.Model.Scenarios (null) - 4521014,98461396
    16145 [8] INFO Art.Model.Scenarios (null) - 1574821822464
    16145 [8] INFO Art.Model.Scenarios (null) - 348333,673899683
    16145 [8] INFO Art.Model.Scenarios (null) - 4521014,98461396
    16145 [8] INFO Art.Model.Scenarios (null) - 1574821759346,0994982775213747

Thus, it works with decimal, but not with double. Can someone explain this? I do not understand why this is happening.

--- 2 ANSWERS ---

Most likely, if you use DirectX (the only cause I can find for your problem), this happens because every time a Device is created and/or reset, it switches the FPU to single precision, losing accuracy and causing double computations to be silently truncated. If I enter your data into an IEEE-754 floating-point converter, I get exactly the result you see: your value was computed as a double-precision number, but at some point it was truncated to a single-precision float, as you can see:

[Screenshot: IEEE-754 floating-point converter showing the value truncated to single precision]
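
As a quick cross-check (my own sketch, not part of the original answer), you can reproduce the exact value from the question by rounding the double product to single precision:

    using System;

    class FpuTruncationCheck
    {
        static void Main()
        {
            double x = 348333.673899683;
            double y = 4521014.98461396;

            // Full double-precision product, printed round-trip.
            Console.WriteLine((x * y).ToString("R"));   // 1574821759346.0994

            // Rounding the product to single precision (24-bit mantissa)
            // reproduces the exact value the question observed in aux.
            float truncated = (float)(x * y);
            Console.WriteLine((double)truncated);       // 1574821822464
        }
    }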

This problem can be solved by explicitly creating the Device object with the FpuPreserve flag.
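
For illustration, a minimal sketch assuming the SharpDX Direct3D9 wrapper (SlimDX and Managed DirectX expose an equivalent flag; in native Direct3D 9 it is D3DCREATE_FPU_PRESERVE); the windowHandle parameter is a placeholder for your own setup:

    using System;
    using SharpDX.Direct3D9;

    class FpuPreserveExample
    {
        // Sketch assuming SharpDX; windowHandle is a placeholder for the
        // HWND of your render window.
        static Device CreateDevice(IntPtr windowHandle)
        {
            var direct3D = new Direct3D();
            var presentParameters = new PresentParameters
            {
                Windowed = true,
                SwapEffect = SwapEffect.Discard
            };

            // FpuPreserve stops device creation from switching the x87 FPU
            // to single precision, so double arithmetic keeps its accuracy.
            return new Device(
                direct3D,
                0,                    // default adapter
                DeviceType.Hardware,
                windowHandle,
                CreateFlags.HardwareVertexProcessing | CreateFlags.FpuPreserve,
                presentParameters);
        }
    }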

I also had this problem. At first I thought it was an incorrect cast or something similar, until after a long trace it became clear that the values were being truncated right after I constructed the DirectX Device object.

--- SECOND ANSWER ---

This behavior follows from how the data types are defined.

double: "Accuracy. When you work with floating-point numbers, remember that they do not always have an exact representation in memory." https://msdn.microsoft.com/en-us/library/x99xtshc.aspx

decimal: "Compared to floating-point types, the decimal type has greater precision and a smaller range, which makes it suitable for financial and monetary calculations." https://msdn.microsoft.com/en-us/library/364x0z75.aspx

Floating-point types are stored as binary fractions, which is why they cannot represent a value exactly unless it is a binary fraction. https://msdn.microsoft.com/en-us/ae382yt8
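
To illustrate that point (my own example, not from the linked pages):

    using System;

    class BinaryFractionDemo
    {
        static void Main()
        {
            // 0.1 has no finite binary expansion, so double arithmetic
            // accumulates tiny representation errors...
            Console.WriteLine(0.1 + 0.2 == 0.3);    // False
            // ...while 0.5 and 0.25 are exact binary fractions.
            Console.WriteLine(0.5 + 0.25 == 0.75);  // True
            // decimal stores base-10 digits, so 0.1m is represented exactly.
            Console.WriteLine(0.1m + 0.2m == 0.3m); // True
        }
    }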

