C# float error? 0.1 - 0.1 = 1.490116E-08

What's happening?! Subtraction works fine until I get to 0.1 - 0.1. I'm in Visual C# 2008, using the nonoba.com API.

Console.WriteLine("hit! " + Users[targetNum].character.health + " : " + player.character.profile.attackPower); Users[targetNum].character.health -= player.character.profile.attackPower; Console.WriteLine("health! " + Users[targetNum].character.health); 

Output:

 hit! 0.1 : 0.1
 health! 1.490116E-08

Thanks to everyone - I could use the decimal type, since I usually add/subtract nice round numbers. For now, I'm just using:

 if (Users[targetNum].character.health <= 0.00001) 
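
For anyone else hitting this, here is a minimal, self-contained sketch of the same idea (the IsDead helper and the Epsilon value are my own illustration, not from the game code): repeatedly subtracting 0.1f leaves a tiny residue instead of an exact zero, and a small tolerance absorbs it:

 using System;

 class HealthCheck
 {
     // Tolerance for "effectively zero" health; tune it to the smallest damage step you use.
     const float Epsilon = 0.0001f;

     static bool IsDead(float health)
     {
         return health <= Epsilon;
     }

     static void Main()
     {
         float health = 1.0f;
         for (int i = 0; i < 10; i++)
             health -= 0.1f;                  // each subtraction can pick up a little rounding error

         Console.WriteLine(health);           // a tiny value near zero, not exactly 0
         Console.WriteLine(health == 0f);     // False
         Console.WriteLine(IsDead(health));   // True, thanks to the tolerance
     }
 }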

By the way, I knew this wouldn't be a "bug" in C# - I assumed it was either a bug in my own code or some misunderstanding on my part about what was going on.

Having done all the recommended reading, I'm going to put my confusion down to the fact that I normally use ActionScript's Number type, which may be decimal rather than binary floating point - in any case, it has never given me this kind of result.

floating-point, c#
5 answers

This seems pretty normal for floating-point math... you always need to compare against a small delta to allow for subtle rounding differences. Depending on the scenario, decimal might be what you want.

Basically, unless you can be sure it's exactly the same 0.1 in both cases (and it sounds like you can't here), you're unlikely to get exactly zero; in general you'll get something very close to zero. With decimal you're more likely to get the result you intuitively expect.
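
To make the "not exactly the same 0.1" point concrete, here is a small sketch of my own (not from the answer): one tenth comes straight from the literal, the other arrives through arithmetic, and the two floats differ by a few hundred-millionths, while the decimal versions match exactly:

 using System;

 class SameTenthDemo
 {
     static void Main()
     {
         float literalTenth = 0.1f;                        // the nearest float to 0.1
         float computedTenth = 1.0f - 0.3f - 0.3f - 0.3f;  // "should" also be 0.1

         Console.WriteLine(literalTenth == computedTenth); // False
         Console.WriteLine(literalTenth - computedTenth);  // a tiny non-zero value (roughly 3.7E-08)

         decimal literalDec = 0.1m;
         decimal computedDec = 1.0m - 0.3m - 0.3m - 0.3m;  // decimal keeps tenths exactly

         Console.WriteLine(literalDec == computedDec);     // True
         Console.WriteLine(literalDec - computedDec);      // 0.0
     }
 }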

See also Jon Skeet's pages:


You obviously need to read "What Every Computer Scientist Should Know About Floating-Point Arithmetic".

In situations like this, instead of assuming I've found a bug, I usually start by checking which of my own assumptions is wrong.


If you only ever add and subtract "nice round" numbers - that is, tenths or hundredths - you can track your hit and health values as whole numbers of tenths. The analogy is a financial program that tracks money in whole cents rather than in floating-point dollars. Using integers avoids the floating-point problems entirely.
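
A rough sketch of that approach (the names are illustrative, not from the question's code): store health and damage as ints measured in tenths of a point, and only divide when you need a fractional value for display:

 using System;

 class IntegerTenthsDemo
 {
     static void Main()
     {
         int healthTenths = 10;      // 1.0 of health, stored as 10 tenths
         int attackTenths = 1;       // 0.1 of damage, stored as 1 tenth

         for (int i = 0; i < 10; i++)
             healthTenths -= attackTenths;        // integer subtraction is exact

         Console.WriteLine(healthTenths);         // exactly 0 - no rounding error possible
         Console.WriteLine(healthTenths / 10.0);  // 0 when converted for display
     }
 }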


Floating-point math is always approximate, in any language, because that's how processors work. If you need your answers to be exact - for example, because you're dealing with money - then you shouldn't use floating point.
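
For instance (a small sketch of my own, not tied to the question's code), adding ten payments of 0.10 drifts slightly in double but stays exact in decimal:

 using System;

 class MoneyDemo
 {
     static void Main()
     {
         double floatingTotal = 0.0;
         decimal exactTotal = 0.0m;

         for (int i = 0; i < 10; i++)
         {
             floatingTotal += 0.10;    // ten payments of ten cents in binary floating point
             exactTotal += 0.10m;      // the same payments in decimal
         }

         Console.WriteLine(floatingTotal == 1.0);   // False: the double total has drifted slightly
         Console.WriteLine(exactTotal == 1.00m);    // True: decimal represents tenths exactly
     }
 }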


A bit off-topic, but here's an interesting read that describes why cos(x) != cos(y) can be true even when x == y:

http://www.parashift.com/c++-faq-lite/newbie.html#faq-29.18

