No. Suppose a = c, a very large number, and b is a very small number. It is possible that a - b has an exact representation less than a, while a + b is so close to a (just above it) that it is still most accurately represented as a itself.
Here is an example:
double a = 1L << 53;
double b = 1;
double c = a;
Console.WriteLine(a - b < c);
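That prints True, because 2^53 - 1 is exactly representable as a double. Here is a small sketch of the other half of the claim with the same values (the a + b check is an illustration added here, not part of the original comparison):

double a = 1L << 53;           // 2^53: above this, consecutive doubles are 2 apart
double b = 1;
Console.WriteLine(a + b == a); // True: 2^53 + 1 is not representable and rounds back to 2^53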
EDIT:
Here is another example that matches your edited question:
double a = 1.0;
double b = 1.0 / (1L << 53);
double c = a;
Console.WriteLine(a - b < c);
In other words, when we subtract a very small number from 1, we get a result less than 1, but when we add the same number to 1, we simply get 1 back, because of the limited precision of double.
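A quick sketch demonstrating that rounding directly (the a + b comparison is added here for illustration):

double a = 1.0;
double b = 1.0 / (1L << 53);   // 2^-53, the gap between 1.0 and the next double below it
Console.WriteLine(a - b < a);  // True: 1 - 2^-53 is exactly representable
Console.WriteLine(a + b == a); // True: 1 + 2^-53 rounds back to 1.0 under round-half-to-even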
Jon Skeet