Are there compilers that optimize floating point operations for accuracy (as opposed to speed)?

We know that compilers keep getting better at optimizing our code and making it run faster, but my question is: are there compilers that can optimize floating-point operations for greater accuracy instead?

For example, a basic rule of thumb is to perform multiplications before additions. The reasoning is that multiplication and division introduce less inaccuracy of their own than addition and subtraction do, but they can magnify any inaccuracy that addition and subtraction have already introduced, so in many cases it seems better to multiply first.

So a floating point operation like

y = x*(a + b); // faster but less accurate

Should be changed to

y = x*a + x*b; // slower but more accurate

Are there any compilers that will optimize for floating-point accuracy at the expense of speed, as I showed above? Or do compilers simply not concern themselves with the precision of floating-point operations at all?

thanks

Update: the selected answer gives a very good example where this kind of "optimization" makes the result worse, which shows why it would be impossible for the compiler to know which way of evaluating y is more accurate. Thanks for the counterexample.

+5
3 answers

No. x*(a + b) is not, in general, less accurate than x*a + x*b. If anything, it is often more accurate, because it performs fewer rounded operations (one multiplication and one addition instead of two multiplications and one addition).

Which of the two forms is more accurate depends on the actual values of x, a and b that your program works with.

Given that, how is a compiler supposed to know that x*(a+b) is the less accurate way to evaluate y and rewrite it? It can't, because it doesn't know what data the program will run on.

So this kind of choice cannot be made by the compiler; it has to be made by the programmer, who knows the data.

Here is a concrete example where the distributed form is dramatically less accurate:

x = 3.1415926535897931
a = 1.0e15
b = -(1.0e15 - 1.0)

Evaluating these in IEEE 754 double precision gives:

x*(a + b) = 3.1415926535897931

x*a + x*b = 3.0
+10

"" , IEEE 754. , , FP , . ( C ), , .

Optimizations that trade accuracy for speed have to be requested explicitly, for example with GCC's -funsafe-math-optimizations and -ffinite-math-only flags. In other words, the default behavior is already the accuracy-preserving one.

+2

No, not that I know of. Which evaluation order is more accurate depends on the data being processed; the compiler cannot know that, so it cannot make the choice for you.

If you, as a programmer, have some information about the ranges of the numbers you manipulate, you can use parentheses, temporary variables, and similar constructs to strongly hint to the compiler how you want the expression evaluated.

0
