We know that compilers keep getting better at optimizing our code and speeding it up, but my question is whether there are compilers that can optimize floating-point operations to provide greater accuracy.
For example, a common rule of thumb is to perform multiplications before additions. The reasoning is that multiplication and division of floating-point numbers introduce less relative error than addition and subtraction (which can suffer from cancellation), but they can amplify the inaccuracies already introduced by addition and subtraction, so in many cases it seems better to multiply first.
So a floating-point expression like
y = x*(a + b);
should be rewritten as
y = x*a + x*b;
Are there any compilers that will optimize for floating-point accuracy at the expense of speed, as shown above? Or is it rather that compilers generally do not consider the precision of floating-point operations at all?
Thanks.
Update: the accepted answer gives a very good example of a case where this kind of optimization would not work, so it would be impossible for the compiler to know in advance which form is more accurate. Thanks for the counterexample.