As you know, IEEE floating point numbers can exactly represent all integers and all multiples of negative powers of two, such as 1/2 or 3/4, provided those values fit within the range and precision of the floating point type.
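For instance, here is a small illustrative check (just a sketch, using the standard %a hex-float format) showing that 0.75 is stored as an exact binary value:

#include <stdio.h>

int main(void) {
    double d = 0.75;
    /* %a prints the exact binary (hex-float) representation;
       an exactly stored 3/4 shows up as 0x1.8p-1 */
    printf("%a\n", d);
    return 0;
}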
However, do floating point parsers, as a rule, guarantee exact results when parsing the decimal representations of such numbers?
For example, if I use 0.75 as a double literal in a C program, does the compiler guarantee that the compiled code contains the exact representation of 3/4, or is there a risk that it instead produces the sum of some inexact representation of 0.7 and some inexact representation of 0.05?
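To make the concern concrete, this is the kind of comparison I have in mind (a minimal sketch; the names are only for illustration):

#include <stdio.h>

int main(void) {
    double a = 0.75;       /* parsed from a decimal literal */
    double b = 3.0 / 4.0;  /* exact: operands and result are all representable */
    /* The question is whether this is guaranteed to print "equal" */
    printf(a == b ? "equal\n" : "not equal\n");
    return 0;
}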
Or, if I use 3e4 as a double literal, could it end up multiplying 3 by some inexact representation of 2^(4 * ln(10) / ln(2)), or doing some similar math?
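In other words (again just a sketch), I am wondering whether anything guarantees that these two ways of writing the same integer compare equal:

#include <stdio.h>

int main(void) {
    double x = 3e4;      /* scientific-notation literal */
    double y = 30000.0;  /* plain decimal literal for the same integer */
    /* If the parser computed 3 * 10^4 in floating point with a rounding
       error somewhere, these could in principle differ */
    printf(x == y ? "equal\n" : "not equal\n");
    return 0;
}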
Are there any standards that FP parsers typically have to follow on this point, or is it entirely implementation-defined? If the latter, does anyone know how important real-world implementations like GCC or glibc actually behave?
Basically, I am asking out of curiosity, not because I want to rely on this behavior; but it can sometimes be quite convenient to know that FP equality comparisons are guaranteed to work when the values can only have come from literals.
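A purely hypothetical example of the situation I mean, where the same literal appears in two places and it would be handy to know the comparison must succeed:

#include <stdio.h>

/* Hypothetical sentinel value, defined as a literal in one place... */
#define SENTINEL 0.25

static double get_value(void) {
    return 0.25;  /* ...and written as the same literal elsewhere */
}

int main(void) {
    /* It would be convenient to know this equality is guaranteed to hold */
    if (get_value() == SENTINEL)
        printf("matched the sentinel\n");
    return 0;
}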