AFAIK, on "regular" PCs (x86 with an x87-style math coprocessor) the difference in speed does not matter, because the calculations are performed internally at 80-bit precision anyway.
Floats become relevant when you have large arrays of floating point numbers to handle (scientific calculations or similar), where the smaller data type is convenient for using less memory and for faster reads from RAM/disk.
It can also be useful to use floats instead of doubles on machines that lack a floating point unit (for example, most microcontrollers), where all floating point arithmetic is performed in software via code emitted by the compiler; in that case there can be a speed gain from working with floats (and in such environments every bit of memory often counts, too).
On PCs, IMO, you can simply use double in "normal" contexts; just try to avoid mixing data types (double, float, int, ...) in a single expression, to avoid unnecessary costly conversions. In any case, with literals the compiler should be smart enough to perform the conversion at compile time.
Matteo Italia Sep 12 '10 at 10:29