I am curious why IEEE calls a 32-bit floating-point number "single precision." Is "single" just a standardization term, or does it actually refer to one of "something"?
Is this just a standardized level of precision, as in precision level 1 (single), precision level 2 (double), and so on? I've searched and found a lot about the history of floating-point numbers, but nothing that answers this question.
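For reference, here is a minimal C snippet showing what I mean by the two sizes. It assumes that `float` and `double` map to the IEEE 754 32-bit and 64-bit formats, which is the case on most common platforms but is not strictly required by the C standard:

```c
#include <stdio.h>

int main(void) {
    /* On most platforms, C's float is IEEE 754 binary32 ("single precision")
       and double is IEEE 754 binary64 ("double precision"). */
    printf("float:  %zu bytes (%zu bits)\n", sizeof(float), sizeof(float) * 8);
    printf("double: %zu bytes (%zu bits)\n", sizeof(double), sizeof(double) * 8);
    return 0;
}
```

On such platforms this prints 4 bytes (32 bits) for `float` and 8 bytes (64 bits) for `double`, which is exactly the single vs. double naming I am asking about.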