Why is 32-bit floating point called "single precision"?

I am curious why IEEE calls a 32-bit floating-point number "single precision". Is it just a standardization term, or does "single" actually refer to one "something"?

Is this just a standardized level, as in accuracy level 1 (single), accuracy level 2 (double), and so on? I have searched around and found plenty about the history of floating-point numbers, but nothing that answers my question.

+8
Tags: double, floating-point
4 answers

On a machine I was working on at the time, a float occupied one 36-bit register and a double took two 36-bit registers. The hardware had separate instructions for operating on the one-register and two-register versions of the numbers. I do not know for certain that this is where the terminology came from, but it is a plausible origin: "single" precision fits in a single machine word, "double" precision in two.

+7

In addition to matching the hardware on most systems, the 32-bit format was used to implement the Fortran REAL type and the 64-bit format to implement the Fortran DOUBLE PRECISION type.

+1

I think it simply refers to the number of bits used to represent a floating-point number: single precision uses 32 bits and double precision uses 64 bits, i.e. double the number of bits (a quick check is sketched below).
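
As a quick illustration (a minimal sketch, assuming a typical platform where CHAR_BIT is 8 and float/double are the IEEE 754 binary32/binary64 formats), a small C program can print the storage widths:

```c
#include <stdio.h>
#include <limits.h>

/* Prints the storage width of float and double in bits.
 * On common IEEE 754 platforms this shows 32 and 64. */
int main(void) {
    printf("float:  %zu bits\n", sizeof(float) * CHAR_BIT);
    printf("double: %zu bits\n", sizeof(double) * CHAR_BIT);
    return 0;
}
```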

0

The terminology "double" is not entirely correct, but close enough.

A 64-bit float uses 52 bits for the fraction, compared with 23 bits in a 32-bit float, so the precision is not literally doubled; it is the total number of bits that is doubled (see the sketch below).
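
To make the "not literally double" point concrete, here is a small C sketch (again assuming IEEE 754 binary32/binary64) using the standard <float.h> macros. Note that FLT_MANT_DIG and DBL_MANT_DIG count the implicit leading bit, so they report 24 and 53 rather than the 23 and 52 stored fraction bits:

```c
#include <stdio.h>
#include <float.h>

/* Significand width and decimal precision reported by <float.h>.
 * On IEEE 754 platforms: 24 vs 53 significand bits (23 vs 52 stored),
 * roughly 6 vs 15 reliable decimal digits. */
int main(void) {
    printf("float  significand bits: %d\n", FLT_MANT_DIG);
    printf("double significand bits: %d\n", DBL_MANT_DIG);
    printf("float  decimal digits:   %d\n", FLT_DIG);
    printf("double decimal digits:   %d\n", DBL_DIG);
    return 0;
}
```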

The answers to this question are very interesting; you should give them a read.

0
