Integers are for counting; floating-point numbers are for calculating. We have them in mathematics (where they are called integers and real numbers, respectively), so we need them in algorithms and programs too. End of story.
Of course, the range of most floating-point implementations is larger than the range of most integer implementations, but I could invent a language tomorrow that allows 512-bit integers but only 16-bit floating-point numbers (1 sign bit, 3 exponent bits, 12 significand bits). Integers would still not be closed under division, and floating-point numbers would still not be used for counting, because although there is a successor function on fp numbers, there is no such function on the real numbers, and we like to pretend that fp numbers are a close implementation of the reals.
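To make that concrete, here is a minimal C sketch (my own illustration, not anything from a particular language or standard) showing why fp is a poor counter and what its "successor" actually looks like: above 2^24 a 32-bit float can no longer distinguish n from n+1, the next representable value is found with nextafterf rather than +1, and integer division silently discards the remainder.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Counting with a 32-bit float breaks down once the gap between
       adjacent representable values exceeds 1: above 2^24, f + 1.0f == f. */
    float f = 16777216.0f;                       /* 2^24 */
    printf("%.1f + 1 = %.1f\n", f, f + 1.0f);    /* prints 16777216.0 again */

    /* There is a successor on fp numbers (the next representable value),
       but it is not n+1, and no such function exists on the reals. */
    printf("next after %.1f is %.1f\n", f, nextafterf(f, INFINITY));

    /* Integers are not closed under division: 7 / 2 truncates to 3. */
    printf("7 / 2 = %d\n", 7 / 2);
    return 0;
}
```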
And no, integers on the processor are not that simple either: the processor performs its fundamental logical operations on bits. And if processor X1 performs integer arithmetic faster than fp arithmetic, a trawl through the memory banks will turn up a counterexample.