The answer to the first question is covered by the duplicate.
The answer to the second question:
Neither floats nor doubles have infinite precision. You can loosely think of a double as having about 16 significant decimal digits. Anything beyond that is lost to rounding and truncation errors.
So 1.0e0 + 1e-38 lacks the precision to end up as anything other than 1.0e0, because the extra digits are truncated away.
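For example, here is a minimal C sketch (the same thing happens in any language with IEEE-754 floats) showing that the tiny addend simply disappears:

    #include <stdio.h>

    int main(void) {
        /* 1e-38 is about 38 orders of magnitude below 1.0, far past the
           ~7 (float) / ~16 (double) significant decimal digits available,
           so the tiny addend is rounded away and the sum stays exactly 1.0. */
        float  f = 1.0f + 1e-38f;
        double d = 1.0  + 1e-38;

        printf("float : %d\n", f == 1.0f);   /* prints 1 (true) */
        printf("double: %d\n", d == 1.0);    /* prints 1 (true) */
        return 0;
    }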
This, like the rest of the answer, requires an understanding of how IEEE-format floating-point numbers are actually added in binary. The key idea is that the significand (the part that is neither sign nor exponent) of the operand with the smaller exponent is shifted inside the processor's IEEE-754 floating-point unit (80 bits wide on Intel's x87, which means there is always truncation at the end of the calculation) so that both operands line up on the same exponent. In decimal, it looks like this:
Digit: 1 234567890123456
Value: 1.0000000000000000000000000000 ... 0000
Value: 0.0000000000000000000000000000 ... 0001
After the add is processed, the result is effectively:
Digit: 1 234567890123456
Value: 1.0000000000000000000000000000 ... 0001
So keep in mind that the value gets truncated around the 16th significant decimal digit for a double (around the 7th for a 32-bit float); in binary that corresponds to 52 stored fraction bits in a 64-bit double and 23 in a 32-bit float. This ignores the very important fact that the leading 1 is implied rather than stored (relative to the exponent), which effectively packs 24 bits of precision into 23 stored bits for 32-bit and 53 into 52 for 64-bit. That is a very interesting point in its own right, but you should read a more detailed example, such as here, for those details.
Truncated:
Digit: 1 234567890123456
Value: 1.000000000000000
Note that the really small second value has been truncated away entirely, leaving just 1.0.
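You can probe exactly where that truncation sets in using the machine-epsilon constants from <float.h>; this is just an illustrative sketch, not code from the question:

    #include <stdio.h>
    #include <float.h>

    int main(void) {
        /* DBL_EPSILON is the gap between 1.0 and the next representable
           double, i.e. the smallest amount that survives being added to 1.0. */
        printf("DBL_EPSILON          = %g\n", DBL_EPSILON);
        printf("1.0 + DBL_EPSILON   != 1.0 : %d\n", 1.0 + DBL_EPSILON   != 1.0); /* 1 */
        printf("1.0 + DBL_EPSILON/2 != 1.0 : %d\n", 1.0 + DBL_EPSILON/2 != 1.0); /* 0 */
        return 0;
    }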
Here is a good page I use whenever I need to check how a number is actually represented in memory: Decimal to 32-bit IEEE-754 format. The site has links for the 64-bit version, as well as converters in the reverse direction.
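If you would rather inspect the bits programmatically instead of through that site, a small sketch like this (plain C, assuming the usual 32-bit IEEE-754 float layout) prints the raw in-memory encoding:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Dump the raw 32-bit IEEE-754 encoding of a float: 1 sign bit,
       8 exponent bits, 23 stored fraction bits (the leading 1 is implied). */
    static void dump_float(float f) {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);   /* copy the bytes; no numeric conversion */
        printf("%-12g -> 0x%08X  sign=%u  exp=%3u  frac=0x%06X\n",
               f,
               (unsigned)bits,
               (unsigned)(bits >> 31),
               (unsigned)((bits >> 23) & 0xFF),
               (unsigned)(bits & 0x7FFFFF));
    }

    int main(void) {
        dump_float(1.0f);    /* 0x3F800000: biased exponent 127, fraction 0   */
        dump_float(1e-38f);  /* subnormal: exponent field 0, nonzero fraction */
        return 0;
    }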