Why doesn't printf() output this integer as a floating point number?

I have this code:

    #include <stdio.h>

    int main()
    {
        int i = 12345;
        printf("%f", i);
        return 0;
    }

printf() prints 0.000000. Shouldn't printf() interpret the bits contained in i as a floating point number?

+1
c windows printf stdio
Dec 18 '14 at 9:44
8 answers

This is technically undefined behavior, so anything can happen, including what @Art describes in his answer.

So this answer explains what happens if you try to print 12345 as a float, assuming printf really sees the correct 32-bit value.

Let's analyze the binary representation of the number you are trying to print.

Using http://www.h-schmidt.net/FloatConverter/IEEE754.html, we can see that the decimal number 12345 has the following 32-bit representation:

    decimal      12345
    hexadecimal  0x00003039

Reinterpreted bit for bit as a 32-bit IEEE-754 floating point value, this is:

 float 1.7299E-41 

Let's try printing it:

    #include <stdio.h>

    int main()
    {
        printf("%f\n", 1.7299E-41);
        return 0;
    }

This prints:

 0.000000 

Now read the printf man page as shown here: http://linux.die.net/man/3/printf

f, F

The double argument is rounded and converted to decimal notation in the style [-]ddd.ddd, where the number of digits after the decimal-point character is equal to the precision specification. If the precision is missing, it is taken as 6; if the precision is explicitly zero, no decimal-point character appears. If a decimal point appears, at least one digit appears before it.

The value 1.7299E-41 cannot be displayed with this conversion at the default precision of 6 digits, so 0.000000 is in fact the correct output.
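If you want to reproduce the reinterpretation in code rather than with the online converter, here is a minimal sketch. It assumes int and float are both 32 bits and that float is IEEE-754, and uses memcpy rather than a pointer cast to keep the byte copy well-defined:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int i = 12345;
        float f;

        /* Copy the raw bytes of the int into a float.
           Assumes both types are 32 bits wide and IEEE-754 floats. */
        memcpy(&f, &i, sizeof f);

        printf("%e\n", f);  /* prints 1.729903e-41 on such a platform */
        return 0;
    }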

+4
Dec 18 '14 at 10:01

Technically, this behavior is undefined.

You can use a union for this and print it with %e instead of %f, because the value is a very small number:

    #include <stdio.h>

    union int2float {
        int a;
        float b;
    };

    int main()
    {
        union int2float tmp;
        tmp.a = 12345;
        printf("%e\n", tmp.b);
        return 0;
    }

The result is 1.729903e-41.

With %.100f you get 0.0000000000000000000000000000000000000000172990295420898667405534417057140146406548336724655872023410 as output.

+4
Dec 18 '14 at 10:04

The most likely reason printf does not interpret the bits in i as a floating point number is that printf never sees i at all. I suspect you are on x86_64 or a similar platform where arguments are passed to functions in registers. Normally printf would interpret your int according to whatever format specifier you gave it, but floating point arguments are handled differently on x86_64 and are placed in different registers. So the reason you get 0 as output is that printf fetches its value from the first floating point register after the format string instead of the first general-purpose register, and prints that. Since you did not use any floating point registers before your call to printf, they are most likely still zero from program startup.

Here is something you can try:

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        printf("%f\n", 17.42);
        printf("%f\n", 0x87654321);
        return 0;
    }

I get this output:

    $ ./foo
    17.420000
    17.420000

Having said all this, the only correct answer is: this is undefined behavior, don't do it. But it's interesting to see what happens under the hood.

If you want to dive into this and you work on Linux/MacOS/*BSD, section 3.2.3 of the x86-64 System V ABI document describes how arguments (including those to varargs functions like printf) are passed. Windows does things a little differently, but in this case the result on Windows will be exactly the same: printf on Linux/MacOS/*BSD expects the argument to be printed in %xmm0 (%xmm1 on Windows), while your call passes it in %rsi (%rdx on Windows).

Of course, if my guess about x86_64 is wrong, ignore everything I said and look at the answer from @SirDarius, because it applies to many (especially older) architectures. If you use alpha, sparc, powerpc, arm, etc., you are on your own: call printf once with an int and once with a float, look at the compiled code, and see how the arguments are passed.
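As a rough illustration of that experiment, you could compile a file like the one below with gcc -S and read the assembly. The register names in the comments are what I would expect from gcc on an x86-64 System V platform; they are an assumption about the generated code, not something the program prints:

    #include <stdio.h>

    int main(void)
    {
        /* double argument: expected to be loaded into %xmm0, with %al set
           to 1 to tell the variadic callee that one vector register is used */
        printf("%f\n", 17.42);

        /* int argument: expected to land in %esi (the second integer
           register, after %rdi holding the format string), with %al set to 0;
           %xmm0 is left untouched, which is why this undefined call tends to
           print whatever value was last left in it */
        printf("%f\n", 12345);

        return 0;
    }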

+2
Dec 18 '14 at 10:05

If you want to print the int as a float, cast it:

 printf("%f", (float)i); 
+1
Dec 18 '14 at 9:49

The only way printf() knows the data types (and count) of the arguments after the format string is by reading the format string itself. The in-memory representation of an integer is different from that of a floating point number. You are passing printf() data in integer format, but it reads it as floating point format. That is why you get garbage in the console. You can fix this by casting:

 printf("%f", (float) i); 
+1
Dec 18 '14 at 9:50

When you use the %f format specifier, you must pass a float value to be printed. But you declared i as an integer and tried to print it with %f, so it prints some arbitrary value. Since it is an integer, you need to use the %d specifier. With %f, printf looks for a float argument, but there is no float argument, so it prints some arbitrary value.

You can also cast the integer value when printing: printf("%f", (float)i);

0
Dec 18 '14 at 9:50

To print i as a float, you need to do a type cast.

You can use

 printf("%f", (float)i); 

Example.

    float myFloat;
    int myInt;

    myFloat = (float)myInt;  // type cast
0
Dec 18 '14 at 9:51

Try this:

    int i = 12345;
    printf("%.2f", i / 1.0f);
0
Dec 18 '14 at 9:52


