When a variadic function is called in C, integer arguments are promoted to int and floating-point arguments are promoted to double.
Since the prototype does not specify types for optional arguments, in a call to a variadic function the default argument promotions are performed on the optional argument values. This means that objects of type char or short int (whether signed or not) are promoted to either int or unsigned int, as appropriate, and that objects of type float are promoted to type double. So, if the caller passes a char as an optional argument, it is promoted to an int, and the function can access it with va_arg(ap, int).
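To check that I understood, here is a minimal sketch of my own (the function name print_chars is just my example) of a variadic function that receives char arguments and, because of the promotions, reads them back as int:

#include <stdarg.h>
#include <stdio.h>

/* Reads 'count' optional arguments that the caller passed as char;
   each one arrives promoted to int, so va_arg must use int. */
static void print_chars(int count, ...)
{
    va_list ap;
    va_start(ap, count);
    for (int i = 0; i < count; i++) {
        int c = va_arg(ap, int); /* va_arg(ap, char) would be wrong */
        printf("%c\n", c);
    }
    va_end(ap);
}

int main(void)
{
    char a = 'a', b = 'b';
    print_chars(2, a, b); /* a and b undergo the default promotions */
    return 0;
}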
An int must be 4 bytes on a 32-bit machine and 8 bytes on a 64-bit machine, right?
So I am wondering what happens when I pass a long long int variable, for example to printf with the %lld format.
And likewise, what happens when I pass a long double variable to printf with the %Lf format (on either a 32-bit or a 64-bit machine)?
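To make the question concrete: I assume (but this is exactly what I am unsure about) that these wider types are passed through unchanged and read back with their own type, something like this sketch:

#include <stdarg.h>
#include <stdio.h>

/* My assumption: long long and long double are wider than int/double,
   so no default promotion applies and va_arg uses their own types. */
static void show_wide(int dummy, ...)
{
    va_list ap;
    va_start(ap, dummy);
    long long ll = va_arg(ap, long long);
    long double ld = va_arg(ap, long double);
    printf("%lld %Lf\n", ll, ld);
    va_end(ap);
}

int main(void)
{
    show_wide(0, 1LL << 33, 1.5L);
    return 0;
}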
EDIT: On a 32-bit machine, I tried this:
#include <stdio.h>

int main(void)
{
    printf("sizeof(int) %zu\n", sizeof(int));
    printf("sizeof(long int) %zu\n", sizeof(long int));
    printf("sizeof(long long int) %zu\n", sizeof(long long int));
    printf("%lld\n", 1LL << 33);
    printf("sizeof(float) %zu\n", sizeof(float));
    printf("sizeof(double) %zu\n", sizeof(double));
    printf("sizeof(long double) %zu\n", sizeof(long double));
    return 0;
}
Result:
sizeof(int) 4
sizeof(long int) 4
sizeof(long long int) 8
8589934592
sizeof(float) 4
sizeof(double) 8
sizeof(long double) 12
This makes me think that not all arguments are promoted to int; otherwise I would have printed 0 instead of 8589934592, since 2^33 truncated to 32 bits is 0.
Perhaps only arguments smaller than int are promoted to int, and something similar could hold for the floating-point types.
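If that hypothesis is right, a float passed as an optional argument should arrive as a double. A small test sketch of mine (show_float is just an illustrative name):

#include <stdarg.h>
#include <stdio.h>

static void show_float(int dummy, ...)
{
    va_list ap;
    va_start(ap, dummy);
    double d = va_arg(ap, double); /* the float argument should arrive as double */
    printf("%f\n", d);
    va_end(ap);
}

int main(void)
{
    float f = 1.25f;
    show_float(0, f); /* f is promoted to double at the call, as I understand it */
    return 0;
}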
EDIT: On a 64-bit machine, I ran this:
#include <stdio.h>

int main(void)
{
    printf("sizeof(int) %zu\n", sizeof(int));
    printf("sizeof(long) %zu\n", sizeof(long));
    printf("sizeof(long long) %zu\n", sizeof(long long));
    return 0;
}
and got:
sizeof(int) 4
sizeof(long) 8
sizeof(long long) 8
If I understand the standard correctly, only char and short are promoted to int. I am curious what happens on smaller architectures, such as 16-bit or 8-bit MCUs. I know the size of an int is architecture dependent, but could sizeof(int) be 1 on an 8-bit architecture? In that case, promoting a short to an int would not be possible without losing some bits.
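If anyone wants to check a particular target, I think something like this sketch, using the standard CHAR_BIT and INT_MAX macros from limits.h, would show the actual widths:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("CHAR_BIT %d\n", CHAR_BIT);
    printf("sizeof(short) %zu\n", sizeof(short));
    printf("sizeof(int) %zu\n", sizeof(int));
    printf("INT_MAX %d\n", INT_MAX);
    return 0;
}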