The canonical, and probably most portable, way is simply to ask snprintf() how much space would be needed:
```c
char sbuf[2];
int ndigits;

ndigits = snprintf(sbuf, (size_t) 1, "%lld", (long long) INT_MIN);
```
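Here is a minimal compilable sketch of that technique (the main() driver and the printed message are just illustrative):

```c
#include <limits.h>
#include <stdio.h>

int
main(void)
{
	char sbuf[2];
	int ndigits;

	/* snprintf() returns the length the fully formatted string would
	 * have had, so a one-byte limit is enough to just measure it */
	ndigits = snprintf(sbuf, (size_t) 1, "%lld", (long long) INT_MIN);
	printf("INT_MIN needs %d characters (plus 1 for the NUL)\n", ndigits);

	return 0;
}
```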
Or, perhaps slightly less portably, using intmax_t and %jd:
```c
ndigits = snprintf(sbuf, (size_t) 1, "%jd", (intmax_t) INT_MIN);
```
One might consider this too expensive to do at runtime, but it works for any value, not just the MIN/MAX values, of any integer type.
Of course you could instead directly calculate the number of digits needed to express a given integer in base 10 notation, with a simple recursive function:
```c
unsigned int
numCharsB10(intmax_t n)
{
	if (n < 0)
		return numCharsB10((n == INTMAX_MIN) ? INTMAX_MAX : -n) + 1;
	if (n < 10)
		return 1;
	return 1 + numCharsB10(n / 10);
}
```
but this of course also costs CPU time at runtime, even when inlined, though perhaps a little less than snprintf() does.
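For instance, dropping the function into a small test driver (the sample values are just illustrative) shows that it handles the sign and the INTMAX_MIN edge case:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

unsigned int
numCharsB10(intmax_t n)
{
	if (n < 0)
		return numCharsB10((n == INTMAX_MIN) ? INTMAX_MAX : -n) + 1;
	if (n < 10)
		return 1;
	return 1 + numCharsB10(n / 10);
}

int
main(void)
{
	intmax_t vals[] = { 0, 7, -7, 9999, INTMAX_MAX, INTMAX_MIN };
	size_t i;

	/* the count includes the '-' sign for negative values */
	for (i = 0; i < sizeof(vals) / sizeof(vals[0]); i++)
		printf("%" PRIdMAX " needs %u chars\n",
		       vals[i], numCharsB10(vals[i]));

	return 0;
}
```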
@R.'s answer above is, though more or less wrong, on the right track. Here is the correct derivation of some very well and widely tested, highly portable macros that implement the calculation at compile time using sizeof(), using @R.'s slightly corrected initial wording as a starting point:
First, we can easily see (or show) that sizeof(int) is the log base 2 of UINT_MAX divided by the number of bits represented by one unit of sizeof() (8, a.k.a. CHAR_BIT):

sizeof(int) == log2(UINT_MAX) / 8

because UINT_MAX is of course just 2^(sizeof(int) * 8) - 1 (so log2(UINT_MAX) is a hair under sizeof(int) * 8), and log2(x) is the inverse of 2^x.
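As a quick numeric sanity check of that relationship, a small sketch (link with -lm):

```c
#include <limits.h>
#include <math.h>
#include <stdio.h>

int
main(void)
{
	/* log2(UINT_MAX) comes out just a hair under sizeof(int) * 8 */
	printf("log2(UINT_MAX)  = %f\n", log2((double) UINT_MAX));
	printf("sizeof(int) * 8 = %zu\n", sizeof(int) * 8);

	return 0;
}
```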
We can use the identity logb(x) = log(x) / log(b) (where log() is the natural logarithm) to find logarithms in other bases. For example, you can compute the log base 2 of x with:

log2(x) = log(x) / log(2)

and:

log10(x) = log(x) / log(10)

So, we can deduce that:

log10(v) = log2(v) / log2(10)
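For example, log2(1000) ~= 9.966 and log2(10) ~= 3.322, and 9.966 / 3.322 ~= 3.000, which is indeed log10(1000).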
Now what we want in the end is the log base 10 of UINT_MAX, and since log2(10) is approximately 3, and since we know from above what log2() is in terms of sizeof(), we can say that log10(UINT_MAX) is approximately:

log10(2 ^ (sizeof(int) * 8)) ~= (sizeof(int) * 8) / 3
This isn't perfect, especially since what we are really trying to find is the ceiling value, but with some minor adjustment to account for the integer rounding of log2(10) to 3, we can get what we need by first adding one to the log2 term, then subtracting 1 from the result for any larger-sized integer, giving this "good enough" expression:
```c
#if 0
#define __MAX_B10STRLEN_FOR_UNSIGNED_TYPE(t) \
	((((sizeof(t) * CHAR_BIT) + 1) / 3) - ((sizeof(t) > 2) ? 1 : 0))
#endif
```
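To see that this rough version already lands on the right digit counts, here is a quick sanity check (a sketch under the hypothetical name ROUGH_B10STRLEN, assuming the usual 8/16/32/64-bit widths for these types):

```c
#include <limits.h>
#include <stdio.h>

#define ROUGH_B10STRLEN(t) \
	((((sizeof(t) * CHAR_BIT) + 1) / 3) - ((sizeof(t) > 2) ? 1 : 0))

int
main(void)
{
	/* 255, 65535, 4294967295, and 18446744073709551615 have
	 * 3, 5, 10, and 20 digits respectively */
	printf("unsigned char:      %zu (want 3)\n", ROUGH_B10STRLEN(unsigned char));
	printf("unsigned short:     %zu (want 5)\n", ROUGH_B10STRLEN(unsigned short));
	printf("unsigned int:       %zu (want 10)\n", ROUGH_B10STRLEN(unsigned int));
	printf("unsigned long long: %zu (want 20)\n", ROUGH_B10STRLEN(unsigned long long));

	return 0;
}
```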
Even better, we can multiply our initial log2() term by 1/log2(10) (multiplying by the reciprocal of the divisor is the same as dividing by the divisor), and this makes it possible to find a better integer approximation. I most recently came across this suggestion while reading Sean Anderson's bithacks page: http://graphics.stanford.edu/~seander/bithacks.html#IntegerLog10
To do this with integer math to the best approximation possible, we need to find the ideal ratio representing our reciprocal. This can be found by searching for the smallest fractional part of multiplying our desired value of 1/log2(10) by successive powers of 2, within some reasonable range of powers of 2, for example with the following little AWK script:
```sh
awk 'BEGIN {
	minf = 1.0
}
END {
	for (i = 1; i <= 31; i++) {
		a = 1.0 / (log(10) / log(2)) * 2^i
		if (a > (2^32 / 32))
			break;
		n = int(a)
		f = a - (n * 1.0)
		if (f < minf) {
			minf = f
			minn = n
			bits = i
		}
		# printf("a=%f, n=%d, f=%f, i=%d\n", a, n, f, i)
	}
	printf("%d + %f / %d, bits=%d\n", minn, minf, 2^bits, bits)
}' < /dev/null
```

which prints:

```
1233 + 0.018862 / 4096, bits=12
```
So, we can get a good integer approximation of multiplying our log2(v) value by 1/log2(10) by multiplying it by 1233, followed by a right shift of 12 (2^12 being, of course, 4096):

log10(UINT_MAX) ~= ((sizeof(int) * 8) + 1) * 1233 >> 12
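Indeed, 1233 / 4096 = 0.3010254, while 1 / log2(10) = 0.3010300, so the two agree to about five decimal places (the 0.018862 / 4096 in the script's output is that same error, about 0.0000046).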
Adding one to that then does the equivalent of finding the ceiling value, and gets rid of the need to fiddle with odd values:
```c
#define __MAX_B10STRLEN_FOR_UNSIGNED_TYPE(t) \
	(((((sizeof(t) * CHAR_BIT)) * 1233) >> 12) + 1)

#define __MAX_B10STRLEN_FOR_SIGNED_TYPE(t) \
	(__MAX_B10STRLEN_FOR_UNSIGNED_TYPE(t) + ((sizeof(t) == 8) ? 0 : 1))

#define __MAX_B10STRLEN_FOR_INT_TYPE(t) \
	(((t) -1 < 0) ? __MAX_B10STRLEN_FOR_SIGNED_TYPE(t) \
	              : __MAX_B10STRLEN_FOR_UNSIGNED_TYPE(t))
```
Normally the compiler will evaluate the expression my __MAX_B10STRLEN_FOR_INT_TYPE() macro expands to at compile time. Of course, the macro always calculates the maximum space required by a value of the given integer type, not the exact space needed for a specific integer value.
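For example, here is a sketch of typical use, sizing a worst-case buffer at compile time (the little driver itself is just illustrative):

```c
#include <limits.h>
#include <stdio.h>

/* the macros from above */
#define __MAX_B10STRLEN_FOR_UNSIGNED_TYPE(t) \
	(((((sizeof(t) * CHAR_BIT)) * 1233) >> 12) + 1)
#define __MAX_B10STRLEN_FOR_SIGNED_TYPE(t) \
	(__MAX_B10STRLEN_FOR_UNSIGNED_TYPE(t) + ((sizeof(t) == 8) ? 0 : 1))
#define __MAX_B10STRLEN_FOR_INT_TYPE(t) \
	(((t) -1 < 0) ? __MAX_B10STRLEN_FOR_SIGNED_TYPE(t) \
	              : __MAX_B10STRLEN_FOR_UNSIGNED_TYPE(t))

int
main(void)
{
	/* worst case: all the digits, a possible '-', plus one for the NUL */
	char buf[__MAX_B10STRLEN_FOR_INT_TYPE(int) + 1];

	snprintf(buf, sizeof(buf), "%d", INT_MIN);
	printf("\"%s\" fits in a %zu-byte buffer\n", buf, sizeof(buf));

	return 0;
}
```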