The task is to find a formula that determines how many digits a given decimal number has when written in a given base.
For example: the decimal number 100006 has 17, 11, 9, 8, 7, 6, and 6 digits in bases 2, 3, 4, 5, 6, 7, and 8 respectively.
Well, the formula I came up with looks like this: (log10(num) / log10(base)) + 1.
In C/C++ I used this formula to calculate the above results:
long long int size = ((double)log10(num) / (double)log10(base)) + 1.0;
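For reference, here is a minimal compilable version of how I am calling it (reading num and base from stdin in main is just for illustration; the formula line itself is unchanged):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        long long num, base;
        if (scanf("%lld %lld", &num, &base) != 2)
            return 1;

        /* same formula as above: the quotient is truncated, then 1 is added */
        long long int size = ((double)log10(num) / (double)log10(base)) + 1.0;

        printf("%lld\n", size);
        return 0;
    }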
But, unfortunately, the formula does not give the correct answer in some cases, for example:
Number 8 in base 2: 1,0,0,0. Number of digits: 4. Formula returned: 3.
Number 64 in base 2: 1,0,0,0,0,0,0. Number of digits: 7. Formula returned: 6.
Number 64 in base 4: 1,0,0,0. Number of digits: 4. Formula returned: 3.
Number 125 in base 5: 1,0,0,0. Number of digits: 4. Formula returned: 3.
Number 128 in base 2: 1,0,0,0,0,0,0,0. Number of digits: 8. Formula returned: 7.
Number 216 in base 6: 1,0,0,0. Number of digits: 4. Formula returned: 3.
Number 243 in base 3: 1,0,0,0,0,0. Number of digits: 6. Formula returned: 5.
Number 343 in base 7: 1,0,0,0. Number of digits: 4. Formula returned: 3.
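All of the failing inputs are exact powers of the base. To see what is going on, this small diagnostic harness (my own, not part of the task) prints the raw quotient for each case; presumably it comes out a hair below the exact integer, and the truncation then drops a whole digit:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const long long nums[]  = {8, 64, 64, 125, 128, 216, 243, 343};
        const long long bases[] = {2,  2,  4,   5,   2,   6,   3,   7};

        for (int i = 0; i < 8; ++i) {
            double q = (double)log10(nums[i]) / (double)log10(bases[i]);
            long long size = q + 1.0;   /* same truncation as in the formula */
            printf("num=%lld base=%lld quotient=%.17g size=%lld\n",
                   nums[i], bases[i], q, size);
        }
        return 0;
    }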
So the answer is off by exactly one digit in these cases. I just want someone to help me correct the formula so that it works in all possible cases.
Edit: According to the input specification, I have to deal with inputs as large as 10000000000, i.e. 10^10. I don't think log10() in C/C++ can handle such values accurately, can it? So any other procedure/formula for this problem would be highly appreciated.
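One alternative I am considering is to avoid floating point entirely and count digits by repeated integer division. The sketch below (digit_count is just my own name for it) assumes the input fits in an unsigned 64-bit integer, which 10^10 comfortably does:

    #include <stdio.h>

    /* Count how many digits num has in the given base, using only integer division. */
    unsigned long long digit_count(unsigned long long num, unsigned long long base)
    {
        if (num == 0)
            return 1;           /* 0 is conventionally written with one digit */

        unsigned long long digits = 0;
        while (num > 0) {
            num /= base;
            ++digits;
        }
        return digits;
    }

    int main(void)
    {
        printf("%llu\n", digit_count(8ULL, 2ULL));           /* 4 */
        printf("%llu\n", digit_count(100006ULL, 8ULL));      /* 6 */
        printf("%llu\n", digit_count(10000000000ULL, 2ULL)); /* 34 */
        return 0;
    }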