I am writing a simple method that will calculate the number of decimal places in a decimal value. The method is as follows:
    public int GetDecimalPlaces(decimal decimalNumber)
    {
        int decimalPlaces = 1;
        double powers = 10.0;
        if (decimalNumber > 0.0m)
        {
            // Keep multiplying by 10 until the scaled value has no fractional part.
            while (((double)decimalNumber * powers) % 1 != 0.0)
            {
                powers *= 10.0;
                ++decimalPlaces;
            }
        }
        return decimalPlaces;
    }
I ran it against some test values to make sure everything was working as expected, but the last one returned something really weird:
    int test = GetDecimalPlaces(0.1m);                     // 1
    int test2 = GetDecimalPlaces(0.01m);                   // 2
    int test3 = GetDecimalPlaces(0.001m);                  // 3
    int test4 = GetDecimalPlaces(0.0000000001m);           // 10
    int test5 = GetDecimalPlaces(0.00000000010000000001m); // 20
    int test6 = GetDecimalPlaces(0.0000000001000000000100000000010000000001000000000100000000010000000001000000000100000000010000000001m);
Tests 1 through 5 work fine, but test6 returns 23. I know the value being passed in exceeds decimal's maximum precision, but why 23? Another thing I found odd: when I set a breakpoint inside GetDecimalPlaces on the call from test6, the decimalNumber that comes through holds the same value as the one from test5 (20 decimal places), yet even though the value inside the method has 20 decimal places, 23 is returned.
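One theory I want to rule out (this is my assumption, not something I've confirmed): the cast to double is lossy, since decimal carries 28-29 significant digits while double carries only about 15-16, so ((double)decimalNumber * powers) % 1 might stay non-zero past 20 iterations purely from floating-point noise. Here is a quick probe I sketched that mirrors the loop and prints each remainder:

    using System;

    class Probe
    {
        static void Main()
        {
            // test5's value; per the debugger, test6 comes through with this same value.
            decimal d = 0.00000000010000000001m;

            double x = (double)d;   // lossy: double keeps only ~15-16 significant digits
            double powers = 10.0;

            // Same condition as the while loop in GetDecimalPlaces; printing the
            // remainder shows where (if anywhere) the product becomes integral.
            for (int places = 1; places <= 25; ++places)
            {
                Console.WriteLine($"places={places,2}  remainder={(x * powers) % 1:R}");
                powers *= 10.0;
            }
        }
    }

If the remainders stay non-zero at powers beyond 1e20 and only hit zero around 1e23, that would explain the 23, but I haven't verified this is what's happening.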
Maybe this is simply because I'm passing in a number with too many decimal places and things just go haywire, but I want to make sure I'm not missing something fundamentally wrong here that could throw off the calculations for other values down the road.
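For whatever it's worth, as a cross-check that never leaves decimal, the scale the runtime actually stored can be read straight out of decimal.GetBits (the helper name here is mine):

    using System;

    static class DecimalScale
    {
        // decimal.GetBits returns four ints; bits 16-23 of the fourth one hold
        // the scale, i.e. the number of digits after the decimal point
        // (trailing zeros included).
        public static int GetScale(decimal value)
        {
            int[] bits = decimal.GetBits(value);
            return (bits[3] >> 16) & 0xFF;
        }
    }

GetScale(0.1m) returns 1 and GetScale(0.00000000010000000001m) returns 20; running it on the test6 literal would show what the compiler actually stored before any double arithmetic runs. Note that it counts trailing zeros (GetScale(1.10m) is 2), so it is not a drop-in replacement for GetDecimalPlaces, just a way to inspect the stored value.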