double has a greater range than a 64-bit integer, but less precision than its width suggests (since double is also 64 bits, it cannot represent every 64-bit integer value exactly). Thus, when representing large integers, you begin to lose precision in the integer part.
#include <boost/cstdint.hpp>
#include <iostream> // for std::cout
#include <limits>

template<typename T, typename TFloat>
void maxint_to_double()
{
    T i = std::numeric_limits<T>::max();
    TFloat d = i;
    std::cout << std::fixed << i << std::endl << d << std::endl;
}

int main()
{
    maxint_to_double<int, double>();
    maxint_to_double<boost::intmax_t, double>();
    maxint_to_double<int, float>();
    return 0;
}
Output:
2147483647
2147483647.000000
9223372036854775807
9223372036854775800.000000
2147483647
2147483648.000000
Note how the maximum int fits into a double without loss of precision, while the maximum boost::intmax_t (here 64-bit) does not. A float cannot even hold an int exactly.
Now the question is: is there a way in C++ to check whether the entire range of a given integer type fits into a given floating-point type without loss of precision?

Preferably,
- it should be a compile-time check that can be used in a static assertion,
- and it should not rely on enumerating magic constants that the compiler already knows or can compute (one possible approach is sketched below).
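For illustration, here is a minimal sketch of one such check, assuming a binary (radix-2) floating-point representation and C++11 static_assert; fits_exactly is a hypothetical helper name, not a standard or Boost facility. The idea is that every value of an integer type T is exactly representable in a floating-point type TFloat when TFloat's significand has at least as many bits as T has value bits, and its exponent range can reach T's maximum:

#include <limits>

// Hypothetical trait (an assumption, not a library facility): true when every
// value of integer type T is exactly representable in floating-point type TFloat.
template<typename T, typename TFloat>
struct fits_exactly
{
    static const bool value =
        std::numeric_limits<TFloat>::radix == 2 &&            // binary significand
        std::numeric_limits<TFloat>::digits >=
            std::numeric_limits<T>::digits &&                 // enough mantissa bits for T's value bits
        std::numeric_limits<TFloat>::max_exponent >=
            std::numeric_limits<T>::digits;                   // exponent range reaches T's maximum
};

static_assert(fits_exactly<int, double>::value, "int fits in double");
static_assert(!fits_exactly<long long, double>::value, "64-bit int does not fit in double");
static_assert(!fits_exactly<int, float>::value, "int does not fit in float");

Everything here comes from std::numeric_limits, so no constants need to be hard-coded; whether this covers all the cases the question intends (e.g. non-IEEE representations) is left open.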
c++ double floating-point precision
Alex b