double pow(double, int);
has not been removed from the specification. It has simply been reworded. It now lives in [c.math] / p11. How it gets implemented is an implementation detail. The only C++03 signature that has changed is:
float pow(float, int);
This now returns double:
double pow(float, int);
And this change was made for C compatibility.
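As a quick illustration, the following check should compile on a conforming C++11 implementation (the static_assert is mine, purely for demonstration):

#include <cmath>
#include <type_traits>

int main()
{
    // In C++03, pow(1.0f, 3) selected float pow(float, int) and returned float.
    // Under the C++11 wording, the int argument promotes the call to double.
    static_assert(std::is_same<decltype(std::pow(1.0f, 3)), double>::value,
                  "pow(float, int) now yields double");
}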
Explanation
26.8 [c.math] / p11 says:
Moreover, there shall be additional overloads sufficient to ensure:
If any argument corresponding to a double parameter has type long double, then all arguments corresponding to double parameters are effectively cast to long double.
Otherwise, if any argument corresponding to a double parameter has type double or an integer type, then all arguments corresponding to double parameters are effectively cast to double.
Otherwise, all arguments corresponding to double parameters are effectively cast to float.
This paragraph implies a host of overloads, including:
double pow(double, int);
double pow(double, unsigned);
double pow(double, unsigned long long);
etc.
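A sketch of what those rules imply in practice, assuming a conforming C++11 &lt;cmath&gt; (the static_asserts are only illustrative):

#include <cmath>
#include <type_traits>

// Every integer exponent type promotes the call to the double overload.
static_assert(std::is_same<decltype(std::pow(2.0, 3)),    double>::value, "int exponent");
static_assert(std::is_same<decltype(std::pow(2.0, 3u)),   double>::value, "unsigned exponent");
static_assert(std::is_same<decltype(std::pow(2.0, 3ull)), double>::value, "unsigned long long exponent");
// A long double argument promotes the whole call to long double.
static_assert(std::is_same<decltype(std::pow(2.0f, 3.0L)), long double>::value, "long double wins");
// With only float arguments, the float overload is used.
static_assert(std::is_same<decltype(std::pow(2.0f, 3.0f)), float>::value, "pure float stays float");

int main() {}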
These may be actual overloads, or they may be implemented with restricted templates. I have personally implemented it both ways and strongly favor the restricted-template implementation.
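For illustration only (this is not how any particular library spells it), a restricted-template version might look roughly like this:

#include <cmath>
#include <type_traits>

namespace sketch {

// One constrained template stands in for the whole family of
// integer-exponent overloads: the exponent is cast to double and the
// call is forwarded to the double overload.
template <class T,
          class = typename std::enable_if<std::is_integral<T>::value>::type>
double pow(double base, T exp)
{
    return std::pow(base, static_cast<double>(exp));
}

} // namespace sketch

int main()
{
    double r = sketch::pow(2.0, 10);  // calls the constrained template
    return r > 0 ? 0 : 1;
}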
Second update, to address optimization concerns:
An implementation is allowed to optimize any overload. But remember that an optimization should be just that: the optimized version should return the same answer. The experience of implementors of functions such as pow is that, by the time you have gone to the trouble of making the integral-exponent path give the same answer as the floating-point-exponent path, the "optimization" is often slower.
As a demonstration, the following program prints pow(.1, 20) twice, once using std::pow and a second time using an "optimized" algorithm that takes advantage of the integral exponent:
#include <cmath>
#include <iostream>
#include <iomanip>

int main()
{
    // The library's pow with a floating-point exponent.
    std::cout << std::setprecision(17) << std::pow(.1, 20) << '\n';
    // "Optimized" repeated squaring: x^20 = x^16 * x^4.
    double x = .1;
    double x2 = x * x;
    double x4 = x2 * x2;
    double x8 = x4 * x4;
    double x16 = x8 * x8;
    double x20 = x16 * x4;
    std::cout << x20 << '\n';
}
On my system, this produces:
1.0000000000000011e-20
1.0000000000000022e-20
Or in hex notation:
0x1.79ca10c92422bp-67
0x1.79ca10c924232p-67
And yes, pow implementors really do care about all of those bits down at the low end.
So while there is freedom to shuffle pow(double, int) off to a separate algorithm, most implementors I am aware of have given up on that strategy, with the possible exception of checking for very small integral exponents. And in that case, it is usually advantageous to put the check inside the floating-point algorithm to get the biggest bang for your optimization buck.
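Purely as an illustration of where such a check might live (this is not any real library's pow), the idea is something along these lines, with the shortcut sitting inside the general floating-point routine:

#include <cmath>

// Illustrative only: a wrapper around the general floating-point algorithm
// with a shortcut for one tiny integral exponent. A real implementation
// would have to verify that the shortcut returns exactly the same answer
// as the general path before enabling it.
double pow_sketch(double x, double y)
{
    if (y == 2.0)
        return x * x;          // hypothetical fast path
    return std::pow(x, y);     // general floating-point algorithm
}

int main()
{
    double a = pow_sketch(1.5, 2.0);
    double b = pow_sketch(1.5, 2.5);
    return (a > 0 && b > 0) ? 0 : 1;
}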