I suspect that if you are interested in very large or very small values of p, it may be best to do some algebraic manipulation of the generalized mean formula before plugging in numerical values.
For example, in the small-p limit, the generalized mean can be shown to tend to the geometric mean, i.e. the n-th root of the product x_1 * x_2 * ... * x_n. The higher-order terms in p involve sums and products of log(x_i), which should also be relatively stable to compute numerically. In fact, I believe the first-order expansion in p has a simple connection with the variance of log(x_i):

    M_p ≈ exp( mean(log x_i) + (p/2) * var(log x_i) )
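As a rough sketch of this idea (the function names, seed, and comparison loop are my own, not from the original answer), the first-order expansion can be checked against the direct formula like this:

```python
import numpy as np

def power_mean(x, p):
    """Direct generalized (power) mean: ((1/n) * sum(x_i**p))**(1/p)."""
    x = np.asarray(x, dtype=float)
    return np.mean(x ** p) ** (1.0 / p)

def power_mean_small_p(x, p):
    """First-order expansion around p = 0: the geometric mean
    times exp((p/2) * var(log x)), which avoids the 1/p exponent."""
    logx = np.log(np.asarray(x, dtype=float))
    return np.exp(np.mean(logx) + 0.5 * p * np.var(logx))

rng = np.random.default_rng(0)
x = rng.uniform(0.2, 2.0, 100)  # 100 samples drawn uniformly from [0.2, 2]

for p in (0.3, 0.01, 1e-4):
    print(p, power_mean(x, p), power_mean_small_p(x, p))
```

The two columns agree more and more closely as p shrinks, since the neglected terms in the expansion are O(p^2).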
If we apply this formula to a set of 100 random numbers drawn uniformly from the range [0.2, 2], we get the following behaviour:

[plot: generalized mean as a function of p, comparing the direct formula with the small-p asymptotic formula]
which shows that the asymptotic formula becomes quite accurate for p below about 0.3, while the direct formula only starts to fail once p drops below about 1e-10.
In the case of large p, the x_i with the largest value dominates (call its index i_max). You can rearrange the generalized mean formula into the following form, which behaves far less pathologically for large p:

    M_p = x_imax * ( (1/n) * sum_i (x_i / x_imax)^p )^(1/p)

Since every ratio x_i / x_imax is at most 1, the powers inside the sum can no longer overflow.
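A minimal sketch of the rearrangement (function names and seed are mine; the original presumably used a similar handful of numpy calls) might look like this:

```python
import numpy as np

def power_mean(x, p):
    """Direct formula; x_i**p overflows once it exceeds the float range."""
    x = np.asarray(x, dtype=float)
    return np.mean(x ** p) ** (1.0 / p)

def power_mean_large_p(x, p):
    """Rearranged formula: factor out the largest element, so every
    remaining power is of a ratio <= 1 and cannot overflow, then use
    log1p on the (possibly tiny) leftover sum."""
    x = np.asarray(x, dtype=float)
    n = x.size
    i_max = np.argmax(x)
    r = np.delete(x, i_max) / x[i_max]   # ratios in (0, 1]
    s = np.sum(r ** p)                   # underflows harmlessly to 0 for huge p
    return x[i_max] * np.exp((np.log1p(s) - np.log(n)) / p)

rng = np.random.default_rng(1)
x = rng.uniform(0.2, 2.0, 100)

print(power_mean(x, 50.0), power_mean_large_p(x, 50.0))  # agree
print(power_mean_large_p(x, 1e6))  # direct formula would overflow here
```

For huge p the leftover sum s underflows to 0, and the result smoothly approaches max(x_i), which is the correct limit.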
If this is applied (using standard numpy functions, including numpy.log1p) to another 100 samples drawn uniformly from [0.2, 2.0], the rearranged formula matches the direct formula exactly, but remains valid for much larger values of p, for which the direct formula overflows when computing the powers x_i^p.

(Note that in the left-hand plot the blue curve for the direct formula is shifted by 0.1 so you can see where it ends due to overflow; for p below about 1000 the two curves would otherwise be indistinguishable.)
rwp