Floating-point problems in the approximation of asymptotic functions - Python

New to Python, coming from MATLAB.

I am truncating a scale function with a hyperbolic tangent. The problem appears when I apply the function 0.5 * math.tanh(r/rE - r0) + 0.5 to an array of radius values r = np.arange(0.1, 100.01, 0.01). On the side where the function asymptotically approaches zero I get several exact 0.0 values, which causes a domain error when I take the logarithm:

 P1 = [ (0.5*m.tanh(x / rE + r0 ) + 0.5) for x in r] # truncation function 

My current workaround is:

 P1 = [ -m.log10(x) if x!=0.0 else np.inf for x in P1 ] 

which is sufficient for what I am doing, but it is a bit of a band-aid solution.
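A self-contained, stdlib-only sketch of the underflow and the band-aid (the tanh argument is simplified to -x for illustration, and math.inf stands in for np.inf):

```python
import math

# Approximate np.arange(0.1, 100.01, 0.01) with a plain list.
r = [0.1 + 0.01 * i for i in range(9991)]

# For large r the tanh argument is far from 0, and
# 0.5*tanh(-x) + 0.5 underflows to exactly 0.0 in double precision.
P1 = [0.5 * math.tanh(-x) + 0.5 for x in r]
print(P1[-1])  # 0.0 -- underflowed

# Band-aid: send exact zeros to +inf so log10 never sees them.
P1 = [-math.log10(x) if x != 0.0 else math.inf for x in P1]
print(P1[-1])  # inf
```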

As requested, for mathematical clarity:

In astronomy, the magnitude scale works something like this:

 mu = -2.5*log10(flux) + mzp # apparent magnitude 
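To make the scale concrete, here is a small runnable sketch (mzp = 25.0 is an arbitrary illustrative zero point, not a value from the question): every factor of 100 in flux shifts the magnitude by exactly 5.

```python
import math

mzp = 25.0  # arbitrary zero point, for illustration only

def mu(flux):
    """Apparent magnitude from flux on the standard log10 scale."""
    return -2.5 * math.log10(flux) + mzp

print(mu(1.0))    # 25.0
print(mu(100.0))  # 20.0 -- 100x brighter is 5 magnitudes smaller
```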

where mzp is the zero-point magnitude, the magnitude at which one photon per second is detected. Larger fluxes therefore correspond to smaller (or more negative) apparent magnitudes. I build models of sources from several component functions, e.g. two Sérsic functions with different Sérsic indices, with an outer truncation P1 applied to the inner component and an inner truncation 1-P1 applied to the outer component. When the truncation is added to each component in magnitudes, the value at large radius blows up, because mu1 - 2.5*log(P1) grows without bound as P1 asymptotically approaches zero.

TL;DR: I would like to know whether there is a way to handle floating-point values that are too small to be represented as non-zero (in particular, the results of functions asymptotically approaching zero). This matters because taking the logarithm of such numbers raises a domain error.

The last value before P1 starts returning zeros is 5.551115123125783e-17, which is itself a floating-point rounding artifact rather than the exact mathematical value.
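That particular number is exactly 2**-54: math.tanh(-19) rounds to the double -1 + 2**-53, so 0.5*tanh(-19) + 0.5 comes out as 2**-54, and one step further out tanh(-20) rounds to exactly -1.0 and the sum underflows to zero. A quick check:

```python
import math

print(2.0 ** -54)                  # 5.551115123125783e-17
print(0.5 * math.tanh(-19) + 0.5)  # 5.551115123125783e-17 -- same double
print(0.5 * math.tanh(-20) + 0.5)  # 0.0 -- tanh(-20) rounds to exactly -1.0
```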

Any input is appreciated.

@Dan, without posting my whole script:

 xc1, yc1 = 103.5150, 102.5461
 Ee1 = 23.6781; re1 = 10.0728*0.187; n1 = 4.0234
 # radial brightness profile (magnitudes -- really surface brightness, but fine in ex.)
 mu1 = [ Ee1 + 2.5/m.log(10)*bn(n1)*((x/re1)**(1.0/n1) - 1) for x in r ]
 # outer truncation
 rb1 = 8.0121
 drs1 = 11.4792
 P1 = [ (0.5*m.tanh( (2.0 - B(rb1,drs1))*x/rb1 + B(rb1,drs1) ) + 0.5) for x in r ]
 P1 = [ -2.5*m.log10(x) if x != 0.0 else np.inf for x in P1 ]  # band-aid for problem
 mu1t = [ x + y for x, y in zip(P1, mu1) ]  # mu1 truncated by P1

where bn(n1) = 7.72 and B(rb1, drs1) = 2.65 - 4.98*(rb1/(-drs1));
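For reference, here is a self-contained, runnable sketch of the snippet above, with bn and B filled in from the quoted values (these helper definitions are assumptions from the text, m.* becomes math.*, and math.inf stands in for np.inf):

```python
import math

def bn(n):
    return 7.72  # value quoted in the text for n1 = 4.0234

def B(rb, drs):
    return 2.65 - 4.98 * (rb / (-drs))  # definition quoted in the text

r = [0.1 + 0.01 * i for i in range(9991)]  # ~ np.arange(0.1, 100.01, 0.01)
Ee1 = 23.6781; re1 = 10.0728 * 0.187; n1 = 4.0234
mu1 = [Ee1 + 2.5 / math.log(10) * bn(n1) * ((x / re1) ** (1.0 / n1) - 1) for x in r]
rb1 = 8.0121; drs1 = 11.4792
P1 = [0.5 * math.tanh((2.0 - B(rb1, drs1)) * x / rb1 + B(rb1, drs1)) + 0.5 for x in r]
print(P1[-1])  # 0.0 -- the truncation function underflows at large r
P1 = [-2.5 * math.log10(x) if x != 0.0 else math.inf for x in P1]
mu1t = [x + y for x, y in zip(P1, mu1)]
print(mu1t[-1])  # inf -- the truncated profile blows up at large r
```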

mu1 is the surface-brightness profile of the truncated component, and P1 is the truncation function. Many of the final entries of P1 are zero because, at double precision, the values are too small to be distinguishable from zero.

A simple way to see the problem:

 >>> r = np.arange(0,101,1)
 >>> P1 = [0.5*m.tanh(-x)+0.5 for x in r]
 >>> P1
 [0.5, 0.11920292202211757, 0.01798620996209155, 0.002472623156634768, 0.000335350130466483, 4.539786870244589e-05, 6.144174602207286e-06, 8.315280276560699e-07, 1.1253516207787584e-07, 1.5229979499764568e-08, 2.0611536366565986e-09, 2.789468100949932e-10, 3.775135759553905e-11, 5.109079825871277e-12, 6.914468997365475e-13, 9.35918009759007e-14, 1.2656542480726785e-14, 1.7208456881689926e-15, 2.220446049250313e-16, 5.551115123125783e-17, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

Note the floats just before the zeros begin.

+6
2 answers

Recall that the hyperbolic tangent can be written as tanh(x) = (1 - e^{-2x})/(1 + e^{-2x}). With a little algebra, 0.5 - 0.5*tanh(x) (i.e. your function with the argument negated, 0.5*tanh(-x) + 0.5) equals e^{-2x}/(1 + e^{-2x}). Its logarithm is -2*x - log(1 + exp(-2*x)), which can be evaluated stably everywhere.
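To sanity-check that algebra, here is a small runnable sketch (the function name neglog_P is introduced here for illustration) comparing the stable form with the direct computation:

```python
import math

def neglog_P(x):
    # Stable form of -ln(0.5*tanh(-x) + 0.5) = 2x + ln(1 + e^(-2x)).
    return 2.0 * x + math.log1p(math.exp(-2.0 * x))

# Agrees with the direct computation where the direct form still works...
x = 5.0
direct = -math.log(0.5 * math.tanh(-x) + 0.5)
print(abs(direct - neglog_P(x)))  # ~1e-15, roundoff only

# ...and keeps working where the direct form raises a domain error:
print(neglog_P(500.0))  # 1000.0 -- log(0.0) would raise ValueError here
```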

That is, I recommend that you replace:

 P1 = [ (0.5*m.tanh( (2.0 - B(rb1,drs1))*x/rb1 + B(rb1,drs1) ) + 0.5) for x in r ]
 P1 = [ -2.5*m.log10(x) if x != 0.0 else np.inf for x in P1 ]  # band-aid for problem

with this simpler and more stable version:

 r = np.arange(0.1,100.01,0.01)
 # r and xvals are numpy arrays, so numpy functions can be applied in one step
 xvals = (2.0 - B(rb1,drs1)) * r / rb1 + B(rb1,drs1)
 P1 = 2*xvals + np.log1p(np.exp(-2*xvals))
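One caveat worth checking when adapting this: the snippet returns the negative natural logarithm of the truncation function, while the question's band-aid used -2.5*log10. A sketch of the conversion via the constant factor 2.5/ln(10), using the simplified tanh(-x) argument from the question's example (P1_mag is a name introduced here for illustration, not from the answer):

```python
import math

def P1_mag(x):
    # Stable -ln(0.5*tanh(-x) + 0.5), rescaled to magnitudes:
    # -2.5*log10(P) == (2.5/ln 10) * (-ln P)
    neg_ln = 2.0 * x + math.log1p(math.exp(-2.0 * x))
    return 2.5 / math.log(10) * neg_ln

# Matches the original band-aid expression where both still work:
x = 5.0
band_aid = -2.5 * math.log10(0.5 * math.tanh(-x) + 0.5)
print(abs(band_aid - P1_mag(x)))  # roundoff-level difference
```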
+5

Two things you can try.

(1) Brute-force approach: find a variable-precision floating-point arithmetic package and use it instead of the built-in fixed precision. I played with your problem in Maxima [1], and I find that I need to increase the float precision quite a bit to avoid underflow, but it is possible. I can post the Maxima code if you want. I would guess there is a suitable variable-precision floating-point library for Python.
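A sketch of option (1) using the stdlib decimal module (a dedicated library such as mpmath would work similarly): build tanh from exp and evaluate at 50 significant digits instead of double precision's ~16.

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 50  # 50 significant decimal digits

def P1_decimal(x):
    # 0.5*tanh(x) + 0.5 computed in variable-precision decimal arithmetic
    a, b = Decimal(x).exp(), Decimal(-x).exp()
    tanh = (a - b) / (a + b)
    return Decimal("0.5") * tanh + Decimal("0.5")

print(0.5 * math.tanh(-30) + 0.5)  # 0.0 -- double precision underflows
print(P1_decimal(-30))             # ~8.76E-27 -- survives at 50 digits
```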

(2) Approximate log((1/2)*(1 + tanh(-x))) with a Taylor series or some other expansion, to avoid taking log(tanh(...)) at all.
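A sketch of option (2): since ln((1/2)*(1 + tanh(-x))) = -2x - ln(1 + e^(-2x)), the leading term -2x alone is already an excellent approximation for large x, and it never touches log(0.0).

```python
import math

x = 10.0
exact = math.log(0.5 * (1.0 + math.tanh(-x)))  # still representable at x = 10
print(exact)     # ~ -20.000000002
print(-2.0 * x)  # -20.0 -- leading-order approximation for large x
```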

[1] http://maxima.sourceforge.net

+1
