Is there a scaled complementary error function in Python?

Matlab has the special function erfcx(x) = exp(x^2) * erfc(x), which is not available in any of the Python collections I know of (numpy, scipy, mpmath, ...).

Perhaps there are other places where such a function can be found?

UPD: For anyone who thinks the question is trivial, try computing this function for an argument around 30 first.
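To see why, here is a minimal sketch of what happens to the naive product exp(x^2) * erfc(x) in double precision at x = 30: erfc(30) underflows to zero while exp(900) overflows, so the two factors cannot be combined directly.

    import math

    # erfc(30) ~ 2.6e-393 lies below the smallest positive double
    # (~4.9e-324), so it underflows to exactly 0.0, while exp(900)
    # exceeds the double-precision limit (~exp(709.78)) and raises.
    x = 30.0
    print(math.erfc(x))                    # 0.0 (silent underflow)
    try:
        print(math.exp(x * x) * math.erfc(x))
    except OverflowError:
        print("exp(900) overflows")        # the true erfcx(30) is ~0.0188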

UPD2: Arbitrary precision is a workable fallback, but if possible I would rather avoid it. I need standard machine accuracy (no more, no less) and maximum speed.

UPD3: It turns out mpmath gives a surprisingly inaccurate result. Even where the standard Python math module works, mpmath's results are worse. This makes it completely useless here.

UPD4: Code for comparing different methods of calculating erfcx:

    import numpy as np

    def int_erfcx(x):
        "Integral which gives erfcx"
        from scipy import integrate
        def f(xi):
            return np.exp(-x*xi) * np.exp(-0.5*xi*xi)
        # 0.79788... = sqrt(2/pi); the quadrature yields erfcx(x/sqrt(2))
        return 0.79788456080286535595 * integrate.quad(
            f, 0.0, min(2.0, 50.0/(1.0+x)) + 100.0, limit=500)[0]

    def my_erfcx(x):
        """M. M. Shepherd and J. G. Laframboise,
           MATHEMATICS OF COMPUTATION 36, 249 (1981).
           Note that it is reasonable to compute it in long double
           (or whatever Python has).
        """
        ch_coef = [np.float128( 0.1177578934567401754080e+01),
                   np.float128(-0.4590054580646477331e-02),
                   np.float128(-0.84249133366517915584e-01),
                   np.float128( 0.59209939998191890498e-01),
                   np.float128(-0.26658668435305752277e-01),
                   np.float128( 0.9074997670705265094e-02),
                   np.float128(-0.2413163540417608191e-02),
                   np.float128( 0.490775836525808632e-03),
                   np.float128(-0.69169733025012064e-04),
                   np.float128( 0.4139027986073010e-05),
                   np.float128( 0.774038306619849e-06),
                   np.float128(-0.218864010492344e-06),
                   np.float128( 0.10764999465671e-07),
                   np.float128( 0.4521959811218e-08),
                   np.float128(-0.775440020883e-09),
                   np.float128(-0.63180883409e-10),
                   np.float128( 0.28687950109e-10),
                   np.float128( 0.194558685e-12),
                   np.float128(-0.965469675e-12),
                   np.float128( 0.32525481e-13),
                   np.float128( 0.33478119e-13),
                   np.float128(-0.1864563e-14),
                   np.float128(-0.1250795e-14),
                   np.float128( 0.74182e-16),
                   np.float128( 0.50681e-16),
                   np.float128(-0.2237e-17),
                   np.float128(-0.2187e-17),
                   np.float128( 0.27e-19),
                   np.float128( 0.97e-19),
                   np.float128( 0.3e-20),
                   np.float128(-0.4e-20)]
        K = np.float128(3.75)
        y = (x - K) / (x + K)           # map x in [0, inf) to y in [-1, 1)
        y2 = np.float128(2.0) * y
        # Clenshaw recurrence for the Chebyshev series
        (d, dd) = (ch_coef[-1], np.float128(0.0))
        for cj in ch_coef[-2:0:-1]:
            (d, dd) = (y2 * d - dd + cj, d)
        d = y * d - dd + ch_coef[0]
        return d / (np.float128(1) + np.float128(2)*x)

    def math_erfcx(x):
        import scipy.special as spec
        return spec.erfc(x) * np.exp(x*x)

    def mpmath_erfcx(x):
        import mpmath
        return mpmath.exp(x**2) * mpmath.erfc(x)

    if __name__ == "__main__":
        x = np.linspace(1.0, 26.0, 200)
        X = np.linspace(1.0, 100.0, 200)

        intY    = np.array([int_erfcx(xx*np.sqrt(2)) for xx in X])
        myY     = np.array([my_erfcx(xx) for xx in X])
        myy     = np.array([my_erfcx(xx) for xx in x])
        mathy   = np.array([math_erfcx(xx) for xx in x])
        mpmathy = np.array([mpmath_erfcx(xx) for xx in x])
        mpmathY = np.array([mpmath_erfcx(xx) for xx in X])

        print("Integral vs exact:  %g" % max(np.abs(intY-myY)/myY))
        print("math vs exact:      %g" % max(np.abs(mathy-myy)/myy))
        print("mpmath vs math:     %g" % max(np.abs(mpmathy-mathy)/mathy))
        print("mpmath vs integral: %g" % max(np.abs(mpmathY-intY)/intY))
        exit()

For me it gives

    Integral vs exact:  6.81236e-16
    math vs exact:      7.1137e-16
    mpmath vs math:     4.90899e-14
    mpmath vs integral: 8.85422e-13

Obviously, math gives essentially the best possible accuracy where it works, while mpmath's error is a couple of orders of magnitude larger where math works, and larger still for big arguments.

+7
5 answers

Here is a simple and fast implementation that provides 12–13 digit accuracy globally (for x >= 25 it switches to the large-x asymptotic series erfcx(x) ~ (1/(x*sqrt(pi))) * (1 - 1/(2*x**2) + 3/(4*x**4) - ...)):

    from math import exp
    from scipy.special import erfc

    def erfcx(x):
        if x < 25:
            return erfc(x) * exp(x*x)
        else:
            # asymptotic expansion of erfcx for large x
            y = 1. / x
            z = y * y
            s = y*(1.+z*(-0.5+z*(0.75+z*(-1.875+z*(6.5625-29.53125*z)))))
            return s * 0.564189583547756287   # 1/sqrt(pi)
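As a quick sanity check of this sketch, one could compare it against SciPy's own implementation (this assumes a SciPy recent enough to ship scipy.special.erfcx, i.e. 0.12 or later, as noted in another answer below):

    # assumes scipy.special.erfcx is available (SciPy >= 0.12)
    from scipy.special import erfcx as scipy_erfcx
    for t in (0.5, 10.0, 30.0, 1000.0):
        print(t, erfcx(t), scipy_erfcx(t))   # exercises both branches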
+5

The gmpy2 library provides access to the MPFR multiple-precision library. At normal precision, it is almost 5 times faster than mpmath.

    $ py27 -m timeit -s "import mpmath" -s "def erfcx(x):return mpmath.exp(x**2) * mpmath.erfc(x)" "erfcx(30)"
    10000 loops, best of 3: 47.3 usec per loop
    $ py27 -m timeit -s "import gmpy2" -s "def erfcx(x):return gmpy2.exp(x**2) * gmpy2.erfc(x)" "erfcx(30)"
    100000 loops, best of 3: 10.8 usec per loop

Both libraries return the same result for an argument of 30.

    >>> import mpmath
    >>> import gmpy2
    >>> mpmath.exp(30**2) * mpmath.erfc(30)
    mpf('0.018795888861416751')
    >>> gmpy2.exp(30**2) * gmpy2.erfc(30)
    mpfr('0.018795888861416751')
    >>>
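If more than double-precision headroom is ever needed, a minimal sketch using gmpy2's MPFR-style context (precision is given in bits; the 100-bit choice here is an arbitrary assumption, not a recommendation) would be:

    import gmpy2

    # 53 bits matches an IEEE double; 100 bits is ~30 decimal digits
    gmpy2.get_context().precision = 100
    x = gmpy2.mpfr(30)
    print(gmpy2.exp(x**2) * gmpy2.erfc(x))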

Disclaimer: I maintain gmpy2. I am actively working on a new release, but the current version should have no problems with this calculation.

Edit: I was curious about the overhead of making two function calls instead of one, so I implemented gmpy2.erfcx() entirely in C, while still using MPFR for the calculations. The improvement was smaller than I expected. If you think erfcx() would be useful, I can add it to the next release.

    $ py27 -m timeit -s "import gmpy2" "gmpy2.erfcx(30)"
    100000 loops, best of 3: 9.45 usec per loop
+3

A highly optimized erfcx implementation in C++ (for both real and complex arguments) was recently merged into SciPy and should be available in SciPy version 0.12.
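Assuming that release (or later) is installed, a minimal usage sketch would be a single vectorized call; per the real-and-complex support mentioned above, complex arguments go through the same function:

    import numpy as np
    from scipy.special import erfcx   # ships with SciPy >= 0.12

    print(erfcx(30.0))                        # ~0.0187958888614168, no overflow
    print(erfcx(np.array([1.0, 30.0, 1e6])))  # vectorized over arrays
    print(erfcx(1.0 + 2.0j))                  # complex argument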

+3

I am not aware of a standard library that includes this function, but you can implement it in a straightforward way, at least if you use mpmath and are not too worried about performance:

    import math
    import mpmath

    def erfcx(x):
        return math.exp(x**2) * math.erfc(x)

    def erfcx_mp(x):
        return mpmath.exp(x**2) * mpmath.erfc(x)

    where = mpmath.linspace(1, 50, 10) + mpmath.linspace(100, 1000, 5)
    for x in where:
        try:
            std = erfcx(x)
        except OverflowError:
            std = None
        new = erfcx_mp(x)
        # leading asymptotic term: erfcx(x) ~ 1/(x*sqrt(pi)) for large x
        approx = 1/(x*mpmath.pi**0.5)
        print x, std, new, (new-approx)/approx   # Python 2 print statement

produces

    1.0 0.427583576156 0.427583576155807 -0.242127843858688
    6.44444444444444 0.0865286153111 0.0865286153111425 -0.0116285899486798
    11.8888888888889 0.0472890800456 0.0472890800455829 -0.00350053472385845
    17.3333333333333 0.032495498521 0.0324954985209682 -0.00165596082986796
    22.7777777777778 0.024745497 0.0247454970000106 -0.000960939188986022
    28.2222222222222 None 0.0199784436993529 -0.000626572735073611
    33.6666666666667 None 0.0167507236463156 -0.000440550710337029
    39.1111111111111 None 0.0144205913280408 -0.000326545959369654
    44.5555555555556 None 0.0126594222570918 -0.00025167403795913
    50.0 None 0.0112815362653238 -0.000199880119832415
    100.0 None 0.00564161378298943 -4.99925018743586e-5
    325.0 None 0.00173595973189465 -4.73366058776083e-6
    550.0 None 0.00102579754728657 -1.6528843659911e-6
    775.0 None 0.000727985953393782 -8.32464102161289e-7
    1000.0 None 0.000564189301453388 -4.9999925011689e-7

And it behaves sensibly in the cases where the math.* version overflows. mpmath's interval support is not quite up to the task (without some hacking that I am too lazy to do), but I am fairly sure plain mpf values will suffice, since erfcx is simply the product of two things that mpmath can compute well.
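For instance, a minimal sketch of that mpf route with extra working digits (erfcx_hi is a hypothetical helper name; the choice of 25 digits is an assumption, not a tuned value) could read:

    import mpmath

    def erfcx_hi(x, dps=25):
        # temporarily raise the decimal working precision for the product
        with mpmath.workdps(dps):
            return mpmath.exp(mpmath.mpf(x)**2) * mpmath.erfc(x)

    print(erfcx_hi(30))   # ~0.0187958888614168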

+2

This function is now available in the latest version of SciPy; see http://docs.scipy.org/doc/scipy/reference/special.html.

+1
