How to calculate a logistic sigmoid function in Python?

This is the logistic sigmoid function:

    F(x) = 1 / (1 + e^(-x))

I know x. How can I calculate F(x) in Python now?

Say x = 0.458.

F(x) = ?

+118
python
Oct 21 '10
7 answers

This should do it:

    import math

    def sigmoid(x):
        return 1 / (1 + math.exp(-x))

And now you can check it by calling:

    >>> sigmoid(0.458)
    0.61253961344091512

Update: Note that the above was intended primarily as a direct, one-to-one translation of the given expression into Python code. It has not been tested or verified as a numerically sound implementation. If you know you need a very robust implementation, I'm sure there are others who have actually thought this problem through.
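As an illustration of that caveat (my own example, not part of the original answer): for very negative inputs, math.exp(-x) overflows a double and this direct translation raises OverflowError:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

print(sigmoid(0.458))  # fine: ~0.6125

try:
    sigmoid(-1000)     # math.exp(1000) overflows a double
except OverflowError as e:
    print("overflow:", e)
```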

+175
21 Oct '10 at 8:37

It is also available in scipy: http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.logistic.html

    In [1]: from scipy.stats import logistic

    In [2]: logistic.cdf(0.458)
    Out[2]: 0.61253961344091512

which is just an expensive wrapper (it also lets you scale and translate the logistic function) around another scipy function:

    In [3]: from scipy.special import expit

    In [4]: expit(0.458)
    Out[4]: 0.61253961344091512
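A quick sketch (mine) of what the extra parameters buy you: logistic.cdf shifts by loc and stretches by scale before applying the sigmoid, so it agrees with expit applied to (x - loc) / scale:

```python
from scipy.stats import logistic
from scipy.special import expit

x, loc, scale = 0.458, 1.0, 2.0

# logistic.cdf(x, loc, scale) is the sigmoid of the standardized input
a = logistic.cdf(x, loc=loc, scale=scale)
b = expit((x - loc) / scale)
print(a, b)  # identical up to floating-point rounding
```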

If you are concerned about performance, keep reading; otherwise just use expit.

Some benchmarking:

    In [5]: def sigmoid(x):
       ....:     return 1 / (1 + math.exp(-x))
       ....:

    In [6]: %timeit -r 1 sigmoid(0.458)
    1000000 loops, best of 1: 371 ns per loop

    In [7]: %timeit -r 1 logistic.cdf(0.458)
    10000 loops, best of 1: 72.2 µs per loop

    In [8]: %timeit -r 1 expit(0.458)
    100000 loops, best of 1: 2.98 µs per loop

As expected, logistic.cdf is (much) slower than expit. expit is still slower than the pure-Python sigmoid function when called with a single value, because it is a universal function written in C ( http://docs.scipy.org/doc/numpy/reference/ufuncs.html ) and thus has call overhead. That overhead outweighs the speedup expit gains from being compiled when it is called with a single value. But it becomes negligible for large arrays:

    In [9]: import numpy as np

    In [10]: x = np.random.random(1000000)

    In [11]: def sigmoid_array(x):
       ....:     return 1 / (1 + np.exp(-x))
       ....:

(You will notice the slight change from math.exp to np.exp; the former does not support arrays, but is much faster if you have only a single value to compute.)

    In [12]: %timeit -r 1 -n 100 sigmoid_array(x)
    100 loops, best of 1: 34.3 ms per loop

    In [13]: %timeit -r 1 -n 100 expit(x)
    100 loops, best of 1: 31 ms per loop

But when you really need performance, a common practice is to keep a precomputed table of sigmoid values in RAM, trading some precision and memory for speed (for example: http://radimrehurek.com/2013/09/word2vec-in-python-part-two-optimizing/ )
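A minimal sketch of that lookup-table idea (the table size and clamping range here are arbitrary choices of mine, not taken from the linked post):

```python
import numpy as np

MAX_X = 6.0        # clamp |x| beyond this; sigmoid(±6) is already ~0.0025 / ~0.9975
TABLE_SIZE = 1000  # more entries -> more precision, more memory

# precompute sigmoid over [-MAX_X, MAX_X] once
_table = 1 / (1 + np.exp(-np.linspace(-MAX_X, MAX_X, TABLE_SIZE)))

def sigmoid_lookup(x):
    if x <= -MAX_X:
        return 0.0
    if x >= MAX_X:
        return 1.0
    # map x linearly onto a table index
    i = int((x + MAX_X) * (TABLE_SIZE - 1) / (2 * MAX_X))
    return _table[i]

print(sigmoid_lookup(0.458))  # close to 0.6125, to table precision
```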

Also note that the expit implementation is numerically stable since version 0.14.0: https://github.com/scipy/scipy/issues/3385

+188
Aug 06 '14 at 15:32

Here is how to implement the logistic sigmoid in a numerically stable way (as described here ):

    from math import exp

    def sigmoid(x):
        "Numerically stable sigmoid function."
        if x >= 0:
            z = exp(-x)
            return 1 / (1 + z)
        else:
            z = exp(x)
            return z / (1 + z)

Or maybe this is more accurate:

    import math
    import numpy as np

    def sigmoid(x):
        return math.exp(-np.logaddexp(0, -x))

Internally, it handles the same two cases as above, but then uses log1p.
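A quick sanity check (my own, under the definitions above) that both stable variants agree with the naive formula on moderate inputs and do not overflow on extreme ones:

```python
import math
import numpy as np

def sigmoid_stable(x):
    # exp is only ever called on a non-positive argument, so it cannot overflow
    if x >= 0:
        z = math.exp(-x)
        return 1 / (1 + z)
    z = math.exp(x)
    return z / (1 + z)

def sigmoid_logaddexp(x):
    # logaddexp(0, -x) = log(1 + exp(-x)), computed stably
    return math.exp(-np.logaddexp(0, -x))

print(sigmoid_stable(0.458), sigmoid_logaddexp(0.458))  # both ~0.6125
print(sigmoid_stable(-1000))  # 0.0, no OverflowError
```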

In general, the multinomial logistic sigmoid is:

    def nat_to_exp(q):
        max_q = max(0.0, np.max(q))
        rebased_q = q - max_q
        return np.exp(rebased_q - np.logaddexp(-max_q, np.logaddexp.reduce(rebased_q)))

(However, logaddexp.reduce may be more accurate.)
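As a check (mine, not from the answer): with a single natural parameter, nat_to_exp reduces to the ordinary binary sigmoid, and with several parameters it yields probabilities that, together with an implicit "zero" logit, sum to 1:

```python
import numpy as np

def nat_to_exp(q):
    max_q = max(0.0, np.max(q))
    rebased_q = q - max_q
    return np.exp(rebased_q - np.logaddexp(-max_q, np.logaddexp.reduce(rebased_q)))

print(nat_to_exp(np.array([0.458])))      # ~[0.6125], same as sigmoid(0.458)

p = nat_to_exp(np.array([1.0, 2.0, 3.0]))
print(p.sum())                            # < 1; the remainder is the implicit class
```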

+36
Apr 25 '15 at 10:11

Another way:

    >>> import math
    >>> def sigmoid(x):
    ...     return 1 / (1 + (math.e ** -x))
    ...
    >>> sigmoid(0.458)
+7
Oct 21 '10 at 9:02

I feel that many people may be interested in free parameters to alter the shape of the sigmoid function. Second, for many applications you want a mirrored sigmoid function. Third, you may want a simple normalization, for example so that the output values lie between 0 and 1.

Try:

    def normalized_sigmoid_fkt(a, b, x):
        '''
        Returns array of a horizontally mirrored, normalized sigmoid function
        with output between 0 and 1.
        Function parameters: a = center; b = width
        '''
        s = 1 / (1 + np.exp(b * (x - a)))
        return 1 * (s - min(s)) / (max(s) - min(s))  # normalize function to 0-1

And plot and compare:

    import matplotlib.pyplot as plt

    def draw_function_on_2x2_grid(x):
        fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2)
        plt.subplots_adjust(wspace=.5)
        plt.subplots_adjust(hspace=.5)
        ax1.plot(x, normalized_sigmoid_fkt(.5, 18, x))
        ax1.set_title('1')
        ax2.plot(x, normalized_sigmoid_fkt(0.518, 10.549, x))
        ax2.set_title('2')
        ax3.plot(x, normalized_sigmoid_fkt(.7, 11, x))
        ax3.set_title('3')
        ax4.plot(x, normalized_sigmoid_fkt(.2, 14, x))
        ax4.set_title('4')
        plt.suptitle('Different normalized (sigmoid) function', size=10)
        return fig

Finally:

    x = np.linspace(0, 1, 100)
    Travel_function = draw_function_on_2x2_grid(x)

Sigmoid function graph

+6
Jun 04 '16 at 11:46

Another way, by transforming the tanh function:

    import math
    sigmoid = lambda x: .5 * (math.tanh(.5 * x) + 1)
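This works because of the identity tanh(x/2) = (1 - e^(-x)) / (1 + e^(-x)), from which .5 * (tanh(x/2) + 1) = 1 / (1 + e^(-x)). A quick numeric check (my own):

```python
import math

sigmoid_tanh = lambda x: .5 * (math.tanh(.5 * x) + 1)
sigmoid_exp = lambda x: 1 / (1 + math.exp(-x))

# both formulations agree to floating-point precision
print(sigmoid_tanh(0.458))  # same value as 1 / (1 + e^-0.458), ~0.6125
```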
+4
Apr 6 '16 at 2:33

Good answer from @unwind. However, it cannot handle extremely negative numbers (it throws an OverflowError).

My improvement:

    import math

    def sigmoid(x):
        try:
            res = 1 / (1 + math.exp(-x))
        except OverflowError:
            res = 0.0
        return res
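One detail worth noting (my observation): only the negative extreme needs the guard, because for large positive x, math.exp(-x) simply underflows to 0.0 and the result is 1.0 without any exception:

```python
import math

def sigmoid(x):
    try:
        res = 1 / (1 + math.exp(-x))
    except OverflowError:
        # math.exp(-x) overflowed, so x is hugely negative and sigmoid(x) ~ 0
        res = 0.0
    return res

print(sigmoid(-1000))  # 0.0 via the except branch
print(sigmoid(1000))   # 1.0; exp(-1000) underflows to 0.0, no exception
```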
+2
Dec 25 '15 at 9:45
