Avoiding underflow when computing joint probabilities with NumPy

I need to estimate the joint probability of a set of independent random variables in a simple setup. I currently have an array of 100 probabilities (one per variable), and I would like to compute their joint probability (the product) without running into floating-point underflow. Any ideas how to achieve this in NumPy, if it is possible at all?

Could someone also explain the role of the NumPy function np.logaddexp here? I have a feeling it could help in this situation.

1 answer

Working in log space lets you represent a vastly wider range of values (at slightly reduced precision): instead of storing and manipulating the probabilities themselves, you store and manipulate their logarithms.

    e1, e2 = np.log(p1), np.log(p2)  # convert p1, p2 to log form
    e1e2 = e1 + e2                   # e1e2 is now np.log(p1 * p2)
    e3 = np.logaddexp(e1, e2)        # e3 is np.log(p1 + p2)

To port your code, replace ** with *, replace * with +, and replace + with np.logaddexp; then convert back with np.exp at the end.
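Applied to the original problem of 100 probabilities, the difference is stark: the direct product underflows to zero, while the log-space sum stays finite. A minimal sketch (the array values here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# 100 small probabilities, hypothetical data for illustration
p = rng.uniform(1e-8, 1e-4, size=100)

naive = np.prod(p)             # underflows to 0.0 for values this small
log_joint = np.sum(np.log(p))  # log of the joint probability, still finite
```

Note that converting back with np.exp(log_joint) would also underflow here; when the joint probability itself is below the representable range, report the log-probability instead.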

A standard 64-bit double has a smallest positive normal value of about 2.2e-308. Storing logarithms instead pushes the smallest representable probability down to roughly exp(-1.7e308), i.e. on the order of 1e-(7.8e307).
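Since np.logaddexp is a ufunc, its reduce method computes log(p1 + p2 + ...) over a whole array in one call, which is useful when the individual probabilities are far below the normal double range. A small sketch with made-up log-probabilities:

```python
import numpy as np

# hypothetical log-probabilities; np.exp of any one of these underflows to 0.0
log_p = np.array([-1000.0, -1001.0, -1002.0])

# log(exp(-1000) + exp(-1001) + exp(-1002)), computed stably in log space
log_total = np.logaddexp.reduce(log_p)

# exponentiating first would lose everything: the naive sum is exactly 0.0
naive_sum = np.exp(log_p).sum()
```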
