Speed up sampling of a kernel density estimate

Here's an MWE of a much larger piece of code that I use. Essentially, it performs a Monte Carlo integration of a KDE (kernel density estimation) for all values located below a certain threshold (the integration method was suggested in this related question, BTW: Integration of 2D kernel density estimation).

    import numpy as np
    from scipy import stats
    import time

    # Generate some random two-dimensional data:
    def measure(n):
        "Measurement model, return two coupled measurements."
        m1 = np.random.normal(size=n)
        m2 = np.random.normal(scale=0.5, size=n)
        return m1+m2, m1-m2

    # Get data.
    m1, m2 = measure(20000)

    # Define limits.
    xmin = m1.min()
    xmax = m1.max()
    ymin = m2.min()
    ymax = m2.max()

    # Perform a kernel density estimate on the data.
    x, y = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
    values = np.vstack([m1, m2])
    kernel = stats.gaussian_kde(values)

    # Define point below which to integrate the kernel.
    x1, y1 = 0.5, 0.5

    # Get kernel value for this point.
    tik = time.time()
    iso = kernel((x1, y1))
    print 'iso: ', time.time()-tik

    # Sample from KDE distribution (Monte Carlo process).
    tik = time.time()
    sample = kernel.resample(size=1000)
    print 'resample: ', time.time()-tik

    # Filter the sample leaving only values for which
    # the kernel evaluates to less than what it does for
    # the (x1, y1) point defined above.
    tik = time.time()
    insample = kernel(sample) < iso
    print 'filter/sample: ', time.time()-tik

    # Integrate for all values below iso.
    tik = time.time()
    integral = insample.sum() / float(insample.shape[0])
    print 'integral: ', time.time()-tik

The result looks something like this:

    iso:  0.00259208679199
    resample:  0.000817060470581
    filter/sample:  2.10829401016
    integral:  4.2200088501e-05

which clearly shows that the filter/sample call is consuming almost all of the time the code takes to run. I have to run this block of code iteratively several thousand times, so it can get quite time-consuming.

Is there a way to speed up the filtering / sampling process?


Add

Here's a slightly more realistic MWE of my actual code, with Ophion's multi-threading solution implemented in it:

    import numpy as np
    from scipy import stats
    from multiprocessing import Pool

    def kde_integration(m_list):
        m1, m2 = [], []
        for item in m_list:
            # Color data.
            m1.append(item[0])
            # Magnitude data.
            m2.append(item[1])

        # Define limits.
        xmin, xmax = min(m1), max(m1)
        ymin, ymax = min(m2), max(m2)

        # Perform a kernel density estimate on the data:
        x, y = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
        values = np.vstack([m1, m2])
        kernel = stats.gaussian_kde(values)

        out_list = []
        for point in m_list:
            # Compute the point below which to integrate.
            iso = kernel((point[0], point[1]))

            # Sample KDE distribution.
            sample = kernel.resample(size=1000)

            # Create definition.
            def calc_kernel(samp):
                return kernel(samp)

            # Choose number of cores and split input array.
            cores = 4
            torun = np.array_split(sample, cores, axis=1)

            # Calculate.
            pool = Pool(processes=cores)
            results = pool.map(calc_kernel, torun)

            # Reintegrate and calculate results.
            insample_mp = np.concatenate(results) < iso

            # Integrate for all values below iso.
            integral = insample_mp.sum() / float(insample_mp.shape[0])
            out_list.append(integral)

        return out_list

    # Generate some random two-dimensional data:
    def measure(n):
        "Measurement model, return two coupled measurements."
        m1 = np.random.normal(size=n)
        m2 = np.random.normal(scale=0.5, size=n)
        return m1+m2, m1-m2

    # Create list to pass.
    m_list = []
    for i in range(60):
        m1, m2 = measure(5)
        m_list.append(m1.tolist())
        m_list.append(m2.tolist())

    # Call KDE integration function.
    print 'Integral result: ', kde_integration(m_list)

The solution provided by Ophion works fine with the original code I presented, but fails with the following error in this version:

    Integral result: Exception in thread Thread-3:
    Traceback (most recent call last):
      File "/usr/lib/python2.7/threading.py", line 551, in __bootstrap_inner
        self.run()
      File "/usr/lib/python2.7/threading.py", line 504, in run
        self.__target(*self.__args, **self.__kwargs)
      File "/usr/lib/python2.7/multiprocessing/pool.py", line 319, in _handle_tasks
        put(task)
    PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed

I tried moving the calc_kernel function around, since one of the answers in the question Multiprocessing: How to use Pool.map on a function defined in a class? states that "the function that you give map() must be accessible through an import of your module"; but I still can't get this code to work.
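For reference, here is a minimal sketch of what I understood that advice to mean: calc_kernel moved to module level and the kernel passed to it explicitly (assuming a gaussian_kde instance can itself be pickled). The __main__ block is only illustrative data, not my actual code:

    import numpy as np
    from scipy import stats
    from multiprocessing import Pool

    # Module-level worker, so multiprocessing can pickle it by name.
    # It takes a (kernel, chunk) tuple; the gaussian_kde instance is
    # passed explicitly instead of being captured by a nested function.
    def calc_kernel(args):
        kernel, samp = args
        return kernel(samp)

    if __name__ == '__main__':
        # Illustrative data only.
        m1 = np.random.normal(size=1000)
        m2 = np.random.normal(scale=0.5, size=1000)
        kernel = stats.gaussian_kde(np.vstack([m1, m2]))
        iso = kernel((0.5, 0.5))
        sample = kernel.resample(size=1000)

        cores = 4
        torun = np.array_split(sample, cores, axis=1)
        pool = Pool(processes=cores)
        results = pool.map(calc_kernel, [(kernel, chunk) for chunk in torun])
        pool.close()
        pool.join()

        insample_mp = np.concatenate(results) < iso
        print 'Integral:', insample_mp.sum() / float(insample_mp.shape[0])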

Any help would be greatly appreciated.


Add 2

Following Ophion's suggestion to remove the calc_kernel function and simply using:

    results = pool.map(kernel, torun)

gets rid of the PicklingError, but now I find that if I create the initial m_list with anything more than roughly 62-63 items, I get this error:

    Traceback (most recent call last):
      File "~/gauss_kde_temp.py", line 67, in <module>
        print 'Integral result: ', kde_integration(m_list)
      File "~/gauss_kde_temp.py", line 38, in kde_integration
        pool = Pool(processes=cores)
      File "/usr/lib/python2.7/multiprocessing/__init__.py", line 232, in Pool
        return Pool(processes, initializer, initargs, maxtasksperchild)
      File "/usr/lib/python2.7/multiprocessing/pool.py", line 161, in __init__
        self._result_handler.start()
      File "/usr/lib/python2.7/threading.py", line 494, in start
        _start_new_thread(self.__bootstrap, ())
    thread.error: can't start new thread

Since my actual list in the real implementation of this code can have up to 2000 elements, this problem makes the code unusable. Line 38 is:

    pool = Pool(processes=cores)

so obviously this has something to do with the number of cores I'm using?

This question "Unable to start a new thread error" in Python suggests using:

    threading.active_count()

to check the number of threads I have running when that error occurs. I checked, and it always crashes when it reaches 374 threads. How can I code around this problem?
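My guess is that this happens because a new Pool is created on every iteration of the loop over m_list and never shut down, so worker processes and their handler threads accumulate. A rough, untested sketch of the workaround I am considering (it reuses the cores, torun, kernel and iso variables from the loop above):

    from multiprocessing import Pool

    # Sketch only: release each Pool's worker processes (and their
    # handler threads) before the next iteration; alternatively,
    # create a single Pool once, outside the loop, and reuse it.
    pool = Pool(processes=cores)
    try:
        results = pool.map(kernel, torun)
    finally:
        pool.close()  # stop accepting new tasks
        pool.join()   # wait for the workers to shut down

    insample_mp = np.concatenate(results) < iso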


Here's a new question dealing with this last issue: Thread error: can't start new thread.

Tags: performance, python, numpy, montecarlo
2 answers

Probably the easiest way to speed this up is to parallelize kernel(sample):

Taking this piece of code:

    tik = time.time()
    insample = kernel(sample) < iso
    print 'filter/sample: ', time.time()-tik
    #filter/sample: 1.94065904617

Modify this to use multiprocessing:

    from multiprocessing import Pool
    tik = time.time()

    #Create definition.
    def calc_kernel(samp):
        return kernel(samp)

    #Choose number of cores and split input array.
    cores = 4
    torun = np.array_split(sample, cores, axis=1)

    #Calculate
    pool = Pool(processes=cores)
    results = pool.map(calc_kernel, torun)

    #Reintegrate and calculate results
    insample_mp = np.concatenate(results) < iso

    print 'multiprocessing filter/sample: ', time.time()-tik
    #multiprocessing filter/sample: 0.496874094009

Double check that it returns the same answer:

    print np.all(insample==insample_mp)
    #True

A 3.9x speedup on 4 cores. I'm not sure what you are running this on, but beyond about 6 processors the size of your input array is not large enough to yield significant gains. For example, using 20 processors it is only about 5.8x faster.


A claim in the comments section of this article (linked below):

"SciPys gaussian_kde does not use FFT, while there is an implementation of statsmodels that does"

...which is a possible cause of the poor performance observed. It discusses the speed improvement gained by using FFT. See @jseabold's answer.

http://slendrmeans.wordpress.com/2012/05/01/will-it-python-machine-learning-for-hackers-chapter-2-part-1-summary-stats-and-density-estimators/
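
As far as I can tell, the FFT-based estimator referred to is statsmodels' KDEUnivariate with fft=True, which is one-dimensional and therefore not a drop-in replacement for the 2D gaussian_kde used in the question; a minimal sketch under that assumption:

    import numpy as np
    import statsmodels.api as sm

    data = np.random.normal(size=20000)

    # Univariate Gaussian KDE evaluated with an FFT-based binned estimator.
    kde = sm.nonparametric.KDEUnivariate(data)
    kde.fit(kernel='gau', fft=True)

    # kde.support and kde.density hold the estimated density on a grid.
    print kde.support[:5], kde.density[:5]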

Disclaimer: I have no experience with statsmodels or scipy.

