Gaussian image filtering with NaN in Python

From a list of 2D coordinates and a third variable (speed) I created a 2D numpy array covering the entire sampling area. The idea is to create an image in which each pixel contains the mean speed of the points falling inside it, and then to smooth this image with a Gaussian filter.

The problem is that the area is not sampled evenly. Because of that, I have several pixels without information (NaN) in the middle of the image. When I try to run the array through a Gaussian filter, the NaNs spread and ruin the whole image.

I need to filter this image, but discard all pixels without information. In other words, if a pixel does not contain information, it should not be considered for filtering.

Here is an example of my code for averaging:

    Mean_V = np.zeros([len(x_bins), len(y_bins)])

    for i, x_bin in enumerate(x_bins[:-1]):
        bin_x = (x > x_bins[i]) & (x <= x_bins[i+1])
        for j, y_bin in enumerate(y_bins[:-1]):
            bin_xy = (y[bin_x] > y_bins[j]) & (y[bin_x] <= y_bins[j+1])
            if np.any(bin_xy):
                Mean_V[i, j] = np.mean(V[bin_x][bin_xy])
            else:
                Mean_V[i, j] = np.nan
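
As a side note, the same per-pixel averages can be computed more compactly with scipy.stats.binned_statistic_2d (a sketch assuming the same x, y, V, x_bins and y_bins arrays as above; the bin-edge handling at the boundaries differs slightly from the loop version). Empty bins come out as NaN automatically:

    from scipy.stats import binned_statistic_2d

    # mean speed per pixel; bins with no samples are filled with NaN
    Mean_V, x_edges, y_edges, _ = binned_statistic_2d(
        x, y, V, statistic='mean', bins=[x_bins, y_bins])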

EDIT:

Browsing the Internet I ended up back at this question, which I asked in 2013. A solution to this problem can be found in the Astropy library:

http://docs.astropy.org/en/stable/convolution/

Astropy's convolution replaces the NaN pixels with a kernel-weighted interpolation of their neighbors.
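
For reference, a minimal sketch of that approach (assuming Mean_V is the binned-average image from the code above and a kernel width of 2 pixels; astropy's convolve() uses nan_treatment='interpolate' by default):

    from astropy.convolution import Gaussian2DKernel, convolve

    kernel = Gaussian2DKernel(x_stddev=2.0)   # Gaussian kernel with sigma = 2 pixels
    Filtered_V = convolve(Mean_V, kernel)     # NaN pixels are interpolated, not propagated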

Thanks guys!

+11
python numpy matplotlib image-processing imagefilter
3 answers

in words:

A Gaussian filter that ignores NaN in a given array U can be easily obtained by applying a standard Gaussian filter to two auxiliary arrays V and W and taking their ratio to get the result Z.

Here, V is a copy of the original U with the NaNs replaced by zeros, and W is an array of ones with zeros at the positions of the NaNs in the original U.

The idea is that replacing NaN with zeros introduces an error in the filtered array, which, however, can be compensated by applying the same Gaussian filter to another auxiliary array and combining the two.

in Python:

    import numpy as np
    import scipy.ndimage

    sigma = 2.0

    U = np.random.randn(10, 10)                         # random array...
    U[U > 2] = np.nan                                   # ...with NaNs for testing

    V = U.copy()
    V[np.isnan(U)] = 0                                  # NaNs replaced by zeros
    VV = scipy.ndimage.gaussian_filter(V, sigma=sigma)

    W = np.ones_like(U)
    W[np.isnan(U)] = 0                                  # zeros mark the NaN positions
    WW = scipy.ndimage.gaussian_filter(W, sigma=sigma)

    Z = VV / WW                                         # NaN-ignoring filtered result
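
For reuse, the above can be wrapped in a small helper function (a sketch, not part of the original answer; the keep_nans flag is an added option that puts NaN back where the input had no data):

    import numpy as np
    import scipy.ndimage

    def nan_gaussian_filter(U, sigma, keep_nans=False):
        """Gaussian-filter U while ignoring NaN entries."""
        nan_mask = np.isnan(U)
        V = np.where(nan_mask, 0.0, U)     # data with NaNs zeroed
        W = np.where(nan_mask, 0.0, 1.0)   # weights: 0 at NaN positions, 1 elsewhere
        VV = scipy.ndimage.gaussian_filter(V, sigma=sigma)
        WW = scipy.ndimage.gaussian_filter(W, sigma=sigma)
        Z = VV / WW
        if keep_nans:
            Z[nan_mask] = np.nan           # optionally restore the gaps
        return Z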

in numbers:

Here, the Gaussian filter coefficients are set to [0.25, 0.50, 0.25] for demonstration purposes; they sum to one, 0.25 + 0.50 + 0.25 = 1, without loss of generality.

After replacing the NaN with a zero and applying the Gaussian filter (see below), it is clear that the zero introduces an error: because of the "missing" data, the coefficients 0.25 + 0.50 = 0.75 no longer sum to one and therefore underestimate the "true" value.

However, this can be compensated by using the second auxiliary array (see WW below), which after filtering with the same Gaussian simply contains the sum of the coefficients.

Therefore, dividing the two filtered auxiliary arrays rescales the coefficients so that they sum to one, while the NaN positions are ignored.

    array U       1      2    NaN      1      2
    auxiliary V   1      2      0      1      2
    auxiliary W   1      1      0      1      1
    position      a      b      c      d      e

    filtered VV_b = 0.25*V_a + 0.50*V_b + 0.25*V_c
                  = 0.25*1   + 0.50*2   + 0
                  = 1.25

    filtered WW_b = 0.25*W_a + 0.50*W_b + 0.25*W_c
                  = 0.25*1   + 0.50*1   + 0
                  = 0.75

    ratio     Z_b = VV_b / WW_b
                  = (0.25*1 + 0.50*2) / (0.25*1 + 0.50*1)
                  = 0.333*1 + 0.666*2
                  = 1.666
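
The same arithmetic can be checked in a few lines (a sketch using numpy's 1-D convolution with the demonstration kernel [0.25, 0.50, 0.25]):

    import numpy as np

    U = np.array([1.0, 2.0, np.nan, 1.0, 2.0])
    kernel = np.array([0.25, 0.50, 0.25])

    V = np.where(np.isnan(U), 0.0, U)      # NaNs -> 0
    W = np.where(np.isnan(U), 0.0, 1.0)    # 0 marks the NaN positions

    VV = np.convolve(V, kernel, mode='same')
    WW = np.convolve(W, kernel, mode='same')
    print(VV[1], WW[1], VV[1] / WW[1])     # 1.25 0.75 1.666...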
+15

How about replacing Z = VV / WW with Z = VV / (WW + epsilon), with epsilon = 0.000001, to automatically handle pixels that have no valid observations anywhere in their neighborhood?
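
In code that would look like the sketch below (reusing VV and WW from the answer above; note that epsilon only prevents division by zero, so pixels with no valid neighbors come out near zero unless they are masked again):

    epsilon = 1e-6
    Z = VV / (WW + epsilon)    # avoids 0/0 where a pixel has no valid neighbors
    Z[WW < epsilon] = np.nan   # (optional) mark those pixels as missing again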

0

The simplest thing is to turn the NaNs into zeros via nan_to_num. Whether that is meaningful for your data is a separate issue.
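
For example (a sketch assuming Mean_V and sigma = 2.0 as above; this pulls the filtered values toward zero near the gaps, unlike the weighted approach in the accepted answer):

    import numpy as np
    import scipy.ndimage

    Filtered_V = scipy.ndimage.gaussian_filter(np.nan_to_num(Mean_V), sigma=2.0)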

-4
