Python / numpy slicing issue

I have a somewhat unusual problem: I need a numpy array to behave in an unusual way, returning slices as live references into the data I sliced, not as copies. So here is an example of what I want to do:

Say we have a simple array:

a = array([1, 0, 0, 0]) 

I would like to update sequential entries in the array (moving from left to right) with the previous entry from the array, using syntax similar to this:

 a[1:] = a[0:3] 

This should produce the following result:

 a = array([1, 1, 1, 1]) 

Or something like this:

 a[1:] = 2*a[:3] # a = [1,2,4,8] 

To illustrate further, I want the following behavior:

    for i in range(len(a)):
        if i == 0:
            continue  # skip the boundary entry
        a[i] = a[i-1]

Also, I want numpy speed.

The default behavior of numpy is to take a copy of the slice, so what I actually get is the following:

 a = array([1, 1, 0, 0]) 
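For reference, this copy-like behavior is easy to reproduce (a minimal sketch; recent NumPy explicitly behaves as if the overlapping right-hand side were copied before the assignment):

```python
import numpy as np

a = np.array([1, 0, 0, 0])
a[1:] = a[0:3]   # overlapping assignment: RHS is read as if copied first
print(a)         # [1 1 0 0] -- only a[1] picks up the old a[0]
```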

I already have this array as a subclass of ndarray, so I can make additional changes to it if necessary; I just need the slice on the right-hand side to be continually updated as it updates the slice on the left-hand side.

Am I dreaming, or is this kind of magic possible?

Update: this is all because I'm trying to use Gauss-Seidel iteration to solve a linear algebra problem, more or less. This is a special case involving harmonic functions; I was trying to avoid going into it because it really isn't needed and would probably confuse things further, but here it is.

The algorithm is as follows:

    while not converged:
        for i in range(1, len(u[:,0]) - 1):      # skip boundary entries,
            for j in range(1, len(u[0,:]) - 1):  # i.e. i,j == 0 or len(u)-1
                u[i,j] = 0.25*(u[i-1,j] + u[i+1,j] + u[i,j-1] + u[i,j+1])

Right? But you can do this in two ways. Jacobi iteration updates each element from its neighbors without taking into account the updates already made during the current sweep; to do this with loops, you must copy the array and then update one array from the copy. Gauss-Seidel, however, uses the information you have already computed for the i-1 and j-1 entries, so no copy is needed: the loop essentially "knows", because the array is re-evaluated after each individual element is updated, that every time it reads an entry like u[i-1,j] or u[i,j-1], the value computed earlier in the same sweep will be there.
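To make the distinction concrete, here is a minimal sketch of one sweep of each method (the function names are my own); Jacobi reads only old values, while Gauss-Seidel consumes its own updates immediately:

```python
import numpy as np

def jacobi_sweep(u):
    """One Jacobi sweep: every interior point is updated from the OLD values."""
    new = u.copy()
    new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                              + u[1:-1, :-2] + u[1:-1, 2:])
    return new

def gauss_seidel_sweep(u):
    """One Gauss-Seidel sweep: updates are visible as soon as they are made."""
    u = u.copy()
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
    return u
```

After a single sweep on the same grid the two already differ: Gauss-Seidel's later points see the fresh values of earlier ones.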

I want to replace this slow, ugly nested loop with one beautiful clean line of code using numpy slicing:

 u[1:-1,1:-1] = 0.25*(u[:-2,1:-1] + u[2:,1:-1] + u[1:-1,:-2] + u[1:-1,2:]) 

But the result is Jacobi iteration, because when you take the slice u[:-2,1:-1], the data is copied, so the slice does not see the updates being made. Now, numpy is still looping under the hood, right? It isn't parallel; it is just a faster loop that looks like a parallel operation from Python. I want to exploit this looping behavior by having slicing return a pointer instead of a copy. Then, each time numpy's internal loop writes an element, the slice would be "updated" — or really, would just reflect whatever the update has done so far. To do this, I need the slices on both sides of the assignment to be pointers.
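One clarification worth noting: basic slices already are views that share memory with the parent array; the copy-like behavior comes from how the assignment is evaluated, not from the slicing itself. A quick check:

```python
import numpy as np

a = np.array([1, 0, 0, 0])
v = a[0:3]          # basic slicing returns a view, not a copy
a[0] = 7
print(v[0])         # 7 -- the view sees the parent's update
print(v.base is a)  # True -- v shares a's memory
```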

Anyway, maybe some really, really clever person out there has an answer, but I have pretty much resigned myself to the belief that the only answer is a loop in C.

+3
9 answers

Late answer, but this page turned up on Google, so I should probably point to the document the OP needed. Your problem is a well-understood one: when you use NumPy slice expressions, temporaries are created. Wrap your code in a quick call to weave.blitz to get rid of the temporaries and get the behavior you want.

For more information, see the weave.blitz section of the PerformancePython guide.

+4

accumulate is designed to do what you seem to need; i.e., to propagate an operation along the array. Here is an example:

    from numpy import array, add, multiply

    a = array([1, 0, 0, 0])
    a[1:] = add.accumulate(a[0:3])         # a = [1, 1, 1, 1]

    b = array([1, 1, 1, 1])
    b[1:] = multiply.accumulate(2*b[0:3])  # b = [1, 2, 4, 8]
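For what it's worth, the same accumulations can be spelled with the cumsum/cumprod shorthands, which build a fresh array before the assignment and so avoid any overlap question:

```python
import numpy as np

a = np.array([1, 0, 0, 0])
a[1:] = np.cumsum(a[:3])       # running sum of [1, 0, 0] -> a = [1, 1, 1, 1]

b = np.array([1, 1, 1, 1])
b[1:] = np.cumprod(2 * b[:3])  # running product of [2, 2, 2] -> b = [1, 2, 4, 8]
```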

Another way to do this is to explicitly specify the result array as the ufunc's output argument. Here is an example:

    c = array([2, 0, 0, 0])
    multiply(c[:3], c[:3], c[1:])  # c = [2, 4, 16, 256]

(Note: NumPy 1.13 and later buffer ufunc operands that overlap in memory, so this cascading trick no longer works on recent versions.)
+2

Just use a loop. I can't immediately think of any way to make the slice operator behave the way you describe, except maybe by subclassing the numpy array and overriding the appropriate method with some kind of Python voodoo... but the idea of a[1:] = a[0:3] copying the first value of a into the next three slots seems completely counterintuitive to me. I suspect it would easily confuse anyone else looking at your code (at least the first few times).

+1

It must be something to do with how slice assignment works. The in-place operators, however, follow your expected behavior:

    >>> a = numpy.array([1,0,0,0])
    >>> a[1:] += a[:3]
    >>> a
    array([1, 1, 1, 1])

(Note: NumPy 1.13 and later behave as if overlapping in-place operands were copied first, so on recent versions this produces array([1, 1, 0, 0]) instead.)

If your real problem already has zeros where your example does, then this solves it. Otherwise, at some added cost, set those entries to zero first, either by multiplying by zero or by assigning zero (whichever is faster).

edit: I had a different thought. You may prefer this:

 numpy.put(a,[1,2,3],a[:3]) 
+1

That's not the correct logic. I will try to explain it with letters.

Imagine an array 'abcd' with elements a, b, c, d.
Now array[1:] means the elements from position 1 (counting from 0) onward, in this case 'bcd',
and array[0:3] means the elements from position 0 up to the third element (ending at position 3-1), in this case 'abc'.

Writing something like:
array[1:] = array[0:3]

means: replace bcd with abc

To get the desired result in Python, you should instead use something like:

 a[1:] = a[0] 
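In NumPy terms this works because a[0] is a scalar and is broadcast across the left-hand slice, so there is no overlapping-array question at all:

```python
import numpy as np

a = np.array([1, 0, 0, 0])
a[1:] = a[0]   # scalar broadcast: every slot gets the (old) first value
print(a)       # [1 1 1 1]
```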
+1

Numpy must be checking whether the destination array overlaps the source array when item assignment is invoked. Fortunately, there are ways around it. First, I tried numpy.put instead:

    In [46]: a = numpy.array([1,0,0,0])
    In [47]: numpy.put(a, [1,2,3], a[0:3])
    In [48]: a
    Out[48]: array([1, 1, 1, 1])

And then, from the documentation for put, I tried using flatiters (a.flat):

    In [49]: a = numpy.array([1,0,0,0])
    In [50]: a.flat[1:] = a[0:3]
    In [51]: a
    Out[51]: array([1, 1, 1, 1])

But this still does not solve the problem you had in mind:

    In [55]: a = np.array([1,0,0,0])
    In [56]: a.flat[1:] = 2*a[0:3]
    In [57]: a
    Out[57]: array([1, 2, 0, 0])

This does not work because the multiplication is performed in full before the assignment, rather than interleaved with it, as you would want.

Numpy is designed to apply the same operation across an array, conceptually in parallel. To do something more complicated, if you cannot decompose it into functions like numpy.cumsum and numpy.cumprod, you will have to resort to something like scipy.weave or writing the function in C. (See PerformancePython for more details.) (Also, I have never used weave, so I can't guarantee it will do what you want.)

+1

You can look at np.lib.stride_tricks.

These excellent slides contain information: http://mentat.za.net/numpy/numpy_advanced_slides/

with stride_tricks starting at slide 29.
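As a small illustration of what stride_tricks can do (a sketch, not a solution to the Gauss-Seidel problem): as_strided builds overlapping windows that share memory with the original array, so writes through the parent are visible in every window:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.array([1, 2, 3, 4, 5])
s = a.strides[0]
# four overlapping length-2 windows over a; no data is copied
windows = as_strided(a, shape=(4, 2), strides=(s, s))
print(windows[0])  # [1 2]
a[1] = 99
print(windows[0])  # [1 99] -- the windows alias the original data
```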

I don't understand the question well enough to offer anything more specific, but I would probably do this in cython, or in fortran with f2py, or with weave. I prefer fortran at the moment, because by the time you add all the necessary type annotations in cython, I think it ends up looking less comprehensible than fortran.

Here is a comparison of these approaches:

www.scipy.org/PerformancePython

(I can’t post more links since I’m a new user) with an example that is similar to your case.

+1

In the end, I ran into the same problem as you. I had to resort to Jacobi iteration plus weave:

    while iter_n < max_time_steps:
        expr = ("field[1:-1, 1:-1] = (field[2:, 1:-1] "
                "+ field[:-2, 1:-1] + "
                "field[1:-1, 2:] + "
                "field[1:-1, :-2]) / 4.")
        weave.blitz(expr, check_size=0)

        # Toroidal boundary conditions
        field[:, 0] = field[:, self.flow.n_x - 2]
        field[:, self.flow.n_x - 1] = field[:, 1]

        iter_n = iter_n + 1

It works and it runs fast, but it is not Gauss-Seidel, so convergence can be touchy. The only option for true Gauss-Seidel seems to be a traditional indexed loop.
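For completeness — this is my own aside, not part of the original answer — the standard compromise that does vectorize Gauss-Seidel-style updates is red-black ordering: split the interior into two interleaved checkerboards and update each half fully vectorized, so the second half-sweep sees the first half's fresh values. A rough sketch:

```python
import numpy as np

def red_black_sweep(u):
    """One red-black Gauss-Seidel sweep for the 5-point Laplace stencil."""
    u = u.copy()
    i, j = np.indices(u.shape)
    interior = (i > 0) & (i < u.shape[0] - 1) & (j > 0) & (j < u.shape[1] - 1)
    for parity in (0, 1):  # 'red' points first, then 'black'
        # recompute the neighbor average from the partially updated grid
        avg = np.zeros_like(u)
        avg[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                  + u[1:-1, :-2] + u[1:-1, 2:])
        mask = interior & ((i + j) % 2 == parity)
        u[mask] = avg[mask]  # black points see the updated red values
    return u
```

Each half-sweep is a plain vectorized numpy expression, so you keep numpy speed while still propagating updates within a sweep.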

+1

I would suggest cython instead of a loop in C. There may be some fancy numpy way to get your example to work using lots of intermediate steps... but since you already know how to write it in C, just write it quickly as a cython function and let the cython magic make the rest of the work convenient for you.

0
