I have several (about 1000) 3D arrays of shape (1000, 800, 1024) that I want to study. I need to calculate the mean along axis 0, but before I can do that, I need to roll the data along axis 2 until it is "in the right place".
That sounds weird, so I'll try to explain. Each 1D subarray of shape (1024,) is data from a physical ring buffer. The ring buffer is read out at different positions, which I know. So I have several pos arrays of shape (1000, 800), telling me where the ring buffer stopped, and matching 3D data arrays of shape (1000, 800, 1024) that I need to roll according to pos.
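For illustration, putting a single ring-buffer readout back into acquisition order might look like this (a toy sketch with a buffer of length 8 instead of 1024; the values of buf and pos here are made up):

```python
import numpy as np

# Toy ring buffer: writing wrapped around, so the oldest sample sits at index 3
buf = np.array([5, 6, 7, 0, 1, 2, 3, 4])
pos = 3  # where the ring buffer stopped / where the oldest sample begins

# Rolling left by pos restores acquisition order
aligned = np.roll(buf, -pos)
print(aligned)  # [0 1 2 3 4 5 6 7]
```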
Only after rolling do the 3D arrays make sense to me, and I can begin to analyze them. In C you could write code that does this quite simply, so I wonder whether I can somehow tell numpy's mean() or sum() methods to start at a different index and wrap around at the end of each 1D subarray.
I am currently doing the following:
rolled = np.zeros_like(data)
for i, j in np.ndindex(pos.shape):
    rolled[i, j] = np.roll(data[i, j], -pos[i, j])
It takes ~ 60 seconds. And then I do, for example:
m = rolled.mean(axis=0)
s = rolled.std(axis=0)
It only takes 15 seconds.
My point is that creating rolled costs a lot of memory and time (well, I could save the memory by writing the rolled result back into data), and there is surely a way (in C) to implement the averaging and the rolling in a single loop, which would save a lot of time. My question is: is there a way to do something like this with numpy?
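For what it's worth, one pure-numpy possibility (a sketch with shrunken shapes, not a benchmarked answer) is to build the per-row shifted indices by broadcasting and gather them with np.take_along_axis, which avoids the explicit Python loop:

```python
import numpy as np

rng = np.random.default_rng(0)
n0, n1, n2 = 4, 3, 8          # stand-ins for 1000, 800, 1024
data = rng.normal(size=(n0, n1, n2))
pos = rng.integers(0, n2, size=(n0, n1))

# For each (i, j), index k maps to (pos[i, j] + k) % n2 along the last axis;
# broadcasting builds the full (n0, n1, n2) index array in one go.
idx = (pos[:, :, None] + np.arange(n2)) % n2
rolled = np.take_along_axis(data, idx, axis=2)

m = rolled.mean(axis=0)
s = rolled.std(axis=0)

# Spot check against the per-row np.roll version
assert np.allclose(rolled[0, 0], np.roll(data[0, 0], -pos[0, 0]))
```

The index array costs as much memory as data itself at full scale, so this trades the loop overhead for memory traffic rather than eliminating the intermediate copy.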
python numpy
Dominik Neise