We have a fairly complicated image-processing script written in Python that uses PIL and numpy. One of its stages works with very sensitive multichannel gradient maps, which are essentially lookup tables. Once a map is created, it is saved out at several different lower resolutions. When this happens, the green channel, which holds a gradient running left to right, suddenly loses precision. It should step down by 1 value (out of 255) every 50 pixels or so; instead it starts stepping down by 2 values every 100 pixels. This causes enormous problems, and I cannot understand why PIL does this. I do see transitions of 1 in other parts of the map, so I don't think it is simply losing one bit of precision. I also noticed that on another channel the whole map seemed to be shifted by 1 value. The whole thing comes out inaccurate after scaling, even when using the NEAREST filter.
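For reference, here is a minimal sketch of the kind of map described above; the 512x64 size and RGBA layout are my assumptions, not from the original script:

import numpy as np

# Hypothetical stand-in for the gradient lookup table: an RGBA map whose
# green channel ramps down left to right, losing 1 value (out of 255)
# every 50 pixels.
width, height = 512, 64
imageIn = np.zeros((height, width, 4), dtype=np.float64)
imageIn[..., 1] = 255 - np.arange(width) // 50  # green steps down by 1 per 50 px
imageIn[..., 3] = 255                           # fully opaque alpha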
We create the full-size image from our numpy array with the following:
from PIL import Image
import numpy as np

# Note: astype(np.uint8) truncates toward zero rather than rounding.
image = Image.fromarray(imageIn.astype(np.uint8))
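One quick sanity check (just a diagnostic sketch, not part of the original pipeline) is to convert straight back and confirm the full-size array survives the PIL round trip:

# The conversion to PIL and back should be lossless at full size,
# so this assert should pass.
roundtrip = np.asarray(image)
assert np.array_equal(roundtrip, imageIn.astype(np.uint8))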
Then we scale it:
new_image = image.resize(new_size, scaleFilter)
The resize is always by an exact factor of two, and I have tried every available resampling filter.
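Since NEAREST should only copy existing pixels, one way to verify the resize (a diagnostic sketch; the exact sampling offsets are PIL's choice) is to check that no new green values appear in the scaled image:

from PIL import Image
import numpy as np

# With NEAREST, every output pixel is a copy of some input pixel, so the
# set of green values after the resize should be a subset of the originals.
w, h = image.size
small = np.asarray(image.resize((w // 2, h // 2), Image.NEAREST))
assert set(np.unique(small[..., 1])) <= set(np.unique(np.asarray(image)[..., 1]))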
Then we save it as a PNG as follows:
new_image.save(file_name, 'PNG')
We save both the large image (immediately after step 1) and the scaled one with the same save call, and the large one comes out fine. After the resize, we have the problem on the green channel. Any help would be great!
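To isolate which step changes the data, here is a diagnostic sketch (using the same variables as above; not part of our pipeline) that reloads the scaled PNG and diffs it against the in-memory image it was saved from. PNG is lossless, so a non-zero result would implicate the save step rather than the resize:

from PIL import Image
import numpy as np

# Compare the file on disk with the in-memory scaled image.
reloaded = np.asarray(Image.open(file_name)).astype(int)
in_memory = np.asarray(new_image).astype(int)
print("max per-pixel difference:", np.abs(reloaded - in_memory).max())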
EDIT:
Now it looks like this is a problem in SciPy as well. The following still causes the problem:
from scipy import misc

# Nearest-neighbour resize and save using SciPy instead of PIL
new_array = misc.imresize(imageIn, (x_size, y_size, 4), interp='nearest')
misc.imsave(file_name, new_array)
I do not understand how I can even get distortion with nearest-neighbour. I allocate this array as float64, so there could be rounding issues in my code, but nearest-neighbour sampling should only ever copy existing values.
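As a workaround sketch that sidesteps both PIL and SciPy resampling entirely (assuming the reductions really are exact factors of two), plain slicing performs a nearest-neighbour decimation and cannot introduce values that were not already in the array:

import numpy as np
from scipy import misc

# Factor-of-two nearest-neighbour downsample by slicing: every output
# pixel is literally one of the input pixels, rounded once at the end.
new_array = np.round(imageIn[::2, ::2]).astype(np.uint8)
misc.imsave(file_name, new_array)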
EDIT #2:
This appears to be an OSX problem! The images look correct when opened in Adobe After Effects, and checking them with imagemagick shows nothing wrong with the files themselves.
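If the suspicion is that OSX's display pipeline is color-correcting what appears on screen rather than the files being wrong, reading pixel values back programmatically bypasses the display entirely (a diagnostic sketch; file_name as above):

from PIL import Image
import numpy as np

# Print the stored green values directly from the file, bypassing any
# color management the OS applies when drawing to the screen.
print(np.asarray(Image.open(file_name))[0, :10, 1])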
EDIT #3:
More investigation on OSX, this time comparing against what Photoshop shows.
EDIT #4:
It turns out OpenGL is involved as well! Could its texture handling be what is changing the values?
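If OpenGL's texture sampling is what alters the lookups, a common fix (a sketch assuming PyOpenGL and a 2D lookup texture; none of this is from the original post) is to force unfiltered sampling so the exact texel values come back:

from OpenGL.GL import (GL_NEAREST, GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,
                       GL_TEXTURE_MIN_FILTER, glTexParameteri)

# Disable interpolation and mipmap selection on the lookup texture so
# every fetch returns a stored texel unchanged.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)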