Reconstructed image after Laplacian pyramid is not the same as the original image

I convert an RGB image to YCbCr and then want to compute its Laplacian pyramid. After the color conversion, I adapted the code from the OpenCV image-pyramid tutorial to build the Laplacian pyramid and then reconstruct the original image from it. However, if I increase the number of levels to a higher value, say 10, the reconstructed image (after converting back to RGB) no longer looks like the original — it comes out blurry (see the link below for the exact image). I do not know why this is happening. Is this expected as the number of levels increases, or is something wrong in my code?

import cv2

# frame_RGB is the input image as loaded by cv2.imread (BGR order)
frame = cv2.cvtColor(frame_RGB, cv2.COLOR_BGR2YCR_CB)
height = 10

# Build the Gaussian pyramid
Gauss = frame.copy()
gpA = [Gauss]
for i in range(height):
    Gauss = cv2.pyrDown(Gauss)
    gpA.append(Gauss)

lbImage = [gpA[height-1]]

for j in range(height-1,0,-1):
    GE = cv2.pyrUp(gpA[j])
    L = cv2.subtract(gpA[j-1],GE)
    lbImage.append(L)

ls_ = lbImage[0]     
for j in range(1,height,1):
    ls_ = cv2.pyrUp(ls_)
    ls_ = cv2.add(ls_,lbImage[j])

ls_ = cv2.cvtColor(ls_, cv2.COLOR_YCR_CB2BGR)                
cv2.imshow("Pyramid reconstructed Image",ls_)
cv2.waitKey(0)


2 Answers

pyrDown halves the image size and rounds up when a dimension is odd, so the images stored in gpA[] can end up a pixel off from what pyrUp (which exactly doubles) produces. With many levels an odd dimension is almost guaranteed to appear somewhere, and then the reconstruction drifts (or cv2.subtract fails outright with a size mismatch).

Either crop the input so both sides are divisible by 2**height, or pass the expected size to pyrUp via its dstsize argument.


Use np.add() and np.subtract() instead of cv2.add() and cv2.subtract(). The cv2 versions saturate: any negative value in a Laplacian level is clipped to 0, and that lost detail is exactly what makes the reconstruction blurry. With NumPy arrays you can simply use the + and - operators (on a signed or float dtype). So write:

L = gpA[j-1] - GE

instead of:

L = cv2.subtract(gpA[j-1],GE)
