Inverting a real-valued index grid

OpenCV's remap() uses a real-valued index grid to sample values from an image with bilinear interpolation, and returns the grid of samples as a new image.

To be precise, let:

 A = an image
 X = a grid of real-valued X coords into the image
 Y = a grid of real-valued Y coords into the image
 B = remap(A, X, Y)

Then for all pixel coordinates i, j,

 B[i, j] = A(X[i, j], Y[i, j]) 

where the parenthesized notation A(x, y) denotes bilinearly interpolating the pixel value of image A at the floating-point coordinates x and y.
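For reference, this sampling convention can be spelled out in pure NumPy (`bilinear_sample` and `remap_naive` are illustrative names for this sketch, not part of any library):

```python
import numpy as np

def bilinear_sample(A, x, y):
    # A(x, y): bilinear interpolation at float coords; x indexes columns, y rows
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, A.shape[1] - 1), min(y0 + 1, A.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * A[y0, x0] + fx * A[y0, x1]
    bot = (1 - fx) * A[y1, x0] + fx * A[y1, x1]
    return (1 - fy) * top + fy * bot

def remap_naive(A, X, Y):
    # B[i, j] = A(X[i, j], Y[i, j])
    B = np.empty_like(X, dtype=float)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            B[i, j] = bilinear_sample(A, X[i, j], Y[i, j])
    return B
```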

My question is: given the index maps X, Y, how can I generate an "inverse map" X^-1, Y^-1 such that:

 X(X^-1[i, j], Y^-1[i, j]) = i
 Y(X^-1[i, j], Y^-1[i, j]) = j

And

 X^-1(X[i, j], Y[i, j]) = i
 Y^-1(X[i, j], Y[i, j]) = j

For all integer pixel coordinates i, j ?

FWIW, the images and the index maps X and Y all have the same shape. However, the maps X and Y have no a priori structure. For example, they are not necessarily affine or rigid transformations. They may even be non-invertible, e.g. if X, Y maps multiple pixels in A to the same exact pixel coordinate in B. I'm looking for ideas for a method that finds a reasonable inverse map when one exists.

The solution does not need to be OpenCV-based, as I am not using OpenCV, but another library that has a remap() implementation. While any suggestions are welcome, I am particularly interested in something "mathematically correct", i.e. if my map M is perfectly invertible, the method should find the exact inverse, up to some small machine-precision error.

7 answers

Well, I recently needed to solve this remap inversion problem myself, and I'll outline my solution.

Given X , Y for the remap() function, which does the following:

 B[i, j] = A(X[i, j], Y[i, j]) 

I calculated Xinv , Yinv , which can be used by the remap() function to invert the process:

 A[x, y] = B(Xinv[x, y], Yinv[x, y])

First, I build a KD-tree over the set of 2D points {(X[i,j], Y[i,j])} so that I can efficiently find the N nearest neighbors of a given point (x, y). I use Euclidean distance as my distance metric. I found a great C++ header-only library for KD-trees on GitHub.

Then I iterate over all values (x, y) in A's grid and find the N = 5 nearest neighbors {(X[i_k, j_k], Y[i_k, j_k]) | k = 0 .. N-1} in my point set.

  • If the distance d_k == 0 for some k, then Xinv[x,y] = i_k and Yinv[x,y] = j_k; otherwise ...

  • Use Inverse Distance Weighting (IDW) to compute the interpolated value:

    • compute the weights w_k = 1 / pow(d_k, p) (I use p = 2)
    • Xinv[x,y] = (sum_k w_k * i_k) / (sum_k w_k)
    • Yinv[x,y] = (sum_k w_k * j_k) / (sum_k w_k)

Note that if B is a W x H image, then X and Y are W x H arrays of floats. If A is a w x h image, then Xinv and Yinv are w x h arrays of floats. It is important to get the image and map sizes right.

It works like a charm! My first version tried a brute-force search and I never even waited for it to complete. I switched to a KD-tree and then started getting reasonable run times. If I ever get the time, I would like to add this to OpenCV.
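The answer does not include its implementation; the scheme above can be sketched in Python, with SciPy's cKDTree standing in for the C++ header library (`invert_map` and its signature are illustrative, not the author's code):

```python
import numpy as np
from scipy.spatial import cKDTree  # stand-in for the C++ KD-tree library

def invert_map(X, Y, out_shape, k=5, p=2.0, eps=1e-12):
    h, w = X.shape
    # source pixel (r, c) lands at point (X[r, c], Y[r, c]) in the output image
    tree = cKDTree(np.column_stack([X.ravel(), Y.ravel()]))
    H, W = out_shape
    gy, gx = np.mgrid[0:H, 0:W]
    queries = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)
    dists, idx = tree.query(queries, k=k)        # k nearest forward samples
    rows, cols = np.unravel_index(idx, (h, w))   # their source pixel indices
    wts = 1.0 / np.maximum(dists, eps) ** p      # IDW weights
    wts /= wts.sum(axis=1, keepdims=True)
    Xinv = (wts * cols).sum(axis=1).reshape(H, W).astype(np.float32)
    Yinv = (wts * rows).sum(axis=1).reshape(H, W).astype(np.float32)
    return Xinv, Yinv
```

Clamping the distances at eps makes an exact hit (d_k == 0) dominate the weighted average, which approximates the answer's explicit zero-distance branch.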

The second image below uses remap() to remove lens distortion from the first image. The third image is the result of the inversion of the process.

[Images: the original photo; the undistorted result of remap(); the result of inverting the process]


If your maps are derived from a homography H, you can invert H and directly create the inverse maps with cv::initUndistortRectifyMap().

for example in Python:

    import numpy as np

    map_size = ()  # fill in your map size
    H_inv = np.linalg.inv(H)
    map1, map2 = cv2.initUndistortRectifyMap(cameraMatrix=np.eye(3), distCoeffs=np.zeros(5),
                                             R=H_inv, newCameraMatrix=np.eye(3),
                                             size=map_size, m1type=cv2.CV_32FC1)

The OpenCV documentation says of initUndistortRectifyMap():

The function actually builds the maps for the inverse mapping algorithm that is used by remap(). That is, for each pixel (u, v) in the destination image, the function computes the corresponding coordinates in the source image.

If you are only given the maps, you have to do it yourself. However, interpolating the coordinates for the new maps is not trivial, because the support region for a single pixel can be very large.

Here is a simple Python solution that inverts the maps by doing point-to-point mapping. This will probably leave some coordinates unassigned, while others are updated several times, so there may be holes in the map.

Here is a small Python program demonstrating both approaches:

    import cv2
    import numpy as np

    def invert_maps(map_x, map_y):
        assert map_x.shape == map_y.shape
        rows, cols = map_x.shape
        m_x = np.ones(map_x.shape, dtype=map_x.dtype) * -1
        m_y = np.ones(map_y.shape, dtype=map_y.dtype) * -1
        for i in range(rows):
            for j in range(cols):
                i_ = int(round(map_y[i, j]))
                j_ = int(round(map_x[i, j]))
                if 0 <= i_ < rows and 0 <= j_ < cols:
                    m_x[i_, j_] = j
                    m_y[i_, j_] = i
        return m_x, m_y

    def main():
        img = cv2.imread("pigeon.png", cv2.IMREAD_GRAYSCALE)

        # a simple rotation by 45 degrees
        H = np.array([np.sin(np.pi/4), -np.cos(np.pi/4), 0,
                      np.cos(np.pi/4),  np.sin(np.pi/4), 0,
                      0, 0, 1]).reshape((3, 3))
        H_inv = np.linalg.inv(H)

        map_size = (img.shape[1], img.shape[0])
        map1, map2 = cv2.initUndistortRectifyMap(cameraMatrix=np.eye(3), distCoeffs=np.zeros(5),
                                                 R=H, newCameraMatrix=np.eye(3),
                                                 size=map_size, m1type=cv2.CV_32FC1)
        map1_inv, map2_inv = cv2.initUndistortRectifyMap(cameraMatrix=np.eye(3), distCoeffs=np.zeros(5),
                                                         R=H_inv, newCameraMatrix=np.eye(3),
                                                         size=map_size, m1type=cv2.CV_32FC1)
        map1_simple_inv, map2_simple_inv = invert_maps(map1, map2)

        img1 = cv2.remap(src=img, map1=map1, map2=map2, interpolation=cv2.INTER_LINEAR)
        img2 = cv2.remap(src=img1, map1=map1_inv, map2=map2_inv, interpolation=cv2.INTER_LINEAR)
        img3 = cv2.remap(src=img1, map1=map1_simple_inv, map2=map2_simple_inv,
                         interpolation=cv2.INTER_LINEAR)

        cv2.imshow("Original image", img)
        cv2.imshow("Mapped image", img1)
        cv2.imshow("Mapping forth and back with H_inv", img2)
        cv2.imshow("Mapping forth and back with invert_maps()", img3)
        cv2.waitKey(0)

    if __name__ == '__main__':
        main()

There is no standard way to do this in OpenCV.

If you are looking for a complete, ready-to-use solution, I'm not sure I can help, but I can at least describe the method that I used several years ago to complete this task.

First of all, you should create the inverse remap maps with the same size as the original image. I created maps with larger sizes to make interpolation easier, and at the last stage cropped them to the desired size. Then fill them with the values existing in the original remap maps (not so difficult: just iterate over them, and if a map's x and y coordinates fall within your image, take their row and column as the new y and x, and put the old row and column into the new maps at that position). This is a fairly simple solution, but it gives a pretty good result. For an ideal one, you should interpolate the old x and y values to integer positions using your interpolation method and neighboring pixels.

After that, you should either remap the pixel colors manually, or completely fill in your remap maps with pixel coordinates and use the OpenCV version.

You will face a rather difficult task: you have to interpolate pixels in the empty areas. In other words, you have to take the distances to the nearest known pixel coordinates and blend the color (if you are remapping colors) or the coordinates (if you are building full maps) according to those distances. Actually, this is not so difficult either for linear interpolation, and you can even look at the remap() implementation on the OpenCV GitHub page. For NN interpolation it is much simpler: just take the color/coordinate of the nearest neighbor.

And the final task is extrapolating areas outside the boundaries of the remapped pixel region. Here too, you can use the OpenCV algorithm as a reference.


Here is an implementation of @wcochran's answer. I tried to invert the lens correction produced by lensfunpy.

    import numpy as np
    import cv2
    import lensfunpy
    from scipy.spatial import KDTree  # assumed source of KDTree

    # lens, cam, width, height, focal_length, aperture, distance and im
    # are assumed to come from earlier lensfunpy / image-loading setup
    mod = lensfunpy.Modifier(lens, cam.crop_factor, width, height)
    mod.initialize(focal_length, aperture, distance)
    undist_coords = mod.apply_geometry_distortion()

    ## the lens correction part
    # im_undistorted = cv2.remap(im, undist_coords, None, cv2.INTER_CUBIC)
    im_undistorted = cv2.remap(im, undist_coords, None, cv2.INTER_LANCZOS4)
    # cv2.imwrite(undistorted_image_path, im_undistorted)

    undist_coords_f = undist_coords.reshape((-1, 2))
    tree = KDTree(undist_coords_f)

    def calc_val(point_pos):
        nearest_dist, nearest_ind = tree.query([point_pos], k=5)
        if nearest_dist[0][0] == 0:
            return undist_coords_f[nearest_ind[0][0]]
        # inverse distance weighting
        w = np.array([1.0 / pow(d, 2) for d in nearest_dist])
        sw = np.sum(w)
        x_arr = np.floor(nearest_ind[0] / 1080)   # 1080 = image width
        y_arr = nearest_ind[0] % 1080
        xx = np.sum(w * x_arr) / sw
        yy = np.sum(w * y_arr) / sw
        return (xx, yy)

    un_correction_x = np.zeros((720, 1080))
    un_correction_y = np.zeros((720, 1080))

    ## reverse the lens correction
    for i in range(720):
        print("row %d operating" % i)
        for j in range(1080):
            un_correction_x[i][j], un_correction_y[i][j] = calc_val((i, j))

    dstMap1, dstMap2 = cv2.convertMaps(un_correction_x.astype(np.float32),
                                       un_correction_y.astype(np.float32), cv2.CV_32FC2)
    im_un_undistorted = cv2.remap(im_undistorted, dstMap1, dstMap2, cv2.INTER_LANCZOS4)

You can invert the map at the known points and interpolate it onto a new grid. This works well as long as the distortion is not too large.

Here is a very simple implementation in Python using scipy.interpolate.griddata:

    map_x, map_y = cv2.initUndistortRectifyMap(K, D, None, new_K, image_size, cv2.CV_32FC1)
    points = np.stack([map_x.flatten(), map_y.flatten()], axis=1)
    grid = np.mgrid[:map_x.shape[0], :map_y.shape[1]]
    values = grid.reshape(2, -1).T[..., ::-1]

    from scipy.interpolate import griddata
    grid_y, grid_x = grid
    map_back = griddata(points, values, (grid_x, grid_y), method='cubic').astype(map_x.dtype)

If you use CV_32FC2 for the maps, constructing the points is simpler:

    map_undistort, _ = cv2.initUndistortRectifyMap(K, D, None, new_K, image_size, cv2.CV_32FC2)
    points = map_undistort.reshape(-1, 2)
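Putting it together, here is a self-contained sketch of the griddata-based inversion on a synthetic map (the translation map here is made up purely for illustration, so no OpenCV is needed):

```python
import numpy as np
from scipy.interpolate import griddata

h, w = 20, 30
grid_y, grid_x = np.mgrid[:h, :w]

# hypothetical forward map: a pure translation, sampling at (x + 2, y + 1)
map_x = (grid_x + 2).astype(float)
map_y = (grid_y + 1).astype(float)

points = np.stack([map_x.ravel(), map_y.ravel()], axis=1)                 # where each pixel lands
values = np.stack([grid_x.ravel(), grid_y.ravel()], axis=1).astype(float) # where it came from
inv = griddata(points, values, (grid_x, grid_y), method='linear')
inv_x, inv_y = inv[..., 0], inv[..., 1]
# inside the convex hull of the samples, the inverse of (x+2, y+1) is (x-2, y-1);
# outside the hull, griddata leaves NaNs
```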

From what I understand, you have the original image and the transformed image, and you want to recover the nature of the applied transformation without knowing it, but assuming it is something reasonable, like a rotation or fisheye distortion.

What I would try is thresholding both the index image and the plain image to convert them to binary. Then try to identify objects. Most transformations at least preserve connectivity and Euler number; basically, the largest object in the index image will still be the largest object in the plain one.

Then take points for your matched image/index pairs and see if you can recover the translation, rotation and scaling. That gives you several inverse maps, which you can then try to stitch together. (It gets tough if the transformation is not simple, and the general problem of recovering an arbitrary transformation cannot be solved.)
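As a sketch of the "recover translation, rotation and scaling" step: given matched point pairs, a 2D similarity transform can be fit by linear least squares (`fit_similarity` is an illustrative helper, not a library function):

```python
import numpy as np

def fit_similarity(src, dst):
    # least-squares fit of dst ~ s*R(theta)*src + t, parameterized as
    # [u; v] = [a, -c; c, a] [x; y] + [tx; ty]
    n = src.shape[0]
    A = np.zeros((2 * n, 4))
    b = np.zeros(2 * n)
    A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)])
    b[0::2], b[1::2] = dst[:, 0], dst[:, 1]
    a, c, tx, ty = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.array([[a, -c, tx], [c, a, ty]])

# example: scale 2, rotate 90 degrees, translate by (5, 1)
src = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 3.]])
dst = np.column_stack([-2 * src[:, 1] + 5, 2 * src[:, 0] + 1])
M = fit_similarity(src, dst)
pred = src @ M[:, :2].T + M[:, 2]   # maps src back onto dst
```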


OP here. I think I have found an answer. I have not implemented it yet, and if someone comes up with a less cumbersome solution (or finds something wrong with this one), I will accept their answer instead.

Problem Statement

Let A be the source image, B be the target image, and M be a map from A-coordinates to B-coordinates, i.e.

 B[k, l, :] == A(M[k, l, 0], M[k, l, 1], :) for all k, l in B coords. 

... where the square brackets indicate an array lookup with integer indices, and the parentheses indicate a bilinear-interpolation lookup with floating-point indices. We restate the above in more economical notation:

 B = A(M) 

We want to find the inverse map N that best maps B back to A:

 Find N st A \approx B(N) 

The problem can be stated without reference to A or B:

 Find N = argmin_N || M(N) - I_n || 

... where ||*|| indicates the Frobenius norm, and I_n is the identity map with the same dimensions as N, i.e. mapping, where:

 I_n[i, j, :] == [i, j] for all i, j 

Naive solution

If the values of M are integers and M is an isomorphism, then you can construct N directly as:

 N[M[k, l, 0], M[k, l, 1], :] = [k, l] for all k, l 

Or in our simplified notation:

 N[M] = I_m 

... where I_m is the identity map with the same dimensions as M.
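In NumPy, the naive construction N[M] = I_m is a few lines of fancy indexing (a sketch assuming integer-valued, bijective M; `naive_invert` is an illustrative name):

```python
import numpy as np

def naive_invert(M):
    # N[M[k, l, 0], M[k, l, 1], :] = [k, l]; valid only for integer, bijective M
    N = np.full_like(M, -1)                      # -1 marks any holes left behind
    ks, ls = np.mgrid[:M.shape[0], :M.shape[1]]
    N[M[..., 0], M[..., 1], 0] = ks
    N[M[..., 0], M[..., 1], 1] = ls
    return N

# example: a horizontal flip, which is its own inverse
ks, ls = np.mgrid[:4, :5]
M = np.stack([ks, 4 - ls], axis=-1)
N = naive_invert(M)
```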

There are two problems:

  • M is not an isomorphism, so the above leaves "holes" in N at N[i, j, :] for any [i, j] not among the values in M.
  • M's values are floating-point coordinates [i, j], not integer coordinates. We cannot simply assign to the bilinearly interpolated value N(i, j, :) for float-valued i, j. To get the equivalent effect, we must instead set the values of the four surrounding corners N[floor(i), floor(j), :], N[floor(i), ceil(j), :], N[ceil(i), floor(j), :], N[ceil(i), ceil(j), :] so that the interpolated value N(i, j, :) equals the desired value [k, l], for every pixel mapping [i, j] → [k, l] in M.

Solution

Construct an empty N as a three-dimensional tensor of floats:

 N = zeros(size=(A.shape[0], A.shape[1], 2)) 

For each coordinate [i, j] in the A-coordinate space do:

  • Find the 2x2 grid of A-coordinates in M that [i, j] falls inside. Compute the homography matrix H that maps those A-coordinates to the corresponding B-coordinates (given by the 2x2 grid's pixel indices).
  • Set N[i, j, :] = matmul(H, [i, j])

The potentially expensive step here is the search in step 1 for the 2x2 grid of A-coordinates in M that surrounds [i, j]. A brute-force search makes the whole algorithm O(n * m), where n is the number of pixels in A and m is the number of pixels in B.

To reduce this to O (n), instead, one could run a scanning line algorithm in each A-coordinate tetrahedron to identify all the integer values ​​[i, j] that it contains. This could be pre-computed as a hash map that maps the integer values ​​of A-coords [i, j] to the upper left corner of its circumferential four-sided B-coords [k, l].

