Projecting 2D points from camera 1's image to camera 2's image after stereo calibration

I am doing stereo calibration of two cameras (call them L and R) with OpenCV. I use 20 pairs of chessboard images and compute the transformation of R relative to L. What I want to do is take a new pair of images, detect the 2D chessboard corners in image L, transform those points according to my calibration, and draw the transformed points on image R, in the hope that they will coincide with the chessboard corners in that image.

I tried the naive approach: convert each 2D point from [x, y] to [x, y, 1], multiply by the 3x3 rotation matrix, add the translation vector, and then divide by z, but the result is wrong, so I guess it's not that simple (?).
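To see why that fails numerically, here is a small numpy sketch with invented intrinsics and extrinsics: treating the pixel [u, v, 1] itself as a 3D point ignores both the intrinsic matrix and the point's true depth, so the result lands nowhere near the correct pixel.

```python
import numpy as np

# Invented intrinsics and stereo extrinsics, for illustration only.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
a = 0.2
c, s = np.cos(a), np.sin(a)
R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])  # camera L -> R rotation
t = np.array([-0.5, 0.0, 0.05])                              # camera L -> R translation

def project(K, R, t, X):
    """Pinhole projection of a 3D point given in camera-L coordinates."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

# A point 5 m in front of camera L, and its true pixel in each image.
X = np.array([0.3, -0.2, 5.0])
pL = project(K, np.eye(3), np.zeros(3), X)
pR_true = project(K, R, t, X)

# Naive attempt: treat the pixel [u, v, 1] itself as a 3D point.
# This ignores the intrinsics K and, worse, the point's real depth.
naive = R @ np.array([pL[0], pL[1], 1.0]) + t
pR_naive = naive[:2] / naive[2]

# The naive result is nowhere near the true pixel.
assert np.linalg.norm(pR_true - pR_naive) > 1.0
```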

Edit (to clarify some things):

The reason I want to do this is to check the stereo calibration on a new pair of images. So I do not want to compute a new 2D mapping between the two images; I want to check the correctness of the 3D transformation I already found.

This is my setup:

[figure: setup]

I have the rotation and translation relating the two cameras (E), but I do not have the rotation and translation of the object relative to each camera (E_R, E_L).

Ideally, what I would like to do:

  1. Pick 2D corners in the image from camera L (in pixels, e.g. [100, 200], etc.).
  2. Apply some kind of 2D point transformation based on the matrix E that I found.
  3. Get the corresponding 2D points in the image from camera R, draw them, and hopefully they land on the actual corners!

The more I think about it, the more I am convinced that this is wrong / cannot be done.

What I will probably try instead:

  1. Using the intrinsic parameters of the cameras (call them I_R and I_L), solve two least-squares systems to find E_R and E_L.
  2. Pick 2D corners in the image from camera L.
  3. Back-project these corners to their corresponding 3D points (3d_points_L).
  4. Do: 3d_points_R = (E_L).inverse * E * E_R * 3d_points_L
  5. Get 2d_points_R from 3d_points_R and draw them.
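The per-camera poses E_L and E_R in step 1 would in practice come from cv2.solvePnP with the known board geometry (rather than hand-rolled least squares), and E from cv2.stereoCalibrate. The relationship between the three transforms can be sanity-checked with synthetic poses; a minimal numpy sketch, with all values invented for illustration:

```python
import numpy as np

def rt(R, t):
    """Pack a rotation and translation into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

# Invented poses for illustration only.
E_L = rt(rot_y(0.10), np.array([0.1, 0.0, 4.0]))    # board -> camera L
E   = rt(rot_y(-0.05), np.array([-0.6, 0.0, 0.0]))  # camera L -> camera R
E_R = E @ E_L                                        # board -> camera R

# The consistency check one could run on real solvePnP/stereoCalibrate
# output: the calibrated E should match E_R composed with E_L inverse.
assert np.allclose(E, E_R @ np.linalg.inv(E_L))

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

def project(K, X):
    """Project homogeneous camera-frame points (4xN) to pixels (2xN)."""
    x = K @ X[:3]
    return x[:2] / x[2]

# Three board corners on the board's z = 0 plane (homogeneous, 4xN).
X_obj = np.array([[0.0, 0.0, 0.0, 1.0],
                  [0.1, 0.0, 0.0, 1.0],
                  [0.0, 0.1, 0.0, 1.0]]).T

X_L = E_L @ X_obj                     # corners in camera-L coordinates
pR_chain  = project(K, E @ X_L)       # via the L frame, then E
pR_direct = project(K, E_R @ X_obj)   # directly with the board->R pose
assert np.allclose(pR_chain, pR_direct)
```

The final assertion is what the verification amounts to: points mapped through camera L and E should project to the same pixels as points mapped with the directly-estimated board-to-R pose.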

I will update when I have something new.

1 answer

This is actually easy to do, but you are making a few mistakes. Remember that after stereo calibration of R and L, the position and orientation of the second camera are expressed relative to the first camera, in the first camera's 3D coordinate system. Also, to find the 3D position of a point using a pair of cameras, you have to triangulate it. By setting the z component to 1 you make two mistakes. First, you most likely used the OpenCV sample calibration code and gave the chessboard square size in cm, so z = 1 means 1 cm from the camera center, which is extremely close to the camera. Second, by setting the same z for all points you are saying that the chessboard is perpendicular to the principal axis (also called the optical axis, or principal ray), which is most likely not the case in your image. So: first triangulate the corners to get real 3D points, then transform those 3D points into the second camera's coordinate system, and finally project them onto its image plane.
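A minimal numpy sketch of that recipe (triangulate, transform, project), with synthetic intrinsics and poses; in OpenCV the corresponding calls would be cv2.triangulatePoints and cv2.projectPoints:

```python
import numpy as np

# Invented intrinsics and calibrated stereo extrinsics.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
a = 0.05
c, s = np.cos(a), np.sin(a)
R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])  # camera L -> R rotation
t = np.array([-0.6, 0.0, 0.0])                               # stereo baseline

P_L = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # left projection matrix
P_R = K @ np.hstack([R, t.reshape(3, 1)])           # right projection matrix

def triangulate(P1, P2, p1, p2):
    """Linear (DLT) triangulation of one correspondence,
    the same method cv2.triangulatePoints implements."""
    A = np.vstack([p1[0] * P1[2] - P1[0],
                   p1[1] * P1[2] - P1[1],
                   p2[0] * P2[2] - P2[0],
                   p2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic corner 5 m in front of camera L, seen in both images.
X_true = np.array([0.2, -0.1, 5.0])
p_L = project(P_L, X_true)
p_R = project(P_R, X_true)

# The recipe: triangulate a real 3D point, then reproject into image R.
X_hat = triangulate(P_L, P_R, p_L, p_R)
p_R_hat = project(P_R, X_hat)
assert np.allclose(X_hat, X_true)
assert np.allclose(p_R_hat, p_R)
```

With noise-free synthetic data the triangulated point and its reprojection are exact; on real detections the reprojection error in image R is a direct quality measure for the calibration.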

If you only want to map coplanar points (such as the chessboard corners), you can instead find the homography between the two images (OpenCV has a function for this, cv2.findHomography) and use that.
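For points on a single plane, the homography induced by that plane can also be written in closed form from the calibration, H = K (R + t nᵀ / d) K⁻¹, where n·X = d is the board's plane in camera-L coordinates; in practice one would estimate H from correspondences with cv2.findHomography instead. A numpy sketch with an invented plane and poses:

```python
import numpy as np

# Invented intrinsics, extrinsics, and board plane.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
a = 0.05
c, s = np.cos(a), np.sin(a)
R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])  # camera L -> R
t = np.array([-0.6, 0.0, 0.0])

# Board plane in camera-L coordinates: n . X = d (here the plane z = 5).
n = np.array([0.0, 0.0, 1.0])
d = 5.0

# Plane-induced homography from image L to image R.
H = K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

def apply_h(H, p):
    """Apply a homography to one pixel."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

def project(K, R, t, X):
    x = K @ (R @ X + t)
    return x[:2] / x[2]

# A corner lying on the plane, projected into both images.
X = np.array([0.3, -0.2, 5.0])
p_L = project(K, np.eye(3), np.zeros(3), X)
p_R = project(K, R, t, X)

# The homography maps the left pixel exactly onto the right pixel.
assert np.allclose(apply_h(H, p_L), p_R)
```

This only works because the corners are coplanar; for general 3D points the triangulation route above is required.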

