I am doing stereo calibration of two cameras (call them L and R) with OpenCV. I use 20 pairs of chessboard images and compute the transformation of camera R relative to camera L. What I want to do is take a new pair of images, detect the 2D corners of the chessboard in image L, transform those points according to my calibration, and draw the transformed points on image R, hoping they coincide with the chessboard corners in that image.
I tried the naive way: convert each 2D point from [x, y] to [x, y, 1], multiply by the 3x3 rotation matrix, add the translation vector, and then divide by z, but the result is wrong, so I guess it's not that simple.
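The naive attempt can be reproduced on synthetic data, which also shows why it fails: a pixel [x, y, 1] is not a 3D point, so rotating and translating it does not land on the true match. All numbers below are made-up values for illustration:

```python
import numpy as np

# Synthetic ground truth: pinhole intrinsics, stereo extrinsics (R, t),
# and one 3D point seen by both cameras.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                      # cameras differ only by a baseline here
t = np.array([0.1, 0.0, 0.0])      # 10 cm baseline along x

X_L = np.array([0.2, 0.1, 2.0])    # point in camera-L coordinates
X_R = R @ X_L + t                  # same point in camera-R coordinates

def project(K, X):
    x = K @ X
    return x[:2] / x[2]

p_L = project(K, X_L)              # true pixel in image L
p_R = project(K, X_R)              # true pixel in image R

# Naive attempt: treat the pixel [x, y, 1] as if it were a 3D point.
p_naive = R @ np.array([p_L[0], p_L[1], 1.0]) + t
p_naive = p_naive[:2] / p_naive[2]

print(np.allclose(p_naive, p_R, atol=1.0))  # False: the naive mapping misses
```

The naive result is off because the pixel coordinates were never converted back to camera coordinates, and even then the (unknown) depth of the point would still be needed.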
Edit (to clarify some things):
The reason I want to do this is to check the stereo calibration on a new pair of images. So I am not looking for a new 2D transformation between the two images; I want to verify the correctness of the 3D transformation I already found.
This is my setup:

I have the rotation and translation relating the two cameras (E), but I do not have the rotation and translation of the object relative to each camera (E_R, E_L).
Ideally, what I would like to do:
- Take the 2D corners of the chessboard in the image from camera L (in pixels, e.g. [100, 200], etc.).
- Apply some kind of 2D point transformation based on the matrix E that I found.
- Get the corresponding 2D points in the image from camera R, draw them, and hopefully they coincide with the actual corners!
The more I think about it, the more I am convinced that this is wrong / cannot be done.
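That intuition is right: with only E and the intrinsics, a pixel in image L constrains its match in image R to an epipolar line, not a single point. A minimal numpy sketch (synthetic intrinsics and extrinsics, not the actual calibration values) of the epipolar constraint x_R^T F x_L = 0:

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]x so that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

# Synthetic intrinsics (K_L, K_R) and stereo extrinsics (R, t).
K_L = K_R = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.1, 0.0, 0.0])

# Fundamental matrix from the calibration: F = K_R^-T [t]x R K_L^-1.
F = np.linalg.inv(K_R).T @ skew(t) @ R @ np.linalg.inv(K_L)

# Any true correspondence satisfies x_R^T F x_L = 0 (homogeneous pixels).
X = np.array([0.2, 0.1, 2.0])        # a 3D point in camera-L coordinates
x_L = K_L @ X
x_L /= x_L[2]
x_R = K_R @ (R @ X + t)
x_R /= x_R[2]

print(abs(x_R @ F @ x_L) < 1e-9)     # True: the match lies on the epipolar line
```

So a useful 2D-only sanity check of the calibration is to draw the epipolar lines F @ x_L on image R and see whether the detected corners lie on them, rather than trying to predict exact points.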
What I will probably try instead:
- Using the intrinsic parameters of the cameras (say I_R and I_L), solve two least-squares systems to find E_R and E_L.
- Take the 2D corners in the image from camera L.
- Back-project these corners to their corresponding 3D points (3d_points_L).
- Compute: 3d_points_R = E_L.inverse * E * E_R * 3d_points_L
- Get 2d_points_R from 3d_points_R and draw them.
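The steps above can be sketched in numpy, assuming the board pose in camera L (here R_L, t_L, standing in for E_L) has already been estimated (in practice with cv2.solvePnP) and that E maps camera-L coordinates to camera-R coordinates; all numeric values are synthetic:

```python
import numpy as np

# Intrinsics of camera R (I_R); made-up values for illustration.
K_R = np.array([[800.0, 0.0, 320.0],
                [0.0, 800.0, 240.0],
                [0.0, 0.0, 1.0]])

# Board pose in camera L (E_L) and stereo extrinsics camera L -> camera R (E).
R_L, t_L = np.eye(3), np.array([0.0, 0.0, 2.0])   # board 2 m in front of L
R_E, t_E = np.eye(3), np.array([0.1, 0.0, 0.0])   # 10 cm baseline

# 3D corners in the board frame (a 2x2 grid with 5 cm squares, z = 0).
board = np.array([[i * 0.05, j * 0.05, 0.0] for j in range(2) for i in range(2)])

pts_L = board @ R_L.T + t_L        # corners in camera-L coordinates (E_L applied)
pts_R = pts_L @ R_E.T + t_E        # corners in camera-R coordinates (E applied)

proj = pts_R @ K_R.T               # pinhole projection with I_R
px_R = proj[:, :2] / proj[:, 2:]   # divide by z -> predicted pixels in image R
print(px_R[0])                     # first corner's predicted pixel: [360. 240.]
```

With OpenCV, the last two steps collapse into a single cv2.projectPoints call given the chained rotation/translation; the predicted px_R can then be drawn on image R and compared against the detected corners.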
I will update when I have something new.