Get the 3D coordinates of a pixel in a 2D image when the extrinsic and intrinsic parameters are known

I am doing camera calibration with Tsai's algorithm. I have the intrinsic and extrinsic matrices, but how can I recover the 3D coordinates from this information?

(image: the projection equation written as a homogeneous linear system in X, Y, Z, W)

1) I can use Gaussian elimination to solve for X, Y, Z, W, and then the 3D point is X/W, Y/W, Z/W, since the system is homogeneous.

2) I can use the OpenCV approach:

(image: the OpenCV pinhole projection model relating (u, v) to (X, Y, Z) through K, R and t)

Since I know u, v, R and t, I can calculate X, Y, Z.
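
For reference, here is a minimal sketch of the forward model I am trying to invert (the calibration numbers below are placeholders, not my real values):

    #include <opencv2/core.hpp>
    #include <opencv2/calib3d.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        // Placeholder intrinsic matrix K (use the calibrated values instead).
        cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320,
                                                 0, 800, 240,
                                                 0,   0,   1);
        // Placeholder extrinsics: rotation as a Rodrigues vector plus a translation.
        cv::Mat rvec = (cv::Mat_<double>(3, 1) << 0.1, -0.2, 0.05);
        cv::Mat tvec = (cv::Mat_<double>(3, 1) << 0.5, 0.3, 2.0);

        // Forward model: a known 3D world point projects to a pixel (u, v).
        std::vector<cv::Point3f> world = { cv::Point3f(0.2f, -0.1f, 0.0f) };
        std::vector<cv::Point2f> pixels;
        cv::projectPoints(world, rvec, tvec, K, cv::noArray(), pixels);

        std::cout << "projected pixel: " << pixels[0] << std::endl;
        return 0;
    }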

However, both methods lead to incorrect results.

What am I doing wrong?

c++ opencv camera-calibration homogenous-transformation pose-estimation
1 answer

If you have the extrinsic parameters, then you have everything. That means you can get a homography from the extrinsics (also called the camera pose). The pose is a 3x4 matrix; the homography is a 3x3 matrix H, defined as

  H = K*[r1, r2, t], //eqn 8.1, Hartley and Zisserman 

where K is the camera's intrinsic matrix, r1 and r2 are the first two columns of the rotation matrix R, and t is the translation vector.

Then normalize H by dividing everything by t3, the third component of t.

What about the r3 column, don't we use it? No, it is redundant: it is the cross product of the first two columns of the pose, and for points lying on the world plane Z = 0 it multiplies a zero coordinate and drops out.
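
A minimal OpenCV sketch of this construction (assuming K, R and t are already available as 64-bit cv::Mat values from the calibration; the function name is just illustrative):

    #include <opencv2/core.hpp>
    #include <vector>

    // Build H = K * [r1, r2, t] and normalize it by t3.
    // K: 3x3 intrinsics, R: 3x3 rotation, t: 3x1 translation, all CV_64F.
    cv::Mat homographyFromPose(const cv::Mat& K, const cv::Mat& R, const cv::Mat& t)
    {
        cv::Mat r1r2t;
        cv::hconcat(std::vector<cv::Mat>{R.col(0), R.col(1), t}, r1r2t); // [r1, r2, t]; r3 is dropped

        cv::Mat H = K * r1r2t;
        H = H / t.at<double>(2, 0); // normalize, dividing everything by t3
        return H;
    }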

Now that you have the homography, project your points. Your 2D points are (x, y); append z = 1 so they become 3D homogeneous points, then project them like this:

  p          = [x y 1];
  projection = H * p;                       // project
  projnorm   = projection / projection(3);  // normalize by the third (homogeneous) component
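
In C++ with OpenCV the same steps look roughly like this (a sketch with placeholder values; H would come from the construction above). If your H maps plane points to pixels, the reverse mapping from a pixel back to the plane is the same procedure with H.inv():

    #include <opencv2/core.hpp>
    #include <iostream>

    int main() {
        // Placeholder homography; in practice take H from homographyFromPose(K, R, t) above.
        cv::Mat H = cv::Mat::eye(3, 3, CV_64F);

        double x = 100.0, y = 50.0;                         // the 2D point
        cv::Mat p = (cv::Mat_<double>(3, 1) << x, y, 1.0);  // add z = 1

        cv::Mat projection = H * p;                         // project
        double w = projection.at<double>(2, 0);
        cv::Mat projnorm = projection / w;                  // normalize

        std::cout << "projnorm: " << projnorm << std::endl;
        return 0;
    }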

Hope this helps.



