Reconstruct a 3D scene from two 2D images

This is my first time doing image processing, so I have many questions. I have two photos taken from different positions, one from the left and one from the right, as in the figure below.

[figure: the two input photos]

Step 1: Read the images using the imread function.

  I1 = imread('DSC01063.jpg');
  I2 = imread('DSC01064.jpg');

Step 2: Use the Camera Calibrator app in MATLAB to obtain the camera parameters.

  load cameraParams.mat 

Step 3: Remove lens distortion using the undistortImage function.

  [I1, newOrigin1] = undistortImage(I1, cameraParams, 'OutputView', 'same');
  [I2, newOrigin2] = undistortImage(I2, cameraParams, 'OutputView', 'same');

Step 4: Detect feature points using the detectSURFFeatures function.

  imagePoints1 = detectSURFFeatures(rgb2gray(I1), 'MetricThreshold', 600);
  imagePoints2 = detectSURFFeatures(rgb2gray(I2), 'MetricThreshold', 600);

Step 5: Extract feature descriptors using the extractFeatures function.

  features1 = extractFeatures(rgb2gray(I1), imagePoints1);
  features2 = extractFeatures(rgb2gray(I2), imagePoints2);

Step 6: Match features using the matchFeatures function.

  % MaxRatio of 1 disables the ratio test, so ambiguous matches are kept
  indexPairs = matchFeatures(features1, features2, 'MaxRatio', 1);
  matchedPoints1 = imagePoints1(indexPairs(:, 1));
  matchedPoints2 = imagePoints2(indexPairs(:, 2));

From there, how can I build a 3D point cloud? In step 2, I used a checkerboard, as in the picture below, to calibrate the camera.

[figure: checkerboard calibration image]

The square size is 23 mm, and from cameraParams.mat I know the intrinsic matrix (the camera calibration matrix K), which has the form K = [alpha_x 0 x0; 0 alpha_y y0; 0 0 1].
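
From what I understand, MATLAB stores the intrinsic matrix transposed relative to this form, so to recover K as written above:

  K = cameraParams.IntrinsicMatrix';  % transposed: MATLAB keeps K for row-vector points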

Do I need to compute the fundamental matrix F and then the essential matrix E in order to obtain the camera matrices P1 and P2? Is that right?

After that, once I have the camera matrices P1 and P2, I would use linear triangulation to estimate the 3D point cloud. Is that correct?

I would appreciate any suggestions.

Thanks!

2 answers

To triangulate points, you need the so-called "camera matrices" and the 2D points in each of the images (which you already have).

In MATLAB, the triangulate function does the job for you.

If you calibrated the cameras, you already have this information. In any case, there is an example of how to create the stereoParams object needed for triangulation.
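
For instance, a minimal sketch, assuming you already know the rotation R and translation t of the second camera relative to the first (the other answer explains how to obtain them):

  % R is a 3x3 rotation, t a 3-element translation in world units (mm here);
  % both are assumed known for this sketch.
  stereoParams = stereoParameters(cameraParams, cameraParams, R, t);
  worldPoints = triangulate(matchedPoints1, matchedPoints2, stereoParams);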


Yes, that is the right approach. Now that you have matched points, you can use estimateFundamentalMatrix to compute the fundamental matrix F. You then get the essential matrix E by multiplying F by the intrinsics (in textbook notation, E = K' * F * K). Be careful with the order of multiplication, because the intrinsic matrix in cameraParameters is transposed relative to what you see in most textbooks.
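
A minimal sketch of those two steps (the name-value settings are illustrative, and given the transposition caveat you may need the transpose of E instead):

  % Estimate F robustly; inlierIdx flags the correspondences that fit it.
  [F, inlierIdx] = estimateFundamentalMatrix(matchedPoints1, matchedPoints2, ...
      'Method', 'RANSAC', 'NumTrials', 2000, 'DistanceThreshold', 0.01);
  % Textbook-style K (MATLAB stores its transpose), then E = K' * F * K.
  K = cameraParams.IntrinsicMatrix';
  E = K' * F * K;  % verify against your release's conventions

Newer releases also have estimateEssentialMatrix, which computes E directly from the matched points and cameraParams and sidesteps the transposition pitfall.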

Now you have to decompose E into a rotation and a translation, from which you can build the camera matrix for the second camera using cameraMatrix. You also need a camera matrix for the first camera, for which the rotation is the 3x3 identity matrix and the translation is a 3-element zero vector.
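
For example, assuming the decomposition gave you the extrinsic rotation R and translation t of the second camera (conventions differ between a camera pose and extrinsics, so treat this as a sketch; cameraPose, mentioned in the edit below, can produce a pose for you):

  camMatrix1 = cameraMatrix(cameraParams, eye(3), [0 0 0]);  % first camera at the origin
  camMatrix2 = cameraMatrix(cameraParams, R, t);             % R, t assumed to be extrinsics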

Edit: MATLAB now has a cameraPose function that computes the relative pose (R and t) up to scale, given the fundamental matrix and the camera parameters.
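
Putting it together, a sketch modeled on the toolbox's two-view structure-from-motion example (inlierIdx is assumed to come from estimateFundamentalMatrix above):

  % Relative pose of camera 2; the translation is recovered only up to scale.
  [R, t] = cameraPose(F, cameraParams, matchedPoints1(inlierIdx), ...
      matchedPoints2(inlierIdx));
  % Camera 1 defines the world frame; convert camera 2's pose to extrinsics.
  camMatrix1 = cameraMatrix(cameraParams, eye(3), [0 0 0]);
  camMatrix2 = cameraMatrix(cameraParams, R', -t*R');
  % Linear triangulation of the inlier matches gives the 3D point cloud.
  points3D = triangulate(matchedPoints1(inlierIdx), matchedPoints2(inlierIdx), ...
      camMatrix1, camMatrix2);
  ptCloud = pointCloud(points3D);  % view with pcshow(ptCloud)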
