This is my first time processing images, so I have many questions. I have two photos taken from different positions, one from the left and one from the right, as in the figure below. [![enter image description here][1]][1]
Step 1: Read the images using the `imread` function.
    I1 = imread('DSC01063.jpg');
    I2 = imread('DSC01064.jpg');
Step 2: Use the Camera Calibrator app in MATLAB to obtain the camera parameters.
    load cameraParams.mat
Step 3: Remove lens distortion using the `undistortImage` function.
    [I1, newOrigin1] = undistortImage(I1, cameraParams, 'OutputView', 'same');
    [I2, newOrigin2] = undistortImage(I2, cameraParams, 'OutputView', 'same');
Step 4: Detect feature points using the `detectSURFFeatures` function.
    imagePoints1 = detectSURFFeatures(rgb2gray(I1), 'MetricThreshold', 600);
    imagePoints2 = detectSURFFeatures(rgb2gray(I2), 'MetricThreshold', 600);
Step 5: Extract feature descriptors using the `extractFeatures` function.
    features1 = extractFeatures(rgb2gray(I1), imagePoints1);
    features2 = extractFeatures(rgb2gray(I2), imagePoints2);
Step 6: Match the features using the `matchFeatures` function.
    indexPairs = matchFeatures(features1, features2, 'MaxRatio', 1);
    matchedPoints1 = imagePoints1(indexPairs(:, 1));
    matchedPoints2 = imagePoints2(indexPairs(:, 2));
From there, how can I build a 3D point cloud? In Step 2, I used a checkerboard, as in the picture below, to calibrate the camera. [![enter image description here][2]][2]
The square size is 23 mm, and from `cameraParams.mat` I know the intrinsic matrix (the camera calibration matrix K), which has the form `K = [alphax 0 x0; 0 alphay y0; 0 0 1]`.
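In case it matters, this is how I am currently reading K out of `cameraParams`. My assumption (based on how I understand the toolbox documentation) is that MATLAB stores the intrinsic matrix transposed relative to the textbook form, so I transpose it back; please correct me if that is wrong:

```matlab
% Assumption: cameraParams was produced by the Camera Calibrator app.
% MATLAB's cameraParams.IntrinsicMatrix is the transpose of the
% textbook K, so transpose it to get
% K = [alphax 0 x0; 0 alphay y0; 0 0 1].
K = cameraParams.IntrinsicMatrix';
```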
Do I need to compute the fundamental matrix F and the essential matrix E in order to obtain the camera projection matrices P1 and P2?
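This is the part I am unsure about. My current idea, based on the Computer Vision Toolbox functions I found (`estimateEssentialMatrix`, `relativeCameraPose`, `cameraPoseToExtrinsics`, `cameraMatrix`), is something like the sketch below; is this the right direction?

```matlab
% Estimate the essential matrix E directly from the matched
% (undistorted) points; the RANSAC inside also rejects bad matches.
[E, inlierIdx] = estimateEssentialMatrix(matchedPoints1, ...
    matchedPoints2, cameraParams);
inlierPoints1 = matchedPoints1(inlierIdx);
inlierPoints2 = matchedPoints2(inlierIdx);

% Recover the pose of camera 2 relative to camera 1 from E.
[relOrientation, relLocation] = relativeCameraPose(E, cameraParams, ...
    inlierPoints1, inlierPoints2);

% Build the camera projection matrices. Camera 1 sits at the
% world origin; note MATLAB returns 4-by-3 matrices because of its
% row-vector (premultiply) convention.
P1 = cameraMatrix(cameraParams, eye(3), [0 0 0]);
[R, t] = cameraPoseToExtrinsics(relOrientation, relLocation);
P2 = cameraMatrix(cameraParams, R, t);
```

If F itself is needed, I believe it can be derived from E and K, or estimated separately with `estimateFundamentalMatrix`, but I am not sure which route is preferred.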
After that, once I have the camera matrices P1 and P2, I would use linear triangulation to estimate the 3D point cloud. Is that correct?
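For this step my understanding is that MATLAB's `triangulate` function already performs the linear (DLT) triangulation, so I was planning something like the following (using the matched points from Step 6 and the P1, P2 from above); one thing I am aware of is that with only two views the reconstruction scale is unknown, so the coordinates are not in millimeters:

```matlab
% Triangulate the matches into 3D world points (up to an unknown
% overall scale, since only two views are used).
worldPoints = triangulate(matchedPoints1, matchedPoints2, P1, P2);

% Wrap the points in a pointCloud object and display them.
ptCloud = pointCloud(worldPoints);
pcshow(ptCloud);
```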
I would appreciate any suggestions.
Thanks!