I am trying to calibrate a Kinect camera and an external camera together using Emgu CV / OpenCV, and I am stuck. I would really appreciate any help.
I decided to do this via the fundamental matrix, i.e. epipolar geometry. But the result is not what I expected: the output images are black, or make no sense at all. Typically the values in mapx and mapy are infinities or all 0.0, and only rarely do they take reasonable values.
Here is what I did:
1.) Find image points: get two arrays of image points (one per camera) from a set of image pairs of a checkerboard, using FindChessboardCorners.
2.) Find the fundamental matrix
    CvInvoke.cvFindFundamentalMat(points1Matrix, points2Matrix,
        _fundamentalMatrix.Ptr, CV_FM.CV_FM_RANSAC, 1.0, 0.99, IntPtr.Zero);
Should I pass in all the points collected from the entire set of images here, or only the points from the two images I am trying to rectify?
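For context, the fundamental matrix is defined by the epipolar constraint, which every correspondence (x1, x2) between the two cameras must satisfy:

```latex
x_2^\top F \, x_1 = 0
```

Since F depends only on the intrinsics and the fixed relative pose of the two cameras, it is the same matrix for every image pair taken with the rig rigid, which is why pooling correspondences from the whole set is at least geometrically consistent.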
3.) Find homography matrices
    CvInvoke.cvStereoRectifyUncalibrated(points11Matrix, points21Matrix,
        _fundamentalMatrix.Ptr, Size, h1.Ptr, h2.Ptr, threshold);
4.) Get mapx and mapy
    double scale = 0.02;
    CvInvoke.cvInvert(_M1.Ptr, _iM.Ptr, SOLVE_METHOD.CV_LU);
    CvInvoke.cvMul(_H1.Ptr, _M1.Ptr, _R1.Ptr, scale);
    CvInvoke.cvMul(_iM.Ptr, _R1.Ptr, _R1.Ptr, scale);
    CvInvoke.cvInvert(_M2.Ptr, _iM.Ptr, SOLVE_METHOD.CV_LU);
    CvInvoke.cvMul(_H2.Ptr, _M2.Ptr, _R2.Ptr, scale);
    CvInvoke.cvMul(_iM.Ptr, _R2.Ptr, _R2.Ptr, scale);
    CvInvoke.cvInitUndistortRectifyMap(_M1.Ptr, _D1.Ptr, _R1.Ptr, _M1.Ptr,
        mapxLeft.Ptr, mapyLeft.Ptr);
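For reference, the chain of multiplications above is meant to implement the rectification-rotation construction from the Learning OpenCV sample, i.e. the matrix products

```latex
R_1 = M_1^{-1} H_1 M_1, \qquad R_2 = M_2^{-1} H_2 M_2
```

where the M are the camera matrices and the H are the rectifying homographies. Note these are true matrix products (cvMatMul / cvGEMM in the C API), whereas cvMul performs a per-element multiplication.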
I have a problem here: since I am not working with calibrated cameras, what are my camera matrix and distortion coefficients? How can I get them from the fundamental matrix or from the homography matrices?
5.) Remap
    CvInvoke.cvRemap(src.Ptr, destRight.Ptr, mapxRight, mapyRight,
        (int)INTER.CV_INTER_LINEAR, new MCvScalar(255));
And this does not give a good result. I would be grateful if someone could tell me what I am doing wrong.
I have a set of 25 image pairs, and the checkerboard pattern size is 9x6.