The second approach that you mentioned is the most popular because it is very easy to use.
Say you have the following (9,6) checkerboard, where each square has side length a:

(Image: OpenCV chessboard calibration pattern, http://docs.opencv.org/_images/fileListImage.jpg)
Then you simply define your object points as follows:
    // 3D coordinates of the chessboard corners, in the chessboard's own coordinate system
    std::vector<cv::Point3f> objectPoints;
    for(int y=0; y<6; ++y)
    {
        for(int x=0; x<9; ++x)
            objectPoints.push_back(cv::Point3f(x*a, y*a, 0));
    }

    // One vector of chessboard points for each chessboard image
    std::vector<std::vector<cv::Point3f>> arrayObjectPoints;
    for(int n=0; n<number_images; ++n)
        arrayObjectPoints.push_back(objectPoints);
Basically, since you are free to choose the 3D coordinate system, you can use the checkerboard's own coordinate system, which makes the object points very easy to define. The calibrateCamera function then takes care of estimating one R, t per image (its orientation and translation relative to the chosen coordinate system), as well as one intrinsic matrix K and one set of distortion coefficients D common to all images.
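For reference, here is a minimal sketch of what that call could look like, given the arrayObjectPoints built above. The names arrayImagePoints and imageSize are assumptions for illustration, not part of the original code:

    #include <opencv2/calib3d.hpp>  // cv::calibrateCamera (assumed include for a standalone build)

    // Assumed to be filled beforehand: one vector of detected 2D corners per image,
    // in the same order as the object points (see the note and sketch below)
    std::vector<std::vector<cv::Point2f>> arrayImagePoints;

    cv::Size imageSize(640, 480);        // resolution of your calibration images (example value)
    cv::Mat K, D;                        // intrinsic matrix and distortion coefficients, shared by all images
    std::vector<cv::Mat> rvecs, tvecs;   // one rotation and translation per image

    double rms = cv::calibrateCamera(arrayObjectPoints, arrayImagePoints,
                                     imageSize, K, D, rvecs, tvecs);
    // rms is the RMS reprojection error; values around or below one pixel usually indicate a reasonable calibration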
In addition, take care that the 2D image points you pass for each image are in the same order as the corresponding object points.
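One common way to gather those 2D points, sketched here under the assumption that each calibration image is available as a grayscale cv::Mat named gray, is cv::findChessboardCorners followed by an optional cv::cornerSubPix refinement. For a (9,6) pattern the detected corners typically come back row by row, matching the x-inner, y-outer loop order used for objectPoints above:

    #include <opencv2/calib3d.hpp>  // cv::findChessboardCorners
    #include <opencv2/imgproc.hpp>  // cv::cornerSubPix

    // For each calibration image ('gray' is an assumed grayscale cv::Mat):
    std::vector<cv::Point2f> corners;
    bool found = cv::findChessboardCorners(gray, cv::Size(9, 6), corners);
    if (found)
    {
        // Optional: refine the corner locations to sub-pixel accuracy
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.1));
        arrayImagePoints.push_back(corners);
    }
    // Skip images where the full pattern was not found, and drop the matching
    // entry from arrayObjectPoints so the two arrays stay the same length.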