I am running a simple test to assess camera pose estimation with OpenCV. Given a photo and a zoomed-in (enlarged) version of the same photo, I detect features in both, compute the essential matrix, and recover the camera pose.
Mat inliers;
// cv::RANSAC is the correct flag here; FM_RANSAC belongs to findFundamentalMat.
// findEssentialMat and recoverPose each take a single camera matrix in this overload.
Mat E = findEssentialMat(queryPoints, trainPoints, cameraMatrix,
                         RANSAC, 0.9, MAX_PIXEL_OFFSET, inliers);
// Use the same point sets (filtered by the inlier mask) for pose recovery.
size_t inliersCount =
    recoverPose(E, queryPoints, trainPoints, cameraMatrix, R, T, inliers);
When I pass the original image as the first and the enlarged image as the second, I get a translation T close to [0; 0; -1]. However, the second (enlarged) camera is effectively closer to the object than the first, so if the Z axis points from the image plane into the scene, the second camera should have a positive offset along Z. The result instead suggests a Z axis pointing from the image plane toward the camera, which together with the other axes (X to the right, Y down) would form a left-handed coordinate system. Is that right? Why does this result differ from the coordinate system described here?