If you use cv2.calibrateCamera(), you get back mtx (the camera matrix), rvecs, and tvecs. Each entry of rvecs is a 3x1 rotation vector that must be converted to a 3x3 rotation matrix using the OpenCV Rodrigues method. Thus, the final code will look something like this:
import numpy as np
import cv2

R, _ = cv2.Rodrigues(rvecs[0])        # 3x1 rotation vector -> 3x3 rotation matrix
P = mtx @ np.hstack((R, tvecs[0]))    # P = K [R | t], a 3x4 projection matrix

Note that the intrinsic matrix must multiply both R and the translation vector, i.e. P = K [R | t]; stacking the raw tvec next to K·R would give a wrong projection matrix.
Assuming you used multiple images to calibrate the camera, I use only the first one here to get the P matrix for the first image. For any other image, use rvecs[IMAGE_NUMBER] and tvecs[IMAGE_NUMBER] to build the corresponding P.
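To check that P is assembled correctly, you can project a 3D point with it and compare against cv2.projectPoints. The intrinsics and pose below are made-up stand-ins for calibrateCamera() output, just for illustration:

import numpy as np
import cv2

# Hypothetical calibration results (would normally come from cv2.calibrateCamera)
mtx = np.array([[800.0,   0.0, 320.0],
                [  0.0, 800.0, 240.0],
                [  0.0,   0.0,   1.0]])
rvec = np.array([[0.1], [-0.2], [0.05]])   # 3x1 rotation vector
tvec = np.array([[0.5], [0.1], [5.0]])     # 3x1 translation vector

R, _ = cv2.Rodrigues(rvec)                 # 3x1 vector -> 3x3 rotation matrix
P = mtx @ np.hstack((R, tvec))             # P = K [R | t], shape 3x4

# Project a homogeneous world point with P ...
X = np.array([0.2, -0.3, 1.0, 1.0])
x = P @ X
x = x[:2] / x[2]                           # dehomogenize to pixel coordinates

# ... and cross-check against OpenCV's own projection (zero distortion)
ref, _ = cv2.projectPoints(X[:3].reshape(1, 3), rvec, tvec, mtx, None)
print(np.allclose(x, ref.ravel()))         # True: P agrees with projectPoints

Since distortion is passed as None, both paths compute the same pinhole projection, so the results match to floating-point precision.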