For my augmented reality project, I have a 3D model viewed through a VTK camera and a real object viewed through a real camera.
I used EPnP to estimate the extrinsic matrix of the real camera (this camera was already calibrated before I started, so I know its intrinsic parameters). As input to EPnP I provided 3D points from the VTK model, the corresponding 2D points from the real camera image, and the real camera's intrinsic parameters.
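For context, here is roughly how I run EPnP. This is only a sketch assuming OpenCV's cv::solvePnP with the cv::SOLVEPNP_EPNP flag; the function name and the argument containers are placeholders, not my exact code:

// Minimal sketch, assuming OpenCV's solvePnP is used for EPnP.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

void estimateExtrinsics(const std::vector<cv::Point3f>& objectPoints,  // 3D points from the VTK model
                        const std::vector<cv::Point2f>& imagePoints,   // matching 2D image points
                        const cv::Mat& cameraMatrix,                   // 3x3 intrinsic matrix from calibration
                        const cv::Mat& distCoeffs,                     // distortion coefficients from calibration
                        cv::Mat& R, cv::Mat& t)
{
    cv::Mat rvec;
    cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs,
                 rvec, t, false, cv::SOLVEPNP_EPNP);

    // solvePnP returns a rotation vector; convert it to the 3x3 matrix R1..R9.
    cv::Rodrigues(rvec, R);
}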
From that I obtained a rotation matrix and a translation vector with elements R1, R2, R3, ..., R9 and T1, T2, T3.
So the extrinsic matrix of the real camera (let me call it extrinsicReal) looks like this:
R1 R2 R3 T1
R4 R5 R6 T2
R7 R8 R9 T3
0  0  0  1
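In code I assemble extrinsicReal from the EPnP result roughly like this (a sketch only, assuming R and t are the CV_64F rotation matrix and translation vector from the step above; buildExtrinsicReal is just an illustrative name):

// Sketch: pack R (3x3) and t (3x1) into a 4x4 vtkMatrix4x4.
#include <vtkMatrix4x4.h>
#include <vtkSmartPointer.h>
#include <opencv2/core.hpp>

vtkSmartPointer<vtkMatrix4x4> buildExtrinsicReal(const cv::Mat& R, const cv::Mat& t)
{
    auto extrinsicReal = vtkSmartPointer<vtkMatrix4x4>::New();
    extrinsicReal->Identity();                       // bottom row stays 0 0 0 1
    for (int row = 0; row < 3; ++row)
    {
        for (int col = 0; col < 3; ++col)
            extrinsicReal->SetElement(row, col, R.at<double>(row, col));
        extrinsicReal->SetElement(row, 3, t.at<double>(row, 0));
    }
    return extrinsicReal;
}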
After that, I obtain the extrinsic matrix of my VTK camera using the following code:
vtkSmartPointer<vtkMatrix4x4> extrinsicVTK = vtkSmartPointer<vtkMatrix4x4>::New();
extrinsicVTK->DeepCopy(renderer->GetActiveCamera()->GetViewTransformMatrix());
To make the 3D model rendered by the VTK camera line up with the real camera view, the VTK camera must be placed at the same position as the real camera, and its focal length must match the real camera's focal length. Another important step is to apply the real camera's extrinsic matrix to the VTK camera. How should this be done?
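To be explicit about what I mean by "same position and focal length": my understanding (which may itself be part of the problem) is that the camera centre is C = -R^T * t, the viewing direction is the third row of R, and the vertical view angle follows from the focal length fy and the image height. Here is a sketch of setting the vtkCamera directly from those quantities; the OpenCV-style axis conventions (image y pointing down) are an assumption, and applyRealCameraToVtk is just an illustrative name:

// Sketch: set a vtkCamera from the real camera's R, t and focal length fy (pixels).
#include <vtkCamera.h>
#include <vtkMath.h>
#include <cmath>

void applyRealCameraToVtk(vtkCamera* camera,
                          const double R[3][3], const double t[3],
                          double fy, int imageHeight)
{
    // Camera centre in world coordinates: C = -R^T * t.
    double C[3];
    for (int i = 0; i < 3; ++i)
        C[i] = -(R[0][i] * t[0] + R[1][i] * t[1] + R[2][i] * t[2]);

    // Viewing direction: camera z axis expressed in world coordinates (third row of R).
    double dir[3] = { R[2][0], R[2][1], R[2][2] };
    // View up: minus the camera y axis (image y points down in the OpenCV convention).
    double up[3]  = { -R[1][0], -R[1][1], -R[1][2] };

    camera->SetPosition(C[0], C[1], C[2]);
    camera->SetFocalPoint(C[0] + dir[0], C[1] + dir[1], C[2] + dir[2]);
    camera->SetViewUp(up[0], up[1], up[2]);

    // Vertical view angle (degrees) from the focal length and image height.
    camera->SetViewAngle(2.0 * vtkMath::DegreesFromRadians(std::atan(0.5 * imageHeight / fy)));
}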
What I did was take the inverse of extrinsicReal and multiply it by extrinsicVTK to get a new 4x4 matrix (let me call it newMatrix). I then applied this matrix as a transform to the VTK camera:
// extrinsicRealInvert holds the inverse of extrinsicReal
vtkSmartPointer<vtkMatrix4x4> extrinsicRealInvert = vtkSmartPointer<vtkMatrix4x4>::New();
vtkMatrix4x4::Invert(extrinsicReal, extrinsicRealInvert);

vtkSmartPointer<vtkMatrix4x4> newMatrix = vtkSmartPointer<vtkMatrix4x4>::New();
vtkMatrix4x4::Multiply4x4(extrinsicRealInvert, extrinsicVTK, newMatrix);

vtkSmartPointer<vtkTransform> transform = vtkSmartPointer<vtkTransform>::New();
transform->SetMatrix(newMatrix);
transform->Update();

renderer->GetActiveCamera()->ApplyTransform(transform);
I am not sure whether this is the correct method, but I checked the real camera position (obtained from EPnP) and the VTK camera position (after applying the transform above), and they are exactly the same. The orientation of the real camera and the direction of projection of the VTK camera also match.
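For reference, the check I did amounts to something like this sketch: comparing the camera centre derived from extrinsicReal (i.e. -R^T * t) with the vtkCamera position and direction of projection. compareCameras is just an illustrative name:

// Sketch of the comparison: real camera centre from extrinsicReal vs. vtkCamera state.
#include <vtkCamera.h>
#include <vtkMatrix4x4.h>
#include <iostream>

void compareCameras(vtkCamera* camera, vtkMatrix4x4* extrinsicReal)
{
    // Camera centre of the real camera: C = -R^T * t.
    double C[3] = { 0.0, 0.0, 0.0 };
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            C[i] -= extrinsicReal->GetElement(j, i) * extrinsicReal->GetElement(j, 3);

    double pos[3];
    camera->GetPosition(pos);

    double dop[3];
    camera->GetDirectionOfProjection(dop);

    std::cout << "real camera centre:  " << C[0] << " " << C[1] << " " << C[2] << "\n";
    std::cout << "VTK camera position: " << pos[0] << " " << pos[1] << " " << pos[2] << "\n";
    std::cout << "VTK projection dir:  " << dop[0] << " " << dop[1] << " " << dop[2] << "\n";
}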
The problem is that even though these parameters now agree between the VTK camera and the real camera, the 3D VTK model does not line up properly with the real camera video. Can someone help me debug this step by step?