OpenCV triangulatePoints at different distances

I use the OpenCV triangulatePoints function to determine the 3D coordinates of a point observed by a stereo camera.

It seems that this function gives me a different distance to the same point depending on the angle at which the camera views that point.

Here is the video: https://www.youtube.com/watch?v=FrYBhLJGiE4

In this video, we track the "X" mark. The upper left corner shows information about the point being tracked. (YouTube reduced the quality; the original video is much sharper, (2x1280) x 720.)

In the video, the left camera is the origin of the 3D coordinate system and looks in the positive Z direction. The left camera undergoes some translation, but not nearly as much as the triangulatePoints output suggests. (More information in the video description.)

The metric unit is mm, so the point was initially triangulated at a distance of ~ 1.94 m from the left camera.

I know that inaccurate calibration can lead to this behavior. I performed three independent calibrations using a checkerboard pattern. The resulting parameters vary more than I would like (approximately +10% in the focal length estimate).

As you can see, the video is not very distorted. Straight lines appear pretty straight everywhere. Therefore, the optimal camera parameters should be close to the ones I already use.

My question is: is there anything else that can cause this?

Could the convergence angle between the two stereo cameras have this effect? Or a wrong baseline length?

Of course, there are always errors in feature detection. Since I use optical flow to track the "X" mark, I get sub-pixel accuracy, which can be off by... I don't know... +-0.2 px?
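
(For reference, by optical flow I mean something like OpenCV's pyramidal Lucas-Kanade; this is a generic sketch of the kind of call I mean, not my exact code:)

// Generic sketch of sub-pixel point tracking with pyramidal Lucas-Kanade
// optical flow (cv::calcOpticalFlowPyrLK); illustrative only.
#include <opencv2/video/tracking.hpp>
#include <vector>

cv::Point2f trackPoint(const cv::Mat& prevGray, const cv::Mat& currGray,
                       cv::Point2f prevPt)
{
    std::vector<cv::Point2f> prevPts(1, prevPt), currPts;
    std::vector<uchar> status;
    std::vector<float> err;

    cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts,
                             status, err,
                             cv::Size(21, 21),   // search window
                             3);                 // pyramid levels

    // status[0] == 0 means the track was lost and should be re-initialized.
    return (!status.empty() && status[0]) ? currPts[0] : prevPt;
}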

I am using a Stereolabs ZED stereo camera. I do not access the raw video frames directly through OpenCV; instead, I have to use the SDK that came with the camera. It occurred to me that this SDK might be doing some undistortion of its own.

So now I wonder... if the SDK undistorts the images using wrong distortion coefficients, could that produce an image that is neither barrel-distorted nor pincushion-distorted, but something else entirely?

+7
c++ opencv
2 answers

The SDK supplied with the ZED camera performs image undistortion and rectification. The geometric model is the same as OpenCV's:

  • intrinsic parameters and distortion parameters for the left and right cameras.
  • extrinsic parameters for the rotation/translation between right and left.

Through one of the ZED tools (the ZED Settings application) you can enter your own intrinsic matrix for left/right and distortion coefficients, as well as baseline/convergence.

In order to get an accurate 3D triangulation, you may need to adjust these parameters, as they have a big impact on the disparity you estimate before converting it to depth.

OpenCV provides a good module for stereo camera calibration. It does mono calibration (cv::calibrateCamera()) for left and right, and then stereo calibration (cv::stereoCalibrate()). This gives you the intrinsic parameters (focal length, optical center (very important)) and the extrinsic ones (baseline = T[0], convergence = R[1] if R is a 3x1 matrix). The RMS reprojection error (the return value of stereoCalibrate()) is a good way to verify that the calibration is correct.
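
A rough sketch of that workflow (assuming you have already collected the checkerboard corners from the raw left/right images; the function and variable names here are only illustrative):

// Sketch of the mono + stereo calibration workflow described above.
// objectPoints / imagePointsL / imagePointsR are assumed to come from
// cv::findChessboardCorners run on the *raw* left/right images.
#include <opencv2/calib3d.hpp>
#include <vector>

double calibrateStereoPair(
    const std::vector<std::vector<cv::Point3f>>& objectPoints,
    const std::vector<std::vector<cv::Point2f>>& imagePointsL,
    const std::vector<std::vector<cv::Point2f>>& imagePointsR,
    cv::Size imageSize,
    cv::Mat& K1, cv::Mat& D1, cv::Mat& K2, cv::Mat& D2,
    cv::Mat& R, cv::Mat& T)
{
    std::vector<cv::Mat> rvecs, tvecs;

    // Mono calibration of each camera (intrinsics + distortion).
    cv::calibrateCamera(objectPoints, imagePointsL, imageSize, K1, D1, rvecs, tvecs);
    cv::calibrateCamera(objectPoints, imagePointsR, imageSize, K2, D2, rvecs, tvecs);

    // Stereo calibration, keeping the intrinsics found above fixed.
    // T[0] is the baseline, R encodes the convergence between the cameras.
    cv::Mat E, F;
    double rms = cv::stereoCalibrate(objectPoints, imagePointsL, imagePointsR,
                                     K1, D1, K2, D2, imageSize,
                                     R, T, E, F, cv::CALIB_FIX_INTRINSIC);
    return rms;   // RMS reprojection error in pixels: a quick sanity check
}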

The important thing is that you need to perform this calibration on raw images, not on the rectified images coming out of the ZED SDK. Since the ZED is a standard UVC camera, you can use OpenCV to grab the side-by-side images (cv::VideoCapture with the correct device number) and split them into the left and right images yourself.
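
For example, grabbing and splitting the raw frames could look roughly like this (the device index and the 2560x720 side-by-side resolution are assumptions, adjust them to your setup):

// Sketch: grabbing raw side-by-side frames from the ZED as a plain UVC
// device and splitting them into left/right images for calibration.
#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>

int main()
{
    cv::VideoCapture cap(0);                      // device number may differ
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 2560);      // 2 x 1280 side by side
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);

    cv::Mat sideBySide;
    while (cap.read(sideBySide))
    {
        int w = sideBySide.cols / 2;
        cv::Mat left  = sideBySide(cv::Rect(0, 0, w, sideBySide.rows));
        cv::Mat right = sideBySide(cv::Rect(w, 0, w, sideBySide.rows));
        // ... run cv::findChessboardCorners on left/right here ...
    }
    return 0;
}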

You can then enter these calibration parameters in the tool. The ZED SDK will then perform the undistortion/rectification and provide the rectified images. The new camera matrix is provided in getParameters(). You must use these values when you triangulate, because the images are rectified as if they were taken from this "ideal" camera.
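
For the triangulation itself, assuming the rectified images behave like an ideal pair (shared camera matrix, pure horizontal baseline, no rotation), the projection matrices can be built like this; fx, fy, cx, cy and the baseline are placeholders to be filled from getParameters():

// Sketch: triangulating with cv::triangulatePoints using the *rectified*
// camera parameters (the ones valid for the rectified images), not the
// checkerboard calibration of the raw images.
#include <opencv2/calib3d.hpp>
#include <vector>

cv::Point3d triangulate(double fx, double fy, double cx, double cy,
                        double baselineMm,
                        cv::Point2f ptLeft, cv::Point2f ptRight)
{
    cv::Mat K = (cv::Mat_<double>(3, 3) << fx, 0, cx,
                                           0, fy, cy,
                                           0,  0,  1);

    // Rectified pair: identical K, no rotation, pure horizontal baseline.
    cv::Mat I0 = (cv::Mat_<double>(3, 4) << 1, 0, 0, 0,
                                            0, 1, 0, 0,
                                            0, 0, 1, 0);
    cv::Mat It = (cv::Mat_<double>(3, 4) << 1, 0, 0, -baselineMm,
                                            0, 1, 0, 0,
                                            0, 0, 1, 0);
    cv::Mat P1 = K * I0;
    cv::Mat P2 = K * It;

    std::vector<cv::Point2f> left(1, ptLeft), right(1, ptRight);
    cv::Mat X;                                   // 4x1 homogeneous result
    cv::triangulatePoints(P1, P2, left, right, X);

    return { X.at<float>(0, 0) / X.at<float>(3, 0),
             X.at<float>(1, 0) / X.at<float>(3, 0),
             X.at<float>(2, 0) / X.at<float>(3, 0) };   // in mm, like the baseline
}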

Hope this helps. /OB/

+2

There are 3 points that I can think of that will probably help you.

  • This is probably the least important, but from your description it sounds like you calibrated the cameras separately and then the stereo system. Performing a joint optimization should improve reconstruction accuracy, since some "less accurate" parameters can be compensated by other parameters.

  • If reconstruction accuracy is important to you, you need a systematic approach to reducing the inaccuracies. Building an uncertainty model is easy thanks to the mathematical model, and a few lines of code can build it for you. Say you want to see whether a 3D point 2 meters away, at a certain angle to the camera system, can be resolved given a certain uncertainty in its 2D projections: it is easy to back-propagate that uncertainty into the 3D space around your 3D point. By adding uncertainty to the other system parameters you can see which ones matter more and should therefore have lower uncertainty. (A crude Monte Carlo version of this is sketched after this list.)

  • This inaccuracy is inherent in the problem and the method you use.

    • First, if you model the uncertainty, you will see that reconstructed 3D points farther from the center of the cameras have much higher uncertainty. The reason is that the angle <left-camera, 3D-point, right-camera> is narrower. I remember the MVG book (Multiple View Geometry) had a good illustration of this.
    • Secondly, if you look at the implementation of triangulatePoints, you will see that it uses the pseudo-inverse method, implemented via SVD, to construct the 3D point. This can cause many of the numerical issues that you probably remember from linear algebra.
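
Here is the crude Monte Carlo sketch of the uncertainty idea mentioned above (not an analytic model): perturb the 2D detections with the pixel noise you expect, triangulate each sample, and look at the spread of the recovered depth. P1 and P2 stand for the 3x4 projection matrices you already pass to triangulatePoints.

// Crude Monte Carlo sketch: perturb the 2D detections by the expected pixel
// noise (e.g. +-0.2 px), triangulate each sample, and look at the spread of
// the resulting depth.
#include <opencv2/calib3d.hpp>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

void depthUncertainty(const cv::Mat& P1, const cv::Mat& P2,
                      cv::Point2f left, cv::Point2f right,
                      float sigmaPx = 0.2f, int samples = 1000)
{
    std::mt19937 rng(42);
    std::normal_distribution<float> noise(0.f, sigmaPx);

    std::vector<double> depths;
    for (int i = 0; i < samples; ++i)
    {
        std::vector<cv::Point2f> l(1), r(1);
        l[0] = cv::Point2f(left.x + noise(rng),  left.y + noise(rng));
        r[0] = cv::Point2f(right.x + noise(rng), right.y + noise(rng));

        cv::Mat X;                                   // 4x1 homogeneous point
        cv::triangulatePoints(P1, P2, l, r, X);
        depths.push_back(X.at<float>(2, 0) / X.at<float>(3, 0));   // Z
    }

    double mean = 0.0, var = 0.0;
    for (double z : depths) mean += z;
    mean /= depths.size();
    for (double z : depths) var += (z - mean) * (z - mean);
    var /= depths.size();

    std::printf("depth: %.1f +- %.1f (same unit as the calibration)\n",
                mean, std::sqrt(var));
}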

Update:

But I consistently get a larger distance near the edges, several times the amount of uncertainty that the angle alone would cause.

This is the result of using the pseudo-inverse, a numerical method. You can replace it with a geometric method. One simple way is to back-project the 2D projections to get two rays in 3D space. Then you look for their intersection, which does not actually exist because of the inaccuracies; instead, you find the point where the two rays are closest to each other. Without taking uncertainty into account, you will consistently favor one point out of the many possible solutions. That is why with the pseudo-inverse you do not see random fluctuation, but a consistent error.
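
A minimal sketch of that midpoint approach, assuming rectified pinhole cameras that share the camera matrix K, with the right camera offset by the baseline along +x (the names are illustrative, not taken from your code):

// Sketch of the geometric alternative: back-project each 2D detection to a
// ray in 3D, then take the midpoint of the shortest segment between the rays.
#include <opencv2/core.hpp>

cv::Point3d triangulateMidpoint(const cv::Matx33d& K, double baselineMm,
                                cv::Point2d ptLeft, cv::Point2d ptRight)
{
    cv::Matx33d Kinv = K.inv();

    // Ray origins and directions in the left-camera frame (rectified: R = I).
    cv::Vec3d o1(0, 0, 0),          d1 = Kinv * cv::Vec3d(ptLeft.x,  ptLeft.y,  1);
    cv::Vec3d o2(baselineMm, 0, 0), d2 = Kinv * cv::Vec3d(ptRight.x, ptRight.y, 1);

    // Closest points on the two (generally skew) rays: o1 + s*d1 and o2 + t*d2.
    cv::Vec3d w = o1 - o2;
    double a = d1.dot(d1), b = d1.dot(d2), c = d2.dot(d2);
    double d = d1.dot(w),  e = d2.dot(w);
    double denom = a * c - b * b;            // ~0 only for parallel rays

    double s = (b * e - c * d) / denom;
    double t = (a * e - b * d) / denom;

    cv::Vec3d mid = 0.5 * ((o1 + s * d1) + (o2 + t * d2));
    return { mid[0], mid[1], mid[2] };
}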

Regarding the joint optimization: yes, you can run an iterative Levenberg-Marquardt optimization over all parameters. This is the method used in applications such as SLAM for autonomous vehicles, where accuracy is very important. You can find relevant papers by googling "bundle adjustment SLAM".

+1
