Camera calibration

I'm using OpenCV, and I'm a complete beginner at all of this.

I have a script that projects onto a wall, and I'm building a kind of robot that carries a camera. How can I process the camera image to recover the real-world coordinates of the blobs tracked by my camera?

1 answer

First of all, you need to calibrate the camera's intrinsics. For that you use checkerboard patterns printed and mounted on cardboard; OpenCV has methods for this, and there are also ready-made tools. To give you an idea, I wrote Python code that calibrates from a live video stream while the cardboard is moved in front of the camera at different angles and distances. Have a look here: http://svn.ioctl.eu/pub/opencv/py-camera_intrinsic/

Then you need to calibrate the camera's extrinsics, i.e. the camera pose with respect to your world coordinates. You can put some markers on the wall, determine the 3D positions of those markers, and let OpenCV compute the extrinsics from them (cvFindExtrinsicCameraParams2). In my code example I compute the extrinsics of a chessboard so that I can render a teapot in the correct camera perspective. You will have to adapt this for your needs.

I assume you are only projecting onto a flat surface; you must know its geometry to get 3D coordinates for your detected blobs. You can then find the blobs in your camera image and, knowing the intrinsics, extrinsics, and geometry, cast a ray for each blob from the camera according to your intrinsics/extrinsics and compute the intersection of each such ray with the known geometry. That intersection is the 3D point in world space where the blob is projected.
