Brief introduction:
We are developing a positioning system that works as follows: the camera is mounted on the robot and points straight up at the ceiling. On the ceiling there is something like a guide line, from which we can calculate the robot's position. It looks like this:

Our problem:
The camera is tilted slightly (I think by 0-4 degrees) because the surface of the robot is not perfectly flat. This means that when the robot rotates while staying at the same coordinates, the camera looks at a different spot on the ceiling, so our positioning program reports a different position for the robot even though it only rotated and did not move at all.
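To give a sense of scale (the numbers below are only assumptions for illustration, not measurements of our setup): with a 4-degree tilt and a camera-to-ceiling distance of about 3 m, the point the camera sees as "straight up" lands roughly h·tan(4°) ≈ 21 cm away from the true point, and that offset sweeps a circle on the ceiling as the robot rotates.

```python
import math

# Illustrative numbers only (assumed): 4 degree tilt, 3 m camera-to-ceiling distance.
tilt_deg = 4.0
ceiling_height_m = 3.0

# The spot the camera reports as "straight up" is offset on the ceiling by h * tan(tilt).
offset_m = ceiling_height_m * math.tan(math.radians(tilt_deg))
print(f"apparent offset on the ceiling: {offset_m:.2f} m")  # ~0.21 m
```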
Our current (hard-coded) solution:
We took test photos with the camera while rotating it around the lens axis. From the photographs we deduced that it is tilted by approximately 4 degrees toward the "top" of the image. Using some simple geometric transformations we were able to reduce the tilt effect and recover the real camera position. In the following pictures, the gray dot marks the center of the image and the black dot is the real point on the ceiling directly above the camera. The black dot was computed from the gray dot (its position was obtained by correcting the gray dot's position). As you can see, the gray dots form a circle on the ceiling, and the black dot lies at the center of this circle. A simplified sketch of this correction is given after the pictures.




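In essence, the hard-coded correction amounts to something like the sketch below. The constants and the frame convention are placeholders, not the actual values from our robot: we simply subtract the tilt-induced offset, which rotates together with the robot, from the measured ceiling point.

```python
import math

# Placeholder calibration constants (ours are hard-coded after manual measurement):
TILT_DEG = 4.0          # tilt magnitude deduced from the test photos
TILT_DIR_DEG = 0.0      # direction of the tilt in the robot frame
CEILING_HEIGHT_M = 3.0  # camera-to-ceiling distance

def correct_position(measured_x, measured_y, robot_heading_deg):
    """Move the measured ceiling point (gray dot) back to the true point
    above the camera (black dot) by removing the tilt-induced offset."""
    # Radius of the circle that the gray dots trace on the ceiling.
    r = CEILING_HEIGHT_M * math.tan(math.radians(TILT_DEG))
    # The offset direction rotates together with the robot.
    a = math.radians(robot_heading_deg + TILT_DIR_DEG)
    return measured_x - r * math.cos(a), measured_y - r * math.sin(a)
```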
The problem with our solution:
Our approach does not generalize at all: if we move the camera to a new robot, the tilt angle and direction have to be re-calibrated from scratch. We therefore wanted to leave the calibration phase to the user, which would mean taking some images, estimating the tilt parameters from them, and entering those parameters into the program. My question is: can you come up with a better (more automatic) way to estimate the tilt parameters, or to correct the tilt in the images?
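For what it's worth, based on the circle observation above, the calibration step itself could perhaps be automated roughly like this (only a rough, untested sketch; the function names and the simple least-squares circle fit are placeholders of mine): rotate the robot in place, record the positions the system reports, and fit a circle to them.

```python
import math
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit. `points` are the (x, y) positions the
    system reports while the robot rotates in place. Returns (cx, cy, r)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Solve x^2 + y^2 + D*x + E*y + F = 0 in a least-squares sense.
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    r = math.sqrt(max(cx**2 + cy**2 - F, 0.0))
    return cx, cy, r

def estimate_tilt(points, ceiling_height_m):
    """Estimate the tilt magnitude from the circle radius: r = h * tan(tilt)."""
    cx, cy, r = fit_circle(points)
    tilt_deg = math.degrees(math.atan2(r, ceiling_height_m))
    return (cx, cy), tilt_deg
```

The circle center (cx, cy) would be the true position of the rotation axis, the radius gives the tilt magnitude via r = h·tan(θ), and comparing the robot's heading at each sample with the angle of the corresponding point around the center would give the tilt direction, so the user would never have to enter numbers by hand. Is there a better or more robust way to do this?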