I have a computer vision setup with two cameras. One of them is a time-of-flight camera, which gives me the depth of the scene at every pixel. The other is a standard camera that gives me a color image of the scene.
We would like to use the depth information to remove some areas from the color image. We plan to track objects, people, and hands in the color image, and we want to mask out distant background pixels using the time-of-flight camera's depth data. I am not sure yet whether the cameras can be mounted in a parallel setup.
We could use OpenCV or MATLAB for the calculations.
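To make the goal concrete, here is a rough sketch in Python with OpenCV of the kind of masking I have in mind. It assumes the hard part is already solved, i.e. the depth map has somehow been registered to the color image (same resolution, per-pixel depth in meters); the file names and the 1.5 m threshold are only placeholders.

```python
import cv2
import numpy as np

# Assumption: the depth map is already registered to the color frame
# (same resolution, depth in meters, 0 = no reading). File names are placeholders.
color = cv2.imread("color_frame.png")
depth = np.load("registered_depth.npy")

MAX_DISTANCE_M = 1.5  # anything farther than this counts as background

# Keep only pixels with a valid depth reading closer than the threshold.
foreground = (depth > 0) & (depth < MAX_DISTANCE_M)

# Black out the background in the color frame.
masked = color.copy()
masked[~foreground] = 0

cv2.imwrite("masked_frame.png", masked)
```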
I have read a lot about rectification, epipolar geometry, etc., but I still have trouble seeing the concrete steps I need to take to compute the correspondence for each pixel.
Which approach would you use, and what features could be used? Into which steps would you split the problem? Is there a tutorial or sample code available somewhere?
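For reference, this is roughly how I currently picture the per-pixel mapping once both cameras are calibrated (intrinsics K_tof / K_color and a relative pose R, t from the ToF frame to the color frame); lens distortion is ignored and all names are my own placeholders. Please correct me if this decomposition is wrong.

```python
import numpy as np

def depth_to_color_correspondence(depth, K_tof, K_color, R, t):
    """For every ToF pixel with a valid depth value, compute where it lands
    in the color image. K_tof / K_color are 3x3 intrinsic matrices, (R, t)
    the rotation and translation from the ToF frame to the color frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))

    # Back-project every depth pixel to a 3D point in the ToF camera frame.
    z = depth.astype(np.float64)
    x = (u - K_tof[0, 2]) * z / K_tof[0, 0]
    y = (v - K_tof[1, 2]) * z / K_tof[1, 1]
    pts_tof = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    # Rigidly transform into the color camera frame, then project with its intrinsics.
    pts_color = pts_tof @ R.T + t.reshape(1, 3)
    proj = pts_color @ K_color.T
    with np.errstate(divide="ignore", invalid="ignore"):
        uc = proj[:, 0] / proj[:, 2]   # pixels with depth 0 give NaN/inf here
        vc = proj[:, 1] / proj[:, 2]
    return uc.reshape(h, w), vc.reshape(h, w)
```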
Update: We plan to calibrate automatically using known markers placed in the scene.
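If the markers end up being something like a checkerboard that both cameras can see (the ToF camera usually also provides an intensity/amplitude image), I imagine the extrinsic calibration could look roughly like this in OpenCV. Corner detection and the per-camera intrinsics are assumed to be done beforehand, and the function name is my own invention.

```python
import cv2

def calibrate_tof_to_color(obj_points, tof_corners, color_corners,
                           K_tof, dist_tof, K_color, dist_color, image_size):
    """Estimate the rigid transform (R, T) from the ToF camera frame to the
    color camera frame. obj_points are the 3D checkerboard corner positions,
    tof_corners / color_corners the matching 2D detections in each camera
    (e.g. from cv2.findChessboardCorners), and the K / dist arguments the
    per-camera intrinsics from cv2.calibrateCamera."""
    # Keep the known intrinsics fixed and solve only for the relative pose.
    flags = cv2.CALIB_FIX_INTRINSIC
    (_, _, _, _, _, R, T, _, _) = cv2.stereoCalibrate(
        obj_points, tof_corners, color_corners,
        K_tof, dist_tof, K_color, dist_color,
        image_size, flags=flags)
    return R, T
```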