OpenCV Object Detection and Real-Time Comparison

We are building an autonomous robot (a college project) that follows signs and directions along a set route. The robot has a camera on its head; it reads signs on the wall ahead or painted on the path in front of it and decides what to do. The signs are a GREEN ARROW (go in the indicated direction) and a RED T (stop). The robot must recognize these signs in real time and perform the corresponding action.

I tried to find suitable image-processing algorithms and methods, but we are completely new to this area. Could you help us work out how to approach this problem and point us to code that could get us started?

I studied the following topics, but I'm confused:

  • OpenCV Object Detection - Center Point

  • How to recognize the rectangles in this image?

  • http://www.chrisevansdev.com/computer-vision-opensurf.html (I can't use it)

One of the hints given for the event is that the arrow can be modelled as a rectangle and a triangle combined: if the center of the triangle lies to the right of the rectangle, the robot should go right, and likewise for the other side. The T sign could be handled similarly.
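A rough, untested sketch of that hint in OpenCV C++ (it assumes the sign has already been thresholded into a binary mask, and that the rectangle and the triangle show up as two separate blobs in that mask; the function name and the 0.04 approximation factor are placeholders):

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <vector>

    // Returns +1 if the triangle's center is to the right of the rectangle's
    // center (turn right), -1 if it is to the left, 0 if the shapes were not found.
    int arrowDirection(const cv::Mat& mask)
    {
        std::vector<std::vector<cv::Point> > contours;
        cv::Mat work = mask.clone();              // findContours modifies its input
        cv::findContours(work, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

        cv::Point2f triCenter, rectCenter;
        bool haveTri = false, haveRect = false;

        for (size_t i = 0; i < contours.size(); ++i)
        {
            // Approximate the contour by a polygon and count its vertices.
            std::vector<cv::Point> poly;
            double eps = 0.04 * cv::arcLength(contours[i], true);
            cv::approxPolyDP(contours[i], poly, eps, true);

            cv::Moments m = cv::moments(contours[i]);
            if (m.m00 < 1e-3) continue;           // skip degenerate blobs
            cv::Point2f c((float)(m.m10 / m.m00), (float)(m.m01 / m.m00));

            if (poly.size() == 3)      { triCenter  = c; haveTri  = true; }
            else if (poly.size() == 4) { rectCenter = c; haveRect = true; }
        }

        if (!haveTri || !haveRect) return 0;
        return (triCenter.x > rectCenter.x) ? +1 : -1;
    }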

Thanks!:)

+7
2 answers

If the signs are known in advance, you can use a "recognize objects by detecting features" approach (feature matching).

The idea is that you have an image of each sign (arrow or T) and you run the following training steps offline (a minimal sketch of these steps follows the list):

1 - Feature detection (using SURF, FAST, ...)

2 - Descriptor extraction (from the detected features) using SIFT, FREAK, etc.
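A minimal sketch of these two offline steps, assuming OpenCV 2.4-style C++ with FAST + FREAK to match the real-time snippet below (the function name and file name are placeholders):

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/features2d/features2d.hpp>
    #include <string>
    #include <vector>

    // Offline "training": run detection + description once on the known sign
    // image and keep the result. This is done before the event, not on-line.
    cv::Mat trainSign(const std::string& signFile,
                      std::vector<cv::KeyPoint>& keypoints_training)
    {
        cv::Mat sign = cv::imread(signFile, CV_LOAD_IMAGE_GRAYSCALE);

        cv::FastFeatureDetector detector;    // step 1: feature (keypoint) detection
        cv::FREAK extractor;                 // step 2: descriptor extraction

        cv::Mat descriptors_training;
        detector.detect(sign, keypoints_training);
        extractor.compute(sign, keypoints_training, descriptors_training);
        return descriptors_training;
    }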

Then, at run time, for each frame you perform feature detection and descriptor extraction again, and then match the result against the training descriptors to find out which sign you are looking at. An example that will work in real time:

    cv::FastFeatureDetector detector;                 // keypoint detection (FAST)
    cv::FREAK extractor;                              // binary descriptor extraction (FREAK)
    cv::BFMatcher matcher(cv::NORM_HAMMING, false);   // Hamming distance for binary descriptors

    std::vector<cv::KeyPoint> keypoints_frame;
    cv::Mat descriptors_frame;
    std::vector<cv::DMatch> matches;

    detector.detect(frame, keypoints_frame);
    extractor.compute(frame, keypoints_frame, descriptors_frame);
    matcher.match(descriptors_training, descriptors_frame, matches);

That is only a first, rough matching; you then need to refine it and remove outliers. Some common methods (a ratio-test sketch follows the list):

  • Ratio test

  • Cross check

  • RANSAC + homography
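For example, the ratio test can be done with knnMatch. A minimal sketch with the same FREAK/Hamming setup as above (the 0.8 threshold is just a common default to tune):

    #include <opencv2/core/core.hpp>
    #include <opencv2/features2d/features2d.hpp>
    #include <vector>

    // Lowe's ratio test: keep a match only if its best distance is clearly
    // smaller than the second-best distance.
    std::vector<cv::DMatch> ratioTest(const cv::Mat& descriptors_training,
                                      const cv::Mat& descriptors_frame,
                                      float ratio = 0.8f)
    {
        cv::BFMatcher matcher(cv::NORM_HAMMING, false);
        std::vector<std::vector<cv::DMatch> > knn;
        matcher.knnMatch(descriptors_training, descriptors_frame, knn, 2);  // 2 best matches each

        std::vector<cv::DMatch> good;
        for (size_t i = 0; i < knn.size(); ++i)
            if (knn[i].size() == 2 && knn[i][0].distance < ratio * knn[i][1].distance)
                good.push_back(knn[i][0]);
        return good;
    }
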

Here you will find a complete example.

+8

I assume that you can get the signs before the event: take the arrow sign, extract SIFT descriptors from it, and store them on your robot.

Then, in each frame, have the robot look for the sign's colour; when it sees something that resembles the sign, extract SIFT descriptors again and try to register them against the stored ones. If the registration succeeds, you can compute the rotation and translation between the original stored sign and the sign found in the image. A sketch of this pipeline is below.
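A sketch of that pipeline, assuming OpenCV 2.4 with the nonfree module for SIFT (SIFT lives in a different module in newer OpenCV versions); all names are placeholders:

    #include <opencv2/core/core.hpp>
    #include <opencv2/features2d/features2d.hpp>
    #include <opencv2/nonfree/features2d.hpp>   // cv::SIFT in OpenCV 2.4.x
    #include <opencv2/calib3d/calib3d.hpp>
    #include <vector>

    // Match the SIFT descriptors stored offline against the current frame and
    // estimate the homography that maps the stored sign into the frame.
    cv::Mat registerSign(const std::vector<cv::KeyPoint>& kpSign, const cv::Mat& descSign,
                         const cv::Mat& frame)
    {
        cv::SIFT sift;
        std::vector<cv::KeyPoint> kpFrame;
        cv::Mat descFrame;
        sift.detect(frame, kpFrame);
        sift.compute(frame, kpFrame, descFrame);

        cv::BFMatcher matcher(cv::NORM_L2);              // SIFT descriptors are float vectors
        std::vector<cv::DMatch> matches;
        matcher.match(descSign, descFrame, matches);

        // Collect matched point pairs and let RANSAC reject the outliers.
        std::vector<cv::Point2f> src, dst;
        for (size_t i = 0; i < matches.size(); ++i) {
            src.push_back(kpSign[matches[i].queryIdx].pt);
            dst.push_back(kpFrame[matches[i].trainIdx].pt);
        }
        if (src.size() < 4) return cv::Mat();            // homography needs >= 4 pairs
        return cv::findHomography(src, dst, CV_RANSAC, 3.0);
    }

Decomposing the returned homography gives the rotation and translation mentioned above; for a simple "go left / go right / stop" decision, checking which stored sign produces the most RANSAC inliers may already be enough.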

To read about SIFT, I would recommend this site: http://aishack.in/tutorials/sift-scale-invariant-feature-transform-introduction/ . Once you understand the basics of SIFT, I recommend using an existing implementation rather than writing your own: implementing it yourself is very tedious and has many pitfalls.

BTW, even though SIFT stands for "Scale-Invariant Feature Transform", I am fairly sure it will work in your case even when the sign undergoes a perspective transformation.

Hope this helps

+3
