I am trying to implement a traffic sign recognizer with OpenCV and SURF. My problem is that I get erratic results (sometimes very accurate, sometimes obviously wrong), and I can't understand why. This is how I implemented the comparison:
- First I find contours in my image.
- Then, for each contour, I use SURF to determine whether a road sign is inside it, and which sign it is.
Contour detection works fine: with Gaussian blur and Canny edge detection, I manage to find a contour like this:

Then I extract the image corresponding to this contour and compare it with a road sign template image, for example:


cvExtractSURF returns 189 descriptors for the contour image. Then I use the naiveNearestNeighbor method to measure the similarity between my contour image and each template image.
Here are my results:
6/189 for the first template (which I expect to find)
92/189 for the second template (which is obviously very different from the contour image)
I really don't understand these results ...
Here is a list of the steps I follow:
- Convert the contour image to grayscale
- Convert the template image to grayscale
- Equalize the histogram of the contour image (cvEqualizeHist)
- Resize the template image to fit the contour image
- Blur the template image (cvSmooth)
- Blur the contour image (cvSmooth)
- Run cvExtractSURF on the template image
- Run cvExtractSURF on the contour image
- For each descriptor of the contour image, run naiveNearestNeighbor against the template descriptors
- Count the number of "good" points
To assess the similarity between the two images, I use the ratio:
number of good points / total number of descriptors
PS: For information, I am following this guide: http://www.emgu.com/wiki/index.php/Traffic_Sign_Detection_in_CSharp
I also adapted the find_obj OpenCV sample to C.