I am looking for a useful similarity metric based on SURF: a measure of how well one image matches another on a scale from 0 to 1, where 0 means no similarity and 1 means the same image.
SURF provides the following data:
- interest points (and their descriptors) in the query image (set Q)
- interest points (and their descriptors) in the target image (set T)
- using a nearest-neighbour pairing algorithm, you can build a set of matching pairs from the two sets above (see the sketch after this list)
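For reference, a minimal sketch of how these three pieces of data can be obtained with OpenCV. It assumes an opencv-contrib build with the non-free SURF module enabled; the Hessian threshold and the ratio-test value are arbitrary choices, not part of the question.

    # Sketch: SURF keypoints/descriptors for Q and T, plus nearest-neighbour matches.
    import cv2

    def surf_match(query_path, target_path, ratio=0.75):
        q_img = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
        t_img = cv2.imread(target_path, cv2.IMREAD_GRAYSCALE)

        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        q_kp, q_desc = surf.detectAndCompute(q_img, None)   # set Q
        t_kp, t_desc = surf.detectAndCompute(t_img, None)   # set T

        # Nearest-neighbour matching with Lowe's ratio test to keep
        # only reasonably unambiguous pairs.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        knn = matcher.knnMatch(q_desc, t_desc, k=2)
        matches = [m for m, n in knn if m.distance < ratio * n.distance]
        return q_kp, t_kp, matches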
I have tried a couple of things, but neither worked very well:
Using the sizes of the two sets: d = N / min(size(Q), size(T)), where N is the number of matched interest points. This gives rather low scores for fairly similar images, e.g. 0.32 even when about 70 points were matched out of roughly 600 in Q and 200 in T, and I think 70 matches is a really good result. I was thinking about some kind of logarithmic scaling, so that only very low match counts get low scores, but I can't find the right equation. With d = log(9*d0 + 1) I get 0.59, which is better, but it still seems to throw away the discriminative power of SURF.
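A small sketch of this first metric with the numbers mentioned above (the helper names are mine, and the log rescaling is assumed to be base 10, which reproduces the 0.59 value):

    import math

    def count_metric(n_matches, q_size, t_size):
        # Raw ratio of matched points to the smaller keypoint set.
        return n_matches / min(q_size, t_size)

    def log_rescaled(d0):
        # The log(9*d0 + 1) rescaling mentioned above (base-10 log).
        return math.log10(9 * d0 + 1)

    d0 = count_metric(70, 600, 200)   # 0.35 with these round numbers (~0.32 in the question)
    d  = log_rescaled(0.32)           # ~0.59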
Using the distances between the matched pairs: I took something like the K best matches and summed their descriptor distances; the smaller the sum, the more similar the two images. The problem is that I don't know the minimum and maximum possible values of the interest-point descriptor elements, from which the distance is computed, so I can only rank results relative to each other (among many inputs, which one matches best). As I said, I want the metric to lie exactly between 0 and 1, because I need to compare SURF against other image metrics.
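One way to get the missing bound, sketched below: if the descriptors are L2-normalized to unit length (the standard SURF descriptor is), the Euclidean distance between any two descriptors is at most 2, so the average distance of the K best matches can be mapped into [0, 1]. The function name, the choice of K, and the plain averaging are my assumptions; it expects OpenCV-style DMatch objects from the matching sketch above.

    import numpy as np

    def distance_metric(matches, k=50, max_dist=2.0):
        # Average descriptor distance over the k best matches, mapped to [0, 1],
        # where 1 means identical descriptors. max_dist = 2 assumes unit-length
        # descriptors (two unit vectors can be at most 2 apart).
        if not matches:
            return 0.0
        best = sorted(matches, key=lambda m: m.distance)[:k]
        mean_dist = np.mean([m.distance for m in best])
        return 1.0 - mean_dist / max_dist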
The biggest problem with both of these is that each one ignores the other: the match count does not take the distances between the matches into account, and the distance-based score ignores how many points were matched. I am lost.
EDIT: For the first approach, the equation log(x * 10^k) / k, where k is 3 or 4, gives good results most of the time, but it is still not ideal: it can push d above 1 in some rare cases, and poor matches still do not end up with a small score.
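A quick check of that rescaling (base-10 log assumed). Since log(x * 10^k) / k = 1 + log(x)/k, the result exceeds 1 exactly when the raw ratio x itself is above 1, which can happen if several query points match the same target point:

    import math

    def edit_rescale(x, k=3):
        # log(x * 10^k) / k  ==  1 + log10(x) / k
        return math.log10(x * 10 ** k) / k

    print(edit_rescale(0.32, k=3))   # ~0.84
    print(edit_rescale(0.32, k=4))   # ~0.88
    print(edit_rescale(1.5,  k=3))   # ~1.06 -> would need clamping to stay in [0, 1]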
math algorithm image-processing computer-vision surf
SinistraD