Metric for SURF

I am looking for a useful similarity metric for SURF: a score for how well one image matches another, on a scale from 0 to 1, where 0 means no similarity and 1 means an identical image.

SURF provides the following data:

  • points of interest (and their descriptors) in the query image (set Q)
  • points of interest (and their descriptors) in the target image (set T)
  • using a nearest-neighbor matching algorithm, you can build matching pairs from the two sets above

I tried to do something, but nothing worked too well:

  • using the sizes of the sets: d = N / min(size(Q), size(T)), where N is the number of matched interest points. This produces fairly low values for images that are actually quite similar, e.g. 0.32 even when about 70% of the points matched, with roughly 600 points in Q and 200 in T; I consider 70% a really good result. I was thinking about some kind of logarithmic scaling, so that only very low ratios would get low scores, but I can't find the right equation. With d = log(9*d0 + 1) I get 0.59, which is better, but it still seems to throw away the discriminating power of SURF.
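The set-size metric above can be sketched as follows (a minimal sketch; the function names are mine, and the log rescaling uses base 10, which reproduces the 0.59 value quoted for d0 = 0.32):

```python
import math

def set_size_metric(n_matches, size_q, size_t):
    # d0 = N / min(|Q|, |T|): fraction of the smaller set that found a match
    return n_matches / min(size_q, size_t)

def log_rescaled(d0):
    # log10(9*d0 + 1) maps 0 -> 0 and 1 -> 1, lifting mid-range values
    return math.log10(9 * d0 + 1)

print(log_rescaled(0.32))  # ~0.59, the value quoted above
```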

  • using the distances between the pairs: for each point I find its best match and sum the match distances; the smaller the sum, the more similar the two images. The problem is that I do not know the minimum and maximum values of the interest-point descriptor elements from which the distance is computed, so I can only rank results relative to each other (i.e. find the best among many inputs). As I said, I would like the metric to lie exactly between 0 and 1; I need this to compare SURF against other image metrics.
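One way to get an absolute [0, 1] range from the distance-based approach: standard SURF descriptors are L2-normalized to unit length, so the Euclidean distance between any two descriptors is bounded by 2, which bounds the average match distance as well. A minimal sketch under that assumption (the `matches` pair format is hypothetical, not a fixed SURF API):

```python
import numpy as np

def distance_similarity(desc_q, desc_t, matches):
    # Map matched descriptor distances into [0, 1].
    # Assumes descriptors are L2-normalized to unit length (standard for
    # SURF), so the Euclidean distance between any two is at most 2.
    # `matches` is a list of (query_index, target_index) pairs.
    if not matches:
        return 0.0
    dists = [np.linalg.norm(desc_q[i] - desc_t[j]) for i, j in matches]
    mean_dist = sum(dists) / len(dists)   # in [0, 2]
    return 1.0 - mean_dist / 2.0          # in [0, 1]
```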

The biggest problem is that each of these two metrics ignores what the other measures: the first does not take the match distances into account, and the second ignores the number of matches. I am lost.

EDIT: For the first approach, the equation log(x * 10^k) / k, where k is 3 or 4, gives good results most of the time, but it is still not ideal: it can make d greater than 1, and in some rare cases it yields results that are too small.
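The overshoot past 1 (and the negative values for very small x) can be handled by clamping; a sketch of the EDIT's equation with that fix (function name is mine):

```python
import math

def log_k_metric(x, k=4):
    # log10(x * 10^k) / k == (log10(x) + k) / k; clamped to [0, 1] because
    # the raw value exceeds 1 when x > 1 and drops below 0 when x < 10^-k
    if x <= 0:
        return 0.0
    return min(1.0, max(0.0, (math.log10(x) + k) / k))
```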

math algorithm image-processing computer-vision surf
1 answer

You could build a combined metric as a weighted sum of both of your metrics, and use machine-learning techniques to learn appropriate weights.
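The weighted combination can be sketched in a few lines (the function and parameter names are mine; the weight w would be fit on labeled image pairs, e.g. by logistic regression or a simple grid search):

```python
def combined_score(count_score, distance_score, w=0.5):
    # Weighted sum of the two sub-metrics. If both inputs lie in [0, 1],
    # the output does too, since the weights sum to 1.
    return w * count_score + (1.0 - w) * distance_score
```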

What you describe is closely related to content-based image retrieval (CBIR), which is a very rich and diverse field; googling it will bring up many hits. While SURF is an excellent feature detector with good general-purpose performance, on its own it is not enough. SURF and SIFT (from which SURF was derived) are great for detecting duplicates or near-duplicates, but not so good at capturing perceptual similarity.

The most effective CBIR systems typically use an ensemble of features, optimally combined on a training set. Some interesting descriptors to try include GIST (a fast, cheap descriptor that is most effective at distinguishing man-made from natural scenes) and Object Bank (a histogram-based descriptor built from the outputs of many object detectors).

