Is it possible to train an SVM iteratively on detected false positives?

I am working on a machine learning problem in image processing: I want to find the location of an object in an image using a histogram of oriented gradients (HOG) and a support vector machine (SVM). I have read several articles and tutorials on training SVMs, and the setup is fairly standard. I have collected positive training images, and now I need to build a set of negative training samples.

In the literature, a common approach is to create negative training samples by cropping patches at random positions. I have also seen approaches where, in a subsequent step, the false positives produced on those random negatives are fed back into the training set as additional negatives. However, I wonder whether this approach can be used from the very beginning: generate just a single random negative sample, run detection, and feed the false positives back into the negative training set. This seems like a fairly obvious strategy to me, so I wonder whether I am missing something.

3 answers

The theory behind this method is laid out in "Object Detection with Discriminatively Trained Part-Based Models" by P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan (their PAMI paper). In fact, your initial negative set does not matter much: you will always converge to the same classifier if you iteratively add the hard negatives (samples with an SVM score greater than -1, i.e. inside the margin). Starting from a single negative sample, convergence will simply be slower.
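A minimal sketch of that mining loop in Python with scikit-learn; `extract_hog`, `window_iter`, the regularization constant, and the number of rounds are placeholder assumptions for illustration, not details taken from the paper:

```python
import numpy as np
from sklearn.svm import LinearSVC

def mine_hard_negatives(clf, negative_images, extract_hog, window_iter, thresh=-1.0):
    """Collect windows from object-free images that the current SVM scores
    above `thresh`, i.e. false positives / margin violations."""
    hard = []
    for img in negative_images:
        for patch in window_iter(img):               # sliding-window crops (assumed helper)
            feat = extract_hog(patch)                # HOG feature vector (assumed helper)
            if clf.decision_function([feat])[0] > thresh:
                hard.append(feat)
    return hard

def train_with_mining(pos_feats, neg_feats, negative_images,
                      extract_hog, window_iter, rounds=5):
    X = np.vstack([pos_feats, neg_feats])
    y = np.hstack([np.ones(len(pos_feats)), -np.ones(len(neg_feats))])
    clf = LinearSVC(C=0.01).fit(X, y)
    for _ in range(rounds):
        hard = mine_hard_negatives(clf, negative_images, extract_hog, window_iter)
        if not hard:                                 # no new hard negatives: converged
            break
        X = np.vstack([X, np.asarray(hard)])
        y = np.hstack([y, -np.ones(len(hard))])
        clf = LinearSVC(C=0.01).fit(X, y)            # retrain on the enlarged set
    return clf
```

The same loop works whether `neg_feats` starts with thousands of random patches or just one; a larger initial set only reduces the number of mining rounds needed.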


It sounds like you want to train the SVM classifier online/incrementally, i.e., update the classifier with new samples as they arrive. Such methods are typically used only when new data comes in over time. In your case, it seems you can build the whole set of negative training samples up front, so there is no need to train incrementally. I am inclined to say that training the classifier in one pass will work better than doing it incrementally (as larsmans hints at).
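For illustration, a sketch of the two options with scikit-learn; the function names and the chunked data format are my own assumptions:

```python
from sklearn.svm import LinearSVC
from sklearn.linear_model import SGDClassifier

def train_batch(X_train, y_train):
    # One pass over the full training set: preferred when all data is available.
    return LinearSVC().fit(X_train, y_train)

def train_online(chunks, classes=(-1, 1)):
    # Online alternative: a linear SVM fit by SGD (hinge loss), updated as
    # chunks of (X, y) arrive over time.
    clf = SGDClassifier(loss="hinge")
    for X_chunk, y_chunk in chunks:
        clf.partial_fit(X_chunk, y_chunk, classes=list(classes))
    return clf
```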


(Again, I'm not an image processing specialist, so take this with a grain of salt.)

"I wonder whether this approach can be used from the very beginning."

You will need a way to detect the false positives from a classification run. To do that, you need ground truth, i.e., a human in the loop. In effect, you would be doing active learning. If that is what you want to do, you might as well start with a set of manually annotated negative examples.

Alternatively, you could frame this as a PU (positive-unlabeled) learning problem. I don't know how well that works with images, but it sometimes works for text classification.
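If you wanted to try it, here is a rough sketch along the lines of Elkan & Noto's (2008) PU method, using scikit-learn; `X_pos`, `X_unlabeled`, and all other names are assumptions for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def fit_pu(X_pos, X_unlabeled):
    """Train on 'labeled positive vs. unlabeled', then rescale by
    c = P(labeled | positive), estimated on held-out positives."""
    X_pos_tr, X_pos_hold = train_test_split(X_pos, test_size=0.2, random_state=0)
    X = np.vstack([X_pos_tr, X_unlabeled])
    s = np.hstack([np.ones(len(X_pos_tr)), np.zeros(len(X_unlabeled))])
    g = LogisticRegression(max_iter=1000).fit(X, s)   # models P(s=1 | x)
    c = g.predict_proba(X_pos_hold)[:, 1].mean()      # estimate of P(s=1 | y=1)

    def prob_positive(X_new):
        # Corrected estimate of P(y=1 | x) under the PU assumptions.
        return np.clip(g.predict_proba(X_new)[:, 1] / c, 0.0, 1.0)

    return prob_positive
```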

