OpenCV Iterative Random Forest Prep

I use the random forest algorithm as a classifier for my dissertation project. The training set consists of thousands of images, and for each image about 2000 pixels are sampled. For every pixel, I have about 100,000 features. With my current hardware limitations (8 GB of RAM, possibly expandable to 16 GB), I can fit the samples (i.e. the features per pixel) for only one image in memory at a time. My question: is it possible to call the train method several times, each time with a different image's samples, and have the statistical model updated automatically with every call? I am particularly interested in variable importance, because after training on the full set with all the features, my plan is to reduce the number of features from 100,000 to 2000, keeping only the most important ones.
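To see why the whole training set cannot be held in memory, a quick back-of-the-envelope calculation with the numbers above (assuming features stored as 4-byte floats):

```python
BYTES_PER_FLOAT32 = 4
pixels_per_image = 2000
features_per_pixel = 100_000
n_images = 1000  # "thousands of images"; 1000 used here as a lower bound

per_image_bytes = pixels_per_image * features_per_pixel * BYTES_PER_FLOAT32
full_set_bytes = per_image_bytes * n_images

print(per_image_bytes / 1e9)  # 0.8 GB per image: one image fits in 8 GB of RAM
print(full_set_bytes / 1e12)  # 0.8 TB for 1000 images: the full set does not
```

So a single image's samples occupy roughly 800 MB, which fits alongside the OS and the forest itself, but even a thousand images would need on the order of a terabyte.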

Thanks for any advice, Daniele

2 answers

I don't think OpenCV's random forest implementation supports incremental learning. You might consider reducing the dimensionality of your descriptors before training, using a different reduction method. Alternatively, evaluate variable importance on a random subset of pixels drawn from across all your training images — as many as you can fit in memory.


See my answer to this post. There are incremental versions of random forests, and they will allow you to train with much more data.
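One widely available approximation of incremental forest training is scikit-learn's `warm_start` option: refitting with a larger `n_estimators` keeps the existing trees and grows new ones on whatever data is currently in memory. A sketch, with each chunk standing in for one image's worth of sampled pixels (the data here is synthetic); note the caveat that each tree only ever sees one chunk, which is not equivalent to a forest trained on the pooled data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
forest = RandomForestClassifier(n_estimators=0, warm_start=True, random_state=0)

for chunk in range(5):
    # Load one chunk (one image's samples) at a time; synthetic here.
    X = rng.normal(size=(200, 100)).astype(np.float32)
    y = (X[:, 0] > 0).astype(int)

    forest.n_estimators += 10  # request 10 additional trees
    forest.fit(X, y)           # warm_start=True keeps the old trees intact

# After 5 chunks the ensemble holds 50 trees, each fit on a single chunk.
```

Per-feature importances (`forest.feature_importances_`) are averaged over all trees, so they aggregate information across every chunk seen so far, which matches the feature-selection goal in the question.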

