I need to do something almost exactly like this: Effective Background Subtraction with OpenCV (extracting the foreground from the background, in color), except that my input comes from a camera rather than a video file. The problem is that that topic contains no explanation of the background subtraction phase itself.
I looked through the official OpenCV book, and simple frame differencing is not enough for what I need. I tried to follow the more complex averaging background method, but I get lost after accumulating the frames with cvAcc to get the average value :/
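For reference, this is my rough understanding of how the averaging method is supposed to work, written against the old C API (the image names, N and the threshold value are just placeholders I made up, so this may well be wrong):

    CvCapture* capture = cvCaptureFromCAM(0);
    IplImage*  frame   = cvQueryFrame(capture);

    IplImage* acc32f   = cvCreateImage(cvGetSize(frame), IPL_DEPTH_32F, 3); // running sum
    IplImage* bg8u     = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 3);  // averaged background
    IplImage* diff     = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 3);  // |frame - background|
    IplImage* diffGray = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);  // single-channel mask
    cvSetZero(acc32f);

    // 1) learning phase: accumulate N frames into the sum
    const int N = 30;
    for (int i = 0; i < N; i++)
    {
        frame = cvQueryFrame(capture);
        cvAcc(frame, acc32f);
    }

    // 2) average background = sum / N, converted back to 8 bit
    cvConvertScale(acc32f, bg8u, 1.0 / N);

    // 3) subtraction phase, repeated for every new frame
    frame = cvQueryFrame(capture);
    cvAbsDiff(frame, bg8u, diff);
    cvCvtColor(diff, diffGray, CV_RGB2GRAY);                    // collapse the color difference to one channel
    cvThreshold(diffGray, diffGray, 40, 255, CV_THRESH_BINARY); // white = foreground mask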
If anyone could help me, I would really appreciate it.
Thanks!
EDIT: here is the code I have so far.
Sum:
    // accumulate each frame into a floating point sum image
    cvCvtScale(currentFrame, currentFloat, 1, 0);
    if (totalFrames == 0)
        cvCopy(currentFloat, sum);
    else
        cvAcc(currentFloat, sum);
Average:
    // average background = sum / number of frames
    cvConvertScale(sum, imgBG, 1.0 / totalFrames);
Adapted background (with alpha = 0.05 set in a #define):
    // adapt the background slowly toward the current frame
    cvRunningAvg(currentFrame, imgBG, alpha);
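If I understand the docs correctly, with alpha = 0.05 this call should be roughly the same as doing the weighted blend by hand (currentFloat is my 32-bit float copy of the frame):

    // imgBG = (1 - alpha) * imgBG + alpha * currentFrame
    cvCvtScale(currentFrame, currentFloat, 1, 0);                     // 8U -> 32F copy
    cvAddWeighted(currentFloat, alpha, imgBG, 1.0 - alpha, 0, imgBG); // blend into the background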
Creating the final image with the foreground only (far from ideal!):
    void createForeground(IplImage* imgDif, IplImage* currentFrame)
    {
        // build a binary foreground mask from the difference image
        cvCvtColor(imgDif, grayFinal, CV_RGB2GRAY);
        cvSmooth(grayFinal, grayFinal);
        cvThreshold(grayFinal, grayFinal, 40, 255, CV_THRESH_BINARY);

        unsigned char* greyData    = reinterpret_cast<unsigned char*>(grayFinal->imageData);
        unsigned char* currentData = reinterpret_cast<unsigned char*>(currentFrame->imageData);
        unsigned char* fgData      = reinterpret_cast<unsigned char*>(imgFG->imageData); // write into imgFG, not back into currentFrame

        // copy the current frame where the mask is set, black everywhere else
        // (assumes 3-channel images with no row padding)
        int i = 0;
        for (int j = 0; j < grayFinal->width * grayFinal->height; j++)
        {
            if (greyData[j] == 0)
            {
                fgData[i]     = 0;
                fgData[i + 1] = 0;
                fgData[i + 2] = 0;
            }
            else
            {
                fgData[i]     = currentData[i];
                fgData[i + 1] = currentData[i + 1];
                fgData[i + 2] = currentData[i + 2];
            }
            i += 3;
        }
        // no cvSetData needed any more: the pixels are written directly into imgFG
    }
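I guess the per-pixel loop could also be replaced with a masked copy, something like this (assuming imgFG has the same size and channel count as currentFrame), though I am not sure it behaves any better:

    cvSetZero(imgFG);                       // start from an all-black image
    cvCopy(currentFrame, imgFG, grayFinal); // copy pixels only where the mask is non-zero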
PROBLEM NOW!
The biggest problem now is that when there is a small arrow somewhere in the picture and I hold my hand "on top" of it for several seconds, then remove my hand, that spot stays lit up in the foreground for a long time. Any help with this?