Blob Tracking Algorithm

I am trying to create simple blob tracking using OpenCV. I found the blobs using findContours. I would like to give these blobs a consistent identifier.

I build a blob list for the previous frame and for the current frame, then compute the distance between each blob in the previous frame and each blob in the current frame. What else is needed to track the blobs and give them identifiers? So far I only have the distances between the previous-frame and current-frame blobs; how can I assign a consistent identifier to the blobs using those measured distances?

c++ opencv computer-vision tracking
3 answers

In the first frame, you can assign IDs any way you like: 1 for the first blob you find, 2 for the second... or simply give them IDs according to their position in the collection.

Then, on the next frame, you need a best-match step. Find the blobs, compute all the distances between the current blobs and the blobs of the previous image, and assign each previous identifier to its nearest blob. Blobs that have just entered the field of view receive new identifiers.
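A minimal sketch of that nearest-neighbour matching step, written with the modern C++ API; the Blob struct, the assignIds function and the 50-pixel maxDist threshold are illustrative assumptions, not code from the question:

// Sketch: assign IDs by greedy nearest-neighbour matching between two frames.
#include <opencv2/core.hpp>
#include <limits>
#include <vector>

struct Blob {
    cv::Point2f centroid;
    int id = -1;                                       // -1 = not yet assigned
};

void assignIds(const std::vector<Blob>& prev, std::vector<Blob>& curr,
               int& nextId, float maxDist = 50.f)      // maxDist is an assumption
{
    const float maxDist2 = maxDist * maxDist;
    for (const Blob& p : prev) {
        float bestDist2 = std::numeric_limits<float>::max();
        Blob* bestMatch = nullptr;
        for (Blob& c : curr) {
            if (c.id != -1) continue;                  // already claimed by another track
            cv::Point2f d = c.centroid - p.centroid;
            float dist2 = d.x * d.x + d.y * d.y;       // squared distance, no sqrt needed
            if (dist2 < bestDist2) { bestDist2 = dist2; bestMatch = &c; }
        }
        if (bestMatch && bestDist2 <= maxDist2)
            bestMatch->id = p.id;                      // inherit the previous ID
    }
    for (Blob& c : curr)                               // blobs entering the view get new IDs
        if (c.id == -1) c.id = nextId++;
}

Greedy matching like this can swap IDs when blobs cross paths; the motion prediction described next helps with that.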

Now that you have two frames, you can predict motion for the next one. Just calculate deltaX and deltaY between the previous and current blob positions and use them to guess the future position, then match against that predicted position.
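A hedged sketch of that prediction step; the TrackedBlob struct and predictNext helper are names introduced only for illustration, assuming each track stores its last two positions:

// Sketch: constant-velocity prediction from the last two observations.
#include <opencv2/core.hpp>

struct TrackedBlob {
    cv::Point2f position;      // position in the current frame
    cv::Point2f previous;      // position in the frame before
    int id = -1;
};

// next = current + (current - previous), i.e. current position plus (deltaX, deltaY)
inline cv::Point2f predictNext(const TrackedBlob& b)
{
    cv::Point2f delta = b.position - b.previous;
    return b.position + delta;
}

When matching the next frame, measure distances from each detected blob to predictNext(track) rather than to the track's last known position.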

This should work as long as you do not have many overlapping blobs and the motion between frames is not too fast or erratic.

You can be more accurate with a scoring system across multiple images:
Get the positions for the first 3 or 5 frames. For each blob in frame 1, find the closest blob in frame 2 and compute the velocity (deltaX, deltaY), then look for the blob closest to the predicted position in frames 3, 4, 5... Sum all the distances between each predicted position and the nearest blob; that sum is the score. Do the same using the second-nearest blob in frame 2 (it will search in a different direction). The lower the score, the more likely the candidate is the right match.
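A rough sketch of that scoring idea; the trackScore and nearestDistance helpers are hypothetical names, and a constant-velocity model is assumed:

// Sketch: score one candidate pairing (frame 0 -> frame 1) by predicting forward
// and summing the distance to the nearest detected blob in each later frame.
#include <opencv2/core.hpp>
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

static float nearestDistance(const cv::Point2f& p, const std::vector<cv::Point2f>& blobs)
{
    float best = std::numeric_limits<float>::max();    // stays huge if the frame is empty
    for (const cv::Point2f& b : blobs) {
        cv::Point2f d = b - p;
        best = std::min(best, d.x * d.x + d.y * d.y);
    }
    return std::sqrt(best);
}

// frames[k] holds the blob centroids detected in frame k (k = 0..N-1).
float trackScore(cv::Point2f start, cv::Point2f candidate,
                 const std::vector<std::vector<cv::Point2f>>& frames)
{
    cv::Point2f velocity = candidate - start;           // velocity from frame 0 to frame 1
    cv::Point2f predicted = candidate;
    float score = 0.f;
    for (size_t k = 2; k < frames.size(); ++k) {
        predicted += velocity;                           // constant-velocity guess
        score += nearestDistance(predicted, frames[k]);
    }
    return score;                                        // lower is better
}

Compute this score for the nearest and second-nearest candidates in frame 2 and keep the pairing with the lower total.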

If you have a lot of blobs, use a spatial partition such as a quadtree or a simple grid to speed up the search, and compare squared distances; this avoids a lot of sqrt calls.
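A minimal sketch of the partitioning idea, assuming an arbitrary cell size of 64 pixels; only a blob's own cell and its eight neighbours then need to be searched, and the comparison itself can stay in squared distance:

// Sketch: bucket blob centroids into a coarse grid so only nearby buckets
// need to be searched when matching. The cell size is an assumption.
#include <opencv2/core.hpp>
#include <map>
#include <utility>
#include <vector>

using Cell = std::pair<int, int>;

std::map<Cell, std::vector<cv::Point2f>>
buildGrid(const std::vector<cv::Point2f>& centroids, float cellSize = 64.f)
{
    std::map<Cell, std::vector<cv::Point2f>> grid;
    for (const cv::Point2f& c : centroids) {
        Cell cell{ static_cast<int>(c.x / cellSize),
                   static_cast<int>(c.y / cellSize) };
        grid[cell].push_back(c);
    }
    return grid;   // when matching, look only in a blob's cell and its 8 neighbours
}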

It is important to know how your blobs typically move in order to tune the algorithm.


Here's sample OpenCV tracking code (using the legacy C API) that thresholds a single colour and tracks its centroid via image moments:

#include "stdafx.h" #include <opencv2\opencv.hpp> IplImage* GetThresholdedImage(IplImage* img) { // Convert the image into an HSV image IplImage* imgHSV = cvCreateImage(cvGetSize(img), 8, 3); cvCvtColor(img, imgHSV, CV_BGR2HSV); IplImage* imgThreshed = cvCreateImage(cvGetSize(img), 8, 1); // Values 20,100,100 to 30,255,255 working perfect for yellow at around 6pm cvInRangeS(imgHSV, cvScalar(112, 100, 100), cvScalar(124, 255, 255), imgThreshed); cvReleaseImage(&imgHSV); return imgThreshed; } int main() { // Initialize capturing live feed from the camera CvCapture* capture = 0; capture = cvCaptureFromCAM(0); // Couldn't get a device? Throw an error and quit if(!capture) { printf("Could not initialize capturing...\n"); return -1; } // The two windows we'll be using cvNamedWindow("video"); cvNamedWindow("thresh"); // This image holds the "scribble" data... // the tracked positions of the ball IplImage* imgScribble = NULL; // An infinite loop while(true) { // Will hold a frame captured from the camera IplImage* frame = 0; frame = cvQueryFrame(capture); // If we couldn't grab a frame... quit if(!frame) break; // If this is the first frame, we need to initialize it if(imgScribble == NULL) { imgScribble = cvCreateImage(cvGetSize(frame), 8, 3); } // Holds the yellow thresholded image (yellow = white, rest = black) IplImage* imgYellowThresh = GetThresholdedImage(frame); // Calculate the moments to estimate the position of the ball CvMoments *moments = (CvMoments*)malloc(sizeof(CvMoments)); cvMoments(imgYellowThresh, moments, 1); // The actual moment values double moment10 = cvGetSpatialMoment(moments, 1, 0); double moment01 = cvGetSpatialMoment(moments, 0, 1); double area = cvGetCentralMoment(moments, 0, 0); // Holding the last and current ball positions static int posX = 0; static int posY = 0; int lastX = posX; int lastY = posY; posX = moment10/area; posY = moment01/area; // Print it out for debugging purposes printf("position (%d,%d)\n", posX, posY); // We want to draw a line only if its a valid position if(lastX>0 && lastY>0 && posX>0 && posY>0) { // Draw a yellow line from the previous point to the current point cvLine(imgScribble, cvPoint(posX, posY), cvPoint(lastX, lastY), cvScalar(0,255,255), 5); } // Add the scribbling image and the frame... and we get a combination of the two cvAdd(frame, imgScribble, frame); cvShowImage("thresh", imgYellowThresh); cvShowImage("video", frame); // Wait for a keypress int c = cvWaitKey(10); if(c!=-1) { // If pressed, break out of the loop break; } // Release the thresholded image... we need no memory leaks.. please cvReleaseImage(&imgYellowThresh); delete moments; } // We're done using the camera. Other applications can now use it cvReleaseCapture(&capture); return 0; } 

You can use the cvBlobsLib library to detect blobs...

  • If the inter-frame motion of a blob is smaller than the spacing between blobs, i.e. each blob's displacement is less than the distance to its neighbours, you can keep a list per blob and, for each new frame, append the current blob that falls closest to the corresponding blob from the previous frame...
  • If your blobs have some stable features, such as ellipticity or aspect ratio (measured after fitting a bounding box), you can group blobs with matching features into a list; a hedged sketch of this idea follows the list.
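A small sketch of that feature-based grouping; the aspectRatio and sameBlob helpers and their tolerances are illustrative assumptions:

// Sketch: match blobs between frames by a stable shape feature (bounding-box
// aspect ratio) in addition to position. Tolerances are assumptions.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <cmath>
#include <vector>

float aspectRatio(const std::vector<cv::Point>& contour)
{
    cv::Rect box = cv::boundingRect(contour);
    return box.height > 0 ? static_cast<float>(box.width) / box.height : 0.f;
}

// True if two blobs plausibly correspond: close in space and similar in shape.
bool sameBlob(const cv::Point2f& prevCentroid, float prevRatio,
              const cv::Point2f& currCentroid, float currRatio,
              float maxDist = 50.f, float maxRatioDiff = 0.2f)
{
    cv::Point2f d = currCentroid - prevCentroid;
    bool close = (d.x * d.x + d.y * d.y) <= maxDist * maxDist;
    bool similar = std::fabs(prevRatio - currRatio) <= maxRatioDiff;
    return close && similar;
}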
