I have a two-dimensional array representing an image. In this image I search for "red" pixels and try to locate a red LED target based on all the red pixels my camera finds. Currently, I'm just placing my crosshairs at the center of gravity of all the red pixels:
// pseudo-code (C-style)
int vals = 0, cx = 0, cy = 0;
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        if (is_red(pixel[x][y])) {
            vals++;       // total number of red pixels
            cx += x;      // sum of x coordinates
            cy += y;      // sum of y coordinates
        }
    }
}
if (vals > 0) {           // guard against no red pixels at all
    cx /= vals;           // divide by total to get average x
    cy /= vals;           // divide by total to get average y
    draw_crosshairs_at(pixel[cx][cy]); // centroid of all red pixels
}
The problem with this method is that, although the centroid naturally lands near the largest blob (the area with the most red pixels), my crosshairs still jump off the target whenever a small patch of red flickers at the side of the frame due to glare or other minor interference.
My question is: how can I make the crosshairs ignore these small outlying patches of red and stay locked on the main blob?