Here's how I do it:
- Create a kernel; it defines the neighborhood of pixels to consider.
- Dilate the image with this kernel. The dilated image holds the maximum neighborhood value at every point.
- Compare the two arrays for equality. Wherever they are equal, the pixel is a valid local maximum, and 255 is set in the comparison array.
- Multiply the comparison array with the original array (rescaling accordingly).
- The result is an array containing only the local maximum values.
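If you use the newer cv2/NumPy bindings instead of the legacy cv module shown further down, a minimal sketch of these steps could look like this (the 5 by 5 kernel size is just the one used in the examples below):

    import cv2
    import numpy as np

    # Any 8-bit grayscale image works; the file name matches the legacy example below
    im = cv2.imread('fish2.png', cv2.IMREAD_GRAYSCALE)

    # 5 x 5 kernel: each pixel is compared against its 5 x 5 neighborhood
    kernel = np.ones((5, 5), np.uint8)

    # Dilation writes the neighborhood maximum at every pixel
    maxed = cv2.dilate(im, kernel)

    # 255 wherever the pixel equals its neighborhood maximum, i.e. at local maxima
    comp = cv2.compare(im, maxed, cv2.CMP_EQ)

    # Multiply by the 0/1 mask so only the maxima keep their original value
    result = im * (comp // 255)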
This is illustrated by these enlarged images:
A 9 by 9 pixel region of the original image:
After processing with a 5 by 5 kernel, only the local neighborhood maxima remain (i.e. the remaining maxima are separated by more than 2 pixels from any pixel with a larger value):
There is one caveat: if two nearby maxima have the same value, they will both be present in the final image.
Here is the Python code that does this; it is very easy to convert to C++:
    import cv

    im = cv.LoadImage('fish2.png', cv.CV_LOAD_IMAGE_GRAYSCALE)

    # Buffers for the dilated image and the comparison mask
    maxed = cv.CreateImage((im.width, im.height), cv.IPL_DEPTH_8U, 1)
    comp = cv.CreateImage((im.width, im.height), cv.IPL_DEPTH_8U, 1)
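The snippet above loads the image and allocates the output buffers; a sketch of the remaining steps using the same legacy cv API could look like this. The exact calls (cv.Dilate, cv.Cmp, cv.Mul) and the two dilation iterations used to get a 5 by 5 neighborhood are assumptions on my part, so check them against your OpenCV version:

    # Two iterations of the default 3 x 3 element are equivalent to one
    # dilation with a 5 x 5 square, giving the 5 x 5 neighborhood maximum
    cv.Dilate(im, maxed, None, 2)

    # comp = 255 wherever the original equals the neighborhood maximum
    cv.Cmp(im, maxed, comp, cv.CV_CMP_EQ)

    # Keep only the maxima: multiply by the mask and rescale by 1/255
    cv.Mul(im, comp, im, 1 / 255.0)

    cv.ShowImage("local maxima only", im)
    cv.WaitKey(0)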
I did not realise it until now, but this is essentially what @sansuiso suggested in his/her answer.
This may be better illustrated with this image. Before:
After processing with a 5 by 5 kernel:
The solid regions are caused by clusters of local maxima that share the same value.