The dominant "color" of the image

I have the following image:

[image of the colored strips]

What I want to do is identify ("id") the individual stripes based on their dominant color. What is the best way to do this?

What I did was take the HSV values of the image and build a histogram over them. The problem is that strip0 gives [27=32191, 28=5433, others=8] and strip1 gives [26=7107, 27=23111, others=22], so the two distributions are too close for me to tell the strips apart.
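For reference, a minimal sketch of that histogram step, assuming OpenCV/NumPy and that the distribution is over the hue channel; the file name is hypothetical:

    # Sketch: histogram over the hue channel of one strip (assumptions:
    # OpenCV is used and "strip0.png" is a crop of a single strip).
    import cv2
    import numpy as np

    bgr = cv2.imread("strip0.png")
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0]                      # OpenCV hue range is 0..179

    counts = np.bincount(hue.ravel(), minlength=180)
    for h in np.argsort(counts)[::-1][:3]:  # three most frequent hue bins
        print(h, counts[h])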

The main goal of the project is to compare the actual yellow paper with the stripes and determine which strip is the most similar.

4 answers

You can scan all the pixels and use a hash table to track how many pixels there are of each color.

Take these counts and, remembering which colors they correspond to, sort them in descending order.

Walk the sorted list and compute the difference between each consecutive pair of counts, keeping track of which two entries produced each difference. Sort this list of differences.

Look at the largest entry in the difference list. That is the biggest drop between two pixel counts; note where it occurs. Everything with that many pixels or more is a dominant color; everything below is non-dominant. Now you know how many dominant colors you have and what they are.

It should be pretty easy from there to do what you want to do.
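A minimal sketch of that procedure, assuming NumPy/Pillow and a hypothetical input file:

    # Count exact colors, sort descending, and split dominant from
    # non-dominant at the largest drop between consecutive counts.
    from collections import Counter

    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("strips.png").convert("RGB"))  # hypothetical file
    counts = Counter(map(tuple, img.reshape(-1, 3)))           # hash table: color -> count

    ranked = counts.most_common()                              # sorted descending
    gaps = [ranked[i][1] - ranked[i + 1][1] for i in range(len(ranked) - 1)]
    cut = gaps.index(max(gaps))                                # biggest drop

    dominant = [color for color, n in ranked[: cut + 1]]
    print("dominant colors:", dominant)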

The only time this would not work is if some noise had the same color as a strip, in large enough quantity to skew your data.

In that case you could use a different approach, which also works in the first case: look at runs. Walk the pixels, and each time you find a new color, count how many of the following pixels have the same color.

Then use the method described earlier to group the colors into dominant and non-dominant, with the same result.

In both cases, if you know the image has vertical stripes, you can limit yourself to a few horizontal scanlines to speed things up.
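A short sketch of the run-based variant under the same assumptions (NumPy/Pillow, vertical stripes, hypothetical file name):

    # Scan one horizontal line and record (color, run length) pairs.
    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("strips.png").convert("RGB"))  # hypothetical file
    row = img[img.shape[0] // 2]                               # middle scanline

    runs, start = [], 0
    for x in range(1, len(row) + 1):
        if x == len(row) or tuple(row[x]) != tuple(row[start]):
            runs.append((tuple(row[start]), x - start))        # close the run
            start = x

    # Long runs are strip colors; short runs are noise or edges.
    print(sorted(runs, key=lambda r: -r[1])[:5])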


First, since you know the borders of each strip in the reference image, the only problem may be that your reference image is noisy. A somewhat heavyweight but robust way to deal with that is to cluster the colors within each strip and take the cluster center as the representative color of the strip. For a more meaningful result, do this step in the CIELAB color space. Doing this and converting the results back to RGB, for the first strip I get the triplet (0.949375, 0.879872, 0.147898), and for the second strip (0.945324, 0.857322, 0.129756) (each channel in the range [0, 1]).
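A hedged sketch of that clustering step with OpenCV; k = 2 and the file name are assumptions, since the answer does not specify the clustering algorithm:

    # Cluster one strip's pixels in CIELAB and keep the largest cluster's
    # center as the strip's representative color.
    import cv2
    import numpy as np

    bgr = cv2.imread("strip0.png")                             # hypothetical crop
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).reshape(-1, 3).astype(np.float32)

    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(lab, 2, None, criteria, 5,
                                    cv2.KMEANS_RANDOM_CENTERS)

    biggest = np.bincount(labels.ravel()).argmax()
    print(centers[biggest])                                    # representative CIELAB color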

When you get a new image, you perform the same operation. But there are caveats: for example, how do you handle white balance in the new image? Supposing you don't have such a problem, it is then just a matter of finding, by the same process, the reference color closest to the one you just computed. To find the closest color you should again use a meaningful color space, and CIELAB is recommended once more, since it has well-defined Delta-E (color difference) functions. See http://en.wikipedia.org/wiki/Color_difference for some of these metrics, the simplest of which is the Euclidean distance in CIELAB.
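A minimal sketch of that matching step (the Lab values are placeholders, not measured data):

    # Euclidean distance in CIELAB (Delta-E 1976) from a new sample to
    # each reference strip color; the smallest distance wins.
    import numpy as np

    reference_lab = np.array([[85.0, 5.0, 80.0],   # hypothetical strip colors
                              [83.0, 7.0, 78.0]])
    sample_lab = np.array([84.0, 6.0, 79.0])

    delta_e = np.linalg.norm(reference_lab - sample_lab, axis=1)
    print("closest strip:", int(delta_e.argmin()), "Delta-E:", delta_e.min())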


Calibrate your equipment. If you do not, you will have arbitrary errors between the test sample and the reference. Lighting is part of your equipment.

Use edge detection and your knowledge of the reference strip's geometry (the strips are of equal width) to locate the sample regions. For each sample region, extract an inner patch.
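One hedged way to locate the vertical borders, assuming OpenCV and a threshold you would need to tune:

    # Sum the horizontal gradient down each column; strong columns are
    # candidate strip borders (the threshold is an assumption).
    import cv2
    import numpy as np

    gray = cv2.imread("strips.png", cv2.IMREAD_GRAYSCALE)     # hypothetical file
    grad = np.abs(cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)).sum(axis=0)

    borders = np.where(grad > grad.mean() + 2 * grad.std())[0]
    print(borders)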

For the test strip, compute an image in which each pixel is the maximum difference within a small window around it (for example, 5×5). This lets you identify a relatively uniform region that differs from the surrounding area (i.e. the paper). Extract that patch.
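A minimal sketch of that max-difference image as a local range filter (SciPy/Pillow assumed; the threshold is a guess to tune):

    # Local range = max - min over a 5x5 window; low values mark uniform
    # regions (strip interiors), high values mark edges and paper texture.
    import numpy as np
    from PIL import Image
    from scipy.ndimage import maximum_filter, minimum_filter

    gray = np.asarray(Image.open("sample.png").convert("L"), dtype=np.int16)
    local_range = maximum_filter(gray, size=5) - minimum_filter(gray, size=5)

    uniform = local_range < 10            # hypothetical threshold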

Use downsampling to find the integrated (average) color of each patch, as svnpenn suggests below. You can look at other ways of computing it later, but this should work quite well.
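For instance, averaging a patch down to a single color (NumPy/Pillow assumed; reducing the patch to a 1x1 image is equivalent):

    # Integrate a patch to one color by averaging all its pixels.
    import numpy as np
    from PIL import Image

    patch = np.asarray(Image.open("patch0.png").convert("RGB"))  # hypothetical file
    print(patch.reshape(-1, 3).mean(axis=0))                     # average RGB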

With weights wh, ws, wv, compute similarity = wh*|h0 - h1| + ws*|s0 - s1| + wv*|v0 - v1| between the test color and each reference color. You can look at other distance measures later, but this should work well. Start with equal weights. One advantage of this measure is that it behaves well regardless of which dimension, or combination of dimensions, the reference strips vary in.
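That measure is a one-liner; a sketch with hypothetical HSV triples:

    # Weighted L1 distance in HSV: zero means an exact match.
    def similarity(c0, c1, wh=1.0, ws=1.0, wv=1.0):
        (h0, s0, v0), (h1, s1, v1) = c0, c1
        return wh * abs(h0 - h1) + ws * abs(s0 - s1) + wv * abs(v0 - v1)

    print(similarity((28.0, 0.8, 0.9), (27.0, 0.85, 0.9)))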

Sort the results to find the most similar and second most similar matches. Note that similarity is set up so that zero is an exact match and larger numbers are worse matches. Use the ratio of these two results to judge the quality of the best match: if the top two scores are very close, the match is probably not reliable.
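A sketch of that ranking and ratio test, restating the similarity measure so the snippet runs on its own (the HSV values are placeholders):

    # Rank references by similarity and compare best vs. second best.
    def similarity(c0, c1, w=(1.0, 1.0, 1.0)):
        return sum(wi * abs(a - b) for wi, a, b in zip(w, c0, c1))

    test = (28.0, 0.8, 0.9)
    references = [(27.0, 0.85, 0.9), (26.0, 0.7, 0.8), (90.0, 0.5, 0.5)]

    scores = sorted((similarity(test, r), i) for i, r in enumerate(references))
    best, second = scores[0], scores[1]
    print("best strip:", best[1])
    print("ratio best/second:", best[0] / second[0])  # near 1.0 => ambiguous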


You could split the image into sections and then resize each section to a single pixel. Here is an example using the whole image:

    $ convert Y82IirS.jpg -resize 1x1 txt:
    # ImageMagick pixel enumeration: 1,1,255,srgb
    0,0: (220,176,44) #DCB02C srgb(220,176,44)

That gives the average color of the whole image.
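The same idea in Python (Pillow/NumPy assumed; the section count is an arbitrary example):

    # Split the image into equal vertical sections and average each one
    # down to a single color, like -resize 1x1 per section.
    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("Y82IirS.jpg").convert("RGB"))
    for i, sec in enumerate(np.array_split(img, 5, axis=1)):   # 5 sections, say
        r, g, b = map(int, sec.reshape(-1, 3).mean(axis=0).round())
        print(f"section {i}: #{r:02X}{g:02X}{b:02X}")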

