Antipole Clustering

I made a photomosaic script in PHP. It takes one image and rebuilds it out of many small images: from a distance it looks like the original picture, but up close you can see all the small pictures. I take a square of a fixed number of pixels and determine the average color of that square. Then I compare this with my database, which contains the average color of several thousand images, and compute the color distance to every available image. But it takes several minutes to run this script completely.

The bottleneck is matching each part of the main image to the best database image. While searching online for ways to speed this up, I came across the term "Antipole Clustering". Of course I tried to find information on how to use this method myself, but I cannot figure out what to do.

There are two steps: 1. building the database and 2. creating the photomosaic. Let's start with the first step; once that is clear, I can probably work out step 2 myself.

Step 1:

  • divide each database image into 9 equal rectangles located in a 3x3 grid.

  • calculate RGB averages for each rectangle

  • build an x vector made up of 27 components (three RGB components for each rectangle)

  • x is the image's feature vector in the data structure

Well, points 1 and 2 are easy, but what should I do at point 3? How do I make an x vector out of 27 components (9 average R values, 9 average G values, 9 average B values)?

And once I manage to build such a vector, what is the next step I have to take with it?

Peter

3 answers

This is how I think the feature vector is computed:

You have 3 x 3 = 9 rectangles.

Each pixel is essentially 3 numbers, one for each of the red, green, and blue channels.

For each rectangle, you calculate the average of the red, green, and blue colors for all the pixels in that rectangle. This gives you 3 numbers for each rectangle.

In total, you have 9 (rectangles) x 3 (average for R, G, B) = 27 numbers.

Just combine these 27 numbers into one 27-by-1 (often written as 27 x 1) vector; it is simply the 27 numbers grouped together. This 27-element vector is the feature vector x, which represents the color statistics of your photo. In code, if you use C++, it would probably be an array of 27 numbers, or perhaps an instance of a class (aptly named). You can think of this feature vector as a kind of "summary" of what the colors in the photograph look like. Roughly speaking, it looks like this: [R1, G1, B1, R2, G2, B2, ..., R9, G9, B9], where R1 is the mean/average red value of the pixels in the first rectangle, and so on.
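
If it helps, here is a minimal PHP sketch of exactly that, assuming the GD extension, a truecolor JPEG thumbnail of at least 3 x 3 pixels, and a made-up helper name featureVector():

  <?php
  // Sketch only: build the 27-component feature vector of one image with GD.
  // Assumes a truecolor JPEG at least 3x3 pixels; featureVector() is a made-up name.
  function featureVector(string $path): array
  {
      $img    = imagecreatefromjpeg($path);   // or imagecreatefrompng(), etc.
      $width  = imagesx($img);
      $height = imagesy($img);
      $vector = [];

      for ($row = 0; $row < 3; $row++) {
          for ($col = 0; $col < 3; $col++) {
              // Pixel bounds of this rectangle in the 3x3 grid.
              $x0 = (int)($col * $width / 3);
              $x1 = (int)(($col + 1) * $width / 3);
              $y0 = (int)($row * $height / 3);
              $y1 = (int)(($row + 1) * $height / 3);

              $sumR = $sumG = $sumB = 0;
              $count = 0;
              for ($y = $y0; $y < $y1; $y++) {
                  for ($x = $x0; $x < $x1; $x++) {
                      $rgb   = imagecolorat($img, $x, $y);
                      $sumR += ($rgb >> 16) & 0xFF;
                      $sumG += ($rgb >> 8) & 0xFF;
                      $sumB += $rgb & 0xFF;
                      $count++;
                  }
              }
              // Average R, G, B of this rectangle -> three of the 27 components.
              $vector[] = $sumR / $count;
              $vector[] = $sumG / $count;
              $vector[] = $sumB / $count;
          }
      }
      imagedestroy($img);
      return $vector;   // [R1, G1, B1, R2, G2, B2, ..., R9, G9, B9]
  }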

I believe step 2 involves some form of comparison of these feature vectors, so that images with similar feature vectors (and therefore similar colors) are grouped together. The comparison is likely based on the Euclidean distance (see here) or some other metric that measures how similar the feature vectors (and therefore the colors of the photographs) are to each other.
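
To illustrate, a small PHP sketch of such a comparison; it reuses the featureVector() sketch above and assumes $database is an array of precomputed 27-component vectors keyed by filename:

  // Squared Euclidean distance between two 27-component feature vectors.
  function featureDistance(array $a, array $b): float
  {
      $sum = 0.0;
      for ($i = 0; $i < count($a); $i++) {
          $d    = $a[$i] - $b[$i];
          $sum += $d * $d;
      }
      return $sum;   // the square root is not needed just for ranking matches
  }

  // Brute-force search: return the filename of the closest database image.
  function bestMatch(array $target, array $database): string
  {
      $bestName = '';
      $bestDist = INF;
      foreach ($database as $name => $vector) {
          $dist = featureDistance($target, $vector);
          if ($dist < $bestDist) {
              $bestDist = $dist;
              $bestName = $name;
          }
      }
      return $bestName;
  }

This brute-force loop is exactly the part that an index structure such as the antipole tree is meant to speed up.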

Finally, as Anony-Mousse suggested, it would be preferable to convert your pixels from RGB to HSB/HSV color space. If you use OpenCV or have access to it, this is a one-liner. Otherwise the Wikipedia article on HSV will give you the mathematical formula to perform the conversion.
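
Since your script is PHP rather than OpenCV, here is a sketch of that standard conversion formula (the same one given on the Wikipedia HSV page) as a plain function; rgbToHsv is just an illustrative name:

  // Convert 0-255 RGB values to HSV: H in degrees [0, 360), S and V in [0, 1].
  function rgbToHsv(int $r, int $g, int $b): array
  {
      $r /= 255; $g /= 255; $b /= 255;
      $max   = max($r, $g, $b);
      $min   = min($r, $g, $b);
      $delta = $max - $min;

      if ($delta == 0) {
          $h = 0.0;                                  // grey: hue is undefined
      } elseif ($max == $r) {
          $h = 60 * fmod(($g - $b) / $delta, 6);
      } elseif ($max == $g) {
          $h = 60 * (($b - $r) / $delta + 2);
      } else {
          $h = 60 * (($r - $g) / $delta + 4);
      }
      if ($h < 0) {
          $h += 360;
      }

      $s = ($max == 0) ? 0.0 : $delta / $max;
      $v = $max;

      return [$h, $s, $v];
  }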

Hope this helps.


Instead of RGB, you can use HSB space. It gives better results for a wide variety of uses. Put more weight on the hue to get better color matching for photos, or on the brightness when composing high-contrast images (logos, etc.).
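
To sketch what that weighting could look like in PHP (HSV triples as returned by the rgbToHsv sketch in the previous answer; the weights here are arbitrary): note that hue is circular, so its difference has to wrap around 360 degrees.

  // Weighted distance between two HSV colors [h, s, v]; h in degrees, s and v in [0, 1].
  // The default weights favor hue; they are arbitrary and meant to be tuned.
  function hsvDistance(array $a, array $b, float $wH = 2.0, float $wS = 1.0, float $wV = 1.0): float
  {
      $dh = abs($a[0] - $b[0]);
      $dh = min($dh, 360 - $dh) / 180;   // circular hue difference, scaled to [0, 1]
      $ds = $a[1] - $b[1];
      $dv = $a[2] - $b[2];
      return $wH * $dh * $dh + $wS * $ds * $ds + $wV * $dv * $dv;
  }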

I have never heard of antipole clustering. But the obvious next step would be to put all the images you have into a large index, say an R-tree, possibly bulk-loaded via STR. Then you can find matches quickly.
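
I don't know of a ready-made PHP R-tree, so here is a much simpler stand-in that shows the same idea: bucket every database image by its quantized average color and only compare against candidates from the nearby buckets. An R-tree (ideally STR bulk-loaded) would handle this more robustly; the sketch below, with made-up function names, is only meant to show why an index makes the lookup fast.

  // Index each image's average color [r, g, b] (0-255) into a coarse RGB grid.
  // $averages maps filename => [r, g, b]; $step is the bucket size in color units.
  function buildColorIndex(array $averages, int $step = 32): array
  {
      $index = [];
      foreach ($averages as $name => [$r, $g, $b]) {
          $key = intdiv((int)$r, $step) . ':' . intdiv((int)$g, $step) . ':' . intdiv((int)$b, $step);
          $index[$key][] = $name;
      }
      return $index;
  }

  // Candidate images for a target color: its own bucket plus the 26 neighboring buckets.
  function candidates(array $index, array $rgb, int $step = 32): array
  {
      [$r, $g, $b] = $rgb;
      $result = [];
      for ($dr = -1; $dr <= 1; $dr++) {
          for ($dg = -1; $dg <= 1; $dg++) {
              for ($db = -1; $db <= 1; $db++) {
                  $key = (intdiv((int)$r, $step) + $dr) . ':'
                       . (intdiv((int)$g, $step) + $dg) . ':'
                       . (intdiv((int)$b, $step) + $db);
                  if (isset($index[$key])) {
                      $result = array_merge($result, $index[$key]);
                  }
              }
          }
      }
      return $result;   // run the exact color-distance comparison on these few only
  }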


Perhaps this means vector quantization (VQ). In VQ the image is not subdivided into rectangles but into density regions, and you then take the midpoint of each cluster. First you need to collect all the colors and pixels and put them into a vector with their XY coordinates. Then you can run a density clustering similar to Voronoi cells and take the midpoint. This point can be compared with other images in the database. Read about VQ here: http://www.gamasutra.com/view/feature/3090/image_compression_with_vector_.php.

How to build a vector from adjacent pixels:

  d(x) = I(x + 1, y) - I(x, y)
  d(y) = I(x, y + 1) - I(x, y)
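
In PHP with GD that difference could be computed like this (a sketch only; I(x, y) is taken here as the grey-level intensity of the pixel, and the caller must keep x + 1 and y + 1 inside the image):

  // Grey-level intensity I(x, y) of a truecolor GD image.
  function intensityAt($img, int $x, int $y): float
  {
      $rgb = imagecolorat($img, $x, $y);
      $r = ($rgb >> 16) & 0xFF;
      $g = ($rgb >> 8) & 0xFF;
      $b = $rgb & 0xFF;
      return ($r + $g + $b) / 3;
  }

  // Forward differences d(x) and d(y) at pixel (x, y), as in the formulas above.
  function gradientAt($img, int $x, int $y): array
  {
      $i  = intensityAt($img, $x, $y);
      $dx = intensityAt($img, $x + 1, $y) - $i;
      $dy = intensityAt($img, $x, $y + 1) - $i;
      return [$dx, $dy];
  }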

Here's another link: http://www.leptonica.com/color-quantization.html.

Update: once you have calculated the average color of each thumbnail, you can go on, sort all those average colors on an RGB map, and use the formula I gave you to compute the vector x. Now that you have a vector for every thumbnail, you can use the antipole tree to search for thumbnails. This is possible because the antipole tree is something like a kd-tree and subdivides the 2D space. Read about the antipole tree here: http://matt.eifelle.com/2012/01/17/qtmosaic-0-2-faster-mosaics/. Maybe you can ask the author and download the source code?

