Algorithm for evenly distributing points in a 2D array

In a 2D array of pixels, I need an efficient algorithm that will select p% of the pixels so that they are spread out as evenly as possible.

This can be done adaptively by selecting points and then re-adjusting the positions of points that are too close to each other. But this is inefficient, since it requires many iterations and distance calculations.

It does not have to be perfect, it just needs to avoid clusters of points as far as this can be done effectively.

+6
geometry 2d distribution point
10 answers

You want a Poisson-disk distribution, but getting one efficiently is complicated. A search turns up many scientific articles on how to do this effectively: http://people.csail.mit.edu/thouis/JonesPoissonPreprint.pdf

+1

Thanks everyone for the answers!

The best solution seems to be to use "pre-built building blocks": n x n arrays with cells already selected, and to tile the pixel array with them.

For example, a 4 x 4 array with a coverage of 12.5%:

 0 0 1 0
 0 0 0 0
 1 0 0 0
 0 0 0 0

With a coverage of 6.3%:

 0 0 0 0
 0 1 0 0
 0 0 0 0
 0 0 0 0

To get a coverage percentage between those two, simply alternate the blocks according to the running total of the actual coverage so far. To cover a width that is not a multiple of 4, use some 3 x 3 blocks. To cover a larger area more efficiently, simply use larger blocks.

This covers the entire array efficiently, with no distance calculations and no floating point arithmetic.
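The block-alternation idea can be sketched roughly as follows. This is a minimal sketch: the function name `SelectPixels`, the exact mask layouts, and the skipping of edge strips narrower than 4 pixels are my assumptions, not part of the answer.

```cpp
#include <utility>
#include <vector>

// Two hand-built 4x4 masks: 2/16 = 12.5% and 1/16 = 6.25% coverage.
const bool MASK_HI[4][4] = {{0,0,1,0},{0,0,0,0},{1,0,0,0},{0,0,0,0}};
const bool MASK_LO[4][4] = {{0,0,0,0},{0,1,0,0},{0,0,0,0},{0,0,0,0}};

// Tile a width x height array with 4x4 blocks, choosing the denser or
// sparser mask based on the running coverage so far. Edge strips narrower
// than 4 pixels are skipped here for brevity.
std::vector<std::pair<int, int>> SelectPixels(int width, int height, double percent) {
    const double target = percent / 100.0;
    std::vector<std::pair<int, int>> picked;
    long long cells_done = 0;  // cells covered by blocks placed so far
    for (int by = 0; by + 4 <= height; by += 4) {
        for (int bx = 0; bx + 4 <= width; bx += 4) {
            double current = cells_done ? (double)picked.size() / cells_done : 0.0;
            // Behind target: place the dense mask; otherwise the sparse one.
            const bool (*mask)[4] = (current < target) ? MASK_HI : MASK_LO;
            for (int r = 0; r < 4; ++r)
                for (int c = 0; c < 4; ++c)
                    if (mask[r][c]) picked.push_back({by + r, bx + c});
            cells_done += 16;
        }
    }
    return picked;
}
```

Adding more mask densities tightens how closely the running total tracks the requested percentage.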

+1

The "most spread out" pixel choice is the set whose Delaunay triangulation consists of equilateral triangles. A set of points that leads to this triangulation can be found by breaking the matrix of pixels into a set of boxes, where each box is sqrt(3) times taller than it is wide. Each box contributes 5 pixels to the final set (one at each corner, plus a center node in the middle of the box). The trick is to find how many rows and columns of boxes will give you that 1:sqrt(3) ratio. Without going through the derivation, here is how you work it out:

#include <cmath>
#include <iostream>
#include <vector>

struct PixelIndices {
    int row, col;
    PixelIndices(int r, int c) : row(r), col(c) {}
};

std::vector<PixelIndices> PickPixels(int width, int height, float percent)
{
    int total_pixels = width * height;
    int desired_pixel_count = (int)(total_pixels * percent);

    // Split the region up into "boxes" with 4 corner nodes and a center node.
    // Each box is sqrt(3) times taller than it is wide.

    // Calculate how many columns of boxes
    float a = 1.155f * height / (float)width;
    float b = .577f * height / (float)width + 1;
    float c = 1 - desired_pixel_count;
    int num_columns = (int)((-b + sqrt(b * b - 4 * a * c)) / (2 * a));

    // Now calculate how many rows
    int num_rows = (int)(.577f * height * num_columns / (float)width);

    // Total number of pixels
    int actual_pixel_count = 2 * num_rows * num_columns + num_rows + num_columns + 1;

    std::cout << "  Total pixels: " << total_pixels << std::endl;
    std::cout << "       Percent: " << percent << std::endl;
    std::cout << "Desired pixels: " << desired_pixel_count << std::endl;
    std::cout << " Actual pixels: " << actual_pixel_count << std::endl;
    std::cout << "   Number Rows: " << num_rows << std::endl;
    std::cout << "Number Columns: " << num_columns << std::endl;

    // Pre-allocate space for the pixels
    std::vector<PixelIndices> results;
    results.reserve(actual_pixel_count);

    for (int row = 0; row <= num_rows; row++) {
        int row_index = row * (height - 1) / num_rows;  // Top of box

        // Corner nodes
        for (int col = 0; col <= num_columns; col++) {
            int col_index = col * (width - 1) / num_columns;
            results.push_back(PixelIndices(row_index, col_index));
        }

        // Center node of each box (midpoint between adjacent corner rows/columns)
        if (row != num_rows) {
            int next_row_index = (row + 1) * (height - 1) / num_rows;
            for (int col = 0; col < num_columns; col++) {
                int col_index = col * (width - 1) / num_columns;
                int next_col_index = (col + 1) * (width - 1) / num_columns;
                results.push_back(PixelIndices((row_index + next_row_index) / 2,
                                               (col_index + next_col_index) / 2));
            }
        }
    }
    return results;
}

Instead of using integer division to find the indices, you can speed this up by computing the spacing between points in a row/column once and simply stepping by that offset.

+1

You can try Wang tiles:
http://en.wikipedia.org/wiki/Wang_tile
(See the pages on Cohen's paper and Kopf's paper. I am a new user, so I can't post all the links.)

They combine the idea of pre-built tiles with the even-distribution requirement usually met with Poisson-disk patterns. Wang tiles can avoid the periodicity artifacts that are almost certainly a problem with more direct use of pre-built tiles.

+1

Pretty old question, but worth noting, because the other answers skipped an important method and focused on optimal solutions, which you said you don't need. The method I propose may or may not suit your needs.

You can use quasi-random sequences, which are designed for exactly this kind of problem. The most common are Sobol sequences, for which you can find canned packages for almost any language. They are incredibly fast: only bitwise arithmetic.

They will most likely create some clusters, but this can be avoided by selecting in advance which "seed" to use for the x and y dimensions and checking the result by eye.

It depends on what you want to do with the points: if "visual distribution" is important, maybe this is not what you want. If you need points that "fill the plane" almost evenly, they do the job perfectly. They are especially useful for averaging something over an image, since fewer points are needed than with "normal" random generation. Experiment with different sizes and see.

See also this link for examples and images to experiment with.
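For a feel of how quasi-random filling works, here is the closely related Halton sequence — a stand-in of my choosing, since a full Sobol implementation needs tables of direction numbers (Sobol would be faster, using only bitwise arithmetic as the answer says):

```cpp
#include <utility>

// Radical inverse of index i in the given base: the core of the Halton
// sequence. Successive values fill [0,1) with low discrepancy, i.e.
// almost evenly, without clustering the way uniform random draws do.
double Halton(int i, int base) {
    double f = 1.0, r = 0.0;
    while (i > 0) {
        f /= base;            // next fractional digit weight
        r += f * (i % base);  // digit of i in `base`, mirrored about the point
        i /= base;
    }
    return r;
}

// 2D quasi-random point in [0,1)^2: coprime bases 2 and 3 for the two axes.
std::pair<double, double> QuasiPoint(int i) {
    return {Halton(i, 2), Halton(i, 3)};
}
```

Map point i onto the pixel array with x = (int)(Halton(i, 2) * width), y = (int)(Halton(i, 3) * height).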

+1

How about this:

  • Compute the sum of the distances from each point to every other point. So point A has the total distance dist(A, B) + dist(A, C) + dist(A, D) + ...
  • Sort these summed distances.
  • Delete the points with the smallest sums until you reach the desired percentage.

This may be accurate enough, but if not, you can always replace step 3:

"Delete the point with the smallest sum, and if more points still need to be removed to reach the desired percentage, go back to step 1."

Wait. Now I'm curious. Are you trying to find the points that are most spread out within a given set of points... or are you trying to find the points that would be most spread out across the whole array? Those are completely different... and both still very hard.
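The iterative variant of these steps might look like this — a sketch with my own names and point type; it is O(n²) per removal, so only practical for modest point counts:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

double Dist(const Pt& a, const Pt& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Repeatedly remove the point with the smallest summed distance to all
// others (the most "crowded" point) until only `keep` points remain.
std::vector<Pt> ThinPoints(std::vector<Pt> pts, size_t keep) {
    while (pts.size() > keep) {
        size_t worst = 0;
        double worst_sum = 1e300;
        for (size_t i = 0; i < pts.size(); ++i) {
            double sum = 0;
            for (size_t j = 0; j < pts.size(); ++j)
                sum += Dist(pts[i], pts[j]);
            if (sum < worst_sum) { worst_sum = sum; worst = i; }
        }
        pts.erase(pts.begin() + worst);  // drop the most crowded point
    }
    return pts;
}
```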

0

How about calculating a density value for each pixel, based on its proximity to all the other pixels? Then repeatedly delete the "densest" pixel until you are down to p% remaining in the list.

You would need to do the distance calculation between any two points at most twice. The first time is when you create the initial list — each pixel must be compared with every other pixel. The second is when you remove a pixel from the list — then you recompute the distance from the removed pixel to each pixel remaining in the list. This accounts for how the density values change as pixels are deleted — for example, two pixels directly next to each other both have a very high density, but as soon as one is deleted the remaining one may have a very low density.

Some quick pseudocode (note that in this scheme, denser areas end up with smaller values):

 For I = 0 to MaxPixel
     For J = I to MaxPixel
         PixelDensity[I] += DistanceBetween(Pixels[I], Pixels[J])
         PixelDensity[J] += DistanceBetween(Pixels[I], Pixels[J])

 While PixelDensity.Count > TargetCount
     RemovePixel = IndexOfSmallest(PixelDensity)
     ForEach I in PixelDensity
         PixelDensity[I] -= DistanceBetween(Pixels[I], Pixels[RemovePixel])
     PixelDensity.Remove(RemovePixel)

If memory is less of a concern than computation time, you could also cache the distance between each pair of points in a simple 2D array. Also, instead of the plain distance, it may help to weight the distance exponentially — that avoids cases like two points sitting almost on top of each other but far from everything else, where both would otherwise survive.

0

An iterative flood-fill style approach would be easy to visualize:

  • For each cell, find the two closest points and record the product of those two distances.
  • The cells with the highest products are the ones farthest from the chosen points.
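The per-cell score from the two bullets above might be computed like this (a sketch; `TwoNearestProduct` and the `Cell` type are my names). The cell with the highest score is a natural choice for the next point:

```cpp
#include <cmath>
#include <limits>
#include <vector>

struct Cell { int x, y; };

// Product of the distances from cell c to its two nearest chosen points.
// High products mark cells far from every chosen point — good candidates
// for the next point in an iterative fill.
double TwoNearestProduct(const Cell& c, const std::vector<Cell>& points) {
    double best1 = std::numeric_limits<double>::max();  // nearest distance
    double best2 = best1;                               // second nearest
    for (const Cell& p : points) {
        double d = std::hypot((double)(c.x - p.x), (double)(c.y - p.y));
        if (d < best1) { best2 = best1; best1 = d; }
        else if (d < best2) { best2 = d; }
    }
    return best1 * best2;
}
```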
0

Ooh! How about this!

(Waving my hands a bit here, since I don't know whether you have a square matrix or not... I'll assume you do.)

Let's say you have a 1000x1000 array in which you want to place 47 points (I picked 47 because it's an awkward number that won't fit "nicely").

Take ceil(sqrt(47)), which gives 7. So we create a 7x7 square, fill it with the 47 pixels (some cells stay empty), and imagine placing it in the corner of the array.

Now move each of those pixels to a new location based on where it sits in the small array (7x7), scaled up to the large array (1000x1000). A simple equation does it for you... for the X coordinate, for example:

 xBigArrayIndex = xSmallArrayIndex * 1000 / 7; 

Then your pixels will be super spread! And it's nice and fast.

The only drawback is that it works best when your square is perfectly filled to start with... if you fill it naively (starting at the top left, going across, etc.), you end up with a slightly sub-optimal spread, since the translated pixels never quite reach the bottom-right corner of the large array. But maybe that's good enough? And if not, maybe it's a smaller sub-problem that's easier to handle?
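The whole recipe can be sketched as follows (`SpreadPoints` and the naive row-major fill are my assumptions, so it shows exactly the top-left bias just described):

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Place `count` points by filling a small k x k grid (k = ceil(sqrt(count)))
// row by row, then stretching each small-grid coordinate onto the big array.
std::vector<std::pair<int, int>> SpreadPoints(int width, int height, int count) {
    int k = (int)std::ceil(std::sqrt((double)count));
    std::vector<std::pair<int, int>> out;
    for (int i = 0; i < count; ++i) {
        int xSmall = i % k;  // column in the small grid
        int ySmall = i / k;  // row in the small grid
        // Same translation as xBigArrayIndex = xSmallArrayIndex * 1000 / 7
        out.push_back({xSmall * width / k, ySmall * height / k});
    }
    return out;
}
```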

0

You could run a convex hull algorithm, exclude the points it returns, and repeat until the remaining points meet your p% criterion, or

follow the steps of a convex hull algorithm and check whether the points on and inside the hull meet the 100% - p% criterion.

some demonstrations of convex hull algorithms are here: http://www.cs.unc.edu/~snoeyink/demos/

and there is more information here: http://softsurfer.com/Archive/algorithm_0109/algorithm_0109.htm
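A sketch of the hull step, using Andrew's monotone chain as the hull algorithm (my choice — the answer doesn't name one). Peeling then means removing the returned vertices from the set and repeating:

```cpp
#include <algorithm>
#include <vector>

struct P { double x, y; };

// Cross product of (a - o) and (b - o): > 0 means a counter-clockwise turn.
double Cross(const P& o, const P& a, const P& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// Andrew's monotone chain; returns hull vertices in counter-clockwise order.
std::vector<P> ConvexHull(std::vector<P> pts) {
    int n = (int)pts.size();
    if (n < 3) return pts;
    std::sort(pts.begin(), pts.end(), [](const P& a, const P& b) {
        return a.x < b.x || (a.x == b.x && a.y < b.y);
    });
    std::vector<P> h(2 * n);
    int k = 0;
    for (int i = 0; i < n; ++i) {                 // lower hull
        while (k >= 2 && Cross(h[k - 2], h[k - 1], pts[i]) <= 0) k--;
        h[k++] = pts[i];
    }
    for (int i = n - 2, t = k + 1; i >= 0; --i) { // upper hull
        while (k >= t && Cross(h[k - 2], h[k - 1], pts[i]) <= 0) k--;
        h[k++] = pts[i];
    }
    h.resize(k - 1);
    return h;
}
```

Each peel removes the current outermost layer of points; stop when the remaining count drops to the target.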

0
