Performing an optimal flood-fill-style merge on a grid, constrained to disjoint rectangles

I need to optimize a grid by reducing the number of “elements” it contains as much as possible. By an element, I mean a section of the grid. Here is essentially what the input may look like, visually:

[image: example 8x8 input grid]

The first solution that comes to mind is the flood fill algorithm; however, I have one constraint: every element must have 4 sides, so all elements must be rectangular.

My first, limited approach simply loops through the grid cell by cell and checks whether the most recently created element has the same color and the same alpha as the cell about to be processed; if it does, instead of creating a new element, it simply extends the previous one by 1 block.

Here is pseudocode showing what I am doing:

    element output_array();
    element last_element = null;

    for (int x = 0; x < grid_width; x++) {
        for (int y = 0; y < grid_height; y++) {
            color current_input_color = input_grid(x, y);

            // If the last element created is in the same column and has the same
            // color, extend it by one grid space instead of creating a new element.
            if (last_element && last_element.x === x && last_element.color === current_input_color) {
                last_element.span_y++;
            } else {
                last_element = create_element(
                    x,                    // element.x (x coordinate of the element's top-left grid space)
                    y,                    // element.y (y coordinate of the element's top-left grid space)
                    1,                    // element.span_x (number of grid spaces to span on the x axis)
                    1,                    // element.span_y (number of grid spaces to span on the y axis)
                    current_input_color   // element.color
                );
                output_array.append(last_element);
            }
        }
    }
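For concreteness, here is a rough, runnable Python sketch of that pseudocode; the names (merge_columns, input_grid, span_y, and so on) are illustrative placeholders for whatever structures actually hold the data, and elements are represented as plain dicts:

    # Column-wise merge: extend the last element downward while the color matches.
    # input_grid[x][y] is assumed to hold the color (including alpha) of each cell.
    def merge_columns(input_grid, grid_width, grid_height):
        output_array = []
        last_element = None
        for x in range(grid_width):
            for y in range(grid_height):
                current_input_color = input_grid[x][y]
                if (last_element is not None
                        and last_element['x'] == x
                        and last_element['color'] == current_input_color):
                    last_element['span_y'] += 1   # same column, same color: grow by one cell
                else:
                    last_element = {'x': x, 'y': y, 'span_x': 1, 'span_y': 1,
                                    'color': current_input_color}
                    output_array.append(last_element)
        return output_array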

As a result, I get this (given the previous grid as input):

[image: the resulting elements after the column-wise merge (20 elements)]

So, in this particular case, I reduced the number of elements from 64 to 20.

This is good, but my input grids are usually much larger than 8x8. A more realistic input grid yields 10201 elements before optimization and 957 after, with my current method.

Since this method obviously depends heavily on the structure of the grid itself, these numbers can vary greatly. My hope is to reduce the number of elements as far as I can for any given input grid.

Right now I am only approaching this from one direction (optimizing vertically), but I would also like to optimize horizontally. The result does not have to be perfect, but here is what I believe to be the most optimal final layout for the first input grid:

[image: the most optimal element layout for the first input grid (14 elements)]

In this case, the number of elements drops from 20 to only 14, which on my large grids could be very significant.

I just can’t figure out how to apply a flood-fill-style algorithm in a way that accounts for every cell in the input grid while keeping all the resulting elements rectangular / 4-sided.

I suspect I could probably brute-force this, and while processor load / speed is not the biggest concern, I have to run it on very large grids with many thousands of elements, so burning resources trying to brute-force something at that scale simply is not realistic - I don't think so, anyway.

2 answers

Gareth Rees posted a very good answer to this question, which expands on David Eppstein's answer on Math Overflow and cites several authors. In short, an algorithm that gives optimal solutions first cuts along a maximum disjoint set of axis-parallel segments joining concave vertices (found in polynomial time via a maximum independent set in a bipartite graph), and then greedily extends cuts from the remaining concave vertices so that all remaining faces are rectangles.

Finding a maximum independent set in a bipartite graph requires a maximum matching algorithm. If that is too much work, then the greedy step alone, making a vertical cut from every concave vertex, is a 2-approximation.
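This is not the full MIS-based optimal algorithm, but a minimal sketch of the greedy flavor applied directly to a raw color grid, assuming elements are plain dicts as in the question: first merge each column into maximal vertical strips, then widen a strip across columns whenever the next column contains a strip with the same color and exactly the same vertical extent. Names such as merge_strips and input_grid are illustrative only.

    def merge_strips(input_grid, grid_width, grid_height):
        # Step 1: maximal vertical strips per column.
        # columns[x] is a list of (top, height, color) tuples; colors must be hashable.
        columns = []
        for x in range(grid_width):
            strips, y = [], 0
            while y < grid_height:
                color, top = input_grid[x][y], y
                while y < grid_height and input_grid[x][y] == color:
                    y += 1
                strips.append((top, y - top, color))
            columns.append(strips)

        # Step 2: sweep left to right, widening a rectangle while the next column
        # contains a strip with identical top, height and color.
        output, open_rects = [], {}
        for x in range(grid_width):
            next_open = {}
            for (top, height, color) in columns[x]:
                key = (top, height, color)
                rect = open_rects.get(key)
                if rect is not None:
                    rect['span_x'] += 1   # strip lines up with the rectangle ending at x-1
                else:
                    rect = {'x': x, 'y': top, 'span_x': 1, 'span_y': height, 'color': color}
                    output.append(rect)
                next_open[key] = rect
            open_rects = next_open
        return output

This two-pass merge will typically land somewhere between the purely vertical merge and the true optimum that the cut-based algorithm above guarantees.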


Depending on the application, you could solve this with wavelets. Think of your 2D array as a grayscale image; the goal is to compress it by decomposing it into rectangular basis functions (such as Haar wavelets) and then discarding the functions that only represent small details. Given the data you have shown so far (i.e., no noise or texture), you would not actually have to discard anything after taking the wavelet transform.

In Python you can use PyWavelets: http://www.pybytes.com/pywavelets/

    import pywt
    import numpy as np
    import matplotlib.pyplot as plt
    from PIL import Image

    img = Image.open('Desktop/b12nI.png')
    plt.imshow(img, cmap='gray')

[image: the grid from the question loaded as a grayscale image]

Take a single-level discrete wavelet transform:

    coeffs = pywt.dwt2(img, 'haar')
    cA, (cH, cV, cD) = coeffs

cA contains the Haar wavelet coefficients used for the approximation. The approximation is exact for your data, which we can check by applying the inverse transform to the coefficients:

    recon = pywt.idwt2(coeffs, 'haar')
    np.max(np.abs(recon - img))

produces 1.4210854715202004e-14
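The “discard the small details” step mentioned at the top of this answer is not shown in the original code; one illustrative way to do it with PyWavelets is to zero out the detail coefficients and reconstruct from the approximation alone (for Haar this amounts to replacing each 2x2 block with its average):

    # Sketch: keep only the approximation, zeroing out the detail coefficients.
    cH0, cV0, cD0 = np.zeros_like(cH), np.zeros_like(cV), np.zeros_like(cD)
    approx_only = pywt.idwt2((cA, (cH0, cV0, cD0)), 'haar')
    # For Haar, approx_only is the image with each 2x2 block replaced by its average.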

For comparison, if we try to approximate Gaussian noise using Haar wavelets:

    noise = np.random.randn(512, 512)
    cA, (cH, cV, cD) = pywt.dwt2(noise, 'haar')
    recon = pywt.idwt2(coeffs, 'haar')
    np.max(np.abs(noise - recon))

gives: 213.31090340487393

Computationally, wavelet transforms are O(n).

The Java code for wavelet transforms is here: https://en.wikipedia.org/wiki/Discrete_wavelet_transform

Additional information: http://gtwavelet.bme.gatech.edu/wp/kidsA.pdf

