I have a very specific problem:
I have uniformly random values scattered across a 15x50 grid, and the pattern I want to use corresponds to a 5x5 square of cells centered around any possible grid position.
Thus the number of samples varies from 25 (away from the borders, the most common case), through 20 or 15 near a border, down to a minimum of 9 in a corner.
So, although the cell values are random, the location introduces a deterministic change in the length of the sequence.
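For reference, the sampling itself looks roughly like this (a minimal sketch; the row-major layout, the uint8_t cell type and the name collect_block are my own assumptions, nothing is fixed here):

#include <stdint.h>

#define H 15
#define W 50

/* Collect the 5x5 square centred on (row, col), clipped at the grid borders.
   Returns how many cells were copied: 25 in the interior, down to 9 in a corner. */
int collect_block(const uint8_t grid[H][W], int row, int col, uint8_t block[25])
{
    int n = 0;
    for (int r = row - 2; r <= row + 2; r++)
        for (int c = col - 2; c <= col + 2; c++)
            if (r >= 0 && r < H && c >= 0 && c < W)
                block[n++] = grid[r][c];
    return n;
}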
The size of the hash table is a small number, usually from 20 to 50.
The function will be used on a large set of randomly generated grids (several hundred to a few thousand) and may be called several thousand times per grid. The queried positions on the grid can be considered random.
I need a hash function that distributes these sample blocks as evenly as possible across the table.
I tried the following (essentially this, in C):

uint32_t hash = 0;
/* XOR each sampled value in at a different bit offset */
for (int i = 0; i < n; i++)   /* n = number of cells in the block, 9..25 */
{
    hash ^= (uint32_t)block[i] << (i % 28);
}
hash %= table_size;
but the results, although not wildly unbalanced, do not look very smooth to me. Maybe the sample is just too small, but circumstances make it difficult to run the code on a larger one, and I would prefer not to write a full test harness if some common approach already has a ready answer :).
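By "smooth" I mean roughly that the bucket counts should pass a chi-squared test; the measurement I have in mind is just this (a minimal sketch, the function name and parameters are mine):

/* Chi-squared statistic over the bucket counts; compare against the
   chi-squared distribution with (buckets - 1) degrees of freedom. */
double chi_squared(const int *counts, int buckets, int total)
{
    double expected = (double)total / buckets;
    double chi2 = 0.0;
    for (int b = 0; b < buckets; b++) {
        double d = counts[b] - expected;
        chi2 += d * d / expected;
    }
    return chi2;
}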
I'm not sure that packing the values in pairs and using a general-purpose byte-hashing strategy would be the best solution, especially since the number of values may be odd.
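For what it's worth, the kind of general-purpose byte-hashing approach I mean would look roughly like this (a sketch using the standard 32-bit FNV-1a constants; fnv1a_block and the parameter names are mine, and I am not claiming it is the right answer here):

#include <stdint.h>
#include <stddef.h>

/* 32-bit FNV-1a over the sampled cell values, one value per step,
   so an odd count is not a problem. */
uint32_t fnv1a_block(const uint8_t *block, size_t n, uint32_t table_size)
{
    uint32_t hash = 2166136261u;      /* FNV offset basis */
    for (size_t i = 0; i < n; i++) {
        hash ^= block[i];
        hash *= 16777619u;            /* FNV prime */
    }
    return hash % table_size;
}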