I am going to offer three methods, each with different pros and cons, which I will describe below.
**Hash Code**

This is the obvious "solution", and as has been correctly pointed out, it will not be unique. However, it is unlikely that any two of your arrays will end up with the same value.
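For instance, a minimal sketch of this approach (the helper name is just for illustration; `java.util.Arrays` does the actual hashing):

    import java.util.Arrays;

    // Collapses the whole array into a single int. Fast, but collisions are possible.
    static int hashId(double[] array) {
        return Arrays.hashCode(array);
    }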
**Weighted Sum**

Your values seem to be bounded; perhaps they range from a minimum of 0 to a maximum of 1. If so, you can multiply the first number by N^0, the second by N^1, the third by N^2, and so on, where N is some large number (ideally the inverse of your precision). This is easy to implement, especially if you use a matrix package, and it is very fast. We can even make it unique if we choose N carefully.
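As a rough sketch with plain doubles (this overflows quickly for 18 elements and a large N, which is why the BigDecimal version at the end is needed; the method name is illustrative):

    // Weight element i by N^i, where N is large (ideally 1 / precision).
    static double weightedSum(double[] array, double n) {
        double id = 0;
        double weight = 1;            // N^0, N^1, N^2, ...
        for (double v : array) {
            id += v * weight;
            weight *= n;
        }
        return id;
    }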
**Euclidean Distance from the Mean**

Subtract the mean of your arrays from each array, square the differences, and sum the squares. If you have an expected mean, you can use that instead. Again, this is not unique; there will be collisions, but that is (almost) unavoidable.
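A minimal sketch, assuming you have already computed (or know) the element-wise mean of all your arrays:

    // Euclidean distance of one array from the mean array.
    static double distanceFromMean(double[] array, double[] mean) {
        double sum = 0;
        for (int i = 0; i < array.length; i++) {
            double d = array[i] - mean[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }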
**The Difficulty of Uniqueness**
It has already been pointed out that hashing will not give you a unique value. A unique number is possible in theory using a weighted sum, but we would have to use very large numbers. Say your numbers occupy 64 bits in memory. That means there are 2^64 possible values each one can represent (slightly fewer for floating point). Eighteen such numbers in an array can represent 2^(64 * 18) different combinations. That is huge. If you use anything smaller, you cannot guarantee uniqueness, because of the pigeonhole principle.
Let's look at a trivial example. If you have four letters, a, b, c, and d, and you have to label each of them uniquely with a number from 1 to 3, you can't. That is the pigeonhole principle. You have 2^(18 * 64) possible arrays; you cannot label them all uniquely with fewer than 2^(18 * 64) numbers, and hashing does not give you that.
If you use BigDecimal, you can represent (almost) arbitrarily large numbers. If the largest element you can get is 1 and the smallest is 0, then you can set N = 1 / precision and apply the weighted sum mentioned above. That guarantees uniqueness. The smallest positive double in Java is Double.MIN_VALUE. Note that the array of weights must be stored in _BigDecimal_s!
That addresses this part of your question:
> create a computational value for each array that is unique to it based on the values inside it
However, there is a problem:
**1 and 2 suck for K-Means**
I assume from your discussion with Marco13 that you are clustering on the single values, not on the arrays of 18. As Marco already pointed out, hashing sucks for K-Means. The whole idea of a hash is that the smallest change in the data leads to a large change in the hash value. That means two similar images, which produce two very similar arrays, will produce two completely different "unique" numbers. Similarity is not preserved. The result is essentially pseudo-random!
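You can see this with a quick check on two arbitrary, nearly identical arrays (values chosen purely for illustration):

    import java.util.Arrays;

    public class HashDemo {
        public static void main(String[] args) {
            double[] a = {0.10, 0.20, 0.30};
            double[] b = {0.10, 0.20, 0.30000001};   // almost the same data
            // The two hash codes are typically unrelated, so near-identical
            // arrays land nowhere near each other.
            System.out.println(Arrays.hashCode(a));
            System.out.println(Arrays.hashCode(b));
        }
    }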
Weighted sums are better, but still bad. They will effectively ignore every element except the last, unless the last elements are equal; only then will they look at the next-to-last, and so on. Similarity is not really preserved.
Euclidean distance from the mean (or from any fixed point) at least groups things in some reasonable way. Direction is ignored, but at least things that are far from the mean will not be clustered together with things that are close to it. The similarity of one feature is preserved; the other features are lost.
**Finally**
1 is very easy, but not unique, and does not preserve similarity.
2 is easy, and can be made unique, but does not preserve similarity.
3 is easy, but not unique, and preserves some similarity.
An implementation of the weighted sum follows. It is untested.
    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class Array2UniqueID {

        private final double min;
        private final double max;
        private final double prec;
        private final int length;

        public Array2UniqueID(double min, double max, double prec, int length) {
            this.min = min;
            this.max = max;
            this.prec = prec;
            this.length = length;
        }

        public Array2UniqueID(int length) {
            this(-Double.MAX_VALUE, Double.MAX_VALUE, Double.MIN_VALUE, length);
        }

        // Sketch of the weighted sum described above: treat each element as a
        // "digit" in base N, where N is the number of steps of size prec
        // between min and max.
        public BigDecimal createUniqueID(double[] array) {
            BigDecimal step = BigDecimal.valueOf(prec);
            // Number of distinct values per element (the base N of the weighting).
            BigDecimal base = BigDecimal.valueOf(max)
                    .subtract(BigDecimal.valueOf(min))
                    .divide(step, 0, RoundingMode.CEILING)
                    .add(BigDecimal.ONE);
            BigDecimal id = BigDecimal.ZERO;
            BigDecimal weight = BigDecimal.ONE;      // N^0, N^1, N^2, ...
            for (int i = 0; i < length; i++) {
                // Index of array[i] within [min, max] in steps of prec.
                BigDecimal digit = BigDecimal.valueOf(array[i])
                        .subtract(BigDecimal.valueOf(min))
                        .divide(step, 0, RoundingMode.HALF_UP);
                id = id.add(digit.multiply(weight));
                weight = weight.multiply(base);
            }
            return id;
        }
    }
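A hypothetical usage, assuming 18 values between 0 and 1 and, say, six decimal places of precision (`features` stands for your 18-element array):

    Array2UniqueID mapper = new Array2UniqueID(0.0, 1.0, 1e-6, 18);
    BigDecimal id = mapper.createUniqueID(features);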