I want to look up a massive amount of data, grouped by several keys, as fast as possible. I have a file with this information, and I want to load it into memory. Memory capacity is not a problem.
key1 | key2 | key3 | key4 | value1 | value2
-----|------|------|------|--------|--------
1    | 1    | 1    | 1    | str    | 20
1    | 1    | 1    | 2    | str    | 20
1    | 1    | 1    | 3    | str    | 20
1    | 1    | 2    | 1    | str    | 20
2    | 1    | 1    | 1    | str    | 20
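The simplest in-memory layout I can think of is a single flat `Dictionary` keyed by a composite value tuple. This is only a sketch of that idea; the field names and hard-coded rows are my assumptions (in practice the rows would be parsed from the file):

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // Value tuples have structural equality and hashing,
        // so all four keys can act as one dictionary key.
        var data = new Dictionary<(int k1, int k2, int k3, int k4), (string v1, int v2)>();

        // Hypothetical rows; a real loader would read these from the file.
        data[(1, 1, 1, 1)] = ("str", 20);
        data[(1, 1, 1, 2)] = ("str", 20);
        data[(2, 1, 1, 1)] = ("str", 20);

        // O(1) average lookup by all four keys at once.
        if (data.TryGetValue((1, 1, 1, 2), out var row))
            Console.WriteLine($"{row.v1} {row.v2}"); // prints "str 20"
    }
}
```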
I looked through a performance comparison of some collections, but I'm still not sure which to choose:
http://blog.bodurov.com/Performance-SortedList-SortedDictionary-Dictionary-Hashtable
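To get a feel for the difference myself, I could run a rough micro-benchmark like the sketch below (the numbers it prints are machine-dependent and illustrative, not taken from the linked article). `Dictionary` does O(1) hash lookups, while `SortedDictionary` does O(log n) tree lookups:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        const int n = 100_000;
        var hash = new Dictionary<int, int>();
        var tree = new SortedDictionary<int, int>();
        for (int i = 0; i < n; i++) { hash[i] = i; tree[i] = i; }

        long sum = 0;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++) sum += hash[i];
        Console.WriteLine($"Dictionary:       {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < n; i++) sum += tree[i];
        Console.WriteLine($"SortedDictionary: {sw.ElapsedMilliseconds} ms");
    }
}
```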
Perhaps a multilevel (nested) dictionary would be better, because it avoids a lot of redundancy in the keys.
```csharp
public class MultiKeyDictionary<T1, T2, T3> : Dictionary<T1, Dictionary<T2, T3>>
```

key1 | key2 | key3 | key4 | value1 | value2
-----|------|------|------|--------|--------
1    | 1    | 1    | 1    | str    | 20
     |      |      | 2    | str    | 20
     |      |      | 3    | str    | 20
     |      | 2    | 1    | str    | 20
2    | 1    | 1    | 1    | str    | 20
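A minimal runnable sketch of that nested-dictionary idea, assuming an `Add` helper and two levels of nesting to cover all four keys (the helper and the tuple payload are my assumptions, not a finished design):

```csharp
using System;
using System.Collections.Generic;

public class MultiKeyDictionary<T1, T2, T3> : Dictionary<T1, Dictionary<T2, T3>>
{
    // Hypothetical convenience helper: insert under (k1, k2),
    // creating the inner dictionary on first use.
    public void Add(T1 k1, T2 k2, T3 value)
    {
        if (!TryGetValue(k1, out var inner))
        {
            inner = new Dictionary<T2, T3>();
            this[k1] = inner;
        }
        inner[k2] = value;
    }
}

class Program
{
    static void Main()
    {
        // Four key levels by nesting twice: k1 -> k2 -> (k3 -> k4 -> row).
        var map = new MultiKeyDictionary<int, int, MultiKeyDictionary<int, int, (string v1, int v2)>>();

        var inner = new MultiKeyDictionary<int, int, (string, int)>();
        inner.Add(1, 1, ("str", 20));
        inner.Add(1, 2, ("str", 20));
        map.Add(1, 1, inner);

        // Indexing through all four key levels.
        Console.WriteLine(map[1][1][1][2]); // prints "(str, 20)"
    }
}
```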
I will not be looking up all of the keys, maybe only 50% of them. I am open even to unconventional suggestions.