I was among the "everyone who missed the obvious" crowd here.
Just use any fast key/value lookup that is available to you, and look up all of the possible values for the missing column. That is a small set, so it does not take much time. Anything short of storing your data 6 times over will be slower than this.
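To make that concrete, here is a minimal sketch in Python with a plain dict standing in for the key/value store. All names, the six-column keys, and the small vocabulary are made up for illustration:

```python
# A dict as the key/value store; keys are the full 6-column tuples.
table = {
    ("a", "b", "x", "d", "e", "f"): "row1",
    ("a", "b", "y", "d", "e", "f"): "row2",
    ("q", "r", "x", "s", "t", "u"): "row3",
}

# Every possible value of col3 -- the point is that this set is small.
vocab3 = ["x", "y", "z"]

def lookup_missing_col3(c1, c2, c4, c5, c6):
    """Try every possible value for the missing column; one dict
    lookup per candidate value, so a small vocabulary stays fast."""
    hits = []
    for c3 in vocab3:
        key = (c1, c2, c3, c4, c5, c6)
        if key in table:
            hits.append(table[key])
    return hits

print(lookup_missing_col3("a", "b", "d", "e", "f"))  # → ['row1', 'row2']
```

The cost is one lookup per candidate value, which beats any scan-and-filter scheme as long as the vocabulary stays small.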
If a large vocabulary of values were possible, then my previous answer would be appropriate.
Here is my old (and bad) answer.
I would put them in a database with multiple composite indexes. How many is up to you.
At a minimum I would have 2: an index on (col1, col2, col3, col4, col5, col6) and one on (col4, col5, col6, col1, col2, col3). That way, no matter which column is missing, one of the indexes gives you at least a 3-column prefix of known values, so you only have to scan roughly 1/1000 of the records (assuming each column takes on the order of 10 distinct values). If you want, you can instead index (col1, col2, col3, col4, col5, col6), (col3, col4, col5, col6, col1, col2) and (col5, col6, col1, col2, col3, col4), which guarantees a 4-column prefix and limits the scan to about 1/10000. That uses 50% more index memory, but is about 10 times faster. (Warning: I make no guarantee that MySQL will correctly figure out which index it should use. I would hope that other databases get it right, but I have not checked.)
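Here is a runnable sketch of the 2-index layout, using SQLite's in-memory database in place of MySQL (table, index, and column names are made up). With col3 unknown, the first index still offers a usable (col1, col2) prefix:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (col1, col2, col3, col4, col5, col6, payload)")
# The two composite indexes described above.
con.execute("CREATE INDEX idx_a ON t (col1, col2, col3, col4, col5, col6)")
con.execute("CREATE INDEX idx_b ON t (col4, col5, col6, col1, col2, col3)")
con.executemany("INSERT INTO t VALUES (?,?,?,?,?,?,?)", [
    (1, 2, 3, 4, 5, 6, "match"),
    (1, 2, 9, 4, 5, 6, "also match"),
    (7, 8, 9, 4, 5, 6, "no"),
])

# col3 is the missing column: we constrain the other five.
rows = con.execute(
    "SELECT payload FROM t "
    "WHERE col1=? AND col2=? AND col4=? AND col5=? AND col6=? "
    "ORDER BY col3",
    (1, 2, 4, 5, 6),
).fetchall()
print([r[0] for r in rows])  # → ['match', 'also match']

# EXPLAIN QUERY PLAN reveals which index the optimizer actually chose,
# which is exactly the thing the warning above says to double-check.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t "
    "WHERE col1=1 AND col2=2 AND col4=4 AND col5=5 AND col6=6"
).fetchall()
print(plan)
```

The `EXPLAIN QUERY PLAN` output is the database's answer to the index-choice question; on MySQL the equivalent check is `EXPLAIN SELECT ...`.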
If you do not want to use a database, you can use balanced binary trees in the same way as the indexes above: keep the data in several sort orders. For any given search, pick the tree whose sort order places the missing column as deep as possible, do a range search over the prefix of known values, then filter the returned rows down to the ones that match the remaining columns. This is, in fact, exactly what a good database should be doing with the indexes above.
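The tree-based plan can be sketched with sorted arrays and `bisect` standing in for balanced trees (a balanced BST supports the same ordered range search). The two column orders mirror the two indexes above; the rows and names are made up:

```python
import bisect

rows = [
    (1, 2, 3, 4, 5, 6),
    (1, 2, 9, 4, 5, 6),
    (7, 8, 9, 4, 5, 6),
]

# Two sort orders, one per "tree", mirroring the two composite indexes.
orders = [
    (0, 1, 2, 3, 4, 5),
    (3, 4, 5, 0, 1, 2),
]
# Each view holds (rearranged_key, original_row) pairs, kept sorted.
views = [sorted((tuple(r[i] for i in order), r) for r in rows)
         for order in orders]

def search(query):
    """query maps column index -> known value; one column is missing."""
    def known_prefix(order):
        # How many leading columns of this sort order are known?
        n = 0
        for i in order:
            if i not in query:
                break
            n += 1
        return n

    # Pick the order that buries the missing column as deep as possible.
    best = max(range(len(orders)), key=lambda k: known_prefix(orders[k]))
    order, view = orders[best], views[best]
    prefix = tuple(query[i] for i in order[:known_prefix(order)])

    # Range search: walk every key sharing that prefix.
    lo = bisect.bisect_left(view, (prefix,))
    hits = []
    for key, row in view[lo:]:
        if key[:len(prefix)] != prefix:
            break
        # Filter the range down to rows matching the remaining columns.
        if all(row[i] == v for i, v in query.items()):
            hits.append(row)
    return hits

# col3 (index 2) unknown: the second order gives a 5-column prefix.
print(search({0: 1, 1: 2, 3: 4, 4: 5, 5: 6}))
# → [(1, 2, 3, 4, 5, 6), (1, 2, 9, 4, 5, 6)]
```

The prefix-selection step is the whole trick: the deeper the missing column sits in the chosen sort order, the narrower the range you have to filter.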