First, consider which keys you will need to look data up by; those are the keys you want to hash. If you know the exact key you want to access, you can hash it to determine which server to query, eliminating the need to query every server.
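As a minimal sketch of that idea (the server names, hash choice, and modulo scheme are assumptions for illustration, not a prescribed implementation):

```python
# Exact-key sharding: every client hashes the key the same way,
# so a lookup only has to contact one server.
import hashlib

SERVERS = ["server-0", "server-1", "server-2", "server-3"]  # hypothetical pool

def server_for_key(key: str) -> str:
    # Use a stable hash (not Python's randomized hash()) so the mapping
    # is identical across processes and restarts.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

print(server_for_key("user:12345"))  # only this one server needs to be queried
```

In practice you might prefer consistent hashing over plain modulo so that adding or removing a server does not remap most keys, but the principle is the same.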
It gets harder if you do not know the exact keys (which I suspect is your situation). An LSH imposes a rough ordering on your records, such that similar records most likely (but not guaranteed) get the same hash. I think about it, for example, in terms of comparing hyperplanes by the length of their normal vector from the origin: when searching for a similar (but non-identical) hyperplane roughly 4-5 units from the origin, a good place to start is among the other hyperplanes that lie 4 to 5 units from the origin. So if this "distance from the origin" is your locality-sensitive hash function, you can use it to shard your data, and at the same time reduce load (at the cost of higher latency in the worst case) by searching only the shard(s) whose "distance from the origin" LSH value matches, as sketched below. With this particular LSH, where similarity correlates linearly with the hash value, you could obtain the final result while contacting only a subset of the distributed servers. That does not hold for every LSH function.
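Here is an illustrative sketch of that "distance from the origin" scheme: bucket each vector by its (rounded) length and use the bucket as the shard id, so a similarity query only has to touch the shards whose distance band overlaps the query's. The bucket width, shard count, and tolerance are made-up parameters for illustration.

```python
# Toy LSH-style sharding by distance from the origin.
import math

NUM_SHARDS = 16
BUCKET_WIDTH = 1.0  # one distance unit per bucket

def lsh_shard(vector) -> int:
    # Shard id = which distance band the vector falls into.
    distance = math.sqrt(sum(x * x for x in vector))
    return int(distance // BUCKET_WIDTH) % NUM_SHARDS

def candidate_shards(query_vector, tolerance=1.0):
    # Query only the shards whose band overlaps [d - tolerance, d + tolerance].
    d = math.sqrt(sum(x * x for x in query_vector))
    low = int(max(d - tolerance, 0.0) // BUCKET_WIDTH)
    high = int((d + tolerance) // BUCKET_WIDTH)
    return sorted({b % NUM_SHARDS for b in range(low, high + 1)})

print(candidate_shards([3.0, 4.0]))  # |v| = 5, so shards covering distances ~4..6
```

Note that wrapping bucket ids with a modulo means two very different distance bands can share a shard; that only adds some extra filtering on the queried shards, it does not make the candidate set miss the right one.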
IMHO, it all depends on your LSH function - and this choice depends on the specifics of your application.