Better than O(n)? Only if you go the radix-sort route, or store locations under hash keys that represent the general area they fall in.
For example, you can divide the globe into a latitude/longitude grid at one-arc-minute resolution, enumerate the resulting cells, and hash each place under the key of its cell. Then, when it is time to find the nearest place, you only need to check at most 9 hash keys: the query point's cell and its eight neighbours. You can also check in advance whether a neighbouring cell could possibly contain a location closer than the best found so far, further reducing the number of places whose distance you actually compute. It is still O(n) in the worst case, but with a much smaller constant factor; properly implemented, you will hardly notice it.
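As a rough illustration, here is a minimal Python sketch of that grid-hash idea. The cell size, the helper names (cell_key, build_index, nearest), and the use of squared degree differences as the comparison metric are assumptions for illustration only; real code would use great-circle distance, handle longitude wrap-around and the poles, and widen the search when the nine cells are empty.

```python
import math
from collections import defaultdict

CELL_DEG = 1.0 / 60.0  # one arc-minute per grid cell (assumed cell size)

def cell_key(lat, lon):
    """Map a coordinate to the key of the grid cell containing it."""
    return (int(math.floor(lat / CELL_DEG)), int(math.floor(lon / CELL_DEG)))

def build_index(places):
    """places: iterable of (lat, lon, payload) -> dict mapping cell key to its places."""
    index = defaultdict(list)
    for lat, lon, payload in places:
        index[cell_key(lat, lon)].append((lat, lon, payload))
    return index

def nearest(index, lat, lon):
    """Check only the query cell and its eight neighbours (9 keys total)."""
    ci, cj = cell_key(lat, lon)
    best, best_d2 = None, float("inf")
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            for plat, plon, payload in index.get((ci + di, cj + dj), []):
                # Squared difference in degrees: good enough for comparison at small scales.
                d2 = (plat - lat) ** 2 + (plon - lon) ** 2
                if d2 < best_d2:
                    best, best_d2 = (plat, plon, payload), d2
    return best
```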
Or, if the data is in memory or otherwise randomly accessible, you can keep it sorted both by latitude and by longitude. Then you binary-search each sorted dataset for the nearest latitude and the nearest longitude, and keep reading places outward in both directions (i.e., the preceding and following entries) until it becomes impossible to find a closer location.
You know you cannot find a closer location once the latitude of the next place on either side of the latitude-sorted data could not beat the best distance found so far, even if it lay at exactly the same longitude as the query point. The same test applies to the longitude-sorted data.
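Here is a hedged sketch of the latitude-sorted half of that scan (the longitude-sorted scan would be symmetric). The function name and the use of planar distance on raw degrees are illustrative assumptions; the point is the outward walk from a binary-search position and the stopping rule just described.

```python
import bisect

def nearest_by_lat(places, lat, lon):
    """places: list of (lat, lon, payload) sorted by latitude.
    Uses planar distance on degrees purely for illustration."""
    lats = [p[0] for p in places]
    i = bisect.bisect_left(lats, lat)           # start at the nearest latitude
    best, best_d = None, float("inf")
    lo, hi = i - 1, i
    while lo >= 0 or hi < len(places):
        # Keep scanning upward only while the latitude gap alone could still
        # beat the best distance; otherwise that side can be abandoned.
        if hi < len(places) and abs(places[hi][0] - lat) < best_d:
            d = ((places[hi][0] - lat) ** 2 + (places[hi][1] - lon) ** 2) ** 0.5
            if d < best_d:
                best, best_d = places[hi], d
            hi += 1
        else:
            hi = len(places)
        # Same test scanning downward.
        if lo >= 0 and abs(places[lo][0] - lat) < best_d:
            d = ((places[lo][0] - lat) ** 2 + (places[lo][1] - lon) ** 2) ** 0.5
            if d < best_d:
                best, best_d = places[lo], d
            lo -= 1
        else:
            lo = -1
    return best
```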
That really does give you better than O(n), closer to O(log n) I think, but it requires random rather than sequential access to the data, and duplicating all of it (or at least the keys to it).