In the context of hardware caches, where these concepts usually come up, the analysis is not really done in terms of individual memory addresses. Locality is analyzed in terms of the memory blocks (cache lines) that are transferred between the cache and main memory.
Viewed that way, your code has both temporal and spatial locality. When your code reads `some_array[0]`, if its address is not found in the cache, the entire block that contains it is read from main memory and copied into the cache. It evicts another block, according to the cache's replacement policy: for example, LRU (least recently used).
Then, shortly afterwards, when you read `some_array[1]`, its block is already in the cache, so the read is much faster. Note that you accessed the same block twice within a short time span: that is temporal locality, and the fact that the two elements are adjacent in memory is spatial locality.
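To make this concrete, here is a minimal sketch (the array name and dimensions are mine, not from the answer) of the classic demonstration: traversing the same 2D array in row-major and then column-major order. The first loop walks through blocks sequentially and benefits from both kinds of locality; the second jumps a full row ahead on every access and defeats them.

```c
#include <stdio.h>

#define ROWS 1024
#define COLS 1024

static int grid[ROWS][COLS]; /* hypothetical example array */

int main(void)
{
    long sum = 0;

    /* Row-major: consecutive elements share a cache block, so after
     * the miss on grid[i][0] the following accesses hit the block
     * that was just fetched (spatial locality), and hit it again
     * soon after it was loaded (temporal locality). */
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            sum += grid[i][j];

    /* Column-major: each access jumps COLS * sizeof(int) bytes
     * ahead, landing in a different block almost every time, so the
     * same data produces far more cache misses. */
    for (int j = 0; j < COLS; j++)
        for (int i = 0; i < ROWS; i++)
            sum += grid[i][j];

    printf("%ld\n", sum);
    return 0;
}
```

On typical hardware the first loop is noticeably faster, even though both loops touch exactly the same elements.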
The cache exploits spatial and temporal locality to provide faster access to memory. Whether your code is written so it can take advantage of this is a different problem entirely. However, the compiler will do most of the relevant optimizations for you, so you only need to worry about this after finding a bottleneck in a profiling session. On Linux, Cachegrind is great for this.
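If you want to try it, a typical session (the binary name here is hypothetical) is `valgrind --tool=cachegrind ./my_program`, which writes a `cachegrind.out.<pid>` file you can then inspect with `cg_annotate`; the per-line miss counts make access patterns like the two loops above easy to compare.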