Spatial and temporal locality

I understand the definitions of these terms, but I'm having trouble applying the concepts to actual code. For an exercise, we are asked to describe whether the following code exhibits spatial or temporal locality:

for (int i = 0; i < 10; i++) { printf("%d\n", some_array[i]); }

I feel this is spatial locality, because when one of the array's addresses is accessed, the memory cell at the next index will already be close at hand when the loop moves on. Is that the right way to look at it? What determines whether code exhibits temporal or spatial locality? More examples would be great.

+7
4 answers

This is a bit of a silly exercise, really. Code is not "temporal" or "spatial" in itself.

Temporal locality implies that you will access the same address several times, relatively close together in time. You don't do that here (except for the accesses to i, I suppose), so by elimination you can conclude that it must be spatial locality.

More precisely, you access some_array[0], then some_array[1], and so on. These are close to each other in the address space, so yes, the code can be said to rely on spatial locality.
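For contrast, here is a minimal sketch (my own illustration, not part of the original answer; the names sum and some_array are arbitrary) that shows both kinds of locality side by side:

#include <stdio.h>

int main(void) {
    int some_array[10] = {0};
    int sum = 0;

    for (int i = 0; i < 10; i++) {
        /* Spatial locality: consecutive iterations touch neighboring
           addresses: some_array[0], some_array[1], ... */
        /* Temporal locality: the same addresses (sum and i) are read and
           written on every iteration, close together in time. */
        sum += some_array[i];
    }

    printf("%d\n", sum);
    return 0;
}

The array walk is the spatial part; the repeated accesses to sum and i are the minor temporal part alluded to above.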

+11

In the context of hardware caches, where these concepts usually come up, the analysis is not usually done per individual memory address, so to speak. Locality is analyzed in terms of accesses to the blocks of memory that are transferred between the cache and main memory.

If you look at it that way, your code has both temporal and spatial locality. When your code reads some_array[0], if its address is not found in the cache, it is read from main memory and the entire block that contains it is copied into the cache, evicting another block according to some replacement policy (MRU, for example).

Then, when you access some_array[1] shortly afterwards, its block is already in the cache, so the read is faster. Note that you accessed the same block twice within a short period of time. Seen this way, you have both spatial and temporal locality.
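To make that block-level view concrete, here is a small sketch of my own (assuming a 64-byte cache line, which is typical but hardware-specific; the real size is not given in the question) that prints which block, relative to the start of the array, each access falls into:

#include <stdio.h>

#define LINE_SIZE 64  /* assumed cache-line size in bytes */

int main(void) {
    for (int i = 0; i < 10; i++) {
        /* Offset of some_array[i] from the start of the array, divided by
           the line size, gives the block that element lives in. */
        size_t block = (i * sizeof(int)) / LINE_SIZE;
        printf("some_array[%d] -> block %zu\n", i, block);
    }
    return 0;
}

With 4-byte ints and a 64-byte line, all ten elements map to block 0 (assuming the array starts at a line boundary), so after the first miss every later access hits the same cached block, which is exactly the temporal-locality-at-block-level argument above.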

Caches exploit spatial and temporal locality to provide faster access to memory. Whether your code can be changed to take better advantage of this is a different problem altogether. In any case, the compiler will do most of those optimizations for you, so you only need to worry about it after finding a bottleneck in a profiling session. In Linux environments, Cachegrind is great for this.

+5

This code has temporal locality in the instruction cache, because you execute the same instructions on every loop iteration (unless your optimizer unrolls the loop). It also has spatial locality in the data cache, because once you access element i of the array, you will soon access elements i+1, i+2, and so on. If the data cache line size is 16 bytes and your array holds 32-bit integers, then the data cache also loaded elements 1, 2, and 3 when you requested element 0 (assuming the array starts on a cache-line boundary).
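As a hedged illustration of that arithmetic (using the 16-byte line size assumed above; the loop bounds and names are my own), compare a stride-1 walk, which starts a new line only every fourth element, with a stride-4 walk, which starts a new line on every access:

#include <stdio.h>

#define ELEMS 64   /* arbitrary array size for the sketch */

int main(void) {
    int a[ELEMS];
    for (int i = 0; i < ELEMS; i++)
        a[i] = i;

    /* Stride 1: four 4-byte ints share one 16-byte line, so three out of
       every four accesses reuse a line that is already loaded. */
    long sum1 = 0;
    for (int i = 0; i < ELEMS; i++)
        sum1 += a[i];

    /* Stride 4: each access begins a new 16-byte line, so the three
       neighbours loaded along with each element are never used. */
    long sum4 = 0;
    for (int i = 0; i < ELEMS; i += 4)
        sum4 += a[i];

    printf("%ld %ld\n", sum1, sum4);
    return 0;
}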

+2

In the context of cache accesses, the code has only spatial locality, not temporal locality.

When data is loaded into the cache, an entire line/block is loaded, so subsequent accesses to the exact same memory location, as well as to addresses that belong to the same cached block, will have fast access times.

There are ways to optimize your code so that more of its reads are served from the cache rather than directly from main memory:
1. If you access all the neighboring memory addresses on the first pass, before that block leaves the cache, you are exploiting spatial locality (see the sketch after this list).
2. If you access the same memory location as many times as your program needs before its block is evicted from the cache, you are exploiting temporal locality.
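A common way to see point 1 in practice is traversing a 2D array in row-major versus column-major order. This is a sketch of my own (the sizes are arbitrary): C stores each row contiguously, so the row-wise loop uses every element of a fetched cache line before moving on, while the column-wise loop jumps a whole row's worth of bytes between accesses and keeps evicting lines before their neighbours are touched.

#include <stdio.h>

#define ROWS 1024
#define COLS 1024

static int grid[ROWS][COLS];  /* static, so it is zero-initialized */

int main(void) {
    long sum = 0;

    /* Row-major traversal: consecutive accesses are adjacent in memory,
       so each cache line that is fetched gets fully used. */
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            sum += grid[r][c];

    /* Column-major traversal: consecutive accesses are COLS * sizeof(int)
       bytes apart, so each fetched line contributes only one element
       before it is likely evicted. */
    for (int c = 0; c < COLS; c++)
        for (int r = 0; r < ROWS; r++)
            sum += grid[r][c];

    printf("%ld\n", sum);
    return 0;
}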

Examples such as matrix multiplication exhibit both temporal and spatial locality; a sketch of that follows below as well.
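As a rough sketch of why (my own addition, with an arbitrary size N and the common i-k-j loop order, not something taken from the answer): a[i][k] is reused across the whole inner loop, which is temporal locality, while b[k][j] and c[i][j] are walked sequentially along their rows, which is spatial locality.

#include <stddef.h>

#define N 256  /* arbitrary matrix size for the sketch */

/* c must be zero-initialized by the caller. */
void matmul_ikj(double a[N][N], double b[N][N], double c[N][N]) {
    for (size_t i = 0; i < N; i++)
        for (size_t k = 0; k < N; k++) {
            double aik = a[i][k];          /* reused N times: temporal locality */
            for (size_t j = 0; j < N; j++)
                c[i][j] += aik * b[k][j];  /* sequential row walks: spatial locality */
        }
}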

+1
