Implementing an LRU cache in production code

I have C++ code where I need to implement cache replacement using the LRU technique.
So far, I know two methods for performing LRU replacements:

  • Keeping a timestamp of the last access for each cached item and comparing the timestamps when a replacement is needed (a rough sketch of this follows below).
  • Keeping a stack (or list) of cached items and moving an item to the top whenever it is accessed, so that the LRU candidate ends up at the bottom.
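
To make the first approach concrete, here is a rough sketch of what I mean (all names and types are illustrative only); note that eviction has to scan every entry for the smallest timestamp, so it is O(n):

    #include <cstdint>
    #include <string>
    #include <unordered_map>

    // Illustrative only: every entry remembers when it was last used,
    // and eviction scans for the entry with the smallest timestamp.
    struct TimestampedEntry {
        std::string value;
        std::uint64_t lastUsed;
    };

    class TimestampLruCache {
    public:
        explicit TimestampLruCache(std::size_t maxEntries) : mMaxEntries(maxEntries) {}

        void put(const std::string& key, const std::string& value) {
            if (mMap.size() >= mMaxEntries && mMap.find(key) == mMap.end())
                evictOldest();                    // O(n) scan over all entries
            mMap[key] = TimestampedEntry{value, ++mClock};
        }

        bool get(const std::string& key, std::string& valueOut) {
            auto it = mMap.find(key);
            if (it == mMap.end())
                return false;
            it->second.lastUsed = ++mClock;       // refresh on every access
            valueOut = it->second.value;
            return true;
        }

    private:
        void evictOldest() {
            auto oldest = mMap.begin();
            for (auto it = mMap.begin(); it != mMap.end(); ++it)
                if (it->second.lastUsed < oldest->second.lastUsed)
                    oldest = it;
            mMap.erase(oldest);
        }

        std::unordered_map<std::string, TimestampedEntry> mMap;
        std::uint64_t mClock = 0;
        std::size_t mMaxEntries;
    };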

So which one is better to use in production code?
Are there any other, better methods?

+28
c++ algorithm lru
Jan 13
5 answers

I recently implemented an LRU cache using a linked list combined with a hash map that indexes into it.

    /// Typedef for URL/Entry pair
    typedef std::pair< std::string, Entry > EntryPair;

    /// Typedef for Cache list
    typedef std::list< EntryPair > CacheList;

    /// Typedef for URL-indexed map into the CacheList
    typedef boost::unordered_map< std::string, CacheList::iterator > CacheMap;

    /// Cache LRU list
    CacheList mCacheList;

    /// Cache map into the list
    CacheMap mCacheMap;

The advantage is that all the important operations are O(1).

Insertion Algorithm:

    // create a new entry
    Entry iEntry( ... );

    // push it to the front of the list
    mCacheList.push_front( std::make_pair( aURL, iEntry ) );

    // add it to the cache map
    mCacheMap[ aURL ] = mCacheList.begin();

    // increase the count of entries
    mEntries++;

    // check if it is time to remove the last element
    if ( mEntries > mMaxEntries )
    {
        // erase the last cache list element from the map
        mCacheMap.erase( mCacheList.back().first );

        // erase it from the list
        mCacheList.pop_back();

        // decrease the count
        mEntries--;
    }
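
The answer above only shows insertion. For completeness, here is what a lookup could look like with the same members (this part is my own sketch, not from the original answer): on a hit, the node is spliced to the front of the list in O(1), which is what keeps the whole scheme constant time.

    // Hypothetical lookup using the same members as above: on a cache hit,
    // move the accessed node to the front of the list with std::list::splice.
    bool find( const std::string& aURL, Entry& aResult )
    {
        CacheMap::iterator it = mCacheMap.find( aURL );
        if ( it == mCacheMap.end() )
            return false;

        // splice is O(1) and does not invalidate iterators, so the
        // iterator stored in the map stays valid and needs no update
        mCacheList.splice( mCacheList.begin(), mCacheList, it->second );

        aResult = it->second->second;
        return true;
    }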
+39
Jan 13 '10 at 14:50

Here is a very simple LRU cache implementation

https://github.com/lamerman/cpp-lru-cache

It is easy to use and to understand how it works. The total code size is about 50 lines.
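
From memory, usage looks roughly like the snippet below; the header name and exact method names should be verified against the current version of the repository.

    #include "lrucache.hpp"   // header name as used in the repository (verify)
    #include <string>

    int main()
    {
        // a cache that keeps at most 3 entries
        cache::lru_cache<std::string, std::string> lru( 3 );

        lru.put( "one", "1" );
        lru.put( "two", "2" );

        if ( lru.exists( "one" ) )
        {
            std::string value = lru.get( "one" );   // also marks "one" as recently used
        }
        return 0;
    }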

+8
Jun 21 '13 at 6:45

For simplicity, you might want to consider using a Boost MultiIndex map. If we separate the key from the data, we can support multiple sets of keys on the same data.

From [ http://old.nabble.com/realization-of-Last-Recently-Used-cache-with-boost%3A%3Amulti_index-td22326432.html ]:

"... use two indexes: 1) hash the key value for searching 2) sequentially to track the last recently used elements (get function put item as the last element in the sequence). If we need to delete some elements from the cache, we can delete them from the beginning of the sequence).

Note that the "project" operator allows the programmer to efficiently navigate between different indexes of the same multi_index_container.
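
A minimal sketch of that idea (my own illustration of the approach described in the thread, not code taken from it; all names are made up):

    #include <cstddef>
    #include <string>
    #include <boost/multi_index_container.hpp>
    #include <boost/multi_index/sequenced_index.hpp>
    #include <boost/multi_index/hashed_index.hpp>
    #include <boost/multi_index/member.hpp>

    namespace bmi = boost::multi_index;

    struct CacheEntry
    {
        std::string key;
        std::string value;
        CacheEntry( const std::string& k, const std::string& v ) : key( k ), value( v ) {}
    };

    // Index 0: sequenced -- the back holds the most recently used entry,
    //          the front holds the eviction candidate (as in the quote above).
    // Index 1: hashed on the key -- O(1) lookup.
    typedef bmi::multi_index_container<
        CacheEntry,
        bmi::indexed_by<
            bmi::sequenced<>,
            bmi::hashed_unique<
                bmi::member< CacheEntry, std::string, &CacheEntry::key > >
        >
    > LruContainer;

    // On a hit, project the hashed iterator into the sequenced index and
    // relocate the entry to the end, marking it as most recently used.
    bool get( LruContainer& cache, const std::string& key, std::string& valueOut )
    {
        LruContainer::nth_index<1>::type::iterator hit = cache.get<1>().find( key );
        if ( hit == cache.get<1>().end() )
            return false;

        valueOut = hit->value;
        cache.get<0>().relocate( cache.get<0>().end(), cache.project<0>( hit ) );
        return true;
    }

    void put( LruContainer& cache, std::size_t maxEntries,
              const std::string& key, const std::string& value )
    {
        std::pair<LruContainer::nth_index<0>::type::iterator, bool> result =
            cache.get<0>().push_back( CacheEntry( key, value ) );

        if ( !result.second )
        {
            // the key already exists: update its value and refresh its position
            cache.get<0>().replace( result.first, CacheEntry( key, value ) );
            cache.get<0>().relocate( cache.get<0>().end(), result.first );
        }

        while ( cache.size() > maxEntries )
            cache.get<0>().pop_front();   // evict the least recently used entry
    }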

+5
Jul 20 '10 at 2:36

In our production environment we use a C++ doubly linked list that is similar to the Linux kernel linked list. The beauty of it is that an object can be added to as many linked lists as needed, and list operations are quick and easy.
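
For illustration, a minimal sketch in the spirit of the kernel's list_head (a simplified example, not the actual production code): because the links live inside the object itself, one object can sit on several lists at once, for example an LRU list and a hash bucket chain.

    // A minimal doubly linked list node in the spirit of the Linux kernel's
    // list_head (simplified illustration).
    struct ListHead
    {
        ListHead* prev;
        ListHead* next;
        ListHead() : prev( this ), next( this ) {}   // an empty list points to itself
    };

    inline void listAddFront( ListHead& head, ListHead& node )
    {
        node.prev = &head;
        node.next = head.next;
        head.next->prev = &node;
        head.next = &node;
    }

    inline void listRemove( ListHead& node )
    {
        node.prev->next = node.next;
        node.next->prev = node.prev;
        node.prev = node.next = &node;
    }

    // Because the links are embedded in the object, the same object can be
    // on several lists at once, e.g. an LRU list and a hash bucket chain.
    // (Getting from a hook back to the containing object is typically done
    // with an offsetof-based container_of macro, omitted here.)
    struct CacheObject
    {
        ListHead lruHook;
        ListHead hashHook;
        // ... payload ...
    };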

+2
Jan 13


