Caffeine does not implement LRU as its cache eviction policy. Instead, Caffeine uses a policy called TinyLFU. The Caffeine documentation includes an Efficiency page that discusses the rationale for choosing this design. Quoting that page:
TinyLfu relies on a frequency sketch to probabilistically estimate the historical usage of an entry.
Since Caffeine does not actually implement LRU, I don't think you can reliably expect it to exhibit strict LRU behavior when inspecting cache entries.
If you absolutely must have LRU behavior, then the JDK's standard LinkedHashMap is a good, easy choice. You will need to subclass it and override removeEldestEntry with logic that signals when the cache has grown larger than you want. If multithreaded use is required, you will need to wrap operations with appropriate synchronization.
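A minimal sketch of what that subclass might look like (the class name `LruCache` and the capacity field are illustrative, not part of any standard API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A bounded LRU cache built on LinkedHashMap.
// Passing accessOrder = true to the superclass constructor makes
// iteration order go from least- to most-recently accessed.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // default capacity/load factor, access order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry once we exceed capacity
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        LruCache<String, Integer> cache = new LruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // touch "a" so "b" becomes the eldest entry
        cache.put("c", 3); // evicts "b"
        System.out.println(cache.keySet()); // prints [a, c]
    }
}
```

For single-threaded use this is all you need; for concurrent use you could wrap it, e.g. `Collections.synchronizedMap(new LruCache<>(100))`, at the cost of coarse locking.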
Caffeine was heavily inspired by Guava Cache, which similarly provides concurrent access and approximate LRU behavior. A quick test of your code against the Guava cache shows similar results. I am not aware of any standard library that provides predictable, externally observable LRU results together with true concurrent access and without coarse-grained locking.
You might reconsider whether your requirement really demands strict, externally observable LRU results. By its nature, a cache is fast temporary storage for optimized lookups. I would not expect a program's behavior to change much depending on whether its cache implements strict LRU, an approximation of LRU, LFU, or some other eviction policy.
This earlier question also discusses LRU cache options in Java in detail:
How to implement LRU cache in Java?