Preventing query blocking by serving cached data while refreshing the cache automatically
We can easily create a Rails cache entry and set an expiration time like this:

```ruby
Rails.cache.fetch(cache_key, expires_in: 1.minute) do
  fetch_data_from_mongoDB_with_complex_query
end
```
However, when a new request arrives after the entry has expired, that request blocks while the cache is rebuilt. My question is: how can I avoid this situation? Essentially, I want to serve the previous cached value to the client while Rails regenerates the cache.

As shown in the expected-behavior diagram, the second request should receive cache 1 rather than cache 2, even though Rails is building cache 2 at that moment. That way the user never has to wait while a new cache is created. So, how can I refresh all caches automatically, without a user request having to trigger the rebuild?
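The behavior I am describing can be sketched in plain Ruby (the `StaleWhileRevalidateCache` class below is hypothetical, not a Rails API): once an entry expires, readers keep receiving the old value while a single background thread recomputes it. Rails' own `Rails.cache.fetch` has a related `race_condition_ttl:` option that serves the slightly stale value to other requests while one process regenerates the entry, which may be close enough in practice.

```ruby
require "monitor"

# Minimal sketch (hypothetical class, not part of Rails) of serving a stale
# cache entry while one background thread refreshes it.
class StaleWhileRevalidateCache
  Entry = Struct.new(:value, :expires_at, :refreshing)

  def initialize
    @entries = {}
    @lock = Monitor.new
  end

  def fetch(key, expires_in:, &block)
    @lock.synchronize do
      entry = @entries[key]
      if entry.nil?
        # The very first request has no stale value to fall back on,
        # so it must block while the data is computed.
        value = block.call
        @entries[key] = Entry.new(value, Time.now + expires_in, false)
        return value
      end
      if Time.now >= entry.expires_at && !entry.refreshing
        entry.refreshing = true
        # Recompute in a separate thread, so no reader ever waits on it.
        Thread.new do
          value = block.call
          @lock.synchronize do
            entry.value = value
            entry.expires_at = Time.now + expires_in
            entry.refreshing = false
          end
        end
      end
      entry.value # serve the current (possibly stale) value immediately
    end
  end
end
```

A request handler could then call something like `cache.fetch("flights", expires_in: 1.hour) { fetch_data_from_mongoDB_with_complex_query }` and only the very first request would ever block.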
Expected Behavior

Cache snippet:

```ruby
cache_key = "#{__callee__}"
Rails.cache.fetch(cache_key, expires_in: 1.hour) do
  all.order_by(updated_at: -1).limit(max_rtn_count)
end
```
Update
How can I get all cached keys in a single command?

The cache keys are generated from combinations of `start_date`, `end_date`, `depature_at`, and `arrive_at`, so I cannot invalidate all of them manually. How can I enumerate all cache keys and then refresh them in a Rake task?
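One workaround, sketched below under the assumption that the cache store cannot enumerate its keys (true of the generic `Rails.cache` API and stores such as memcached), is to record every generated key in a registry of your own; a Rake task can then walk the registry and rebuild each entry without waiting for a user request. The `CacheKeyRegistry` class and parameter names are illustrative, not an existing API.

```ruby
require "set"

# Hypothetical registry: remember every cache key we generate so a Rake
# task can later enumerate and re-warm them.
class CacheKeyRegistry
  def initialize
    @keys = Set.new
  end

  # Build a deterministic key from the query parameters and remember it.
  # Parameters are sorted by name so the same combination always yields
  # the same key, regardless of argument order.
  def build_key(**params)
    key = params.sort.map { |name, value| "#{name}=#{value}" }.join("&")
    @keys << key
    key
  end

  def each_key(&block)
    @keys.each(&block)
  end
end
```

A Rake task could then iterate the registry with something like `registry.each_key { |key| Rails.cache.write(key, recompute_for(key), expires_in: 1.hour) }`, where `recompute_for` stands in for whatever query originally produced the entry.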