If Redis is already on the stack, why is Memcached still used alongside Redis?

Redis can do everything Memcached provides (LRU eviction, item expiration, and now clustering, in version 3.x+, currently in beta, or via tools like twemproxy). Performance is similar too. Moreover, Redis adds persistence, thanks to which you do not need to warm the cache after a server restart.

Links to some older answers that compare Redis and Memcached, some of which favor Redis as a replacement for Memcached (if it is already on the stack):

  • Memcached vs. Redis?

  • Is memcached a dinosaur compared to Redis?

  • Redis and Memcache or Redis Only?

Despite this, while studying the stacks of large web companies such as Instagram, Pinterest and Twitter, I found that they use both Memcached and Redis for different purposes, and do not use Redis for primary caching. The primary cache is still Memcached, and Redis is used for logical, data-structure-based caching.

As of 2014, why is Memcached still worth the pain of being added as an extra component to your stack, when you already have a Redis component that can do everything Memcached can do? What are the good points that incline architects/engineers to still include Memcached alongside an existing Redis?

Update:

For our platforms, we dropped Memcached entirely and use Redis for both plain and logical caching requirements. Highly performant, flexible and reliable.

Some examples of scenarios:

  • Listing all cached keys matching a specific pattern, and reading or deleting their values. Very easy in Redis, not feasible (at least not easily) in Memcached.
  • Keeping a payload larger than 1 MB is easy in Redis; in Memcached you need to tune the slab sizes, which has its own performance side effects.
  • Taking lightweight snapshots of the current cache contents is easy.
  • Redis Cluster is also about to be released, along with language drivers, so clustered deployment will be easy too.
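To illustrate the first scenario above, here is a minimal sketch of pattern-based key listing and deletion. A plain Python dict stands in for the Redis keyspace, and `fnmatch` plays the role of Redis glob matching; with redis-py against a real server you would iterate `SCAN` with the `MATCH` option rather than `KEYS`, to avoid blocking the server on large keyspaces. The function names are illustrative, not part of any client API.

```python
from fnmatch import fnmatch

def scan_match(store, pattern):
    """Return all keys matching a glob-style pattern (as SCAN ... MATCH does)."""
    return [k for k in store if fnmatch(k, pattern)]

def delete_match(store, pattern):
    """Delete every key matching the pattern; return how many were removed."""
    victims = scan_match(store, pattern)
    for k in victims:
        del store[k]
    return len(victims)
```

For example, with keys like `user:1:profile` and `session:9`, `delete_match(cache, "user:*")` invalidates every per-user entry in one call, something Memcached's protocol has no direct equivalent for.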
+69
caching memcached redis
May 12 '14 at 5:28
2 answers

The main use case I see for Memcached over Redis today is the superior memory efficiency you can get with plain HTML-fragment caching (or similar applications). If you need to store different fields of your objects in different Memcached keys, then Redis hashes will be more memory efficient; but when you have a large number of key → simple_string pairs, Memcached should be able to give you more items per megabyte.

Other things that are good points for memcached are:

  • It is a very simple piece of code, so if you just need the functionality it provides, it is a reasonable alternative, I guess, though I never used it in production.
  • It is multi-threaded, so if you need to scale in a single-box setup, it is a good pick, and you only need to talk to one instance.

I believe that Redis as a cache makes more and more sense as people move toward intelligent caching, or when they try to preserve the structure of the cached data via Redis data structures.

A comparison between the Redis and Memcached LRU.

Neither Memcached nor Redis performs true LRU eviction; both only approximate it.

Memcached eviction is per size class and depends on the implementation details of its slab allocator. For example, if you want to add an item that fits in a given size class, Memcached will try to remove expired or not-recently-used items of that same class, instead of making a global attempt to figure out which object, regardless of its size, is the best candidate.

Redis instead tries to pick a good candidate for eviction when the maxmemory limit is reached, looking at all objects regardless of size class, but it can only provide an approximately good object, not the best object, i.e. the one with the longest idle time.

The way Redis does this is by sampling a few objects and choosing the one that has been idle (not accessed) for the longest time. Starting with Redis 3.0 (currently in beta), the algorithm was improved to also keep a pool of good candidates across evictions, so the approximation improved. In the Redis documentation you can find a description and graphs with details on how it works.

Why Memcached has a better memory footprint than Redis for simple string -> string maps.

Redis is a more complex piece of software, so Redis values are stored in a way that resembles objects in a high-level programming language: they have an associated type, encoding, and reference counting for memory management. This keeps Redis' internals nice and manageable, but it has overhead compared to Memcached, which deals only with strings.

When Redis becomes more memory efficient

Redis can store small aggregate data types in a special memory-saving mode. For example, a small Redis hash representing an object is stored internally not as a hash table but as a single packed binary blob. So setting multiple fields of one object in a hash is more efficient than storing N separate keys in Memcached.

You can, in fact, store an object in Memcached as a single JSON (or binary-encoded) blob, but unlike Redis, this does not let you fetch or update individual fields independently.
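The contrast in the paragraph above can be sketched as follows. A dict stands in for the cache; the blob variant must fetch, decode, re-encode and rewrite the whole object to change one field, while the hash variant touches one field directly, the way `HSET`/`HGET` do against a Redis hash. Function names are illustrative.

```python
import json

def update_field_blob(store, key, field, value):
    """Memcached-style: the object is one opaque blob."""
    obj = json.loads(store[key])   # fetch + decode the whole object
    obj[field] = value
    store[key] = json.dumps(obj)   # re-encode + write it all back

def update_field_hash(store, key, field, value):
    """Redis-hash-style: touch a single field, like HSET."""
    store[key][field] = value
```

Beyond the round-trip cost, the blob approach also means two clients updating different fields of the same object can silently overwrite each other, whereas per-field hash writes do not conflict.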

The advantage of Redis in the context of smart caching.

Because of Redis data structures, the usual pattern used with Memcached, destroying objects when the cache is invalidated and recreating them from the database later, is a primitive way to use Redis.

For example, imagine you need to cache the latest N news items posted to Hacker News in order to populate the "Newest" section of the site. What you do with Redis is keep a list (capped at M elements) with the latest news inserted into it. If you use another store for your data and Redis as a cache, you populate both views (Redis and the DB) when a new item is posted. There is no cache invalidation.

However, the application can always have logic so that if the Redis list is empty, for example after a restart, the initial view can be recreated from the database.
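The capped-list pattern just described can be sketched with a plain Python list standing in for the Redis list; against a real server this would be an `LPUSH` followed by `LTRIM` to keep only the newest M items. The names `publish`, `newest`, and the cap `M` are illustrative.

```python
M = 5  # how many "newest" items to keep cached

def publish(cache_list, db_rows, item):
    """Write-through on publish: no invalidation step needed."""
    db_rows.append(item)        # write the primary store
    cache_list.insert(0, item)  # LPUSH: newest item first
    del cache_list[M:]          # LTRIM 0 M-1: cap the list at M items

def newest(cache_list, db_rows):
    """Serve the 'Newest' view; rebuild from the DB if the cache is cold."""
    if not cache_list:
        cache_list[:] = list(reversed(db_rows))[:M]
    return list(cache_list)
```

Note that reads never go to the database except on a cold cache, and writes keep both views consistent by construction, which is exactly why no invalidation logic appears anywhere.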

With intelligent caching you can use Redis in a more efficient way than Memcached, but not every problem fits this pattern. For example, HTML-fragment caching may not benefit from this technique.

+105
May 14 '14 at 9:07

Habits are hard to break :)

Seriously though, in my opinion there are two main reasons why Memcached is still in use:

  • Legacy - there are developers who are comfortable and familiar with Memcached, as well as applications that already rely on it. It also means it is a mature and well-tested technology.
  • Scaling - standard Memcached scales horizontally with ease, whereas Redis (before v3, which will be released soon) requires more work to do so (i.e. sharding).

However:

  • Re legacy - Redis is under active development, with richer functionality (data structures, commands, persistence, ...) and clients in every conceivable language - new applications are usually developed with it.
  • Re scaling - besides the upcoming v3, there are solutions that can make scaling much easier. For example, Redis Cloud offers seamless scaling without data loss or service interruption. Another popular approach to scaling/sharding Redis is twemproxy.
+13
May 12 '14 at 7:28
