Using Ehcache in front of memcached

We have a web application that loads a User object from a database. This is a large application with thousands of concurrent users, so we are considering ways to cache User objects to minimize database load.

We are currently using Ehcache, but we are looking at memcached to lower the application's memory requirements and make it more scalable.

The problem we are running into with memcached is CPU usage, which comes from serializing the User instance on every cache operation. We are looking at ways to speed up serialization, but we are also considering keeping a smaller Ehcache cache backed by memcached.

Does anyone have experience running Ehcache backed by memcached (i.e. look in Ehcache first; if the User is not there, look in memcached; and only if it is missing there too, hit the database, roughly as sketched below)?

Any flaws in this approach?
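For illustration, this is roughly the lookup order we have in mind. UserDao, the cache and key names, and the wiring of the memcached client are hypothetical placeholders, not our actual code, and User is assumed to be Serializable:

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.Element;
    import net.spy.memcached.MemcachedClient;

    public class TieredUserLoader {

        private final Cache localCache;            // in-process Ehcache
        private final MemcachedClient memcached;   // shared memcached tier
        private final UserDao userDao;             // hypothetical DAO for the database

        public TieredUserLoader(Cache localCache, MemcachedClient memcached, UserDao userDao) {
            this.localCache = localCache;
            this.memcached = memcached;
            this.userDao = userDao;
        }

        public User loadUser(String userId) {
            // 1. In-process Ehcache first
            Element element = localCache.get(userId);
            if (element != null) {
                return (User) element.getObjectValue();
            }

            // 2. Then memcached
            User user = (User) memcached.get("user:" + userId);

            // 3. Finally the database
            if (user == null) {
                user = userDao.findById(userId);
                if (user == null) {
                    return null;                               // not found anywhere
                }
                memcached.set("user:" + userId, 3600, user);   // TTL in seconds
            }

            // Populate the local cache on the way back up
            localCache.put(new Element(userId, user));
            return user;
        }
    }

    // Hypothetical domain types, only here to make the sketch self-contained.
    class User implements java.io.Serializable { }
    interface UserDao { User findById(String id); }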

+6
java java-ee caching memcached ehcache
6 answers

If you want to move away from Ehcache, you could consider Infinispan, which now includes memcached integration. It is a bit more work than Ehcache, but not much.

Starting with version 4.1, the Infinispan distribution contains a server module that implements the memcached text protocol. This allows memcached clients to talk to one or more Infinispan-backed memcached servers. These servers can either run standalone, just like memcached, where each server acts independently and does not communicate with the rest, or they can be clustered, where the servers replicate or distribute their contents to the other Infinispan-backed memcached servers, thereby providing clients with failover capabilities.
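For example, once an Infinispan node exposes the memcached text protocol, a standard memcached client such as spymemcached can talk to it just as it would to plain memcached; the host and port below are assumptions:

    import java.net.InetSocketAddress;
    import net.spy.memcached.MemcachedClient;

    public class InfinispanMemcachedDemo {
        public static void main(String[] args) throws Exception {
            // Connect to an Infinispan server speaking the memcached
            // text protocol (assumed to be listening on localhost:11211).
            MemcachedClient client =
                    new MemcachedClient(new InetSocketAddress("localhost", 11211));

            // Same API calls as against a plain memcached server.
            client.set("user:42", 3600, "serialized-user-value");
            Object cached = client.get("user:42");
            System.out.println("cached = " + cached);

            client.shutdown();
        }
    }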

+2

It makes sense to do what you propose. We ran into the same problem with memcached: the overhead of serializing objects back and forth was too high for a high-volume application. Having a local cache reduces the load on the application side, while memcached reduces the load on the database side. The disadvantage is the added complexity of having two cache levels and of keeping them consistent. I would try to minimize the places where you need it.
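For the consistency part, the usual pattern is to write through to memcached and evict the local copy on every update; a rough sketch with illustrative class and field names:

    import net.sf.ehcache.Cache;
    import net.spy.memcached.MemcachedClient;

    public class TwoLevelUserCache {

        private final Cache localCache;           // small, short-TTL Ehcache
        private final MemcachedClient memcached;  // shared, larger memcached tier

        public TwoLevelUserCache(Cache localCache, MemcachedClient memcached) {
            this.localCache = localCache;
            this.memcached = memcached;
        }

        // On update: write through to memcached and drop the local copy so the
        // next read on this node picks up the fresh value. Note that local copies
        // on *other* nodes only expire via their own TTL unless you add some
        // broadcast invalidation on top.
        public void userUpdated(String userId, Object user) {
            memcached.set("user:" + userId, 3600, user);   // TTL in seconds
            localCache.remove(userId);
        }

        // On delete: evict from both layers.
        public void userDeleted(String userId) {
            memcached.delete("user:" + userId);
            localCache.remove(userId);
        }
    }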

+1

Infinispan can store objects as live instances and so minimize serialization costs, and instead of replicating the data on every node it can distribute it to make better use of your memory; you can also cap the number of entries kept in memory. Alternatively, you can simply send invalidation messages to the other nodes when a value is updated, instead of shipping serialized values around.
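As a rough illustration, a distributed cache with a bounded number of in-memory entries could be configured programmatically along these lines (this follows the Infinispan 5.x configuration API; method names vary between versions, so treat it as a sketch):

    import org.infinispan.Cache;
    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.configuration.global.GlobalConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;

    public class InfinispanUserCacheDemo {
        public static void main(String[] args) {
            // Illustrative only: names follow the Infinispan 5.x
            // programmatic configuration API.
            GlobalConfigurationBuilder global = new GlobalConfigurationBuilder();
            global.transport().defaultTransport();          // enable clustering

            ConfigurationBuilder cfg = new ConfigurationBuilder();
            cfg.clustering().cacheMode(CacheMode.DIST_SYNC) // distribute, don't replicate everywhere
               .hash().numOwners(2);                        // keep 2 copies of each entry
            cfg.eviction().maxEntries(10000);               // cap in-memory entries

            DefaultCacheManager manager =
                    new DefaultCacheManager(global.build(), cfg.build());

            Cache<String, Object> userCache = manager.getCache();
            userCache.put("user:42", new Object());         // stored as an instance on the local node
            manager.stop();
        }
    }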

In addition, when it does still need to serialize, it uses a very efficient marshaller instead of Java serialization, and starting with version 5 you can plug in your own externalizers to tweak the wire format of certain types and give it an extra push (not needed at all, but nice to have).

In case you are looking at memcached for other reasons, remember that Infinispan also "speaks" the memcached text protocol, so if you have other clients you can still integrate with them.

+1

You could simply override net.sf.ehcache.Cache.createDiskStore():

    new Cache(..) {
        protected Store createDiskStore() {
            if (isDiskStore()) {
                // default: return DiskStore.create(this, diskStorePath);
                MemcachedStore store = new MemcachedStore(..);
                getCacheConfiguration().addConfigurationListener(store);
                return store;
            } else {
                return null;
            }
        }
    }

MemcachedStore would be a regular implementation of net.sf.ehcache.store.Store that you would have to write yourself. That is not trivial, but then again, using DiskStore as a starting point should not make it too complicated.
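To give an idea of its shape, here is a hypothetical skeleton that simply delegates to a memcached client; the real net.sf.ehcache.store.Store interface has many more methods than shown here, and their exact signatures depend on your Ehcache version:

    import net.sf.ehcache.Element;
    import net.spy.memcached.MemcachedClient;

    // Hypothetical sketch: in real code this class would declare
    // "implements net.sf.ehcache.store.Store" and fill in every method
    // of that interface for your Ehcache version.
    public class MemcachedStore {

        private final MemcachedClient memcached;
        private final int ttlSeconds;

        public MemcachedStore(MemcachedClient memcached, int ttlSeconds) {
            this.memcached = memcached;
            this.ttlSeconds = ttlSeconds;
        }

        public void put(Element element) {
            memcached.set(String.valueOf(element.getObjectKey()), ttlSeconds,
                    element.getObjectValue());
        }

        public Element get(Object key) {
            Object value = memcached.get(String.valueOf(key));
            return value == null ? null : new Element(key, value);
        }

        public Element remove(Object key) {
            Element previous = get(key);
            memcached.delete(String.valueOf(key));
            return previous;
        }

        // ... plus dispose(), getSize(), containsKey(), flush(), etc.
    }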

0

You cannot replace the DiskStore in Ehcache because it is final. What you can do is implement a new OffHeapStore and plug it in that way; that is how BigMemory works. There is also an Apache project called DirectMemory that does the same thing.

See my post here for more details:

http://forums.terracotta.org/forums/posts/list/0/8833.page#40635

0

This article describes how to put an in-process cache in front of a distributed cache in a Spring application by defining your own MultiTieredCacheManager and MultiTieredCache:

Layered caching - using an in-process cache in front of a distributed cache
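The article's own code is not reproduced here, but the general idea of such a MultiTieredCache looks roughly like this (a sketch against the Spring 3.1 org.springframework.cache.Cache interface, which later Spring versions extend with further methods; the class below is illustrative, not the article's actual implementation):

    import org.springframework.cache.Cache;

    public class MultiTieredCache implements Cache {

        private final String name;
        private final Cache level1;   // in-process cache, e.g. Ehcache
        private final Cache level2;   // distributed cache, e.g. memcached-backed

        public MultiTieredCache(String name, Cache level1, Cache level2) {
            this.name = name;
            this.level1 = level1;
            this.level2 = level2;
        }

        public String getName() {
            return name;
        }

        public Object getNativeCache() {
            return this;
        }

        public ValueWrapper get(Object key) {
            // Check the in-process tier first, then the distributed tier.
            ValueWrapper value = level1.get(key);
            if (value == null) {
                value = level2.get(key);
                if (value != null) {
                    level1.put(key, value.get());   // promote into the local tier
                }
            }
            return value;
        }

        public void put(Object key, Object value) {
            level2.put(key, value);
            level1.put(key, value);
        }

        public void evict(Object key) {
            level2.evict(key);
            level1.evict(key);
        }

        public void clear() {
            level2.clear();
            level1.clear();
        }
    }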

0
