What does the Terracotta server do when it is used as a backend for EHCache with Hibernate?


My DAL is implemented using Hibernate, and I want to use EHCache as my second-level cache with its distributed capabilities (for scalability and HA).
After seeing that EHCache provides distributed caching with Terracotta, my question is: what is the role of the Terracotta server instance? Does it also hold data, or does it only coordinate distribution between the partitioned parts of the cache?
My confusion comes mainly from this explanation regarding the TSA, which says that the server stores data, but I thought that maybe in my scenario the cache and the Terracotta server are effectively merged. Am I right? If the server really does store data, then why doesn't the bottleneck simply move from the database to the Terracotta server?

Update: Affe's answer covers the second part of my question, which was the important part, but in case someone arrives looking for the first part: the Terracotta server (the L2) must store all the data held in EHCache, so if you need a distributed cache (as opposed to a replicated one), the TC server must also contain all the cached objects.
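For reference, the distributed (clustered) setup described above is configured in `ehcache.xml` by pointing the cache manager at the Terracotta server array and marking individual caches as clustered. A minimal sketch follows; the cache name, sizes, and the `localhost:9510` URL (the default port of a local TSA) are assumptions for illustration:

```xml
<ehcache>
  <!-- Points this client JVM (the L1 tier) at the Terracotta server array (the L2 tier) -->
  <terracottaConfig url="localhost:9510"/>

  <cache name="com.example.MyEntity"
         maxElementsInMemory="1000"
         eternal="false"
         timeToLiveSeconds="300">
    <!-- Marks this cache as clustered: the full data set lives on the TSA,
         while this JVM keeps only a hot subset in local memory -->
    <terracotta/>
  </cache>
</ehcache>
```

This makes the question's point concrete: the local `maxElementsInMemory` bounds only the hot subset in each JVM, while the Terracotta server holds the complete cache contents.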

Thanks in advance,
Ephaya

java hibernate ehcache distributed-caching terracotta
2 answers

The idea is that it is still much faster to connect to the Terracotta cluster through the Terracotta driver and do what is essentially a map lookup than to obtain a database connection and execute a SQL statement. Even if the cluster becomes the application's choke point, overall throughput will still be significantly higher than with a choke point made of SQL connection acquisition plus SQL execution. Open connections and open cursors are big resource hogs in the database; an open socket to a Terracotta cluster is not!
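For context, wiring Hibernate's second-level cache to EHCache (so entity lookups hit the cache tier instead of issuing SQL) looks roughly like the following properties fragment. This is a sketch assuming Hibernate 3.3+ with the region factory shipped in Ehcache 2.x; the `ehcache.xml` resource name is an assumption:

```properties
hibernate.cache.use_second_level_cache=true
hibernate.cache.use_query_cache=true
hibernate.cache.region.factory_class=net.sf.ehcache.hibernate.EhCacheRegionFactory
# Optional: point Ehcache at a specific configuration file on the classpath
net.sf.ehcache.configurationResourceName=/ehcache.xml
```

Individual entities then opt in with a cache concurrency strategy, e.g. `@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)` on the mapped class.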


You can get an ehcache cluster without using Terracotta. They have documentation for doing this via RMI, JGroups, and JMS. We use JMS because we already have a significant JMS infrastructure for messaging. I don't know how well it will scale in the long run, but our current concern is just HA.
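As a sketch of the Terracotta-free, replicated approach mentioned above, here is the RMI-with-multicast variant from Ehcache 2.x (the answer's own setup uses JMS instead; the multicast address/port and cache name below are illustrative assumptions):

```xml
<ehcache>
  <!-- Peers discover each other via multicast on the assumed group/port -->
  <cacheManagerPeerProviderFactory
      class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
      properties="peerDiscovery=automatic,
                  multicastGroupAddress=230.0.0.1,
                  multicastGroupPort=4446,
                  timeToLive=1"/>

  <cache name="com.example.MyEntity"
         maxElementsInMemory="1000"
         eternal="false"
         timeToLiveSeconds="300">
    <!-- Pushes puts/updates/removes to every peer asynchronously -->
    <cacheEventListenerFactory
        class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
        properties="replicateAsynchronously=true"/>
  </cache>
</ehcache>
```

Note the design trade-off versus Terracotta: here every node holds a full copy of the cache (replication), whereas the TSA gives you one partitioned, distributed copy with small local hot sets.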

