Ehcache is not replicated in the Liferay cluster

I have the following setup:

1. Liferay cluster with 2 machines on AWS

2. Unicast cluster replication using JGroups over TCP

I have the following options in portal-ext.properties:

 # Setup hibernate
 net.sf.ehcache.configurationResourceName=/myehcache/hibernate-clustered.xml

 # Setup distributed ehcache
 ehcache.multi.vm.config.location=/myehcache/liferay-multi-vm-clustered.xml

 #
 # Clustering settings
 #
 cluster.link.enabled=true
 ehcache.cluster.link.replication.enabled=true
 cluster.link.channel.properties.control=tcp.xml
 cluster.link.channel.properties.transport.0=tcp.xml
 lucene.replicate.write=true

 # In order to make use of jgroups
 ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
 ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
 ehcache.cache.manager.peer.provider.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory
 net.sf.ehcache.configurationResourceName.peerProviderProperties=file=/myehcache/tcp.xml
 ehcache.multi.vm.config.location.peerProviderProperties=file=/myehcache/tcp.xml
 cluster.executor.debug.enabled=true
 ehcache.statistics.enabled=true

I cannot get cluster cache replication to work. Can someone point me in the right direction? If necessary, I can provide more detailed information. I also tried changing hibernate-clustered.xml and liferay-multi-vm-clustered.xml, but nothing worked.

2 answers

After spending several days reading countless blog posts, forum threads, and, of course, SO questions, I would like to summarize here how we finally managed to configure cache replication in a Liferay 6.2 cluster using unicast TCP on Amazon EC2.

JGroups configuration

Before configuring cache replication in Liferay, you should understand that Liferay relies on JGroups channels. Basically, JGroups lets instances discover and communicate with each other. By default (at least in Liferay), it uses multicast UDP for this. See the JGroups website for more details.

To enable unicast TCP, you must first get the JGroups TCP configuration file out of the jgroups.jar in the Liferay webapp (something like $LIFERAY_HOME/tomcat-7.0.42/webapps/ROOT/WEB-INF/lib/jgroups.jar). Extract this file to a location on the Liferay webapp's classpath, say $LIFERAY_HOME/tomcat-7.0.42/webapps/ROOT/WEB-INF/classes/custom_jgroups/tcp.xml. Make a note of this path.
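For instance, a minimal shell sketch of that extraction, assuming the Tomcat bundle layout above and that tcp.xml sits at the root of jgroups.jar (as it does in JGroups 3.x):

 cd $LIFERAY_HOME/tomcat-7.0.42/webapps/ROOT/WEB-INF
 mkdir -p classes/custom_jgroups
 # unzip -p writes the archive member to stdout so it can be redirected
 unzip -p lib/jgroups.jar tcp.xml > classes/custom_jgroups/tcp.xml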

For this configuration to work in a Liferay cluster, you just need to add the singleton_name="liferay" attribute to the TCP tag:

 <config xmlns="urn:org:jgroups"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.1.xsd">
     <TCP singleton_name="liferay"
          bind_port="7800"
          loopback="false"
          ...

You may have noticed that:

a. this configuration file does not specify a bind address to listen on, and

b. the initial cluster hosts must be provided through system properties.

For these, you need to edit $LIFERAY_HOME/tomcat-7.0.42/bin/setenv.sh and add the following JVM system properties:

 -Djava.net.preferIPv4Stack=true
 -Djgroups.bind_addr=192.168.0.1
 -Djgroups.tcpping.initial_hosts=192.168.0.1[7800],80.200.230.2[7800]

The bind address determines which network interface JGroups listens on (the port is set to 7800 in the TCP configuration file). The initial hosts property must list every instance of the cluster (for more details, see TCPPING and MERGE2 in the JGroups docs) together with its listening port. Remote instances can be referenced by hostname, private address, or public address.
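For reference, the TCPPING stanza in the extracted tcp.xml is roughly of this shape (exact values vary between JGroups versions); the ${name:default} syntax is how the jgroups.tcpping.initial_hosts system property gets picked up:

 <TCPPING timeout="3000"
          initial_hosts="${jgroups.tcpping.initial_hosts:localhost[7800],localhost[7801]}"
          port_range="1"
          num_initial_members="3"/>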

(Tip: if you are setting up a Liferay cluster on Amazon EC2, the local IP address and hostname of your instances will most likely change after each reboot. To work around this, you can replace the local address in setenv.sh with the output of the hostname command: `hostname`. Note the backticks, which make the shell substitute the command's output.)
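A sketch of the resulting setenv.sh lines (the remote address 80.200.230.2 is just the illustrative value from above):

 # Resolve the bind address at startup instead of hardcoding it
 JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true"
 JAVA_OPTS="$JAVA_OPTS -Djgroups.bind_addr=`hostname`"
 JAVA_OPTS="$JAVA_OPTS -Djgroups.tcpping.initial_hosts=`hostname`[7800],80.200.230.2[7800]"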

(Tip: if you use security groups on EC2, you must also open port 7800 to all instances in the same security group.)
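With the AWS CLI, such a rule can reference the security group itself, so every instance in the group can reach port 7800 on every other (sg-12345678 is a placeholder for your actual group id):

 # Allow JGroups TCP traffic between members of the same security group
 aws ec2 authorize-security-group-ingress \
     --group-id sg-12345678 \
     --protocol tcp --port 7800 \
     --source-group sg-12345678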

Liferay Configuration

JGroups replication is enabled in Liferay by adding the following properties to your portal-ext.properties:

 # Tells Liferay to enable Cluster Link. This sets up JGroups control and
 # transport channels (necessary for indexes and cache replication)
 cluster.link.enabled=true

 # This external address is used to determine which network interface must
 # be used. It typically points to the database shared between the instances.
 cluster.link.autodetect.address=shareddatabase.eu-west-1.rds.amazonaws.com:5432

Configuring JGroups for unicast TCP is just a matter of pointing at the correct file:

 # Configures JGroups control channel for unicast TCP
 cluster.link.channel.properties.control=/custom_jgroups/tcp.xml

 # Configures JGroups transport channel for unicast TCP
 cluster.link.channel.properties.transport.0=/custom_jgroups/tcp.xml

In the same file, Lucene index replication requires this single property:

 # Enable Lucene indexes replication through Cluster Link
 lucene.replicate.write=true

EhCache cache replication is more subtle. You must configure JGroups for both the Hibernate cache and Liferay's internal caches. To understand this configuration, you should know that as of Liferay 6.2, the default EhCache configuration files are already the "clustered" ones (do not set these properties):

 # Default hibernate cache configuration file
 net.sf.ehcache.configurationResourceName=/ehcache/hibernate-clustered.xml

 # Default internal cache configuration file
 ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml

These configuration files rely on EhCache factories that must be set to enable JGroups:

 # Enable EhCache caches replication through JGroups
 ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
 ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
 ehcache.cache.manager.peer.provider.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory
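To see where these factories plug in, here is an illustrative sketch of a cache entry as it appears in the clustered EhCache files; the ${...} placeholders are resolved against the portal properties above (the entity name and tuning values here are made up for illustration, not taken from the actual file):

 <cache name="com.example.model.SomeEntityImpl"
        maxElementsInMemory="10000" eternal="false" timeToIdleSeconds="600">
     <!-- Replicates cache events to the other nodes through JGroups -->
     <cacheEventListenerFactory
         class="${ehcache.cache.event.listener.factory}"/>
     <!-- Pre-loads the cache from a peer when the node starts -->
     <bootstrapCacheLoaderFactory
         class="${ehcache.bootstrap.cache.loader.factory}"/>
 </cache>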

The JGroups cache manager peer provider factory expects a file parameter containing the JGroups configuration. Point it at the unicast TCP configuration file:

 # Configure hibernate cache replication for unicast TCP
 net.sf.ehcache.configurationResourceName.peerProviderProperties=file=/custom_jgroups/tcp.xml

 # Configure internal caches replication for unicast TCP
 ehcache.multi.vm.config.location.peerProviderProperties=file=/custom_jgroups/tcp.xml

(Tip: if in doubt, refer to the property definitions and default values: https://docs.liferay.com/portal/6.2/propertiesdoc/portal.properties.html)

Debugging

Optionally, you can enable debug traces with:

 cluster.executor.debug.enabled=true 
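For even more detail, you can raise the log level of Liferay's clustering classes through a portal-log4j-ext.xml override, sketched below (the category name assumes the 6.2 package layout; the file goes under ROOT/WEB-INF/classes/META-INF):

 <?xml version="1.0"?>
 <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
 <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
     <!-- Assumed category: verbose tracing for Cluster Link internals -->
     <category name="com.liferay.portal.cluster">
         <priority value="DEBUG" />
     </category>
 </log4j:configuration>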

You can even tell Liferay to display on every page the name of the node that processed the request:

 web.server.display.node=true 

Finally, JGroups channels provide a diagnostics service that can be queried with the probe tool.
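A sketch of invoking it (the Probe class ships inside jgroups.jar; its discovery requests go over UDP multicast by default, so run it from a machine on the same network segment):

 # Query all JGroups stacks reachable through the diagnostics service
 java -cp jgroups.jar org.jgroups.tests.Probe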

Final note

Please keep in mind that this covers only index and cache replication. When configuring a Liferay cluster, you should also consider setting up:

  • Shared database (RDS on AWS),
  • Shared Document Library (S3 or RDS on AWS),
  • Session replication on Tomcat (see the sketch after this list),
  • And possibly more, depending on how you use Liferay.
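For the Tomcat session replication item, the bare minimum is sketched below; note that SimpleTcpCluster's default membership discovery uses multicast, which does not work on EC2, so a static membership configuration would be needed there:

 <!-- web.xml: mark sessions as replicable -->
 <distributable/>

 <!-- server.xml, inside <Engine> or <Host>: default all-to-all replication -->
 <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>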

I spent many hours getting Liferay 6.1.1 CE to work on AWS.

Here is my portal-ext.properties, with slight differences from yours:

 ##
 ## JDBC
 ##

 # Tomcat datasource
 jdbc.default.jndi.name=jdbc/LiferayPool

 ##
 ## Mail
 ##

 # Tomcat mail session
 mail.session.jndi.name=mail/MailSession

 ##
 ## Document Library Portlet
 ##

 # NFS shared folder
 dl.store.file.system.root.dir=/opt/document_library/

 ##
 ## Cluster Link
 ##

 # Cluster Link over JGroups TCP unicast
 cluster.link.enabled=true
 cluster.link.channel.properties.control=custom_cache/tcp.xml
 cluster.link.channel.properties.transport.0=custom_cache/tcp.xml

 # Any VPC internal IP useful to detect local eth interface
 cluster.link.autodetect.address=10.0.0.19:22

 ##
 ## Lucene Search
 ##

 # Lucene index replication over Cluster Link
 lucene.replicate.write=true

 ##
 ## Hibernate
 ##

 # Second Level cache distributed with Ehcache over JGroups TCP unicast
 net.sf.ehcache.configurationResourceName=/custom_cache/hibernate-clustered.xml
 net.sf.ehcache.configurationResourceName.peerProviderProperties=file=custom_cache/tcp.xml

 ##
 ## Ehcache
 ##

 # Liferay cache distributed with Ehcache over JGroups TCP unicast
 ehcache.multi.vm.config.location=/custom_cache/liferay-multi-vm-clustered.xml
 ehcache.multi.vm.config.location.peerProviderProperties=file=custom_cache/tcp.xml
 ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
 ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
 ehcache.cache.manager.peer.provider.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory

I added the following attribute

 singleton_name="custom_cache" 

into the TCP element of the JGroups configuration custom_cache/tcp.xml.

Finally, I added the following parameters to the Liferay startup script for node NODE_1:

 JAVA_OPTS="$JAVA_OPTS -Djgroups.bind_addr=NODE_1 -Djgroups.tcpping.initial_hosts=NODE_1[7800],NODE_2[7800] -Djava.net.preferIPv4Stack=true" 

and for NODE_2:

 JAVA_OPTS="$JAVA_OPTS -Djgroups.bind_addr=NODE_2 -Djgroups.tcpping.initial_hosts=NODE_1[7800],NODE_2[7800] -Djava.net.preferIPv4Stack=true" 

Hope this saves you some time.

