I had a similar setup two years ago.

I decided to use Amazon VPC. By default my project ran two RabbitMQ instances that were always up and configured as a cluster (I called them master nodes). The RabbitMQ cluster sat behind an internal Amazon load balancer.
I created an AMI with RabbitMQ and the management plugin configured (a "master AMI"), and then I set up the autoscaling rules: when an autoscaling alarm fires, a new master AMI instance is launched. On first boot, this instance executes a script.
The script calls the HTTP API http://internal-myloadbalamcer-xxx.com:15672/api/nodes to discover the cluster nodes, then selects one and joins the new instance to the cluster.
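A minimal sketch of that first-boot step, assuming the JSON shape returned by GET /api/nodes on the RabbitMQ management API (the sample node names are placeholders, and the rabbitmqctl join sequence is the standard one, not necessarily the author's exact script):

```python
import json

# Sample payload in the shape returned by GET /api/nodes on the
# RabbitMQ management API (trimmed to the fields used here).
SAMPLE_NODES_RESPONSE = json.dumps([
    {"name": "rabbit@master-1", "running": True},
    {"name": "rabbit@master-2", "running": True},
])

def pick_cluster_node(api_body: str) -> str:
    """Pick the first running node to join; fail if none is available."""
    nodes = json.loads(api_body)
    running = [n["name"] for n in nodes if n.get("running")]
    if not running:
        raise RuntimeError("no running RabbitMQ node found behind the load balancer")
    return running[0]

def join_commands(target_node: str) -> list:
    """The rabbitmqctl commands a fresh instance runs to join the cluster."""
    return [
        "rabbitmqctl stop_app",
        "rabbitmqctl join_cluster " + target_node,
        "rabbitmqctl start_app",
    ]

if __name__ == "__main__":
    node = pick_cluster_node(SAMPLE_NODES_RESPONSE)
    for cmd in join_commands(node):
        print(cmd)
```

In a real bootstrap script the JSON would come from an HTTP call to the load balancer and the commands would be executed, not printed.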
As the HA policy, I decided to use this:

rabbitmqctl set_policy ha-two "^two\." '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'
Well, joining the cluster is "pretty" easy; the problem is removing a node from it. You cannot simply delete a node on a scale-in event, because its queues may still hold messages that must be consumed.
I decided to periodically run a script on the two master-node instances that:
- checks the number of messages via the HTTP API http://node:15672/api/queues
- if the total message count across the queues is zero, removes the instance from the load balancer and then from the RabbitMQ cluster.
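The drain check in that periodic script can be sketched like this, assuming the JSON shape returned by GET /api/queues (the sample queue names are placeholders):

```python
import json

# Sample payload in the shape returned by GET /api/queues on the
# RabbitMQ management API (trimmed to the fields used here).
SAMPLE_QUEUES_RESPONSE = json.dumps([
    {"name": "two.orders", "messages": 0},
    {"name": "two.payments", "messages": 0},
])

def total_messages(api_body: str) -> int:
    """Sum the message backlog across all queues on the node."""
    return sum(q.get("messages", 0) for q in json.loads(api_body))

def safe_to_remove(api_body: str) -> bool:
    """The node may leave the cluster only once every queue is drained."""
    return total_messages(api_body) == 0
```

Only when safe_to_remove returns True would the script deregister the instance from the load balancer and run rabbitmqctl forget_cluster_node from a surviving node.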
This is basically what I did, hope this helps.
[EDIT]
I edited the answer because there is now a plugin that can help: take a look at https://github.com/rabbitmq/rabbitmq-autocluster. The plugin has been moved to the official RabbitMQ repository and can easily solve this kind of problem.
Gabriele