Changing the hostname causes RabbitMQ to fail on startup on Kubernetes

I am trying to run RabbitMQ on Kubernetes on AWS, using the official RabbitMQ docker container. Each time the pod is rescheduled, the rabbitmq container gets a new hostname. I set up a Service (of type LoadBalancer) for the pod with a resolvable DNS name.
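A minimal sketch of such a Service, where the name, label and ports are illustrative placeholders rather than my exact manifest:

  apiVersion: v1
  kind: Service
  metadata:
    name: rabbitmq            # placeholder name
  spec:
    type: LoadBalancer
    selector:
      app: rabbitmq           # placeholder label on the RabbitMQ pod
    ports:
    - name: amqp
      port: 5672
      targetPort: 5672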

But when I use EBS to make the rabbit message/queue configuration persistent between restarts, it breaks with:

  exception exit: {{failed_to_cluster_with,
      ['rabbitmq@rabbitmq-deployment-2901855891-nord3'],
      "Mnesia could not connect to any nodes."},
      {rabbit,start,[normal,[]]}}
    in function application_master:init/4 (application_master.erl, line 134)

rabbitmq-deployment-2901855891-nord3 is the hostname of the previous rabbitmq container. It looks as though Mnesia has kept the old hostname :-/

The container's startup output looks like this:

  Starting broker...
  =INFO REPORT==== 25-Apr-2016::12:42:42 ===
  node           : rabbitmq@rabbitmq-deployment-2770204827-cboj8
  home dir       : /var/lib/rabbitmq
  config file(s) : /etc/rabbitmq/rabbitmq.config
  cookie hash    : XXXXXXXXXXXXXXXX
  log            : tty
  sasl log       : tty
  database dir   : /var/lib/rabbitmq/mnesia/rabbitmq

The only part of the node name I can set is the first part, via the RABBITMQ_NODENAME environment variable.

Setting RABBITMQ_NODENAME to a resolvable DNS name fails with:

Can't set short node name!\nPlease check your configuration\n"

Setting RABBITMQ_USE_LONGNAME to true fails with:

Can't set long node name!\nPlease check your configuration\n"
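For context, this is roughly how those variables end up on the container in the Deployment's pod template; it is a sketch only, and the image tag and the DNS name are placeholders for the values I actually used:

  containers:
  - name: rabbitmq
    image: rabbitmq:3.6                    # placeholder tag
    env:
    - name: RABBITMQ_USE_LONGNAME
      value: "true"
    - name: RABBITMQ_NODENAME
      value: rabbit@rabbitmq.example.com   # placeholder for the resolvable DNS name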

Update:

  • Setting RABBITMQ_NODENAME to rabbitmq@localhost works, but it rules out clustering multiple instances.

      Starting broker...
      =INFO REPORT==== 26-Apr-2016::11:53:19 ===
      node           : rabbitmq@localhost
      home dir       : /var/lib/rabbitmq
      config file(s) : /etc/rabbitmq/rabbitmq.config
      cookie hash    : 9WtXr5XgK4KXE/soTc6Lag==
      log            : tty
      sasl log       : tty
      database dir   : /var/lib/rabbitmq/mnesia/rabbitmq@localhost
  • Setting RABBITMQ_NODENAME to the service name, in this case rabbitmq-service, i.e. rabbitmq@rabbitmq-service, also works, because Kubernetes service names are resolvable via the internal DNS (a sketch of this variant follows after this list).

      Starting broker...
      =INFO REPORT==== 26-Apr-2016::11:53:19 ===
      node           : rabbitmq@rabbitmq-service
      home dir       : /var/lib/rabbitmq
      config file(s) : /etc/rabbitmq/rabbitmq.config
      cookie hash    : 9WtXr5XgK4KXE/soTc6Lag==
      log            : tty
      sasl log       : tty
      database dir   : /var/lib/rabbitmq/mnesia/rabbitmq@rabbitmq-service
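A sketch of this second variant, assuming a Service named rabbitmq-service selects the RabbitMQ pod so that kube-dns resolves the part after the @:

      env:
      - name: RABBITMQ_NODENAME
        # rabbitmq-service must exist as a Service in the same namespace;
        # its DNS entry is what makes this node name resolvable
        value: rabbitmq@rabbitmq-service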

Is this the right approach? Will I still be able to cluster multiple instances if they all end up with the same node name?

+7
amazon-ec2 dns rabbitmq hostname kubernetes
2 answers

The idea is to use a separate “service” and “deployment” for each node that you want to create.

As you said, you need to set a custom NODENAME for each one, i.e.:

 RABBITMQ_NODENAME=rabbit@rabbitmq-1 

Also, rabbitmq-1, rabbitmq-2 and rabbitmq-3 must be resolvable from each node. You can use kube-dns for that. /etc/resolv.conf will look like:

 search rmq.svc.cluster.local 

and /etc/hosts should contain:

 127.0.0.1 rabbitmq-1 # or rabbitmq-2 on node 2... 

The services are there to create a stable network identity for each node:

 rabbitmq-1.svc.cluster.local
 rabbitmq-2.svc.cluster.local
 rabbitmq-3.svc.cluster.local
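A hedged sketch of one such per-node Service, assuming a namespace called rmq and a distinct label on each node's pod (these names are assumptions, not part of the original answer):

  apiVersion: v1
  kind: Service
  metadata:
    name: rabbitmq-1
    namespace: rmq
  spec:
    selector:
      app: rabbitmq-1        # label carried only by node 1's pod
    ports:
    - name: amqp
      port: 5672
    - name: clustering       # Erlang distribution port used for clustering
      port: 25672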

Separate deployment resources allow you to attach a different volume to each node.
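To make the per-node deployment concrete, a minimal sketch under the same assumptions (labels, image tag and the EBS volume ID are placeholders):

  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: rabbitmq-1
    namespace: rmq
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: rabbitmq-1            # matched by the rabbitmq-1 Service above
      spec:
        containers:
        - name: rabbitmq
          image: rabbitmq:3.6        # placeholder tag
          env:
          - name: RABBITMQ_NODENAME
            value: rabbit@rabbitmq-1
          volumeMounts:
          - name: rabbitmq-1-data
            mountPath: /var/lib/rabbitmq
        volumes:
        - name: rabbitmq-1-data
          awsElasticBlockStore:
            volumeID: vol-xxxxxxxx   # placeholder: a dedicated EBS volume for node 1
            fsType: ext4

Each additional node (rabbitmq-2, rabbitmq-3, ...) gets its own copy of this Service/Deployment pair with the name, label, NODENAME and volume changed.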

I am working on a deployment tool to simplify these steps. I recorded a demo of how I scale and deploy rabbitmq from 1 to 3 nodes on Kubernetes: https://asciinema.org/a/2ktj7kr2d2m3w25xrpz7mjkbu?speed=1.5

More generally, the complexity of deploying a clustered application is addressed by the "PetSet" proposal: https://github.com/kubernetes/kubernetes/pull/18016

+3

In addition to @ant31's answer:

Kubernetes now allows you to set the hostname, e.g. in yaml:

 template:
   metadata:
     annotations:
       "pod.beta.kubernetes.io/hostname": rabbit-rc1

See https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns, section "A Records and hostname Based on Pod Annotations - A Beta Feature in Kubernetes v1.2".
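As far as I understand the linked documentation, the hostname annotation is usually combined with a subdomain annotation and a headless Service of the same name, so that kube-dns publishes an A record for the pod. A hedged sketch, with all names being placeholders:

  # Headless Service whose name matches the subdomain annotation
  apiVersion: v1
  kind: Service
  metadata:
    name: rabbit
  spec:
    clusterIP: None
    selector:
      app: rabbit-rc1
    ports:
    - port: 5672

  # In the controller's pod template:
  template:
    metadata:
      labels:
        app: rabbit-rc1
      annotations:
        "pod.beta.kubernetes.io/hostname": rabbit-rc1
        "pod.beta.kubernetes.io/subdomain": rabbit

  # resulting record: rabbit-rc1.rabbit.<namespace>.svc.cluster.local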

The whole configuration seems to survive repeated restarts and redeployments. I have not set up a cluster yet, but I am going to follow the MongoDB tutorial, see https://www.mongodb.com/blog/post/running-mongodb-as-a-microservice-with-docker-and-kubernetes

The approach should be almost the same from the Kubernetes point of view.

+2
