LEADER_NOT_AVAILABLE from Kafka in console producer

I am trying to use Kafka. All settings are correct, but when I try to send a message from the console producer, I keep getting the following error:

WARN Error while fetching metadata with correlation id 39 : {4-3-16-topic1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient) 

kafka version: 2.11-0.9.0.0

+141
apache-kafka producer
Mar 04 '16 at 5:32
22 answers

It may be related to the advertised.host.name setting in your server.properties .

What can happen is that your producer tries to find out who is the leader for a given partition, looks up its advertised.host.name and advertised.port , and tries to connect. If these settings are not configured correctly, it can then look as if the leader is unavailable.
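As a minimal sketch (the hostname and port below are placeholders, not values from the question), the relevant lines in server.properties would look like:

 # Hostname that clients should use to reach this broker
 advertised.host.name=broker1.example.com
 # Port that clients should use (9092 is the Kafka default)
 advertised.port=9092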

+77
Mar 15 '16 at 9:33

I tried all the recommendations listed here. What worked for me was to go to server.properties and add:

 port=9092
 advertised.host.name=localhost

Leave listeners and advertised.listeners commented out.

+66
Nov 22 '16 at 0:58

I had Kafka running as a Docker container, and similar messages flooded the log.
KAFKA_ADVERTISED_HOST_NAME was set to "kafka".

In my case, the cause of the error was a missing /etc/hosts entry for "kafka" in the "kafka" container itself.
So, for example, running ping kafka inside the "kafka" container failed with ping: bad address 'kafka'

In Docker terms, this problem is solved by specifying a hostname for the container.

Options for achieving that are sketched below.
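For example, with docker run (reusing the wurstmeister/kafka image mentioned elsewhere in this thread; your image and remaining flags may differ):

 docker run --hostname kafka --name kafka -p 9092:9092 wurstmeister/kafka

In docker-compose, the equivalent is a hostname: kafka key on the service definition, as shown in a later answer.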

+37
Aug 03 '16 at 11:39

What did it for me was setting up the listeners like this:

 advertised.listeners=PLAINTEXT://my.public.ip:9092
 listeners=PLAINTEXT://0.0.0.0:9092

This allows the Kafka broker to listen on all interfaces.

+34
Oct. 12 '17 at 9:58

I am using kafka_2.12-0.10.2.1:

vi config/server.properties

add the line below:

 listeners=PLAINTEXT://localhost:9092 
  • There is no need to modify advertised.listeners, since it takes its value from the standard listeners property.

As the comment in server.properties puts it, this is the host name and port the broker will advertise to producers and consumers. If not set:

  • it uses the value of "listeners", if configured;
  • otherwise, it uses the value returned from java.net.InetAddress.getCanonicalHostName().

stop the Kafka broker:

 bin/kafka-server-stop.sh 

restart broker:

 bin/kafka-server-start.sh -daemon config/server.properties 

and now you should not see any problems.

+19
Jun 22 '17 at 6:25

I had been seeing this same problem for the last 2 weeks while working with Kafka, and have been reading this Stack Overflow question ever since.

After 2 weeks of analysis, I concluded that in my case this happens when trying to produce messages to a topic that does not exist .

The outcome in my case is that Kafka sends an error message back but, at the same time, creates the topic that did not exist before. So if I try to send a message to that topic again after this event, the error no longer appears, since the topic now exists.

PLEASE NOTE: It may be that my particular Kafka installation was configured to automatically create a topic when it does not exist; that would explain why I see the problem only once, at the very beginning. Your configuration may differ, in which case you would keep getting the same error again and again.
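If you want to check whether your broker behaves this way, look at auto.create.topics.enable (it defaults to true); a sketch, assuming Zookeeper on localhost:2181:

 # Is the broker allowed to auto-create topics on first use?
 grep auto.create.topics.enable config/server.properties

 # List existing topics to see whether yours was created
 bin/kafka-topics.sh --list --zookeeper localhost:2181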

Regards,

Luca Tampellini

+16
Sep 05 '18 at 15:51

We tend to get this message when we try to subscribe to a topic that has not been created yet. We generally rely on topics being created a priori in our deployed environments, but we have component tests that run against a Kafka instance that is started fresh every time.

In that case, we use AdminUtils in our test setup to check whether the topic exists and create it if it does not. See this other Stack Overflow question for more information on setting up AdminUtils.
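The linked question covers the AdminUtils call itself; a shell equivalent of that pre-creation step (the topic name and Zookeeper address are placeholders) would be:

 # Create the topic up front so the first subscribe finds a leader
 bin/kafka-topics.sh --create --zookeeper localhost:2181 \
   --replication-factor 1 --partitions 1 --topic test-topic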

+12
Aug 01 '16 at 20:11

Another possibility for this warning (in version 0.10.2.1) is that you are trying to poll a topic that has just been created, and the leader for that topic partition is not yet available; you are in the middle of a leadership election.

Waiting a second between topic creation and polling is a workaround.
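To check whether the partition has a leader yet, instead of waiting blindly, something like this should work (the topic name and Zookeeper address are placeholders):

 # A leader of -1 (or "none") means the election has not finished yet
 bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test-topic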

+8
Jun 07 '17 at 17:06

For those who are trying to run Kafka on Kubernetes and encounter this error, this is what finally solved it for me:

You must either:

  • add hostname to the pod specification, so Kafka can find itself,

or

  • if you use hostPort, then you also need hostNetwork: true and dnsPolicy: ClusterFirstWithHostNet

The reason for this is that Kafka needs to talk to itself, and it decides to use the "advertised" listener/hostname to find itself rather than using localhost. Even if you have a Service that points the advertised host name at the container, it is not visible from inside the container. I do not really know why that is, but at least there is a workaround.

 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   name: zookeeper-cluster1
   namespace: default
   labels:
     app: zookeeper-cluster1
 spec:
   replicas: 1
   selector:
     matchLabels:
       app: zookeeper-cluster1
   template:
     metadata:
       labels:
         name: zookeeper-cluster1
         app: zookeeper-cluster1
     spec:
       hostname: zookeeper-cluster1
       containers:
       - name: zookeeper-cluster1
         image: wurstmeister/zookeeper:latest
         imagePullPolicy: IfNotPresent
         ports:
         - containerPort: 2181
         - containerPort: 2888
         - containerPort: 3888
 ---
 apiVersion: v1
 kind: Service
 metadata:
   name: zookeeper-cluster1
   namespace: default
   labels:
     app: zookeeper-cluster1
 spec:
   type: NodePort
   selector:
     app: zookeeper-cluster1
   ports:
   - name: zookeeper-cluster1
     protocol: TCP
     port: 2181
     targetPort: 2181
   - name: zookeeper-follower-cluster1
     protocol: TCP
     port: 2888
     targetPort: 2888
   - name: zookeeper-leader-cluster1
     protocol: TCP
     port: 3888
     targetPort: 3888
 ---
 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   name: kafka-cluster
   namespace: default
   labels:
     app: kafka-cluster
 spec:
   replicas: 1
   selector:
     matchLabels:
       app: kafka-cluster
   template:
     metadata:
       labels:
         name: kafka-cluster
         app: kafka-cluster
     spec:
       hostname: kafka-cluster
       containers:
       - name: kafka-cluster
         image: wurstmeister/kafka:latest
         imagePullPolicy: IfNotPresent
         env:
         - name: KAFKA_ADVERTISED_LISTENERS
           value: PLAINTEXT://kafka-cluster:9092
         - name: KAFKA_ZOOKEEPER_CONNECT
           value: zookeeper-cluster1:2181
         ports:
         - containerPort: 9092
 ---
 apiVersion: v1
 kind: Service
 metadata:
   name: kafka-cluster
   namespace: default
   labels:
     app: kafka-cluster
 spec:
   type: NodePort
   selector:
     app: kafka-cluster
   ports:
   - name: kafka-cluster
     protocol: TCP
     port: 9092
     targetPort: 9092
+6
Jul 15 '17 at 13:22

Adding this as it may help others. A common cause can be a misconfigured advertised.host.name . With Docker and docker-compose, setting the service name inside KAFKA_ADVERTISED_HOST_NAME will not work unless you also set the container's hostname to match. Example docker-compose.yml :

 kafka:
   image: wurstmeister/kafka
   ports:
     - "9092:9092"
   hostname: kafka
   environment:
     KAFKA_ADVERTISED_HOST_NAME: kafka
     KAFKA_CREATE_TOPICS: "test:1:1"
     KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
   volumes:
     - /var/run/docker.sock:/var/run/docker.sock

The configuration above, without hostname: kafka , can produce LEADER_NOT_AVAILABLE when trying to connect. You can find an example of a working docker-compose configuration here

+6
May 25 '18 at 9:04

In my case, it worked fine at home but failed in the office as soon as I connected to the office network.

So I changed config/server.properties from listeners=PLAINTEXT://:9092 to listeners=PLAINTEXT://localhost:9092

In my case, the error showed up while describing a consumer group.

+4
Sep 21 '18 at 22:11

I am using docker-compose to run a Kafka container based on the wurstmeister/kafka image. Adding the KAFKA_ADVERTISED_PORT: 9092 property to my docker-compose file resolved this error for me.
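A minimal sketch of the relevant part of the compose file (other settings omitted; the Zookeeper address is an assumption):

 kafka:
   image: wurstmeister/kafka
   ports:
     - "9092:9092"
   environment:
     KAFKA_ADVERTISED_PORT: 9092
     KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181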

+3
Jun 14 '17 at 10:18

If you are running Kafka on your local machine, try updating $KAFKA_DIR/config/server.properties with the following line: listeners=PLAINTEXT://localhost:9092 and then restart Kafka.

+3
Nov 30 '18 at 1:03

Since I wanted my Kafka broker to connect with remote producers and consumers, I did not want advertised.listeners to stay commented out. In my case (running Kafka on Kubernetes), I found out that my Kafka pod was not assigned any cluster IP. After removing clusterIP: None from services.yml, Kubernetes assigned an internal IP to the Kafka pod. This resolved my LEADER_NOT_AVAILABLE problem and also the remote connection of Kafka producers/consumers.
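A sketch of that change (the names are placeholders; the point is dropping clusterIP: None so the Service is no longer headless and gets a normal cluster IP):

 apiVersion: v1
 kind: Service
 metadata:
   name: kafka-service
 spec:
   # clusterIP: None   <- removing this headless setting lets
   #                      Kubernetes assign an internal cluster IP
   selector:
     app: kafka
   ports:
     - port: 9092
       targetPort: 9092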

+2
Jan 16 '18 at 17:56

When the LEADER_NOT_AVAILABLE error is thrown, just restart the Kafka broker:

 /bin/kafka-server-stop.sh 

followed by

 /bin/kafka-server-start.sh config/server.properties 

(Note: Zookeeper must already be running at this point; otherwise this will not work.)

+2
Feb 21 '18 at 8:31

Adding the line below to config/server.properties solved my problem, similar to the ones described above. Hope this helps; it is pretty well documented in the server.properties file, so read and understand it before changing it:

 advertised.listeners=PLAINTEXT://<your_kafka_server_ip>:9092

+2
May 23 '18 at 15:27

For everyone struggling with a Kafka SSL setup who sees this LEADER_NOT_AVAILABLE error: one possible cause is a broken keystore or truststore. In the keystore you must have the server's private key plus the signed server certificate. In the client truststore you need the intermediate CA certificate, so that the client can authenticate the Kafka server. If you use SSL for inter-broker communication, this truststore must also be set in the brokers' server.properties, so that they can authenticate each other.

This last part, which I mistakenly missed, cost me many painful hours of finding out what this LEADER_NOT_AVAILABLE error could mean. Hope this can help someone.
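A hedged sketch of the broker-side SSL pieces in server.properties (paths and passwords are placeholders):

 # Keystore: the broker's private key + its signed certificate
 ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
 ssl.keystore.password=changeit
 ssl.key.password=changeit
 # Truststore: the CA chain used to verify the other side
 ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
 ssl.truststore.password=changeit
 # Use SSL between brokers too, so they can authenticate each other
 security.inter.broker.protocol=SSL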

+1
Oct. 27 '17 at 8:45

The problem was resolved after adding the listener setting to the server.properties file in the config directory: listeners=PLAINTEXT://localhost(or your server):9092 . Restart Kafka after this change. Version used: 2.11.

+1
Jul 11 '18 at 14:10

For me, this was due to a mismatch between the Docker port (9093) and the port in the Kafka command, bin/kafka-console-producer.sh --broker-list localhost:9092 --topic <topic name> . I changed my configuration so the ports match, and now everything is fine.

0
Apr 26 '18 at 8:44

For me, the cause was using a separate Zookeeper that was not part of the Kafka package. That Zookeeper had already been installed on the machine for other purposes. Apparently, Kafka does not work well with just any Zookeeper. Switching to the Zookeeper shipped with Kafka solved it for me. To avoid conflicting with the existing Zookeeper, I had to adjust my configuration so that the bundled Zookeeper listens on a different port:

 [root@host /opt/kafka/config]# grep 2182 *
 server.properties:zookeeper.connect=localhost:2182
 zookeeper.properties:clientPort=2182
0
Jun 04 '19 at 15:01

The advertised listeners mentioned in the answers above can be one of the causes. Other possible causes:

  1. The topic may not have been created. You can verify this with bin/kafka-topics --list --zookeeper <zookeeper_ip>:<zookeeper_port>
  2. Check the bootstrap servers you gave the producer for fetching the metadata. If a bootstrap server does not have the latest metadata about the topic (for example, when it lost its Zookeeper claim), add more than one bootstrap server.

Also, make sure that the advertised listener is set to IP:9092 rather than localhost:9092 ; the latter means the broker is reachable only via localhost.

When I encountered the error, I remember using PLAINTEXT://<ip>:<PORT> in the bootstrap server (broker) list, and, oddly enough, it worked:

 bin/kafka-console-producer --topic sample --broker-list PLAINTEXT://<IP>:<PORT> 
0
Jun 09 '19 at 7:00

Today I had the same problem. What I did to get around the error was a small modification in /etc/hosts :

Change the line 127.0.0.1 localhost localhost.localdomain to 10.0.11.12 localhost localhost.localdomain

(assuming 10.0.11.12 is one of your host IPs that the Kafka server is listening on)

-1
Mar 15 '16 at 9:09


