BrokerNotAvailableError: Could not find leader Exception during Spark Streaming

I wrote a Kafka producer in NodeJS and a Kafka consumer in Java (Maven). My topic is "test", which was created with the following command:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test 

Producer in NodeJS:

    var kafka = require('kafka-node');
    var Producer = kafka.Producer;
    var Client = kafka.Client;
    var client = new Client('localhost:2181');
    var producer = new Producer(client);

    producer.on('ready', function () {
        producer.send([
            { topic: 'test', partition: 0, messages: ["This is the zero message I am sending from Kafka to Spark"], attributes: 0 },
            { topic: 'test', partition: 1, messages: ["This is the first message I am sending from Kafka to Spark"], attributes: 0 },
            { topic: 'test', partition: 2, messages: ["This is the second message I am sending from Kafka to Spark"], attributes: 0 }
        ], function (err, result) {
            console.log(err || result);
            process.exit();
        });
    });

When I send two messages from the NodeJS producer, they are successfully consumed by the Java consumer. But when I send three or more messages from the NodeJS producer, I get the following error:

{[BrokerNotAvailableError: Could not find a leader] message: "Could not find a leader"}

How can I assign a leader to every partition of the "test" topic? Or what else would solve the problem?

3 answers

The topic was created with 1 partition, but on the producer side you are trying to send messages to 3 partitions. Kafka cannot find a leader for the partitions that do not exist, so it throws this exception.
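One way to make those three-partition sends valid is to expand the topic to three partitions. A sketch using the same Kafka CLI and ZooKeeper address as the create command above (assuming an older Kafka release where kafka-topics.sh still takes --zookeeper):

```shell
# Increase the "test" topic from 1 to 3 partitions
# (the partition count can only be increased, never decreased)
bin/kafka-topics.sh --alter --zookeeper localhost:2181 --topic test --partitions 3

# Verify that each partition now has a leader assigned
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
```

Alternatively, delete and recreate the topic with --partitions 3, or send all messages to partition 0 only.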


There is a bug in the current version of kafka-node that may cause this:

https://github.com/SOHU-Co/kafka-node/issues/354

HighLevelProducer with KeyedPartitioner fails on first send (#354): when using KeyedPartitioner with HighLevelProducer, the first send fails with BrokerNotAvailableError: Could not find leader. Subsequent sends work fine.

Also see https://www.npmjs.com/package/kafka-node#highlevelproducer-with-keyedpartitioner-errors-on-first-send

The recommendation there is to call client.refreshMetadata() before sending the first message.

This is how I did it:

    // Refreshing metadata is required for the first message to go through
    // https://github.com/SOHU-Co/kafka-node/pull/378
    client.refreshMetadata([topic], (err) => {
        if (err) {
            console.warn('Error refreshing kafka metadata', err);
        }
    });

Use the plural key partitions instead of partition.

For example:

    producer.on('ready', function () {
        producer.send([
            { topic: 'test', partitions: 0, messages: ["This is the zero message I am sending from Kafka to Spark"], attributes: 0 },
            { topic: 'test', partitions: 1, messages: ["This is the first message I am sending from Kafka to Spark"], attributes: 0 },
            { topic: 'test', partitions: 2, messages: ["This is the second message I am sending from Kafka to Spark"], attributes: 0 }
        ], function (err, result) {
            console.log(err || result);
            process.exit();
        });
    });

