Kafka Connect Deployment Errors

I read a tutorial here: http://kafka.apache.org/documentation.html#introduction

When I get to "Step 7: Use Kafka Connect to import/export data" and try to run the two connectors, I get the following errors:

 ERROR Failed to flush WorkerSourceTask{id=local-file-source-0}, timed out while waiting for producer to flush outstanding messages, 1 left
 ERROR Failed to commit offsets for WorkerSourceTask

Here is part of the tutorial:

Then we will launch two connectors running in standalone mode, which means they run in a single, local, dedicated process. We provide three configuration files as parameters. The first is always the configuration of the Kafka Connect process itself, containing common settings such as the Kafka brokers to connect to and the serialization format for data. Each remaining configuration file specifies a connector to create. These files include a unique connector name, the connector class to instantiate, and any other configuration the connector requires.

 bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
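For reference, the file source connector configuration shipped with the Kafka distribution looks roughly like this (these properties reflect the stock config/connect-file-source.properties; names and defaults may differ between versions):

```
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=test.txt
topic=connect-test
```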

I spent some time looking for a solution, but could not find anything useful. Any help is appreciated.

Thanks!

+7
java apache-kafka
4 answers

The reason I was getting this error was that the first broker I had created using config/server.properties was not running. I assume that since it was the leader for the topic, messages could not be flushed and offsets could not be committed.

As soon as I started the Kafka server with that properties file (config/server.properties), the problem was resolved.

+6

Before starting Kafka Connect, you need to start ZooKeeper and the Kafka server. Run the commands from "Step 2: Start the server":

 bin/zookeeper-server-start.sh config/zookeeper.properties
 bin/kafka-server-start.sh config/server.properties

from here: https://mail-archives.apache.org/mod_mbox/kafka-users/201601.mbox/%3CCAK0BMEpgWmL93wgm2jVCKbUT5rAZiawzOroTFc_A6Q=GaXQgfQ@mail.gmail.com%3E

+4

If you configured your Kafka broker with a hostname, for example my.sandbox.com, make sure you change config/connect-standalone.properties accordingly:

 bootstrap.servers=my.sandbox.com:9092 

In Hortonworks HDP the default port is 6667, so the setting would be:

 bootstrap.servers=my.sandbox.com:6667 

If Kerberos is enabled (without SSL), you will also need the following settings:

 security.protocol=PLAINTEXTSASL
 producer.security.protocol=PLAINTEXTSASL
 producer.sasl.kerberos.service.name=kafka
 consumer.security.protocol=PLAINTEXTSASL
 consumer.sasl.kerberos.service.name=kafka
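On a Kerberized cluster the Connect JVM also needs a JAAS configuration to authenticate; a sketch, assuming the standard HDP keytab location (the keytab path, principal, and realm below are placeholders you must adapt to your cluster):

```
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/kafka.service.keytab"
  principal="kafka/my.sandbox.com@EXAMPLE.COM";
};
```

Pass it to the Connect process via KAFKA_OPTS, for example export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf", before running connect-standalone.sh.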
0

Before running that command, you need to start ZooKeeper and the Kafka server(s).

Run ZooKeeper:

 bin/zookeeper-server-start.sh config/zookeeper.properties 

Run the Kafka servers (one per broker, if you set up a multi-broker cluster):

 bin/kafka-server-start.sh config/server.properties
 bin/kafka-server-start.sh config/server-1.properties
 bin/kafka-server-start.sh config/server-2.properties

Start the connectors:

 bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties 

Then you will see some lines written to test.sink.txt:

 foo
 bar
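These lines come from test.txt, which the stock config/connect-file-source.properties points at (assuming you start Kafka Connect from the Kafka directory); you can seed the file like this:

```shell
# create the input file that the default file source connector tails
printf 'foo\nbar\n' > test.txt
```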

And you can run the console consumer to verify:

 bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic connect-test --from-beginning
 {"schema":{"type":"string","optional":false},"payload":"foo"}
 {"schema":{"type":"string","optional":false},"payload":"bar"}
0
