ZooKeeper keeps logging WARN: "caught end of stream exception"

I am running a CDH 5.3.1 cluster with three ZooKeeper instances on three hosts (the corresponding zoo.cfg server entries are sketched after the list):

133.0.127.40 n1
133.0.127.42 n2
133.0.127.44 n3
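
For reference, a three-node quorum like this is usually declared in each node's zoo.cfg roughly as sketched below; the ports (2181 for clients, 2888/3888 for quorum traffic and leader election) are the ZooKeeper defaults and are an assumption here, since the Cloudera-managed file may use different values:

    # zoo.cfg (sketch, assuming default ports; the CDH-managed file may differ)
    clientPort=2181
    server.1=n1:2888:3888   # 133.0.127.40
    server.2=n2:2888:3888   # 133.0.127.42
    server.3=n3:2888:3888   # 133.0.127.44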

Everything worked fine at first, but recently node n2 keeps logging this WARN:

caught end of stream exception

EndOfStreamException: Unable to read additional data from client sessionid **0x0**, likely client has closed socket
    at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
    at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
    at java.lang.Thread.run(Thread.java:722)

This happens every second, and only on n2; n1 and n3 show nothing unusual. I can still use the HBase shell to scan my tables and the Solr web UI to run queries, but I cannot start Flume agents; the startup stalls at this stage:

Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog

jetty-6.1.26.cloudera.4

Started SelectChannelConnector@0.0.0.0:41414.

And after a few minutes Cloudera Manager warns that the Flume agent has exceeded its file descriptor threshold.
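
One way to check whether it is the Flume agent that is churning connections against n2: ZooKeeper's "stat" four-letter command lists the currently open client connections, and lsof shows how many descriptors the agent process holds. This is only a diagnostic sketch; the client port 2181 and the pgrep pattern are assumptions about this setup.

    # list client connections currently open against ZooKeeper on n2 (assumes client port 2181)
    echo stat | nc 133.0.127.42 2181
    # count file descriptors held by the Flume agent process (pgrep pattern is an assumption)
    lsof -p "$(pgrep -f org.apache.flume.node.Application)" | wc -l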

Does anyone know what is going wrong? Thanks in advance.

I hit a similar issue where ZK caused problems for its clients (in my case also Flume). In my case it came down to housekeeping of the snapshots / transaction logs on the affected node. Check these settings in zoo.cfg:

  • autopurge.snapRetainCount, e.g. 10
  • autopurge.purgeInterval, e.g. 2 (hours)
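
A minimal sketch of how those two lines look in zoo.cfg (the values 10 and 2 are just the ones suggested above, not mandatory):

    # zoo.cfg (sketch): keep the 10 newest snapshots, purge older ones every 2 hours
    autopurge.snapRetainCount=10
    autopurge.purgeInterval=2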

Also, if your ZK clients (Flume?) read or write large znodes to / from ZK, you may have to raise the Java jute.maxbuffer system property in the JVM options on both the client and the server side, otherwise oversized requests fail. The default is 1M. Hope this helps!
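
For illustration, raising the limit to 4 MB could look like the line below; 4 MB is an arbitrary example value, and the property has to be set consistently in the JVM options of the ZooKeeper servers and of every client JVM (e.g. the Flume agent):

    # JVM option (sketch): raise jute.maxbuffer from the default ~1M to 4 MB
    -Djute.maxbuffer=4194304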
