java.io.EOFException: Premature EOF: no length prefix available in Spark on Hadoop

I am getting a strange exception. I use Spark 1.6.0 on Hadoop 2.6.4 and submit Spark jobs to the YARN cluster.

16/07/23 20:05:21 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-532134798-128.110.152.143-1469321545728:blk_1073741865_1041
java.io.EOFException: Premature EOF: no length prefix available
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2203)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:176)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:867)
16/07/23 20:49:09 ERROR server.TransportRequestHandler: Error sending result RpcResponse{requestId=4719626006875125240, body=NioManagedBuffer{buf=java.nio.HeapByteBuffer[pos=0 lim=81 cap=81]}} to ms0440.utah.cloudlab.us/128.110.152.175:58944; closing connection
java.nio.channels.ClosedChannelException

I first hit this error on Hadoop 2.6.0 and thought it might be related to this, but after upgrading to Hadoop 2.6.4 I still get the same error. There are no memory problems; the cluster is healthy in terms of both HDFS and memory. I went through this and this, but no luck.
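For context, a commonly suggested mitigation for this "Premature EOF: no length prefix available" pipeline error is to raise the HDFS client/DataNode socket timeouts in hdfs-site.xml, on the assumption that the DataNode is dropping the write pipeline under load. This is only a hedged sketch of that suggestion, not a confirmed fix for my case; the values below are illustrative, and both property names are standard HDFS settings:

```xml
<!-- hdfs-site.xml: illustrative timeout increases (values are assumptions, not tested fixes) -->
<configuration>
  <!-- Client-side read timeout against DataNodes, in milliseconds -->
  <property>
    <name>dfs.client.socket-timeout</name>
    <value>300000</value>
  </property>
  <!-- DataNode write-pipeline socket timeout, in milliseconds -->
  <property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>300000</value>
  </property>
</configuration>
```

I mention this only because the linked answers point in this direction; applying it did not obviously help in my setup.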

Notes:
1. I use vanilla Apache Hadoop and Spark, not CDH/HDP.
2. I can copy data to HDFS and even run other jobs on this cluster.
