Hadoop error - all datanodes are bad

I am using Hadoop version 2.3.0. Sometimes, when I run a MapReduce job, the following error is displayed.

14/08/10 12:14:59 INFO mapreduce.Job: Task Id : attempt_1407694955806_0002_m_000780_0, Status : FAILED
Error: java.io.IOException: All datanodes 192.168.30.2:50010 are bad. Aborting...
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1023)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:838)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:483)


When I try to check the log files for these failed tasks, the log folder for the task is empty.

I cannot understand the cause of this error. Can someone please let me know how to solve this problem? Thank you for your help.

2 answers

Check whether spark.shuffle.service.enabled is set to true.

spark.dynamicAllocation.enabled lets Spark allocate executors dynamically. If spark.shuffle.service.enabled is left at false while spark.dynamicAllocation.enabled is turned on, the job can fail with

java.io.IOException: All datanodes are bad. Aborting...

Setting spark.shuffle.service.enabled to true fixes the error.
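If the failing job is submitted as a Spark application (for example on YARN), the two properties above can be set programmatically on the SparkConf. The following is a minimal, hypothetical Java sketch, not the asker's actual job; the class name and application name are placeholders, and the same properties can also be set in spark-defaults.conf or passed with --conf on spark-submit.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class ShuffleServiceExample {
        public static void main(String[] args) {
            // Enable dynamic allocation together with the external shuffle service.
            // Dynamic allocation without the shuffle service is what the answer
            // above associates with the "All datanodes are bad" failure.
            SparkConf conf = new SparkConf()
                    .setAppName("ShuffleServiceExample")   // placeholder name
                    .set("spark.dynamicAllocation.enabled", "true")
                    .set("spark.shuffle.service.enabled", "true");
            // The master is expected to be supplied by spark-submit / YARN.

            JavaSparkContext sc = new JavaSparkContext(conf);
            // ... job logic ...
            sc.stop();
        }
    }

Note that the external shuffle service must also be running on the cluster's NodeManagers for this setting to take effect.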

