Hadoop job: task does not report status for 601 seconds

When a Hadoop job starts in pseudo-distributed (single-node) mode, the task does not run to completion and is killed with the error: task attempt did not report status for 601 seconds.

But the same program works when run from Eclipse (as a local job).

The task: there are about 25,000 keywords, and the output should be every possible combination of two, i.e. about 25,000 * 25,000 = roughly 625 million pairs.
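For illustration only (the question includes no code), here is a minimal sketch of one common shape of job that hits this timeout: a reducer that buffers all keywords and builds every pair in memory before writing anything out. All class and type names below are hypothetical.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical reducer: gathers all ~25,000 keywords under one key, then
// builds every pair in memory before emitting a single record. During the
// pair-building loop the task neither reads input, writes output, nor
// updates its status, so if that phase runs longer than the timeout the
// framework assumes the task is hung and kills it.
public class BufferingPairReducer extends Reducer<Text, Text, Text, NullWritable> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        List<String> keywords = new ArrayList<String>();
        for (Text v : values) {
            keywords.add(v.toString());
        }
        // ~25,000 x 25,000 = ~625 million pairs built silently in memory.
        List<String> pairs = new ArrayList<String>();
        for (String a : keywords) {
            for (String b : keywords) {
                pairs.add(a + "\t" + b);
            }
        }
        for (String p : pairs) {
            context.write(new Text(p), NullWritable.get());
        }
    }
}

Whether or not the real job looks exactly like this, the symptom is the same: a long stretch of work with no interaction with the framework.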

What could be the problem?

+5
3 answers

For some reason, the task running on your node is not making progress. You can increase the "mapred.task.timeout" setting in mapred-site.xml. The default value in mapred-default.xml is:

 <property>
    <name>mapred.task.timeout</name>
    <value>600000</value>
    <description>The number of milliseconds before a task will be 
                 terminated if it neither reads an input, writes 
                 an output, nor updates its status string.
    </description>
 </property>
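If editing mapred-site.xml is not convenient, the same property can be set per job on the Configuration object before submission. This is only a minimal sketch; the driver class name and job name are made up.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Hypothetical driver: raises the task timeout for this job only.
public class TimeoutExampleDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // 30 minutes instead of the default 600000 ms (10 minutes).
        // A value of 0 disables the timeout entirely. On Hadoop 2.x the
        // property was renamed mapreduce.task.timeout; the old name is
        // kept as a deprecated alias.
        conf.setLong("mapred.task.timeout", 1800000L);

        // On Hadoop 1.x use: Job job = new Job(conf, "keyword pairs");
        Job job = Job.getInstance(conf, "keyword pairs");
        // ... set jar, mapper/reducer classes and input/output paths as usual ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}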
+7


Hadoop provides a progress-reporting API for this. If a task does not report progress within 10 minutes (600 seconds), Hadoop considers it hung and kills it. See the progress-reporting API (Reporter in the old mapred API, context.progress() in the new mapreduce API).
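As a sketch of what that looks like in a long pair-generating loop (new mapreduce API; the class name is hypothetical), the task can call context.progress() and context.setStatus() periodically so the tracker knows it is alive:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical reducer that emits every keyword pair but pings the
// framework every 100,000 records, so the 600-second timeout is never hit.
public class ReportingPairReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        List<String> keywords = new ArrayList<String>();
        for (Text v : values) {
            keywords.add(v.toString());
        }
        long emitted = 0;
        for (String a : keywords) {
            for (String b : keywords) {
                context.write(new Text(a), new Text(b));
                if (++emitted % 100000 == 0) {
                    context.progress();                       // reset the timeout clock
                    context.setStatus("emitted " + emitted + " pairs");
                }
            }
        }
    }
}

In the old mapred API the equivalent calls are reporter.progress() and reporter.setStatus().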

+2
