Why does my YARN MapReduce job fail with a Java heap space error?

I want to experiment with memory settings in YARN, so I am configuring some parameters in yarn-site.xml and mapred-site.xml. I am using Hadoop 2.6.0. But I get an error while running a MapReduce job. It says this:

15/03/12 10:57:23 INFO mapreduce.Job: Task Id : attempt_1426132548565_0001_m_000002_0, Status : FAILED
Error: Java heap space
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

I think I configured it correctly: I gave mapreduce.map.java.opts and mapreduce.reduce.java.opts a small size of 64 MB. I tried changing these parameters in mapred-site.xml, but I still get this error. I think I really don't understand how YARN memory works. I am running this on a single-node cluster.
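
Roughly, the snippet I mean in mapred-site.xml looks like this (a sketch of what I described above, not my exact file):

 <!-- mapred-site.xml: the small 64 MB heap I set for map and reduce tasks -->
 <property>
   <name>mapreduce.map.java.opts</name>
   <value>-Xmx64m</value>
 </property>
 <property>
   <name>mapreduce.reduce.java.opts</name>
   <value>-Xmx64m</value>
 </property>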

Tags: java, heap, mapreduce, hadoop, yarn
2 answers

YARN handles resource management and serves batch workloads such as MapReduce as well as real-time workloads.

There are memory settings that can be set at the YARN container level and also at the mapper and reducer level. Memory is requested in increments of the YARN container size. Mapper and reducer tasks run inside containers.

mapreduce.map.memory.mb and mapreduce.reduce.memory.mb

The above parameters set the upper memory limit for the map and reduce tasks; if the memory subscribed by a task exceeds this limit, the corresponding container will be killed.

These parameters determine the maximum amount of memory that can be assigned to mapper and reduce tasks respectively. Let's look at an example: a mapper is bound by the upper memory limit defined in the mapreduce.map.memory.mb configuration parameter.

However, if the value of yarn.scheduler.minimum-allocation-mb is greater than mapreduce.map.memory.mb, then yarn.scheduler.minimum-allocation-mb wins, and containers of that size are handed out.

This parameter must be set carefully; if it is not set correctly, it can lead to OutOfMemory errors or other failures.
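
As an illustration only (the values below are hypothetical, not recommendations), a combination like the following requests a 2048 MB container per map task and 4096 MB per reduce task; because the scheduler minimum of 1024 MB is lower than those requests, the per-task values win:

 <!-- mapred-site.xml: per-task container requests (hypothetical values) -->
 <property>
   <name>mapreduce.map.memory.mb</name>
   <value>2048</value>
 </property>
 <property>
   <name>mapreduce.reduce.memory.mb</name>
   <value>4096</value>
 </property>

 <!-- yarn-site.xml: scheduler allocation bounds (hypothetical values) -->
 <property>
   <name>yarn.scheduler.minimum-allocation-mb</name>
   <value>1024</value>
 </property>
 <property>
   <name>yarn.scheduler.maximum-allocation-mb</name>
   <value>8192</value>
 </property>

If yarn.scheduler.minimum-allocation-mb were instead set to, say, 3072, each map container would be sized at 3072 MB regardless of the 2048 MB request, as described above.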

mapreduce.reduce.java.opts and mapreduce.map.java.opts

The value of this property must be less than the upper bound for the map/reduce task defined in mapreduce.map.memory.mb / mapreduce.reduce.memory.mb, since the JVM heap must fit inside the memory allocated to the map/reduce task.
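
For example (hypothetical values, continuing the sketch above), the JVM heap for each task is kept comfortably below the container size requested in mapreduce.*.memory.mb, leaving headroom for non-heap JVM overhead:

 <!-- mapred-site.xml: JVM heap per task, smaller than the container size -->
 <property>
   <name>mapreduce.map.java.opts</name>
   <value>-Xmx1638m</value>
 </property>
 <property>
   <name>mapreduce.reduce.java.opts</name>
   <value>-Xmx3276m</value>
 </property>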


What @Gaurav said is right. I had a similar problem and tried something like the below. Include the properties below in yarn-site.xml and restart the VM.

 <property>
   <name>yarn.nodemanager.vmem-check-enabled</name>
   <value>false</value>
   <description>Whether virtual memory limits will be enforced for containers</description>
 </property>
 <property>
   <name>yarn.nodemanager.vmem-pmem-ratio</name>
   <value>4</value>
   <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
 </property>
