Hadoop: Running Beyond Virtual Memory Limits, Showing Huge Numbers

I am running a MapReduce Pipes program, and I set the memory limits as follows:

In the yarn-site.xml file:

<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>3072</value>
</property>
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>256</value>
</property>

In mapred-site.xml:

<property>
    <name>mapreduce.map.memory.mb</name>
    <value>512</value>
</property>
<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>512</value>
</property>
<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx384m</value>
</property>
<property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx384m</value>
</property>

I am running a single node in pseudo-distributed mode. When the container is killed, I get the following error:

2015-04-11 12:47:49,594 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1428741438743_0001_m_000000_0: Container [pid=8140,containerID=container_1428741438743_0001_01_000002] is running beyond virtual memory limits. Current usage: 304.1 MB of 1 GB physical memory used; 1.0 TB of 2.1 GB virtual memory used. Killing container.

The key figure for me is the 1.0 TB of virtual memory used: the application I run is nowhere near consuming that amount; it does not even consume 1 GB of memory.
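For context, the 2.1 GB virtual memory limit in the log is not set directly anywhere in the configuration above: YARN computes it as the container's physical memory allocation (1 GB here) multiplied by yarn.nodemanager.vmem-pmem-ratio, which defaults to 2.1. If a process legitimately needs a large virtual address space, one option is to raise this ratio in yarn-site.xml (the value 5 below is only an illustrative choice):

<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>5</value>
</property>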

Does this mean that there is a memory leak in my code, or could the memory configuration be incorrect?

Thanks.

In my case, the mappers were using lmdb. lmdb relies on memory-mapped files and reserves on the order of 1 TB of virtual address space up front, so Hadoop saw a huge virtual memory figure even though almost none of it was backed by physical memory.

Setting yarn.nodemanager.vmem-check-enabled to false in yarn-site.xml stops Hadoop from enforcing the virtual memory limit, and the job then runs to completion. Hadoop still checks physical memory usage, so a container is killed only if it really exceeds its physical memory limit.
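For reference, the change described above is a single property in yarn-site.xml (this is the standard YARN property; the physical memory check, yarn.nodemanager.pmem-check-enabled, is left at its default of true):

<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>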

