YARN does not honor yarn.nodemanager.resource.cpu-vcores

I use Hadoop 2.4.0, and my machine has 24 cores and 96 GB of RAM.

I use the following configuration:

 mapreduce.map.cpu.vcores=1
 yarn.nodemanager.resource.cpu-vcores=10
 yarn.scheduler.minimum-allocation-vcores=1
 yarn.scheduler.maximum-allocation-vcores=4
 yarn.app.mapreduce.am.resource.cpu-vcores=1
 yarn.nodemanager.resource.memory-mb=88064
 mapreduce.map.memory.mb=3072
 mapreduce.map.java.opts=-Xmx2048m

Capacity Scheduler configuration:

 queue.default.capacity=50
 queue.default.maximum_capacity=100
 yarn.scheduler.capacity.root.default.user-limit-factor=2

With the above configuration, I expect that YARN will not run more than 10 containers on a node, but it runs 28 containers on a node. Am I doing something wrong?

mapreduce hadoop yarn hadoop2 cloudera
1 answer

YARN runs more containers than the allocated vcores because DefaultResourceCalculator is used by default. It takes only memory into account:

 public int computeAvailableContainers(Resource available, Resource required) {
   // Only consider memory
   return available.getMemory() / required.getMemory();
 }
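This memory-only calculation explains the observed 28 containers: 88064 MB of NodeManager memory divided by 3072 MB per map task is 28 under integer division. A minimal sketch of that arithmetic (class and parameter names here are illustrative, not Hadoop's):

```java
public class MemoryOnlyContainers {
    // Memory-only container count, mirroring the DefaultResourceCalculator logic above
    static int computeAvailableContainers(int availableMemoryMb, int requiredMemoryMb) {
        return availableMemoryMb / requiredMemoryMb; // integer division drops the remainder
    }

    public static void main(String[] args) {
        // Values from the question: 88064 MB per node, 3072 MB per map container
        int containers = computeAvailableContainers(88064, 3072);
        System.out.println(containers); // prints 28; vcores never enter the calculation
    }
}
```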

Use the DominantResourceCalculator instead; it takes both CPU and memory into account.
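With a dominant-resource calculation, the container count is capped by whichever resource runs out first. A sketch of the idea, not Hadoop's exact code, using the question's numbers:

```java
public class DominantResourceContainers {
    // Container count limited by the scarcest resource (memory or vcores)
    static int computeAvailableContainers(int availMemMb, int availVcores,
                                          int reqMemMb, int reqVcores) {
        return Math.min(availMemMb / reqMemMb, availVcores / reqVcores);
    }

    public static void main(String[] args) {
        // 88064 MB and 10 vcores per node; each map needs 3072 MB and 1 vcore
        int containers = computeAvailableContainers(88064, 10, 3072, 1);
        System.out.println(containers); // prints 10: min(28 by memory, 10 by vcores)
    }
}
```

This is why switching calculators enforces the 10-container limit the question expects.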

Set the config below in capacity-scheduler.xml:

 yarn.scheduler.capacity.resource-calculator=org.apache.hadoop.yarn.util.resource.DominantResourceCalculator 
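Since capacity-scheduler.xml is a standard Hadoop configuration file, the setting above is written there as a property element:

```xml
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
```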

Learn more about the DominantResourceCalculator in the YARN Capacity Scheduler documentation.
