I am launching Spark in standalone mode on a cluster of 10 nodes, using Spark 2.1.0-SNAPSHOT.
Nine of the nodes are workers, and the tenth hosts the master and the driver. Each node has 256 GB of memory. I am having trouble fully utilizing my cluster.
I set the memory limit for the executors and the driver to 200 GB using the following settings for spark-shell:
spark-shell --executor-memory 200g --driver-memory 200g
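
For reference, the same 200 GB limits can also be expressed as Spark configuration properties rather than command-line flags; the sketch below assumes the standard spark.executor.memory and spark.driver.memory properties:

spark-shell --conf spark.executor.memory=200g --conf spark.driver.memory=200g

or, equivalently, in conf/spark-defaults.conf:

spark.executor.memory   200g
spark.driver.memory     200g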
When my application starts, I can see that these values are set as expected, both in the console output and in the Environment tab of the web UI. But when I go to the Executors tab, I see that each of my executors received only 114.3 GB of memory; see the screenshot below.

(screenshot of the Executors tab)
The total memory shown there is therefore about 1.1 TB, whereas I expect roughly 2 TB (200 GB on each of the 10 nodes). I double-checked that no other processes are using the memory.
Any idea what causes this discrepancy? Am I missing some setting? Is this a bug in the Executors tab or in the Spark engine?