Failed to start Spark job in yarn cluster - retry connecting to server

I have set up my YARN cluster and my Spark cluster on the same machines, and now I need to start a Spark job on YARN in client mode.

Here is the configuration I use in my job:

    SparkConf sparkConf = new SparkConf(true)
            .setAppName("SparkQueryApp")
            .setMaster("yarn-client") // "yarn-cluster" or "yarn-client"
            .set("es.nodes", "10.0.0.207")
            .set("es.nodes.discovery", "false")
            .set("es.cluster", "wp-es-reporting-prod")
            .set("es.scroll.size", "5000")
            .setJars(JavaSparkContext.jarOfClass(Demo.class))
            .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
            .set("spark.default.parallelism", String.valueOf(cpus * 2))
            .set("spark.executor.memory", "10g")
            .set("spark.num.executors", "40")
            .set("spark.dynamicAllocation.enabled", "true")
            .set("spark.dynamicAllocation.minExecutors", "10")
            .set("spark.dynamicAllocation.maxExecutors", "50")
            .set("spark.logConf", "true");
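
For completeness, this is roughly how that configuration is consumed in my main class (a minimal sketch; the surrounding Demo class and the cpus variable are assumed, and the actual job logic is omitted):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class Demo {
        public static void main(String[] args) {
            int cpus = Runtime.getRuntime().availableProcessors();

            SparkConf sparkConf = new SparkConf(true)
                    .setAppName("SparkQueryApp")
                    .setMaster("yarn-client");
                    // ... the rest of the settings shown above ...

            // The failure happens here, while the YARN client tries to
            // contact the ResourceManager.
            JavaSparkContext sc = new JavaSparkContext(sparkConf);

            // ... job logic ...

            sc.stop();
        }
    }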

This does not work when I try to start my Spark job with java -jar spark-test-job.jar.

Instead I get these retry messages in the log:

    405472 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    406473 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    ...
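
As far as I understand, 0.0.0.0:8032 is the default value of yarn.resourcemanager.address, so it looks like the client never picks up my yarn-site.xml (perhaps because HADOOP_CONF_DIR is not visible to the plain java -jar launch). For reference, this is the kind of override I could set from code via Spark's spark.hadoop.* pass-through properties; it is only a sketch, the ResourceManager host is a placeholder, and I have not confirmed this is the right fix:

    // Sketch: forward the ResourceManager address to the Hadoop configuration
    // used by the YARN client. <resourcemanager-host> is a placeholder, not a
    // value I have verified.
    sparkConf.set("spark.hadoop.yarn.resourcemanager.hostname", "<resourcemanager-host>");
    sparkConf.set("spark.hadoop.yarn.resourcemanager.address", "<resourcemanager-host>:8032");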

Any help?
