I am working with a Spark Streaming job that reads data from Kafka and performs some operations on it. I am running on a YARN cluster, Spark 1.4.1, with two nodes that each have 16 GB of RAM and 16 cores.
I pass this configuration to spark-submit:
--master yarn-cluster --num-executors 3 --driver-memory 4g --executor-memory 2g --executor-cores 3
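For reference, a full invocation with those flags might look like the sketch below; the main class and jar names are placeholders, not the actual application:

```shell
# Sketch of a full spark-submit invocation with the flags above.
# com.example.StreamingJob and my-streaming-job.jar are hypothetical.
spark-submit \
  --master yarn-cluster \
  --num-executors 3 \
  --driver-memory 4g \
  --executor-memory 2g \
  --executor-cores 3 \
  --class com.example.StreamingJob \
  my-streaming-job.jar
```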
The job fails with this error and exits after running only a short time:
INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 11, (reason: Max number of executor failures reached)
.....
ERROR scheduler.ReceiverTracker: Deregistered receiver for stream 0: Stopped by driver
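As far as I understand, the "Max number of executor failures reached" status is controlled by the Spark-on-YARN setting spark.yarn.max.executor.failures, which defaults to a small multiple of --num-executors, so with only 3 executors a handful of failed containers is enough to kill the whole application. Raising it while investigating might look like this (note this only delays the failure, it does not fix whatever is crashing the containers):

```shell
# Raise the executor-failure threshold for debugging only;
# the containers exiting with code 1 still need to be fixed.
spark-submit \
  --conf spark.yarn.max.executor.failures=10 \
  ...
```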
Update:
These logs were also found:
INFO yarn.YarnAllocator: Received 3 containers from YARN, launching executors on 3 of them.
.....
INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down.
....
INFO yarn.YarnAllocator: Received 2 containers from YARN, launching executors on 2 of them.
INFO yarn.ExecutorRunnable: Starting Executor Container
.....
INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down...
INFO yarn.YarnAllocator: Completed container container_e10_1453801197604_0104_01_000006 (state: COMPLETE, exit status: 1)
INFO yarn.YarnAllocator: Container marked as failed: container_e10_1453801197604_0104_01_000006. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_e10_1453801197604_0104_01_000006
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
    at org.apache.hadoop.util.Shell.run(Shell.java:487)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
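If I read the trace correctly, exit code 1 from container-launch means the executor JVM died right after starting, and the real error message will be in that container's own stdout/stderr rather than in the ApplicationMaster log. After the application has finished, the aggregated logs can be pulled with the application id that appears in the container name above:

```shell
# Fetch the aggregated YARN logs for the failed application;
# the executor's stderr should show why the JVM exited with code 1.
yarn logs -applicationId application_1453801197604_0104
```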
What could be the reason for this? Any help is appreciated.
Thanks