Why does the Spark executor get SIGTERM?

I use the Spark API (the core Spark API, not Streaming, SQL, etc.). I often see the following error in the executor logs, with a garbled stack trace. Spark environment: 1.3.1, yarn-client mode.

ERROR executor.CoarseGrainedExecutorBackend: RECEIVED SIGNAL 15: SIGTERM 
  • Who sends the SIGTERM: YARN, Spark, or me?
  • Can the Spark executor intercept this signal? If not, how will it affect the Spark program?

It is probably not me: if I press Ctrl+C, that sends SIGINT, and if YARN kills the executor outright, that sends SIGKILL.
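The distinction between the three signals matters here: SIGINT (2) and SIGTERM (15) can be caught by the receiving process, while SIGKILL (9) cannot. A minimal sketch of this behavior, written in Python rather than in the executor's JVM code, purely to illustrate which signals a process can react to:

```python
import os
import signal
import time

received = {"sigterm": False}

def on_sigterm(signum, frame):
    # A process may install a handler for SIGTERM (signal 15);
    # SIGKILL (signal 9) can never be caught or ignored.
    received["sigterm"] = True

signal.signal(signal.SIGTERM, on_sigterm)

# Simulate an external supervisor (e.g. the NodeManager) sending
# signal 15 to this process.
os.kill(os.getpid(), signal.SIGTERM)
time.sleep(0.1)  # give the handler a moment to run

print(received["sigterm"])  # True: SIGTERM was caught, not fatal here
```

In the JVM, the analogous mechanism is a shutdown hook: the default JVM handler for SIGTERM runs registered shutdown hooks before exiting, which is why a SIGTERM to an executor produces an orderly (logged) shutdown, while a SIGKILL leaves no trace in the executor's own log.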

Tags: signals, apache-spark
1 answer

You will probably find the reason in the YARN logs. If you have log aggregation enabled, you can run

yarn logs -applicationId [app_id]

and search for exceptions.

