You can use a shell script for this.
Deployment script (the truncated hdfs: application path is left as in the original; substitute the full path to your JAR):

    #!/bin/bash
    # Submit the application to the standalone master in cluster mode.
    # The master URL below is taken from the REST endpoint shown in the
    # log output; the redirect into 'output' saves the submission log so
    # the kill script can read the driver ID from it.
    spark-submit --class "xx.xx.xx" \
      --master spark://node-1:6066 \
      --deploy-mode cluster \
      --supervise \
      --executor-memory 6G \
      hdfs: 2>&1 | tee output
Running it produces output like the following:
    16/06/23 08:37:21 INFO rest.RestSubmissionClient: Submitting a request to launch an application in spark://node-1:6066.
    16/06/23 08:37:22 INFO rest.RestSubmissionClient: Submission successfully created as driver-20160623083722-0026. Polling submission state...
    16/06/23 08:37:22 INFO rest.RestSubmissionClient: Submitting a request for the status of submission driver-20160623083722-0026 in spark://node-1:6066.
    16/06/23 08:37:22 INFO rest.RestSubmissionClient: State of driver driver-20160623083722-0026 is now RUNNING.
    16/06/23 08:37:22 INFO rest.RestSubmissionClient: Driver is running on worker worker-20160621162532-192.168.1.200-7078 at 192.168.1.200:7078.
    16/06/23 08:37:22 INFO rest.RestSubmissionClient: Server responded with CreateSubmissionResponse:
    {
      "action" : "CreateSubmissionResponse",
      "message" : "Driver successfully submitted as driver-20160623083722-0026",
      "serverSparkVersion" : "1.6.0",
      "submissionId" : "driver-20160623083722-0026",
      "success" : true
    }
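The driver ID needed for the kill step can be pulled out of that response with grep. A minimal sketch, assuming the submission log was saved to a file named output (the file name and the sample line are assumptions for illustration):

    # Sketch: extract the driver ID from a saved submission log.
    # 'output' is an assumed file name holding the log shown above;
    # here we write one representative line into it for demonstration.
    cat > output <<'EOF'
    "submissionId" : "driver-20160623083722-0026",
    EOF

    # -P enables Perl-style regex (GNU grep); the pattern matches IDs
    # of the form driver-<timestamp>-<sequence>.
    driverid=$(grep submissionId output | grep -Po 'driver-\d+-\d+')
    echo "$driverid"   # driver-20160623083722-0026

The same two-stage grep is what the kill script below relies on, so anything that changes the log format (for example a different Spark version) should be checked against this pattern first.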
Based on this output (in particular the submissionId line), create a kill script for the driver:

    #!/bin/bash
    # Extract the driver ID from the saved submission log ('output')
    # and ask the master to kill that driver.
    driverid=$(grep submissionId output | grep -Po 'driver-\d+-\d+')
    spark-submit --master spark://node-1:6066 --kill "$driverid"
Make sure both scripts have execute permission, granted with chmod +x.