I want to set up a series of Spark steps on an EMR Spark cluster, and stop the current step if it takes too long. However, when I ssh into the master node and run `hadoop job -list`, the master node seems to think there are no jobs running. I don't want to terminate the cluster, because doing so would force me to buy a whole new hour for every cluster I'm working on. Can someone help me kill a Spark step in EMR without terminating the entire cluster?
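For context, here is roughly what I'm running on the master node. I'm assuming the cluster uses YARN as the resource manager (the EMR default), which would explain why `hadoop job -list` shows nothing — that command only sees classic MapReduce jobs, not Spark applications. The application ID below is a made-up placeholder:

```shell
# On the EMR master node:
# 'hadoop job -list' only reports MapReduce jobs; Spark apps run under YARN,
# so they should be listed with the YARN CLI instead:
yarn application -list

# If that works, I'd expect to be able to kill just the offending Spark
# application (placeholder ID) without touching the rest of the cluster:
yarn application -kill application_1234567890_0001
```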
amazon-web-services hadoop emr apache-spark
Daniel Imberman