I had the same problem and followed all the steps mentioned in this thread (basically adding -o UserKnownHostsFile=/dev/null to the ssh options in your spark_ec2.py script), but it still hung with:
Waiting for all instances in cluster to enter 'ssh-ready' state
Short answer:
Change the permissions of the private key file and re-run the spark-ec2 script:
[ spar@673d356d ]/tmp/spark-1.2.1-bin-hadoop2.4/ec2% chmod 0400 /tmp/mykey.pem
Long answer:
To track down the problem, I modified spark_ec2.py to log the ssh command it was running and tried executing that command by hand on the command line. It turned out to be bad key permissions:
[ spar@673d356d ]/tmp/spark-1.2.1-bin-hadoop2.4/ec2% ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i /tmp/mykey.pem -o ConnectTimeout=3 root@52.1.208.72
Warning: Permanently added '52.1.208.72' (RSA) to the list of known hosts.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for '/tmp/mykey.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: /tmp/mykey.pem
Permission denied (publickey).
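For reference, this is roughly the kind of logging I added. It is a simplified sketch, not the actual code from spark_ec2.py (the real helper differs between Spark versions), and opts.user and opts.identity_file stand in for the script's parsed options:

import subprocess

def ssh_command(opts):
    # Simplified stand-in for spark_ec2.py's ssh argument builder.
    return ['ssh',
            '-o', 'StrictHostKeyChecking=no',
            '-o', 'UserKnownHostsFile=/dev/null',
            '-o', 'ConnectTimeout=3',
            '-i', opts.identity_file]

def is_ssh_available(host, opts):
    cmd = ssh_command(opts) + ['%s@%s' % (opts.user, host), 'true']
    # Print the exact command so a failing attempt can be replayed by hand.
    print('Trying: %s' % ' '.join(cmd))
    return subprocess.call(cmd, stdout=subprocess.PIPE,
                           stderr=subprocess.STDOUT) == 0

Replaying the printed command in a shell is what surfaced the UNPROTECTED PRIVATE KEY FILE warning shown above.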