I have a virtual machine in which spark-2.0.0-bin-hadoop2.7 is installed offline.
I ran ./sbin/start-all.sh to start the master and the worker.
When I run ./bin/spark-shell --master spark://192.168.43.27:7077 --driver-memory 600m --executor-memory 600m --executor-cores 1 on the virtual machine itself, the application state is RUNNING and I can execute code in the spark shell.
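
For reference, this is roughly how I verify the cluster on the VM side (a sketch; 8080 is Spark's default master web UI port, and jps ships with the JDK):

    ./sbin/start-all.sh
    jps    # should list both a Master and a Worker JVM process
    # The master web UI at http://192.168.43.27:8080 should show the
    # worker under "Workers" with state ALIVE before anything is submitted.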

When I execute the exact same command from another computer on the network, the application state is again RUNNING, but the spark shell repeatedly throws WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources. I don't think the problem is actually a lack of resources, because the same command works on the virtual machine itself, just not when issued from another machine.
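
One thing I suspect but have not confirmed: when the driver runs on a different machine, the executors must be able to connect back to it. A sketch of what I would try (the client address 192.168.43.1 and port 40000 are assumed values, not taken from my setup; spark.driver.host and spark.driver.port are standard Spark properties):

    ./bin/spark-shell --master spark://192.168.43.27:7077 \
      --driver-memory 600m --executor-memory 600m --executor-cores 1 \
      --conf spark.driver.host=192.168.43.1 \
      --conf spark.driver.port=40000
    # spark.driver.host must be an address the workers can reach;
    # pinning spark.driver.port makes firewall rules predictable.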

I have checked most of the questions related to this error, and none of them solved my problem. I even disabled the firewall with sudo ufw disable to rule it out, without success. This was based on this link, which suggests:
Disable client firewall: this was the solution that worked for me. Since I was working on an internal prototype, I turned off the firewall on the client node. For some reason, the worker nodes could not talk back to the client in my case. For production purposes, you would open only the specific set of required ports.
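
Following that advice, opening only the required ports instead of disabling the firewall would look roughly like this (a sketch; the port numbers are the assumed values pinned in the spark-shell sketch above):

    sudo ufw allow 40000/tcp   # spark.driver.port (assumed value from above)
    sudo ufw allow 40001/tcp   # spark.blockManager.port, if also pinned (assumption)
    sudo ufw enable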