I also had this problem.
The cause (in my case) was that my local system's IP address was not reachable from the local system itself. I know that sounds contradictory, but read on.
My hostname (uname -n) reports that my system is called sparkmaster. In my /etc/hosts file I had assigned the sparkmaster system a fixed IP address, 192.168.1.70, with additional fixed addresses for sparknode01 and sparknode02 at 192.168.1.71 and 192.168.1.72 respectively.
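For reference, the relevant /etc/hosts entries looked roughly like this (reconstructed from the addresses above):

    192.168.1.70    sparkmaster
    192.168.1.71    sparknode01
    192.168.1.72    sparknode02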
Due to some unrelated problems, I had to switch all of my network adapters to DHCP, so they received addresses like 192.168.90.123. Those DHCP addresses were not on the same subnet as the 192.168.1.x range, and no route between the two was configured.
When Spark starts, it apparently tries to bind to the host matching the system's hostname (in my case, sparkmaster). That name resolved to 192.168.1.70, but there was no way to connect to it, because that address was on an unreachable network.
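Here is a minimal sketch of what appears to happen, assuming Spark resolves its bind address through the JVM's standard hostname lookup (java.net.InetAddress.getLocalHost); this is my reading of the behavior, not Spark's exact code:

    import java.net.InetAddress

    object BindAddressCheck {
      def main(args: Array[String]): Unit = {
        // getLocalHost resolves this machine's own hostname
        // (e.g. "sparkmaster") via /etc/hosts or DNS.
        val local = InetAddress.getLocalHost
        println(s"hostname ${local.getHostName} resolves to ${local.getHostAddress}")
        // If the resolved address sits on an unreachable network,
        // anything that binds or connects to it will fail, which
        // matches the symptom described here.
        println(s"reachable within 2s: ${local.isReachable(2000)}")
      }
    }

Running this on a machine in the broken state should print the stale 192.168.1.70 address and report it as unreachable.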
My solution was to switch one of my Ethernet adapters back to the fixed static address (i.e. 192.168.1.70), and voilà, the problem was resolved.
In short: even when Spark starts in "local mode", it tries to connect to the host bearing your system's name, not to localhost. I think this makes sense if you intend to set up a cluster (as I did), but it can produce the confusing message above. Adding your system's hostname to the 127.0.0.1 entry in /etc/hosts might also solve this problem, but I have not tried it (see the sketch below).
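If you want to try that untested alternative, the /etc/hosts entry would look something like this (sparkmaster is the hostname from my setup; substitute your own):

    127.0.0.1    localhost sparkmaster

This makes the hostname resolve to the loopback address, so it stays reachable regardless of how the external adapters are configured.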