Spark gives "Service 'Driver' failed to bind to port" error

I used the following command to run the Java word count Spark example:

time spark-submit --deploy-mode cluster --master spark://192.168.0.7:6066 --class org.apache.spark.examples.JavaWordCount /home/pi/Desktop/example/new/target/javaword.jar /books_50.txt 

When I run it, I get the following output:

    Running Spark using the REST application submission protocol.
    16/07/18 03:55:41 INFO rest.RestSubmissionClient: Submitting a request to launch an application in spark://192.168.0.7:6066.
    16/07/18 03:55:44 INFO rest.RestSubmissionClient: Submission successfully created as driver-20160718035543-0000. Polling submission state...
    16/07/18 03:55:44 INFO rest.RestSubmissionClient: Submitting a request for the status of submission driver-20160718035543-0000 in spark://192.168.0.7:6066.
    16/07/18 03:55:44 INFO rest.RestSubmissionClient: State of driver driver-20160718035543-0000 is now RUNNING.
    16/07/18 03:55:44 INFO rest.RestSubmissionClient: Driver is running on worker worker-20160718041005-192.168.0.12-42405 at 192.168.0.12:42405.
    16/07/18 03:55:44 INFO rest.RestSubmissionClient: Server responded with CreateSubmissionResponse:
    {
      "action" : "CreateSubmissionResponse",
      "message" : "Driver successfully submitted as driver-20160718035543-0000",
      "serverSparkVersion" : "1.6.2",
      "submissionId" : "driver-20160718035543-0000",
      "success" : true
    }

I checked the particular worker (192.168.0.12) for its log, and it says:

    Launch Command: "/usr/lib/jvm/jdk-8-oracle-arm32-vfp-hflt/jre/bin/java" "-cp" "/opt/spark/conf/:/opt/spark/lib/spark-assembly-1.6.2-hadoop2.6.0.jar:/opt/spark/lib/datanucleus-api-jdo-3.2.6.jar:/opt/spark/lib/datanucleus-core-3.2.10.jar:/opt/spark/lib/datanucleus-rdbms-3.2.9.jar" "-Xms1024M" "-Xmx1024M" "-Dspark.driver.supervise=false" "-Dspark.app.name=org.apache.spark.examples.JavaWordCount" "-Dspark.submit.deployMode=cluster" "-Dspark.jars=file:/home/pi/Desktop/example/new/target/javaword.jar" "-Dspark.master=spark://192.168.0.7:7077" "-Dspark.executor.memory=10M" "org.apache.spark.deploy.worker.DriverWrapper" "spark://Worker@192.168.0.12:42405" "/opt/spark/work/driver-20160718035543-0000/javaword.jar" "org.apache.spark.examples.JavaWordCount" "/books_50.txt"
    ========================================
    log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
    log4j:WARN Please initialize the log4j system properly.
    log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
    Using Spark default log4j profile: org/apache/spark/log4j-defaults.properties
    16/07/18 04:10:58 INFO SecurityManager: Changing view acls to: pi
    16/07/18 04:10:58 INFO SecurityManager: Changing modify acls to: pi
    16/07/18 04:10:58 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(pi); users with modify permissions: Set(pi)
    16/07/18 04:11:00 WARN Utils: Service 'Driver' could not bind on port 0. Attempting port 1.
    (the same warning is repeated 16 times)
    Exception in thread "main" java.net.BindException: Cannot assign requested address: Service 'Driver' failed after 16 retries! Consider explicitly setting the appropriate port for the service 'Driver' (for example spark.ui.port for SparkUI) to an available port or increasing spark.port.maxRetries.
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
        at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:485)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1089)
        at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
        at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
        at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
        at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:198)
        at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:348)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
        at java.lang.Thread.run(Thread.java:745)

The spark-env.sh file (for the master) contains:

    export SPARK_MASTER_WEBUI_PORT="8080"
    export SPARK_MASTER_IP="192.168.0.7"
    export SPARK_EXECUTOR_MEMORY="10M"

The spark-env.sh file (for the worker) contains:

    export SPARK_WORKER_WEBUI_PORT="8080"
    export SPARK_MASTER_IP="192.168.0.7"
    export SPARK_EXECUTOR_MEMORY="10M"

Please, help...!!

+6
8 answers

I had the same problem when trying to start the Spark shell, and I was able to get it working by setting the SPARK_LOCAL_IP environment variable. You can assign it on the command line when starting the shell:

SPARK_LOCAL_IP=127.0.0.1 ./bin/spark-shell

For a more permanent solution, create the spark-env.sh file in the conf directory of your Spark root. Add the following line:

SPARK_LOCAL_IP=127.0.0.1

Make the script executable with chmod +x ./conf/spark-env.sh, and the environment variable will then be set by default.
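As a rough sketch, assuming a Spark install under /opt/spark like the one in the question's logs (adjust the paths for your own layout), the two variants could look like this:

    # One-off: set the variable just for this shell session
    SPARK_LOCAL_IP=127.0.0.1 /opt/spark/bin/spark-shell

    # Permanent: add the line to conf/spark-env.sh and make the script executable
    echo 'SPARK_LOCAL_IP=127.0.0.1' >> /opt/spark/conf/spark-env.sh
    chmod +x /opt/spark/conf/spark-env.sh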

+16

I use Maven/SBT for dependency management, and Spark core is contained in a jar file.

You can override SPARK_LOCAL_IP at run time by setting "spark.driver.bindAddress" (here in Scala):

    val config = new SparkConf()
    config.setMaster("local[*]")
    config.setAppName("Test App")
    config.set("spark.driver.bindAddress", "127.0.0.1")
    val sc = new SparkContext(config)
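If you would rather not change code, the same property can also be passed to spark-submit with --conf. Note that, as far as I know, spark.driver.bindAddress was only added in Spark 2.1, so this hypothetical invocation (reusing the jar and input file from the question) would not help on the 1.6.2 install shown above:

    spark-submit \
      --master local[*] \
      --conf spark.driver.bindAddress=127.0.0.1 \
      --class org.apache.spark.examples.JavaWordCount \
      /home/pi/Desktop/example/new/target/javaword.jar /books_50.txt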
+9

I also had this problem.

The reason (in my case) was that my local system's IP address was not reachable from my local system. I know that statement does not make sense, but please read on.

My system's hostname (shown by hostname or uname -n) is sparkmaster. In my /etc/hosts file, I had assigned the fixed IP address 192.168.1.70 to the sparkmaster system. There were additional fixed IP addresses for sparknode01 and sparknode02 at ...1.71 and ...1.72 respectively.

Due to some other problems I had, I changed all of my network adapters to DHCP. This meant they received addresses such as 192.168.90.123. The DHCP addresses were not on the same network as the ...1.70 range, and no route was configured.

When Spark starts, it appears to try to connect to the host named by your machine's hostname (in my case, sparkmaster). That name resolved to 192.168.1.70, but there was no way to connect to it, because that address was on an unreachable network.

My solution was to change one of my Ethernet adapters back to a fixed static address (i.e. 192.168.1.70) and, voila, the problem was resolved.

So the problem is that when Spark starts in "local mode", it tries to connect to the host matching your system's hostname (rather than to localhost). I think this makes sense if you want to set up a cluster (as I did), but it can lead to the confusing message above. Putting your system's hostname on the 127.0.0.1 entry in /etc/hosts may also solve this problem, but I have not tried it. A quick way to check whether this applies to you is sketched below.
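A quick check, assuming a Linux machine with the usual tools installed, is to see what your hostname resolves to and whether that address is actually configured on a local interface:

    # What is the hostname, and which address does it resolve to?
    hostname
    getent hosts "$(hostname)"

    # Which addresses are actually configured on the local interfaces?
    ip addr show | grep 'inet '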

+2

You need to add your hostname to your /etc/hosts file. Something like:

    127.0.0.1    localhost    <your-hostname>
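For example, on the sparkmaster machine from the earlier answer the entry would look like this (the name is only an illustration; use your own hostname):

    127.0.0.1    localhost    sparkmaster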
+1

This is possibly a duplicate of Spark 1.2.1 standalone cluster spark-submit is not working.

I tried the same steps and was able to run the job. Please post your complete spark-env.sh and spark-defaults.conf, if possible.

0

I had this problem too, and it was because the real IP address had been replaced with a different IP in /etc/hosts.

0

This issue is related only to the IP address. The error messages in the log file are not informative. Check the following three steps:

  1. Check your IP address - you can check it with the ifconfig or ip command. If your service is not a public service, an IP address in the 192.168.x.x range should be good enough. 127.0.0.1 cannot be used if you are planning a cluster.

  2. Check your SPARK_MASTER_HOST environment variable - look for typos in the variable name or in the actual IP address:

    env | grep SPARK_

  3. Verify with the netstat command that the port you plan to use for the Spark master is free. Do not use a port below 1024. For example:

    netstat -an | grep 9123

After starting your Spark master, if you cannot see the web UI from another computer, open the web UI port using the iptables command. A combined sketch of these checks is shown below.
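Putting the three checks together, roughly (port 9123 is only an example, and the iptables rule assumes the master web UI really is on 8080):

    # 1. Which addresses does this machine actually have?
    ip addr show | grep 'inet '

    # 2. Are the SPARK_* variables spelled and set correctly?
    env | grep SPARK_

    # 3. Is the port you picked for the Spark master still free?
    netstat -an | grep 9123

    # Open the web UI port (8080 here) if it is blocked for other machines
    iptables -A INPUT -p tcp --dport 8080 -j ACCEPT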

0

I solved this problem by changing the slaves file; it is spark-2.4.0-bin-hadoop2.7/conf/slaves. Please check your settings.
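For reference, conf/slaves is just a list of worker hosts, one per line. Using the worker address from the question purely as an illustration, it might look like:

    # spark-2.4.0-bin-hadoop2.7/conf/slaves - one worker host per line (addresses are examples)
    192.168.0.12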

-2
