When I start hadoopnode1 using start-all.sh, it successfully starts the services on the master and the slave (see the output of the jps command for the slave below). But when I look at the live nodes in the admin screen, the slave node does not appear. Running hadoop fs -ls / from the master works fine, but running it from the slave displays an error:
```
hadoop@hadoopnode2:~/hadoop-0.20.2/conf$ hadoop fs -ls /
12/05/28 01:14:20 INFO ipc.Client: Retrying connect to server: hadoopnode1/192.168.1.120:8020. Already tried 0 time(s).
12/05/28 01:14:21 INFO ipc.Client: Retrying connect to server: hadoopnode1/192.168.1.120:8020. Already tried 1 time(s).
12/05/28 01:14:22 INFO ipc.Client: Retrying connect to server: hadoopnode1/192.168.1.120:8020. Already tried 2 time(s).
12/05/28 01:14:23 INFO ipc.Client: Retrying connect to server: hadoopnode1/192.168.1.120:8020. Already tried 3 time(s).
. . .
12/05/28 01:14:29 INFO ipc.Client: Retrying connect to server: hadoopnode1/192.168.1.120:8020. Already tried 10 time(s).
```
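Those retries suggest the slave cannot even open a TCP connection to port 8020 on the master. A minimal sketch for testing reachability from the slave (my own helper, not part of the original setup; assumes Python is available on the node):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run on the slave to check whether the NameNode RPC port is reachable:
# port_open("hadoopnode1", 8020)
```

If this returns False from the slave but True from the master itself, the problem is network-level (firewall, or the NameNode bound to the wrong interface) rather than a Hadoop configuration mismatch.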
It looks like the slave (hadoopnode2) cannot find or connect to the master node (hadoopnode1). Please tell me what I am missing.
The following are the settings from the master and slave nodes. P.S. The master and slave run the same versions of Linux and Hadoop, and SSH works fine, because I can start the slave from the master node. The settings in core-site.xml, hdfs-site.xml and mapred-site.xml are also identical on the master (hadoopnode1) and the slave (hadoopnode2).
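Since the configs match, one thing worth ruling out is name resolution: on Ubuntu, an /etc/hosts line that maps the master's hostname to 127.0.0.1 or 127.0.1.1 makes the NameNode bind to loopback, so it is reachable locally but invisible to slaves. A hypothetical check (the function and its logic are my own illustration, not from the post):

```python
def loopback_mapped(hosts_text, hostname):
    """Return True if any /etc/hosts line maps hostname to a loopback address."""
    for line in hosts_text.splitlines():
        fields = line.split("#")[0].split()  # strip comments, tokenize
        if fields and fields[0] in ("127.0.0.1", "127.0.1.1") and hostname in fields[1:]:
            return True
    return False

# The Ubuntu-default style of entry below would hide the NameNode from slaves:
sample = "127.0.0.1 localhost\n127.0.1.1 hadoopnode1\n192.168.1.120 hadoopnode1"
print(loopback_mapped(sample, "hadoopnode1"))  # True
```

On the real cluster you would feed it `open("/etc/hosts").read()` on hadoopnode1 and, if it reports True, remove or correct the loopback entry so the hostname resolves to 192.168.1.120.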
OS - Ubuntu 10

Hadoop version:

```
hadoop@hadoopnode1:~/hadoop-0.20.2/conf$ hadoop version
Hadoop 0.20.2
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707
Compiled by chrisdo on Fri Feb 19 08:07:34 UTC 2010
```
- Master (hadoopnode1)
```
hadoop@hadoopnode1:~/hadoop-0.20.2/conf$ uname -a
Linux hadoopnode1 2.6.35-32-generic
```
- Slave (hadoopnode2)
```
hadoop@hadoopnode2:~/hadoop-0.20.2/conf$ uname -a
Linux hadoopnode2 2.6.35-32-generic #67-Ubuntu SMP Mon Mar 5 19:35:26 UTC 2012 i686 GNU/Linux

hadoop@hadoopnode2:~/hadoop-0.20.2/conf$ jps
1959 DataNode
2631 Jps
2108 TaskTracker
```

masters:

```
hadoopnode1
```

core-site.xml:

```
hadoop@hadoopnode2:~/hadoop-0.20.2/conf$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/tmp/hadoop/hadoop-${user.name}</value>
    <description>A base for other temp directories</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoopnode1:8020</value>
    <description>The name of the default file system</description>
  </property>
</configuration>
```

mapred-site.xml:

```
hadoop@hadoopnode2:~/hadoop-0.20.2/conf$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hadoopnode1:8021</value>
    <description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map</description>
  </property>
</configuration>
```

hdfs-site.xml:

```
hadoop@hadoopnode2:~/hadoop-0.20.2/conf$ cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
    <description>Default block replication</description>
  </property>
</configuration>
```
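Both nodes must agree on fs.default.name, so one way to double-check beyond eyeballing the files is to read the value straight out of core-site.xml on each node. A sketch using Python's standard XML parser (my own helper, not a Hadoop API):

```python
import xml.etree.ElementTree as ET

def get_prop(conf_path, name):
    """Return the value of a named <property> from a Hadoop *-site.xml file,
    or None if the property is absent."""
    root = ET.parse(conf_path).getroot()
    for prop in root.findall("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

# Run on both master and slave; the returned values must be identical:
# get_prop("core-site.xml", "fs.default.name")  # e.g. "hdfs://hadoopnode1:8020"
```

If the values match but the slave still cannot connect, the problem is almost certainly at the network layer (firewall rules, or the NameNode listening on the wrong interface) rather than in these files.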