Failed to start CDH4 secondary namenode: Invalid URI for NameNode address

I am working through a Hadoop CDH4 installation. I have 12 machines, hadoop01 - hadoop12, and the namenode, job tracker, and all data nodes started normally. I can view dfshealth.jsp and see that it has found all the data nodes.

However, when I try to start the secondary namenode, it throws an exception:

 Starting Hadoop secondarynamenode: [ OK ]
 starting secondarynamenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-secondarynamenode-hadoop02.dev.terapeak.com.out
 Exception in thread "main" java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/// has no authority.
     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:324)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:312)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:305)
     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:222)
     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:186)
     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:578)

This is my hdfs-site.xml file on the secondary namenode:

 <configuration>
   <property>
     <name>dfs.name.dir</name>
     <value>/data/1/dfs/nn</value>
   </property>
   <property>
     <name>dfs.namenode.http-address</name>
     <value>10.100.20.168:50070</value>
     <description>
       The address and the base port on which the dfs NameNode Web UI will listen.
       If the port is 0, the server will start on a free port.
     </description>
   </property>
   <property>
     <name>dfs.namenode.checkpoint.check.period</name>
     <value>3600</value>
   </property>
   <property>
     <name>dfs.namenode.checkpoint.txns</name>
     <value>40000</value>
   </property>
   <property>
     <name>dfs.namenode.checkpoint.dir</name>
     <value>/var/lib/hadoop-hdfs/cache</value>
   </property>
   <property>
     <name>dfs.namenode.checkpoint.edits.dir</name>
     <value>/var/lib/hadoop-hdfs/cache</value>
   </property>
   <property>
     <name>dfs.namenode.num.checkpoints.retained</name>
     <value>1</value>
   </property>
   <property>
     <name>mapreduce.jobtracker.restart.recover</name>
     <value>true</value>
   </property>
 </configuration>

It seems that something is wrong with the value given for dfs.namenode.http-address, but I'm not sure what. Should it start with http:// or hdfs://? I tried opening 10.100.20.168:50070 in lynx and it displayed the page. Any ideas?

2 answers

It turned out I was missing the core-site.xml configuration on the secondary namenode. Once I added it, the process started correctly.

core-site.xml:

 <configuration>
   <property>
     <name>fs.defaultFS</name>
     <value>hdfs://10.100.20.168/</value>
   </property>
 </configuration>
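
A quick way to confirm the secondary namenode host is actually picking this up (a sketch, assuming the CDH4 hdfs command is on the PATH and the active config directory is the one you edited):

 # Print the effective fs.defaultFS as seen by the Hadoop client on this host.
 # It should report hdfs://10.100.20.168/ rather than the default file:///,
 # which is what produces the "has no authority" exception above.
 hdfs getconf -confKey fs.defaultFS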

If you are using a single-node cluster, make sure you set the HADOOP_PREFIX variable correctly, as described here: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html

I encountered the same problem, and it was fixed by setting this variable.
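
A minimal sketch of what that looks like; the install path below is an assumption, so adjust it to wherever your Hadoop distribution is actually unpacked:

 # Point HADOOP_PREFIX at the Hadoop installation root, e.g. in ~/.bashrc or hadoop-env.sh
 export HADOOP_PREFIX=/usr/local/hadoop
 # Optionally put the Hadoop scripts on the PATH as well
 export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin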

