Errors while working with Hadoop

haduser@user-laptop:/usr/local/hadoop$ bin/hadoop dfs -copyFromLocal /tmp/input /user/haduser/input
11/12/14 14:21:00 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s).
11/12/14 14:21:01 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s).
11/12/14 14:21:02 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 2 time(s).
11/12/14 14:21:03 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 3 time(s).
11/12/14 14:21:04 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 4 time(s).
11/12/14 14:21:05 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 5 time(s).
11/12/14 14:21:06 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 6 time(s).
11/12/14 14:21:07 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 7 time(s).
11/12/14 14:21:08 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 8 time(s).
11/12/14 14:21:09 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 9 time(s).
Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused

I get the above errors when I try to copy files from /tmp/input to /user/haduser/input, even though /etc/hosts contains an entry for localhost. When I run the jps command, the TaskTracker and the NameNode are not listed.

What could be the problem? Please help me with this.

+7
4 answers

I had similar problems. In fact, Hadoop was binding to IPv6. Then I added export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true to $HADOOP_HOME/conf/hadoop-env.sh.

Hadoop was binding to IPv6 even though I had disabled IPv6 on my system. As soon as I added that option to hadoop-env.sh, it started working fine.
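A minimal sketch of the change, assuming a Hadoop 1.x layout under /usr/local/hadoop as in the question (the restart step is my addition, not part of the original answer):

# make Hadoop's JVMs prefer IPv4 sockets instead of IPv6
$ echo 'export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true' >> /usr/local/hadoop/conf/hadoop-env.sh

# restart the daemons so the new option takes effect
$ /usr/local/hadoop/bin/stop-all.sh
$ /usr/local/hadoop/bin/start-all.sh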

Hope this helps someone.

+9

Try to ssh into your local system using the IP address, in this case:

$ ssh 127.0.0.1

Once you can ssh in successfully, run the command below to see the list of open ports:

~ $ lsof -i

Find the listening socket named localhost:<PORTNAME> (LISTEN).
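If the lsof output is long, a filter like the one below narrows it down (my addition, not part of the original answer; -P keeps numeric port numbers instead of service names):

$ lsof -i -P | grep java | grep LISTEN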

Copy this <PORTNAME> and replace the existing port number in the value of the fs.default.name property in the core-site.xml file in your Hadoop conf folder.
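For reference, the property in question looks roughly like this; the port 54310 below is taken from the log in the question, so use whatever port lsof actually reported on your machine:

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>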

Save the core-site.xml file; this should fix the problem.

+3

The NameNode (NN) maintains the namespace for HDFS and must be running for the file system to work. Check the logs to see why the NN is not running. The TaskTracker is not required for HDFS operations; the NN and the DataNode (DN) are enough. See the tutorials at http://goo.gl/8ogSk and http://goo.gl/NIWoK on how to configure Hadoop on a single node and on multiple nodes.
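A rough way to do that check, assuming a single-node Hadoop 1.x install in /usr/local/hadoop as in the question (the log file name depends on your user and hostname, so the one below is only an example):

$ cd /usr/local/hadoop
$ jps                                                        # NameNode should appear in this list
$ tail -n 50 logs/hadoop-haduser-namenode-user-laptop.log    # look for the reason the NN failed to start
$ bin/start-dfs.sh                                           # try starting the HDFS daemons again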

+1

All the files in bin are executables. Just copy the command and paste it into the terminal. Make sure the path is right, i.e. replace the user name in it with your own. That should do the trick.
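For example, with the setup from the question (Hadoop in /usr/local/hadoop, user haduser), the command would be run like this:

$ cd /usr/local/hadoop
$ bin/hadoop dfs -copyFromLocal /tmp/input /user/haduser/input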

+1
