Hadoop pseudo-distributed operation error: protocol message tag had invalid wire type

I am creating a Hadoop 2.6.0 single-node cluster. I am following the hadoop-common SingleCluster documentation. I am working on Ubuntu 14.04. So far, I have been able to perform the Standalone Operation successfully.

When I tried to perform the Pseudo-Distributed Operation, I encountered an error. I managed to start the NameNode daemon and the DataNode daemon. jps output:

martakarass@marta-komputer:/usr/local/hadoop$ jps
4963 SecondaryNameNode
4785 DataNode
8400 Jps
martakarass@marta-komputer:/usr/local/hadoop$ 

But when I try to create the HDFS directories needed to complete MapReduce jobs, I get the following error:

martakarass@marta-komputer:/usr/local/hadoop$ bin/hdfs dfs -mkdir /user
15/05/01 20:36:00 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
mkdir: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.; Host Details : local host is: "marta-komputer/127.0.0.1"; destination host is: "localhost":9000; 
martakarass@marta-komputer:/usr/local/hadoop$ 
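The exception names localhost:9000, which is the fs.defaultFS address from core-site.xml. A quick way to see whether anything is accepting connections there (a bash-only probe, nothing Hadoop-specific):

```shell
# Probe the NameNode RPC address from core-site.xml (localhost:9000).
# Uses bash's /dev/tcp pseudo-device, so no extra tools are needed.
if timeout 2 bash -c 'exec 3<>/dev/tcp/localhost/9000' 2>/dev/null; then
  echo "something is listening on localhost:9000"
else
  echo "nothing is listening on localhost:9000 (is the NameNode running?)"
fi
```

If nothing is listening, the client cannot even begin the RPC handshake; if some non-Hadoop process happens to hold that port, the client reads back bytes that are not a Hadoop RPC response, which is exactly the kind of thing that produces an "invalid wire type" protobuf error.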

(I believe that at this point I can ignore the warning WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform....)


My Hadoop configuration files are as follows.

../hadoop/core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

../hadoop/hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

I can connect to localhost via ssh:

martakarass@marta-komputer:~$ ssh localhost
martakarass@localhost password: 
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-45-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

Last login: Fri May  1 20:28:58 2015 from localhost
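Note that the transcript above shows a password prompt; the pseudo-distributed setup expects passwordless ssh to localhost. A sketch of the standard recipe from the single-node docs (the keygen step is skipped if a key already exists):

```shell
# Ensure an ssh key exists and is authorized for localhost logins, so the
# Hadoop start scripts can ssh in without prompting for a password.
mkdir -p ~/.ssh && chmod 0700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa -q
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
```

After this, `ssh localhost` should log in without asking for a password.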

I have formatted the NameNode:

martakarass@marta-komputer:/usr/local/hadoop$  bin/hdfs namenode -format
15/05/01 20:30:21 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = marta-komputer/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
(...)
15/05/01 20:30:24 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at marta-komputer/127.0.0.1
************************************************************/

My /etc/hosts file:

127.0.0.1       localhost
127.0.0.1       marta-komputer

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

My /etc/hostname file:

marta-komputer

I faced the same issue on Ubuntu, with Hadoop 2.7.1; the steps below fixed it for me (your paths and versions may differ).

1) Set up /etc/hosts:

    127.0.0.1    localhost   <computer-name>
    # 127.0.1.1    <computer-name>
    <ip-address>    <computer-name>

    # Rest of file with no changes

2) Edit the *.xml configuration files (each property block goes inside the <configuration> tags):

  • core-site.xml:

        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost/</value>
        </property>
        <!-- set value to a directory you want with an absolute path -->
        <property>
            <name>hadoop.tmp.dir</name>
            <value>"set/a/directory/on/your/machine/"</value>
            <description>A base for other temporary directories</description>
        </property>
    
  • hdfs-site.xml:

        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
    
  • yarn-site.xml:

        <property>
            <name>yarn.resourcemanager.hostname</name>
            <value>localhost</value>
        </property>
    
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
    
  • mapred-site.xml:

        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
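Once the files are edited, it is worth verifying that a property actually made it into the file Hadoop reads. A minimal grep check against a sample file (the path /tmp/core-site.sample.xml is just for illustration; in a real setup, point the grep at $HADOOP_CONF_DIR/core-site.xml):

```shell
# Write a sample core-site.xml and extract the value of fs.defaultFS from it.
cat > /tmp/core-site.sample.xml <<'EOF'
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost/</value>
    </property>
</configuration>
EOF
# grep -A1 grabs the <name> line plus the following <value> line.
grep -A1 '<name>fs.defaultFS</name>' /tmp/core-site.sample.xml \
  | grep -o '<value>[^<]*</value>'
# prints: <value>hdfs://localhost/</value>
```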
    

3) Check $HADOOP_CONF_DIR:

Make sure Hadoop is actually reading the configuration files you edited: check which directory holds your .xml files, and verify in the hadoop-env.sh script that $HADOOP_CONF_DIR points to it.

4) Check the ports:

On my machine the NameNode used ports 50070 and 8020, and the DataNode used ports 50010, 50020, 50075 and 43758. Run sudo lsof -i to make sure no other program is already using a given port before you assign it.
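A sketch of that check for the ports mentioned above (lsof flags as in stock Linux lsof; add 9000 or whatever address your fs.defaultFS uses):

```shell
# Report which of the usual HDFS ports are already taken and by which pid.
# -sTCP:LISTEN restricts to listening sockets, -t prints only the pid.
for port in 8020 9000 50010 50020 50070 50075; do
  pid=$(lsof -iTCP:"$port" -sTCP:LISTEN -t 2>/dev/null)
  if [ -n "$pid" ]; then
    echo "port $port in use by pid $pid"
  else
    echo "port $port free"
  fi
done
```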

5) Format Hadoop if necessary:

Whenever you change hadoop.tmp.dir, re-format the NameNode with hdfs namenode -format, first removing any files already present in the tmp directory you want to use (default /tmp/).
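A sketch of that cleanup, assuming the default hadoop.tmp.dir location under /tmp (substitute your own hadoop.tmp.dir value, and double-check the path before running an rm -rf):

```shell
# Remove stale HDFS state left over from a previous format. The path below is
# the default hadoop.tmp.dir pattern (/tmp/hadoop-<user>); adjust to your config.
rm -rf "/tmp/hadoop-$USER"/* 2>/dev/null || true
# Re-format the NameNode. The command is guarded so the snippet is harmless
# on a machine without Hadoop on the PATH.
if command -v hdfs >/dev/null 2>&1; then hdfs namenode -format; fi
```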

6) Start HDFS and YARN:

From the /sbin/ directory, run the start-dfs.sh script and then start-yarn.sh, and verify the result with jps:

    ./start-dfs.sh   
    ./start-yarn.sh

If everything went well, jps should now list NameNode, DataNode, NodeManager and ResourceManager, and you are ready to go!

If any of these daemons fails to start, look at its log file for hints about what went wrong.
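For example, to look at the NameNode log (the install path below matches the question's /usr/local/hadoop; your log directory may differ):

```shell
# Show the last lines of the NameNode log; the glob matches the default
# hadoop-<user>-namenode-<host>.log naming scheme.
tail -n 50 /usr/local/hadoop/logs/hadoop-*-namenode-*.log 2>/dev/null \
  || echo "no NameNode log found under /usr/local/hadoop/logs"
```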


Remove the line 127.0.0.1 localhost from /etc/hosts and change your core-site.xml as follows:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://marta-komputer:9000</value>
    </property>
</configuration>

Also, you can ignore the warning WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform....


Make the following changes in /etc/hosts:

1. Change the line:

127.0.0.1    localhost

to:

127.0.0.1    localhost    marta-komputer

2. Remove the line:

127.0.0.1    marta-komputer

3. Add the line:

your-system-ip    marta-komputer

To find your system IP, run:

ifconfig

or (to get just the IP address):

ifdata -pa eth0

Your final /etc/hosts file should look like this:

127.0.0.1       localhost       marta-komputer
your-system-ip       marta-komputer

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
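A small sanity check for that layout, run against a sample copy (marta-komputer and 192.168.1.15 are stand-ins; use your own hostname and IP, and point the greps at /etc/hosts itself once you are confident):

```shell
# Verify that the hostname has a non-loopback mapping and that no
# troublesome 127.0.1.1-style line remains.
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1       localhost       marta-komputer
192.168.1.15    marta-komputer
EOF
if grep -qE '^(192|10|172)\.' /tmp/hosts.sample \
   && ! grep -qE '^127\.0\.1\.1[[:space:]]' /tmp/hosts.sample; then
  echo "hosts layout OK"
else
  echo "hosts layout needs attention"
fi
```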

Then make the following change in core-site.xml (the file that holds fs.defaultFS):

1. Change:

hdfs://localhost:9000

to:

hdfs://marta-komputer:9000

Now stop Hadoop and start it again.

After that, jps should show:

NameNode
DataNode
NodeManager
SecondaryNameNode

Now run your Hadoop job again; it should work fine.

UPDATE:

If the error persists after the changes above, the steps below may help.

UPDATE II:

Create data directories for the NameNode and the DataNode, and give your Hadoop user (hduser in this example) ownership of them:

sudo mkdir -p /usr/local/hdfs/namenode

sudo mkdir -p /usr/local/hdfs/datanode

sudo chown -R hduser:hadoop /usr/local/hdfs/namenode

sudo chown -R hduser:hadoop /usr/local/hdfs/datanode

Add the corresponding properties to hdfs-site.xml (note the NameNode property is dfs.namenode.name.dir):

        <property>
            <name>dfs.datanode.data.dir</name>
            <value>/usr/local/hdfs/datanode</value>
        </property>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>/usr/local/hdfs/namenode</value>
        </property>

Then restart Hadoop.

This error often appears when the HDFS client and the server are built against different versions of the Java RPC protocol: for example, a Hadoop 1 jar somewhere on the client classpath talking to a Hadoop 2 cluster, or the other way around. Make sure the jars on the client side match the version the cluster is running.
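One way to spot such a mismatch is to list the Hadoop jars on the classpath and check that they all carry the same major version. A sketch with hard-coded jar names (the list deliberately mixes 1.x and 2.x; in a real setup, generate it with something like find $HADOOP_HOME -name 'hadoop-*.jar'):

```shell
# Extract major versions from a list of Hadoop jar names and warn when
# more than one major version is present on the classpath.
jars='hadoop-common-2.6.0.jar
hadoop-hdfs-2.6.0.jar
hadoop-core-1.2.1.jar'
majors=$(printf '%s\n' "$jars" | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | cut -d. -f1 | sort -u)
if [ "$(printf '%s\n' "$majors" | wc -l)" -gt 1 ]; then
  echo "warning: mixed Hadoop major versions on classpath: $(echo $majors)"
else
  echo "classpath versions consistent"
fi
```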
