Unable to initialize cluster. Check your configuration for mapreduce.framework.name and the corresponding server addresses - submitting a job to a remote cluster

I recently upgraded my cluster from Apache Hadoop 1.0 to CDH 4.4.0. I have a web server on another machine, from which I submit jobs to this remote cluster through the MapReduce client. I still want to use MR1, not YARN. I compiled my client code against the client jars in the CDH installation (/usr/lib/hadoop/client/*).

I get the error below when creating an instance of JobClient. There are many posts about this same error, but all the solutions concern submitting a job to a local cluster, not a remote one, and in particular not from a WLS (WebLogic Server) container as in my case.

JobClient jc = new JobClient(conf);

Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.

Running the same thing from the command line on the cluster itself works fine.
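For reference, this is roughly how the client is set up (a minimal sketch; the host names and ports are placeholders, not my actual values):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class RemoteJobSubmitter {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Point the client at the remote MR1 cluster
            // ("nn-host" and "jt-host" are placeholder host names).
            conf.set("fs.default.name", "hdfs://nn-host:8020");
            conf.set("mapred.job.tracker", "jt-host:8021");

            // This is the call that throws the exception quoted above.
            JobClient jc = new JobClient(new JobConf(conf));
        }
    }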

Appreciate your timely help!

+9
mapreduce hadoop cloudera
7 answers

Your application is probably picking up your old Hadoop 1.x configuration files. Does your application perhaps hard-code some configuration? This error usually indicates that you are using the new client libraries, but that they are not seeing the new-style configuration.

The new configuration must exist somewhere, since the command-line tools evidently see it and work. Check your HADOOP_HOME and HADOOP_CONF_DIR environment variables, although those are exactly the sort of thing the command-line tools pick up, and they do work.
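One quick way to see which configuration files the client actually loaded (a debugging sketch, not from the original question):

    // Configuration.toString() lists the resources it was loaded from,
    // e.g. "Configuration: core-default.xml, core-site.xml, ...".
    System.out.println(conf.toString());
    // Check the keys named in the error message.
    System.out.println("mapreduce.framework.name = " + conf.get("mapreduce.framework.name"));
    System.out.println("mapred.job.tracker = " + conf.get("mapred.job.tracker"));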

Note that in CDH 4.4 you need to install the mapreduce service, not yarn, to remain compatible with MR1 clients. See also the "...-mr1-..." artifacts in Maven.
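For example, the MR1 flavour of the client dependency looks roughly like this in a pom.xml (the version string is my best guess for CDH 4.4.0; check Cloudera's repository for the exact one):

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.0.0-mr1-cdh4.4.0</version>
    </dependency>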

+4

I had a similar error, and adding the following jars to the classpath fixed it for me: hadoop-mapreduce-client-jobclient-2.2.0.2.0.6.0-76.jar, hadoop-mapreduce-client-shuffle-2.3.0.jar, hadoop-mapreduce-client-common-2.3.0.jar.
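For instance, along these lines (the paths are illustrative; point them at wherever the jars live in your installation):

    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/path/to/hadoop-mapreduce-client-jobclient-2.2.0.2.0.6.0-76.jar:/path/to/hadoop-mapreduce-client-shuffle-2.3.0.jar:/path/to/hadoop-mapreduce-client-common-2.3.0.jar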

+17

In my case, this error occurred because of the jar versions; make sure the client jars are the same version as the ones on the server.
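A simple way to compare the two is the standard Hadoop CLI; run this on both the client machine and a cluster node and check that the outputs match:

    hadoop version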

+3

Pointing HADOOP_MAPRED_HOME at the MR1 directory of the CDH parcel fixed it for me:

export HADOOP_MAPRED_HOME=/cloudera/parcels/CDH-4.1.3-1.cdh4.1.3.p0.23/lib/hadoop-0.20-mapreduce

+2

In my case, I was running Sqoop 1.4.5 and had pointed it at the latest Hadoop, 2.0.0-cdh4.4.0, which contains the YARN libraries, so it complained.

When I pointed Sqoop at hadoop-0.20/2.0.0-cdh4.4.0 (the MR1 build), it worked.
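If it helps, the change amounted to something like this before invoking Sqoop (the path is from my setup and yours will differ; Sqoop also accepts a --hadoop-mapred-home option):

    export HADOOP_MAPRED_HOME=/path/to/hadoop-0.20/2.0.0-cdh4.4.0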

+2

In my case, strangely enough, the error was caused by the fact that my core-site.xml referred to the NameNode by IP address rather than by host name. As soon as I used the host name instead of the IP address in both core-site.xml and mapred-site.xml and redeployed the library files, the error was resolved.
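In other words, something like this in core-site.xml (the host name and port are placeholders):

    <property>
      <name>fs.default.name</name>
      <!-- use the host name here, not the IP address -->
      <value>hdfs://namenode-host:8020</value>
    </property>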

+1

As Akshay said (per Setob_b's comment), all I needed to fix this was to get hadoop-mapreduce-client-shuffle-*.jar on my classpath.

For Maven, the following dependency works:

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-mapreduce-client-shuffle</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
0
