Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected

I have not yet found a solution to my specific problem; at least nothing I've tried works, and it's driving me crazy. This particular combination doesn't turn up much on Google. As far as I can tell, the error occurs while the mapper task is running. The input to this job is Avro output (with a schema), compressed with deflate; I tried other compression options as well.

Avro: 1.7.7 Hadoop: 2.4.1

I am getting this error and I am not sure why. Below are my job, mapper, and reducer. The error occurs when the mapper runs.

Sample uncompressed Avro input record (StockReport.SCHEMA$ is defined to match this structure):

{"day": 3, "month": 2, "year": 1986, "stocks": [{"symbol": "AAME", "timestamp": 507833213000, "dividend": 10.59}]} 

Job:

    @Override
    public int run(String[] strings) throws Exception {
        Job job = Job.getInstance();
        job.setJobName("GenerateGraphsJob");
        job.setJarByClass(GenerateGraphsJob.class);
        configureJob(job);
        int resultCode = job.waitForCompletion(true) ? 0 : 1;
        return resultCode;
    }

    private void configureJob(Job job) throws IOException {
        try {
            Configuration config = getConf();
            Path inputPath = ConfigHelper.getChartInputPath(config);
            Path outputPath = ConfigHelper.getChartOutputPath(config);

            job.setInputFormatClass(AvroKeyInputFormat.class);
            AvroKeyInputFormat.addInputPath(job, inputPath);
            AvroJob.setInputKeySchema(job, StockReport.SCHEMA$);

            job.setMapperClass(StockAverageMapper.class);
            job.setCombinerClass(StockAverageCombiner.class);
            job.setReducerClass(StockAverageReducer.class);

            FileOutputFormat.setOutputPath(job, outputPath);
        } catch (IOException | ClassCastException e) {
            LOG.error("A job error has occurred.", e);
        }
    }

Mapper:

    public class StockAverageMapper extends
            Mapper<AvroKey<StockReport>, NullWritable, StockYearSymbolKey, StockReport> {

        private static Logger LOG = LoggerFactory.getLogger(StockAverageMapper.class);

        private final StockReport stockReport = new StockReport();
        private final StockYearSymbolKey stockKey = new StockYearSymbolKey();

        @Override
        protected void map(AvroKey<StockReport> inKey, NullWritable ignore, Context context)
                throws IOException, InterruptedException {
            try {
                StockReport inKeyDatum = inKey.datum();
                for (Stock stock : inKeyDatum.getStocks()) {
                    updateKey(inKeyDatum, stock);
                    updateValue(inKeyDatum, stock);
                    context.write(stockKey, stockReport);
                }
            } catch (Exception ex) {
                LOG.debug(ex.toString());
            }
        }
        // updateKey/updateValue helpers and the closing class brace omitted here

Schema for the map output key:

    {
      "namespace": "avro.model",
      "type": "record",
      "name": "StockYearSymbolKey",
      "fields": [
        { "name": "year",   "type": "int" },
        { "name": "symbol", "type": "string" }
      ]
    }

Stack trace:

    java.lang.Exception: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
        at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
    Caused by: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
        at org.apache.avro.mapreduce.AvroKeyInputFormat.createRecordReader(AvroKeyInputFormat.java:47)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:492)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:735)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
        at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

Edit: Not that it should matter, but the goal is to reduce this into data from which I can generate JFreeChart output. Since the job never gets past the mapper, that later code shouldn't be related.

2 answers

The problem is that org.apache.hadoop.mapreduce.TaskAttemptContext was a class in Hadoop 1 but became an interface in Hadoop 2.
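That class-to-interface change is exactly what the JVM reports as IncompatibleClassChangeError: bytecode compiled against the Hadoop 1 class uses call instructions that are invalid once the type is an interface. A small reflection probe can tell you which flavor a given classpath carries (this is a diagnostic sketch, not part of the original answer; the class name HadoopVersionProbe is made up):

```java
// Diagnostic sketch (hypothetical helper): reports whether a named type is
// loaded as an interface. On Hadoop 2, TaskAttemptContext is an interface;
// on Hadoop 1 it is a class.
public class HadoopVersionProbe {
    static boolean isInterface(String className) {
        try {
            return Class.forName(className).isInterface();
        } catch (ClassNotFoundException e) {
            throw new IllegalArgumentException("Not on the classpath: " + className, e);
        }
    }

    public static void main(String[] args) {
        // Run with your job's classpath; "true" means the Hadoop 2 API.
        System.out.println("TaskAttemptContext is an interface: "
                + isInterface("org.apache.hadoop.mapreduce.TaskAttemptContext"));
    }
}
```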

This is one of the reasons libraries that depend on Hadoop need separately compiled jars for Hadoop 1 and Hadoop 2. Based on your stack trace, it looks like you have the Hadoop 1-compiled Avro jar on your classpath, even though you are running Hadoop 2.4.1.
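To confirm which avro-mapred jar is actually being picked up at runtime, you can ask the JVM where a class was loaded from. This is a generic diagnostic sketch (the class name ClasspathCheck is invented for illustration):

```java
// Diagnostic sketch (hypothetical helper): prints the jar or directory a
// class was loaded from, so you can see whether the hadoop1 or hadoop2
// build of avro-mapred is on the classpath.
public class ClasspathCheck {
    static String locationOf(String className) {
        try {
            java.security.CodeSource src =
                    Class.forName(className).getProtectionDomain().getCodeSource();
            // Bootstrap-loaded classes (e.g. java.lang.String) have no CodeSource.
            return src != null ? src.getLocation().toString() : "bootstrap classpath";
        } catch (ClassNotFoundException e) {
            return "not found: " + className;
        }
    }

    public static void main(String[] args) {
        // e.g. pass org.apache.avro.mapreduce.AvroKeyInputFormat on the job classpath
        for (String name : args) {
            System.out.println(name + " -> " + locationOf(name));
        }
    }
}
```

Running it with org.apache.avro.mapreduce.AvroKeyInputFormat as the argument should print a path ending in the hadoop2-classified jar once the classpath is fixed.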

The Avro download mirrors provide separate artifacts for exactly this purpose: avro-mapred-1.7.7-hadoop1.jar vs. avro-mapred-1.7.7-hadoop2.jar.


The problem is that Avro 1.7.7 supports both major versions of Hadoop and therefore publishes artifacts for each, and by default the Avro 1.7.7 jars depend on the old Hadoop version. To build against Avro 1.7.7 with Hadoop 2, just add a classifier line to the Maven dependency:

    <dependency>
      <groupId>org.apache.avro</groupId>
      <artifactId>avro-mapred</artifactId>
      <version>1.7.7</version>
      <classifier>hadoop2</classifier>
    </dependency>

This tells Maven to fetch avro-mapred-1.7.7-hadoop2.jar instead of avro-mapred-1.7.7.jar.

The same applies to Avro 1.7.4 and later.

