How to read an ORC file with Hadoop Streaming?

I would like to read ORC files in a Python MapReduce job. I run it like this:

    hadoop jar /usr/lib/hadoop/lib/hadoop-streaming-2.6.0.2.2.6.0-2800.jar \
        -file /hdfs/price/mymapper.py \
        -mapper '/usr/local/anaconda/bin/python mymapper.py' \
        -file /hdfs/price/myreducer.py \
        -reducer '/usr/local/anaconda/bin/python myreducer.py' \
        -input /user/hive/orcfiles/* \
        -libjars /usr/hdp/2.2.6.0-2800/hive/lib/hive-exec.jar \
        -inputformat org.apache.hadoop.hive.ql.io.orc.OrcInputFormat \
        -numReduceTasks 1 \
        -output /user/hive/output

But I get the error:

-inputformat : class not found : org.apache.hadoop.hive.ql.io.orc.OrcInputFormat

I found a similar question, "OrcNewInputformat as the input format for hadoop streaming", but the answer there is not clear.

Please give me an example of how to correctly read ORC files with Hadoop Streaming.

1 answer

Here is one example in which I use a partitioned Hive ORC table as input:

    hadoop jar /usr/hdp/2.2.4.12-1/hadoop-mapreduce/hadoop-streaming-2.6.0.2.2.4.12-1.jar \
        -libjars /usr/hdp/current/hive-client/lib/hive-exec.jar \
        -Dmapreduce.task.timeout=0 -Dmapred.reduce.tasks=1 \
        -Dmapreduce.job.queuename=default \
        -file RStreamMapper.R -file RStreamReducer2.R \
        -mapper "Rscript RStreamMapper.R" -reducer "Rscript RStreamReducer2.R" \
        -input /hive/warehouse/asv.db/rtd_430304_fnl2 \
        -output /user/Abhi/MRExample/Output \
        -inputformat org.apache.hadoop.hive.ql.io.orc.OrcInputFormat \
        -outputformat org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat

Here /apps/hive/warehouse/asv.db/rtd_430304_fnl2 is the path to the backing ORC data store for the Hive table. Beyond that, I only needed to supply the appropriate jars for streaming as well as Hive. Note that the generic options (-libjars, -D...) must come *before* the streaming-specific options (-file, -mapper, -inputformat, ...); in your command -libjars appears after them, so hive-exec.jar is never picked up, which is likely why OrcInputFormat is reported as class not found.
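On the Python side, Hadoop Streaming hands the mapper each ORC row as the string form of the value object, which for OrcStruct typically looks like `{val1, val2, ...}`, one row per line on stdin. Below is a minimal sketch of what mymapper.py could do with that; the parsing is naive (it does not handle nested structs, maps, or commas inside strings), and the choice of first column as key and second as value is just an illustrative assumption:

```python
import sys

def parse_orc_row(line):
    """Parse the string form of an OrcStruct row, e.g. "{123, 9.99, widget}",
    into a list of field strings. Nested structs/maps are NOT handled."""
    line = line.strip()
    if line.startswith("{") and line.endswith("}"):
        line = line[1:-1]
    return [field.strip() for field in line.split(",")]

def run_mapper(instream=sys.stdin, outstream=sys.stdout):
    # Emit "key<TAB>value" lines, the format Hadoop Streaming expects
    # from a mapper. Here (assumption): first column as key, second as value.
    for line in instream:
        fields = parse_orc_row(line)
        if len(fields) >= 2:
            outstream.write(fields[0] + "\t" + fields[1] + "\n")
```

In an actual mymapper.py you would call `run_mapper()` at the bottom of the script so it consumes stdin when the streaming job launches it.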
