If your text files begin and end with a unique sequence of characters, you can first combine them into one file with s3distcp (I did this by setting --targetSize to a very large number), then use sed in a Hadoop streaming job to add the new lines. In the following example, each file contains a single JSON object (file names begin with 0), and the sed command inserts a new line between each instance of }{:
hadoop fs -mkdir hdfs:///tmpoutputfolder/
hadoop fs -mkdir hdfs:///finaloutputfolder/
hadoop jar lib/emr-s3distcp-1.0.jar \
--src s3://inputfolder \
--dest hdfs:///tmpoutputfolder \
--targetSize 1000000000 \
--groupBy ".*(0).*"
hadoop jar /home/hadoop/contrib/streaming/hadoop-streaming.jar \
-D mapred.reduce.tasks=1 \
-input hdfs:///tmpoutputfolder \
-output hdfs:///finaloutputfolder \
-mapper /bin/cat \
-reducer '/bin/sed "s/}{/}\n{/g"'
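You can sanity-check the sed substitution locally before running the streaming job. A minimal sketch (assumes GNU sed, which interprets \n in the replacement; BSD sed on macOS does not), using made-up one-line JSON objects:

```shell
# Three concatenated JSON objects on one line, as s3distcp would leave them
# after grouping the files together (sample data, not from the job above).
# The same substitution used in the reducer splits them one per line.
printf '{"a":1}{"b":2}{"c":3}' | sed 's/}{/}\n{/g'
```

This prints each object on its own line, which is the record-per-line layout most downstream JSON readers expect.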