Unfortunately, boto3 and the EMR API are rather poorly documented. In the minimal case, a word-count example looks like this:
```python
import boto3

emr = boto3.client('emr')

resp = emr.run_job_flow(
    Name='myjob',
    ReleaseLabel='emr-5.0.0',
    Instances={
        'InstanceGroups': [
            {
                'Name': 'master',
                'InstanceRole': 'MASTER',
                'InstanceType': 'c1.medium',
                'InstanceCount': 1,
                'Configurations': [
                    {
                        'Classification': 'yarn-site',
                        'Properties': {
                            'yarn.nodemanager.vmem-check-enabled': 'false'
                        }
                    }
                ]
            },
            {
                'Name': 'core',
                'InstanceRole': 'CORE',
                'InstanceType': 'c1.medium',
                'InstanceCount': 1,
                'Configurations': [
                    {
                        'Classification': 'yarn-site',
                        'Properties': {
                            'yarn.nodemanager.vmem-check-enabled': 'false'
                        }
                    }
                ]
            },
        ]
    },
    Steps=[
        {
            'Name': 'My word count example',
            'HadoopJarStep': {
                'Jar': 'command-runner.jar',
                'Args': [
                    'hadoop-streaming',
                    '-files', 's3://mybucket/wordSplitter.py#wordSplitter.py',
                    '-mapper', 'python2.7 wordSplitter.py',
                    '-input', 's3://mybucket/input/',
                    '-output', 's3://mybucket/output/',
                    '-reducer', 'aggregate',
                ]
            }
        }
    ],
    JobFlowRole='EMR_EC2_DefaultRole',
    ServiceRole='EMR_DefaultRole',
)
```
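For reference, the step above expects `wordSplitter.py` to be a streaming mapper whose output the built-in `aggregate` reducer can sum. The original script isn't shown in the answer, but a minimal sketch of such a mapper could look like this (the `aggregate` reducer sums values for keys prefixed with `LongValueSum:`):

```python
#!/usr/bin/env python
# Minimal hadoop-streaming mapper for the built-in "aggregate" reducer.
# For every whitespace-separated word on stdin it emits one record of the
# form "LongValueSum:<word>\t1"; the aggregate reducer sums counts per key.
import sys


def map_words(lines):
    """Yield aggregate-reducer records for each word in the input lines."""
    for line in lines:
        for word in line.split():
            yield 'LongValueSum:%s\t1' % word


if __name__ == '__main__':
    for record in map_words(sys.stdin):
        print(record)
```

Run locally with `echo "hello world hello" | python wordSplitter.py` to check the mapper output before submitting the job.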
I don't remember how to do this with boto, but I had problems starting a simple streaming job without disabling vmem-check-enabled.
Also, if your script is located somewhere in S3, pass it with `-files` (appending `#filename` to the argument makes the downloaded file available as `filename` on the cluster nodes).
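As a sketch of that workflow (bucket and key names are placeholders, not from the original answer), you could upload the script with the boto3 S3 client and build the `-files` argument like this:

```python
def files_arg(bucket, key, alias):
    """Build a hadoop-streaming -files argument; the '#alias' suffix
    controls the local filename the file gets on the cluster nodes."""
    return 's3://%s/%s#%s' % (bucket, key, alias)


if __name__ == '__main__':
    # Upload the mapper script to S3 first (bucket/key are placeholders).
    import boto3
    s3 = boto3.client('s3')
    s3.upload_file('wordSplitter.py', 'mybucket', 'wordSplitter.py')
    print(files_arg('mybucket', 'wordSplitter.py', 'wordSplitter.py'))
```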
Taro Sato