I have a public Amazon S3 resource (a text file) that I want to access from Spark. The file is public, so I don't have any AWS credentials, and none are needed if I just want to download it:
val bucket = "<my-bucket>"
val key = "<my-key>"
val client = new AmazonS3Client
val o = client.getObject(bucket, key)
val content = o.getObjectContent // <= can be read and used as input stream
However, when I try to access the same resource from a Spark context,
val conf = new SparkConf().setAppName("app").setMaster("local")
val sc = new SparkContext(conf)
val f = sc.textFile(s"s3a://$bucket/$key")
println(f.count())
I get the following error with a stack trace:
Exception in thread "main" com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain
    at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3521)
    at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
    at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:221)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1781)
    at org.apache.spark.rdd.RDD.count(RDD.scala:1099)
    at com.example.Main$.main(Main.scala:14)
    at com.example.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
I do not want to provide any AWS credentials - I just want to access the resource anonymously (at least for now). How can I do that? I probably need something like AnonymousAWSCredentialsProvider, but how do I plug it into Spark or Hadoop?
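For comparison, this is roughly what explicit anonymous access looks like with the plain AWS SDK (a sketch only; the class and constructor are from aws-java-sdk, not something I have managed to wire into Spark):

// Sketch: explicit anonymous access with the plain AWS SDK, outside of Spark.
// This works for a public object; the question is how to get the same
// behaviour through sc.textFile / the s3a filesystem.
import com.amazonaws.auth.AnonymousAWSCredentials
import com.amazonaws.services.s3.AmazonS3Client

val anonClient = new AmazonS3Client(new AnonymousAWSCredentials())
val obj = anonClient.getObject(bucket, key)
val stream = obj.getObjectContent // readable input stream, no credentials involved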
PS: My build.sbt, just in case:
scalaVersion := "2.11.7"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.4.1",
  "org.apache.hadoop" % "hadoop-aws" % "2.7.1"
)
UPDATE: after doing some research, I see why it does not work.
First, S3AFileSystem creates the AWS client with the following chain of credential providers:
AWSCredentialsProviderChain credentials = new AWSCredentialsProviderChain(
    new BasicAWSCredentialsProvider(accessKey, secretKey),
    new InstanceProfileCredentialsProvider(),
    new AnonymousAWSCredentialsProvider()
);
The values of "accessKey" and "secretKey" are taken from the Hadoop configuration that the Spark context passes to S3AFileSystem (the keys must be "fs.s3a.access.key" and "fs.s3a.secret.key", or the constants org.apache.hadoop.fs.s3a.Constants.ACCESS_KEY and org.apache.hadoop.fs.s3a.Constants.SECRET_KEY, which is more convenient).
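So, just for illustration, if I did have credentials, I could apparently supply them like this (a sketch; I do not actually want to do this, it only shows where the lookup happens):

// Sketch only: this is the configuration that S3AFileSystem would read the
// keys from. Shown purely to illustrate the lookup, not as my goal.
sc.hadoopConfiguration.set("fs.s3a.access.key", "<access-key>")
sc.hadoopConfiguration.set("fs.s3a.secret.key", "<secret-key>")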
Second, you can see that AnonymousAWSCredentialsProvider is the third option (lowest priority). What could go wrong there? Look at the AnonymousAWSCredentials implementation:
public class AnonymousAWSCredentials implements AWSCredentials {

    public String getAWSAccessKeyId() {
        return null;
    }

    public String getAWSSecretKey() {
        return null;
    }
}
It simply returns null for both the access key and the secret key. Sounds reasonable. But take a look inside AWSCredentialsProviderChain:
AWSCredentials credentials = provider.getCredentials();

if (credentials.getAWSAccessKeyId() != null &&
    credentials.getAWSSecretKey() != null) {
    log.debug("Loading credentials from " + provider.toString());

    lastUsedProvider = provider;
    return credentials;
}
It skips any provider whose keys are null, which means the anonymous credentials can never be selected. This looks like a bug in aws-java-sdk-1.7.4. I tried using the latest SDK version, but it is not compatible with hadoop-aws-2.7.1.
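To double-check my reading of the chain, here is a minimal sketch that reproduces the behaviour in isolation (assuming aws-java-sdk 1.7.4 and hadoop-aws 2.7.1 on the classpath):

// Minimal sketch: a chain containing only the anonymous provider still fails,
// because AnonymousAWSCredentials returns null for both keys and the chain
// only accepts providers whose keys are non-null.
import com.amazonaws.auth.AWSCredentialsProviderChain
import org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider

val chain = new AWSCredentialsProviderChain(new AnonymousAWSCredentialsProvider())
chain.getCredentials() // throws AmazonClientException: Unable to load AWS credentials from any provider in the chain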
Any other ideas?