Invalid hostname error when connecting to S3 with a secret key that contains a slash

My AWS secret access key contains a forward slash.

When I try to connect to the S3 sink, I get:

 Caused by: java.lang.IllegalArgumentException: Invalid hostname in URI s3://xxxx: xxxx@jelogs /je.1359961366545 at org.apache.hadoop.fs.s3.S3Credentials.initialize(S3Credentials.java:41) 

When I encode the slash as %2F, I get:

 The request signature we calculated does not match the signature you provided. Check your key and signing method. 

How do I encode the secret key?
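
For reference, the credentials are being passed inline in the S3 URI, which is why a slash in the secret breaks the hostname parsing. A sketch of that URI form, with placeholder values only:

 # Sketch only: ACCESS_KEY, SECRET_KEY, and mybucket are placeholders.
 # A "/" inside SECRET_KEY ends the URI authority early, so Hadoop's
 # S3Credentials.initialize() sees an invalid hostname.
 hadoop fs -ls s3://ACCESS_KEY:SECRET_KEY@mybucket/path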

3 answers

In the end, I created a new secret key that does not contain a slash. This is a known issue, and creating a new key was the only solution that worked for me.
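
If regenerating the key is acceptable, a new access key pair can be created from the IAM console or, as a sketch, with the AWS CLI (the user name below is a placeholder):

 # Sketch, assuming the AWS CLI is configured with sufficient IAM permissions.
 aws iam create-access-key --user-name <your-iam-user>
 # Inspect the returned SecretAccessKey; if it still contains "/",
 # delete it (aws iam delete-access-key) and create another.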


samthebest's solution works; you just need to add quotes ("") around the keys. Here's how to use it:

 hadoop distcp -Dfs.s3a.awsAccessKeyId="yourkey" -Dfs.s3a.awsSecretAccessKey="yoursecret" <your_hdfs_path> s3a://<your-bucket> 
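
Note that on more recent Hadoop releases the s3a connector reads fs.s3a.access.key and fs.s3a.secret.key rather than the awsAccessKeyId-style names, so an equivalent sketch of the same command (same placeholders) would be:

 hadoop distcp -Dfs.s3a.access.key="yourkey" -Dfs.s3a.secret.key="yoursecret" <your_hdfs_path> s3a://<your-bucket>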

Using

 -Dfs.s3n.awsAccessKeyId=<your-key> -Dfs.s3n.awsSecretAccessKey=<your-secret-key> 

e.g.

 hadoop distcp -Dfs.s3n.awsAccessKeyId=<your-key> -Dfs.s3n.awsSecretAccessKey=<your-secret-key> <source> <destination> 

or

 hadoop fs -Dfs.s3n.awsAccessKeyId=<your-key> -Dfs.s3n.awsSecretAccessKey=<your-secret-key> -<subcommand> <args> 
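
Passing the keys as -D properties keeps them out of the URI, so the slash never needs to be encoded. For example, listing a bucket this way (bucket name is a placeholder):

 hadoop fs -Dfs.s3n.awsAccessKeyId=<your-key> -Dfs.s3n.awsSecretAccessKey=<your-secret-key> -ls s3n://<your-bucket>/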
