Why aren't there Amazon S3 authentication handlers?

I have the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables set, and I run this code:

 import boto
 conn = boto.connect_s3()

and get this error:

 boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler'] 

What's happening? I don’t know where to start debugging.




Boto doesn't seem to pick up the values from my environment variables. If I pass the key and secret key as arguments to the connection constructor, it works fine.

+39
amazon-s3 amazon-web-services boto
Mar 22 '11 at 19:47
11 answers

Boto will accept your credentials from environment variables. I tested this with v2.0b3 and it works fine. Boto gives priority to credentials specified explicitly in the constructor, but it will also pick up credentials from the environment variables.
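
A minimal sketch of both paths, assuming boto 2.x (the placeholder strings are not real credentials):

 import boto

 # Relying on AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment:
 conn = boto.connect_s3()

 # Credentials passed explicitly take priority over the environment:
 conn = boto.connect_s3('<your access id>', '<your secret key>')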

The easiest way to do this is to put your credentials in a text file and specify the location of this file in the environment.

For example (this is on Windows; I expect it to work exactly the same on Linux, but I haven't personally tried it):

Create a file called "mycred.txt" and place it in the C:\temp folder. The file contains two lines:

 AWSAccessKeyId=<your access id>
 AWSSecretKey=<your secret key>

Define the AWS_CREDENTIAL_FILE environment variable to point to C:\temp\mycred.txt:

 C:\>SET AWS_CREDENTIAL_FILE=C:\temp\mycred.txt 

Now your code snippet from above:

 import boto
 conn = boto.connect_s3()

will work fine.

+31
Apr 29 '11

I am new to both python and boto, but I was able to reproduce your error (or at least the last line of your error).

Most likely, you failed to export the variables in bash. If you just define them, they are only valid in the current shell; export them and python inherits the value. That is:

 $ AWS_ACCESS_KEY_ID="SDFGRVWGFVVDWSFGWERGBSDER" 

will not work unless you also add:

 $ export AWS_ACCESS_KEY_ID 

Or you can do it all in one line:

 $ export AWS_ACCESS_KEY_ID="SDFGRVWGFVVDWSFGWERGBSDER" 

Similarly for the other variable. You can also put these lines in your .bashrc (assuming bash is your shell), so you don't have to export them by hand in each new session.
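
A quick way to check whether the export actually reached python, a minimal sketch:

 python -c 'import os; print(os.environ.get("AWS_ACCESS_KEY_ID"))'

If this prints None, the variable was defined but never exported, and boto will not see it.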

+10
Nov 11 '11 at 12:18

Following up on nealmcb's answer about IAM roles: while deploying EMR clusters with an IAM role, I had a similar problem where this error occurred intermittently (not every time) when boto connected to S3.

 boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler'] 

The metadata service can time out when retrieving credentials. So, as the docs suggest, I added a Boto section to the configuration and increased the number of retries for fetching credentials. Note that the default is 1 attempt.

 import boto
 import ConfigParser

 try:
     boto.config.add_section("Boto")
 except ConfigParser.DuplicateSectionError:
     pass
 boto.config.set("Boto", "metadata_service_num_attempts", "20")

http://boto.readthedocs.org/en/latest/boto_config_tut.html?highlight=retries#boto

Scroll down to: "You can control the timeouts and number of retries used when retrieving information from the Metadata Service (this is used for retrieving credentials for IAM roles on EC2 instances)."
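
The same settings can also live in a boto config file (for example /etc/boto.cfg or ~/.boto); a minimal sketch, assuming the option names from the linked docs:

 [Boto]
 metadata_service_timeout = 5.0
 metadata_service_num_attempts = 20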

+9
May 20 '15 at 10:58

I just ran into this problem using Linux and SES, and I hope this can help others with a similar problem. I installed awscli and configured my keys:

 sudo apt-get install awscli
 aws configure

This sets up your credentials in ~/.aws/config, as described in @huythang's answer. But boto looks for your credentials in ~/.aws/credentials, so copy them over:

 cp ~/.aws/config ~/.aws/credentials 

Assuming an appropriate policy is set up for the user who owns these credentials, you do not need to set any environment variables.
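
For reference, the file boto expects at ~/.aws/credentials uses the same INI layout that aws configure writes; a sketch with placeholder values:

 [default]
 aws_access_key_id = <your access id>
 aws_secret_access_key = <your secret key>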

+5
May 14 '16 at 12:10

See the latest boto S3 introduction:

 from boto.s3.connection import S3Connection
 conn = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
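
A quick usage sketch on top of that connection (the bucket listing and placeholder keys are just for illustration):

 from boto.s3.connection import S3Connection

 conn = S3Connection('<your access id>', '<your secret key>')
 # List all buckets visible to these credentials:
 for bucket in conn.get_all_buckets():
     print(bucket.name)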
+3
Nov 26 '13 at 13:32

I found the answer here .

On Unix, first set up your aws config:

 # vim ~/.aws/config
 [default]
 region = Tokyo
 aws_access_key_id = xxxxxxxxxxxxxxxx
 aws_secret_access_key = xxxxxxxxxxxxxxxxx

Then set the environment variables:

 export AWS_ACCESS_KEY_ID="aws_access_key_id"
 export AWS_SECRET_ACCESS_KEY="aws_secret_access_key"
+3
Sep 01 '14 at 10:34

In my case, the problem was that in IAM, "users do not have permissions by default". It took me a whole day to track this down, since I was used to the original AWS (pre-IAM) authentication model, in which the only credentials were what is now called root.

There are many documents on creating AWS users, but only a few point out that you must give the users permissions before they can do anything. One of them is Working with Amazon S3 Buckets - Amazon Simple Storage Service, but even it doesn't simply tell you to go to the Policies tab, offer a good starter policy, and explain how to apply it.

The setup wizard simply encourages you to "Get Started with IAM Users" and doesn't explain much beyond that. Even if you poke around a bit, you just see, e.g., "Managed Policies: There are no managed policies for this user.", which doesn't make it obvious that you need a policy before the user can do anything.

To set up a root-like user, see Creating an Administrators Group Using the Console - AWS Identity and Access Management

I have yet to see a ready-made policy that simply allows read-only access to all of S3 (my own buckets, as well as public buckets belonging to others).
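
For what it's worth, a minimal sketch of what such a read-only policy could look like, assuming the standard IAM policy JSON format (paste it via the Policies tab mentioned above):

 {
     "Version": "2012-10-17",
     "Statement": [
         {
             "Effect": "Allow",
             "Action": ["s3:Get*", "s3:List*"],
             "Resource": "*"
         }
     ]
 }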

+2
May 01 '15 at 4:09

Now you can set them as arguments in the call to the connection function.

 s3 = boto.connect_s3(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) 

Just thought I'd add this in case someone else comes looking like I did.

+1
Oct 22 '13 at 11:33

On a Mac, exported keys should be in the form key=value. So exporting, say, AWS_ACCESS_KEY_ID should look like this: AWS_ACCESS_KEY_ID=yourkey. If you have quotes around your values, as shown in the answers above, boto will throw the above error.
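
That is, the unquoted form this answer recommends (yourkey and yoursecret are placeholders):

 export AWS_ACCESS_KEY_ID=yourkey
 export AWS_SECRET_ACCESS_KEY=yoursecret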

0
Jun 22 '15 at 20:30

I had this problem with a Flask application on EC2. I didn't want to put credentials in the application, so instead I managed permissions through an IAM role. That way you can avoid hard-coding keys in the code. Then I set up the policy in the AWS console (I didn't even write it, I just used the policy generator).

My code is exactly like the OP's. The other solutions here are fine, but there is a way to get access without hard-coding credentials at all; see the sketch after this list:

  • Create an IAM role that grants access to the S3 resource
  • Attach that role to the EC2 instance
  • Connect using just boto.connect_s3() # no keys necessary
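
A minimal sketch of that last step, assuming the role is already attached to the instance (the bucket listing is just for illustration):

 import boto

 # boto fetches temporary credentials for the instance's IAM role
 # from the EC2 metadata service automatically.
 conn = boto.connect_s3()  # no keys necessary
 for bucket in conn.get_all_buckets():
     print(bucket.name)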
0
Dec 08 '17 at 16:56

I see that you call them AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, when it seems they should be set as AWSAccessKeyId and AWSSecretKey.
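
For reference, these are two different naming schemes from the answers above: the underscore names are the shell environment variables, while the camel-case names are the key names inside the file referenced by AWS_CREDENTIAL_FILE:

 # Environment variables read by boto:
 export AWS_ACCESS_KEY_ID=<your access id>
 export AWS_SECRET_ACCESS_KEY=<your secret key>

 # Key names inside the file pointed to by AWS_CREDENTIAL_FILE:
 AWSAccessKeyId=<your access id>
 AWSSecretKey=<your secret key>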

-4
Jul 28 '11


