How can I have a script notify me when Amazon Web Services usage exceeds a certain amount?

We use S3, SimpleDB and SQS in a rather complex project.

I would like to track their usage automatically, to make sure we do not spend large amounts of money when we do not intend to (for example, because of a bug).

Is there a way to read usage data for all Amazon web services, and/or the current real-time dollar charges on an account, from a script?

Or is there any service or script that provides alerts based on this?

+6
amazon-s3 amazon-web-services amazon-ec2 amazon-simpledb
3 answers

Amazon has just announced that you can now "set alarms on any metric that Amazon CloudWatch monitors" (CPU usage, disk reads and writes, network traffic, and so on). In addition, all instances now get basic monitoring for free.
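If you want to script this, here is a minimal sketch of a billing alarm using the AWS command-line tools. It assumes the CLI is already configured, that "Receive Billing Alerts" has been enabled on the account (the AWS/Billing metric only appears, in us-east-1, once it is), and that the SNS topic ARN, account ID and $100 threshold are placeholders you would replace with your own:

    # Sketch: alarm when month-to-date estimated charges exceed $100 (USD).
    aws cloudwatch put-metric-alarm \
        --region us-east-1 \
        --alarm-name estimated-charges-over-100-usd \
        --namespace AWS/Billing \
        --metric-name EstimatedCharges \
        --dimensions Name=Currency,Value=USD \
        --statistic Maximum \
        --period 21600 \
        --evaluation-periods 1 \
        --threshold 100 \
        --comparison-operator GreaterThanThreshold \
        --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts

When the estimated charges pass the threshold, the alarm triggers the SNS topic, which can then notify you by email or call into your own script.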

+3

We have just released a Lab Management Service that adds usage policies on top of AWS: time limits, a maximum number of instances, maximum machine sizes, and so on. You could try it and see whether it helps: http://LabSlice.com . Since this is a new launch, we would really appreciate feedback on how it handles problems like yours (that is, email me if you think the application could be adapted to fit your requirements better).

I do not believe there is a direct way to monitor AWS costs down to the dollar. I doubt Amazon provides an API for detailed usage metrics, since it is obviously not in their interest to help you reduce spend. I have in fact come across two cases where improperly configured scripts ran up surprise bills, so I know this can be a real problem.
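One of the policies mentioned above, a cap on the number of running instances, can be approximated yourself with a small wrapper script around the standard EC2 API tools. This is only an illustrative sketch: it assumes ec2-describe-instances and ec2-run-instances are installed and configured, and the limit of 10 is arbitrary.

    #!/bin/bash
    # Illustrative guard: refuse to launch new instances once a limit is reached.
    MAX_INSTANCES=10

    # Count instances currently in the "running" state.
    running=$(ec2-describe-instances | grep '^INSTANCE' | grep -c running)

    if [ "$running" -ge "$MAX_INSTANCES" ]; then
        echo "Refusing to launch: $running instances already running (limit $MAX_INSTANCES)" >&2
        exit 1
    fi

    # Under the limit, so pass the caller's arguments straight through.
    ec2-run-instances "$@"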

+2

I ran into the same problem with EC2 instances, but approached it differently: instead of tracking the instances, I automatically killed them after a set amount of time. From your description it sounds like this might not be practical in your environment, but I thought I would share it in case it helps. My AMI was based on Fedora, so I created the following bash script, registered it as a service, and had it run at startup:

#!/bin/bash
# chkconfig: 2345 68 20
# description: 50 Minute Kill

# Source init functions
. /etc/rc.d/init.d/functions

start() {
    # Schedule a shutdown 50 minutes after boot; /root/atshutdown
    # contains the command(s) for "at" to run.
    at now + 50 minutes < /root/atshutdown
}

stop() {
    # Remove all jobs from the at queue, because I'm not using at for anything else
    for job in $(atq | awk '{print $1}')
    do
        atrm $job
    done
}

RETVAL=0
case "$1" in
    start)
        start && success || failure
        echo
        ;;
    stop)
        stop && success || failure
        echo
        ;;
    restart)
        stop && start && success || failure
        echo
        ;;
    status)
        echo $"`atq`"
        ;;
    *)
        echo $"Usage: $0 {start | stop | restart}"
        RETVAL=1
esac
exit $RETVAL
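For completeness, registering it on a Fedora-style image looks roughly like this (run as root; the file name 50min-kill is just an example):

    cp 50min-kill /etc/init.d/50min-kill
    chmod +x /etc/init.d/50min-kill
    chkconfig --add 50min-kill
    chkconfig 50min-kill on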

You might consider doing something similar to suit your needs. If you do, be especially careful to stop the service before modifying the image, so that the instance does not shut itself down before you have had a chance to rebundle it.

If you wanted, you could shut the instances down at a specific time of day (after everyone leaves work?), or you could pass the keep-alive duration / shutdown time to ec2-run-instances with the -d or -f options and have the script parse it, as sketched below.
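A rough sketch of that last idea, using a hypothetical TTL_MINUTES key and a 90-minute value purely as an example: pass the lifetime in the user data at launch, then have the init script read it back from the instance metadata service instead of hard-coding 50 minutes.

    # At launch time (ami-12345678 is a placeholder AMI ID):
    ec2-run-instances ami-12345678 -d "TTL_MINUTES=90"

    # Inside the start() function on the instance:
    TTL=$(curl -s http://169.254.169.254/latest/user-data | sed -n 's/^TTL_MINUTES=//p')
    TTL=${TTL:-50}    # fall back to the original 50 minutes if nothing was passed
    at now + "$TTL" minutes < /root/atshutdown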

+1
