Enabling Detailed Billing Report (with resources and tags) in AWS Billing Preferences

To obtain an hourly-grain view of your AWS usage and charges by resource and tag, you need to enable detailed billing in your AWS account billing preferences.

There are a couple of prerequisites you will need to enable within the billing preferences (https://portal.aws.amazon.com/gp/aws/developer/account/index.html?ie=UTF8&ie=UTF8&action=billing-preferences).

Firstly, you will need to sign up for and enable Monthly Reporting to generate a detailed statement of your AWS usage.

Secondly, you will need to sign up for and enable Programmatic Access. This requires you to create an Amazon S3 bucket to which estimated and month-end billing reports will be published.

Once the Amazon S3 bucket has been created, you will need to apply a bucket policy that allows the reports to be published; the sign-up process provides a sample policy similar to the below:

    "Id": "Policy1335892530063",
    "Statement": [
             "Sid": "Stmt1335892150622",
             "Effect": "Allow",
             "Principal": {
                    "AWS": "arn:aws:iam::386209384616:root"
             "Action": [
             "Resource": "arn:aws:s3:::"{BUCKETNAME}"
             "Sid": "Stmt1335892526596",
             "Effect": "Allow",
             "Principal": {
                    "AWS": "arn:aws:iam::386209384616:root"
             "Action": "s3:PutObject",
             "Resource": "arn:aws:s3:::{BUCKETNAME}/*"

Once you have saved the bucket name and the bucket policy has been verified, the Detailed Billing Report will be enabled.
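If you would rather generate the policy than hand-edit the sample, a short Python sketch can build the same document; the bucket name here is a placeholder, and the two Get* actions follow AWS's published sample policy:

```python
import json

# Placeholder -- substitute the S3 bucket you created for the billing reports.
bucket = "BUCKETNAME"

# AWS billing system account, taken from the sample policy above.
principal = {"AWS": "arn:aws:iam::386209384616:root"}

policy = {
    "Id": "Policy1335892530063",
    "Statement": [
        {
            # Allows the billing system to read the bucket's ACL and policy.
            "Sid": "Stmt1335892150622",
            "Effect": "Allow",
            "Principal": principal,
            "Action": ["s3:GetBucketAcl", "s3:GetBucketPolicy"],
            "Resource": "arn:aws:s3:::" + bucket,
        },
        {
            # Allows the billing system to publish report objects into the bucket.
            "Sid": "Stmt1335892526596",
            "Effect": "Allow",
            "Principal": principal,
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::" + bucket + "/*",
        },
    ],
}

print(json.dumps(policy, indent=4))
```

The printed JSON can then be pasted directly into the bucket policy editor.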

Updates to the AWS Management Console and Provisioned IOPS

After being away from the office for a few days, I noticed today that there is an updated user interface for the Amazon Web Services console.

In addition to the updated user interface, a number of updates have been made to the Launch Instance Wizard.

Configuring Instance VPC and Subnet Details

On configuring your instance, you may now select your VPC and subnet details from an existing configuration or, if required, create a new VPC and/or subnet from within the wizard.


Search for snapshot when adding a new EBS volume

On adding storage, you may now search for a snapshot from which to create the EBS volume.


Search for EBS tags and values 

As you type, you may now search for existing tags and values and select from a list.


Security Groups 

You may now view the rules for a particular security group before assigning it to your instance, as well as copy the rule set of an existing security group and, if required, modify the rules.


One other notable change in the past week has been to Provisioned IOPS and the ratio of IOPS to capacity. Previously this was limited to 10:1; the ratio has now been increased to 30:1. So where a 100 GB EBS volume could previously provide only 1,000 IOPS, you can now obtain 3,000 IOPS from the same size volume.
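The change in the ratio is simple arithmetic; a quick Python sketch makes the before/after comparison explicit (volume sizes in GB):

```python
def max_piops(volume_size_gb, ratio=30):
    """Maximum Provisioned IOPS for a volume at the given IOPS:capacity ratio."""
    return volume_size_gb * ratio

# A 100 GB volume at the old 10:1 ratio versus the new 30:1 ratio.
print(max_piops(100, ratio=10))  # 1000
print(max_piops(100))            # 3000
```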


Using Best Practices checks with CloudCheckr

In the next couple of articles I am going to look at the Resource Control, Cost Optimisation and Best Practice features provided by CloudCheckr (cloudcheckr.com), and how I hope to use them in managing a deployment in Amazon Web Services.

Once you have registered for your CloudCheckr account (see https://deangrant.wordpress.com/2013/09/13/receive-extended-free-trial-of-cloudcheckr-pro/ for an extended trial), created a project and performed the initial collection of your deployment data, you can start to take advantage of the various reports generated from your project.

The first report I will be looking at is the Best Practice report, which takes a detailed look at your deployment to ensure your infrastructure is configured as per each best practice check and provides a status.

The results are categorised into four sections: Availability, Cost, Security and Usage, and may be filtered by importance and/or tag.


The items in your report are categorised with icons and colours according to the severity of the status, as below:

  • Red = High
  • Orange = Medium
  • Yellow = Low
  • Blue = Informational
  • Green = No issues found


Each issue generated allows you to export the information to a CSV file, ignore the issue (perhaps I have no DynamoDB clients and do not wish to report this status), or check the details of that particular alert.

Checking the details of the issue is really useful and provides a drill-through interface to view each item for that particular alert. For example, for EBS volumes without a snapshot I can not only view each volume reporting this status but also investigate the Volume ID further to view details such as Status, Cost Per Month, Availability Zone and the instance it is attached to.

There are over 100 best practice checks performed and therefore far too many for me to mention in this article.

Also, a feature I do like is the email notification service, which will notify you of any new issues discovered since the previous day and provide those details. Previous best-practice tools I have used generally produce static information with no history or comparison.

So which alerts did I find most useful?

Ultimately, the question I get asked is how much is this costing and how can we reduce the monthly spend? If you are using your deployment for development, it can become quite easy to accumulate a number of instances and volumes that become idle or unused. The best practices report will identify these resources and produce a 30-day trend for those idle instances.

Another common mistake is creating under-utilised resources; the usage report will help to identify any under-utilised EBS volumes and EC2 instances and allow you to drill through to the resource and display a number of metrics, as well as cost, to help you determine the correct resource type to select.

The security checks provide a good, detailed list of best practices to follow; these detail information such as security groups that allow traffic from any IP address, all ports, or potentially dangerous exposed ports. There are also a number of IAM security checks, such as password policies and multi-factor authentication configuration issues.
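The security-group check in particular is easy to reason about; a minimal Python sketch (the rule set, group names and "dangerous" port list are all hypothetical, not CloudCheckr's implementation) illustrates the idea of flagging rules open to any IP address:

```python
# Hypothetical set of ports considered dangerous to expose to the world.
DANGEROUS_PORTS = {22, 3389, 3306}

def open_to_world(rules):
    """Flag rules allowing traffic from any IP address (0.0.0.0/0).

    Returns (group, port, severity) tuples; dangerous ports are rated high.
    """
    findings = []
    for rule in rules:
        if rule["cidr"] == "0.0.0.0/0":
            severity = "high" if rule["port"] in DANGEROUS_PORTS else "medium"
            findings.append((rule["group"], rule["port"], severity))
    return findings

# Hypothetical rule set for illustration.
rules = [
    {"group": "sg-web", "port": 443, "cidr": "0.0.0.0/0"},
    {"group": "sg-db", "port": 3306, "cidr": "0.0.0.0/0"},
    {"group": "sg-admin", "port": 22, "cidr": "10.0.0.0/8"},
]
print(open_to_world(rules))
# -> [('sg-web', 443, 'medium'), ('sg-db', 3306, 'high')]
```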

Availability alerts can provide a list of EBS volumes that currently have no snapshots (previous 7 days), which in my case can highlight volumes that have not been configured correctly, as snapshots are performed based on the tagging of the resource. Also, if you protect instances with termination protection, those instances without this option enabled will be highlighted.
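The "no snapshot in the previous 7 days" check can be sketched in a few lines of Python; the volume IDs and snapshot timestamps below are made up for illustration:

```python
from datetime import datetime, timedelta

def volumes_without_recent_snapshot(last_snapshot_by_volume, now, max_age_days=7):
    """Return volume IDs whose most recent snapshot is older than max_age_days,
    or which have never been snapshotted at all (signalled by None)."""
    cutoff = now - timedelta(days=max_age_days)
    return sorted(
        vol for vol, taken in last_snapshot_by_volume.items()
        if taken is None or taken < cutoff
    )

# Hypothetical inventory: volume ID -> timestamp of its latest snapshot.
now = datetime(2013, 10, 14)
inventory = {
    "vol-1111aaaa": datetime(2013, 10, 13),  # fresh snapshot
    "vol-2222bbbb": datetime(2013, 9, 30),   # stale snapshot
    "vol-3333cccc": None,                    # never snapshotted
}
print(volumes_without_recent_snapshot(inventory, now))
# -> ['vol-2222bbbb', 'vol-3333cccc']
```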

In terms of providing high availability within your deployment, a number of availability alerts can help to identify issues such as unhealthy instances behind Elastic Load Balancers and Multi-Availability-Zone distribution.

For more detailed information, in regards to using the best practice reports see http://support.cloudcheckr.com/best-practices-detail-report/.

Making a start on reducing spend on AWS…

As part of ongoing monitoring and optimisation of the Amazon Web Services (AWS) platform, I am beginning to more actively monitor cost control, which can highlight a number of common mistakes in usage of the AWS platform.

Whilst I am looking at a number of third-party monitoring and resource control solutions, and investigating and developing techniques internally (all of which I hope to blog about in the future), there are a number of resources available online that can point you in the right direction, help you avoid any gasps at unexpected usage bills, reduce the amount of time you spend analysing AWS usage data, and help you make the correct decision when choosing resource types.

Probably the first port of call should be looking at techniques to reduce your overall spend. Below is a webcast from the official AWS YouTube channel (http://www.youtube.com/user/AmazonWebServices?feature=watch) on how to reduce your overall spend, detailing cost-saving strategies and sizing your applications within AWS.

Return AWS Service Health Dashboard details to Nagios

I was recently looking into returning service details from the AWS Service Health Dashboard to Nagios, where any service issue would be reported as a critical state and remain in this status for the duration of the publication date.

I was able to do this by using the Invoke-RestMethod cmdlet in Windows PowerShell to query a particular service's RSS feed.

As the script was to be run against a number of different AWS services, I did not want to create multiple scripts, as well as multiple services within Nagios. Therefore, the script defines a parameter for the RSS feed, which is passed with the RSS argument.

The RSS argument is the filename of the service's RSS file, as the URL for each service status always begins with http://status.aws.amazon.com/rss/. We will also store the current date in a variable for later comparison with the publication date of the link.

Param ([string] $RSS)
$Date = (get-date).toString('dd MMM yyyy')

Once the RSS parameter is specified, we can build the URL of the RSS feed and query it using Invoke-RestMethod. As querying the RSS feed returns multiple links, we also need to return the most recent link using the Select-Object cmdlet.

$url = Invoke-RestMethod -Uri "http://status.aws.amazon.com/rss/$RSS.rss" | Select-Object -First 1

Once we have returned the link, we want to check the publication date to determine if it is from the current day and, if so, return a critical status to Nagios with the description as the service status.

If ($url.pubDate -like ("*" + $Date + "*")) {
    $returncode = 2
    $url.description
}

If the publication date is not the current date, we want to return an OK status to Nagios and report the service as operating normally.

Else {
    $returncode = 0
    "Service is operating normally"
}

Finally, we will exit the PowerShell session, returning the exit code.

exit $returncode
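The same date-comparison logic can be sketched in Python using only the standard library; the RSS payload below is a hand-written sample mimicking a status feed, not live data, and the exit codes follow the Nagios convention (2 = critical, 0 = OK):

```python
import xml.etree.ElementTree as ET
from datetime import datetime

def check_service(rss_xml, today):
    """Return (exit_code, message): critical when the newest item is from today."""
    root = ET.fromstring(rss_xml)
    item = root.find("./channel/item")  # feeds list the newest item first
    pub_date = item.findtext("pubDate")
    # Matches the 'dd MMM yyyy' format used by the PowerShell script.
    if today.strftime("%d %b %Y") in pub_date:
        return 2, item.findtext("description")
    return 0, "Service is operating normally"

# Hand-written sample feed, shaped like status.aws.amazon.com/rss/vpc-eu-west-1.rss
sample = """<rss version="2.0"><channel>
  <item>
    <pubDate>Mon, 14 Oct 2013 22:00:00 GMT</pubDate>
    <description>Increased API error rates</description>
  </item>
</channel></rss>"""

print(check_service(sample, datetime(2013, 10, 14)))  # (2, 'Increased API error rates')
print(check_service(sample, datetime(2013, 10, 15)))  # (0, 'Service is operating normally')
```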

As an example, to query the 'Amazon Virtual Private Cloud (Ireland)' service, which has the URL http://status.aws.amazon.com/rss/vpc-eu-west-1.rss, the script would be run as below:

./Read-AWSServiceHealthDashboard.ps1 -RSS vpc-eu-west-1

While the script was created to be executed as an external script within Nagios, it can be run standalone from Windows PowerShell. If you are looking to add external scripts such as this one to Nagios, see the below link for more information:


The full Windows Powershell script can be downloaded from the below link: