Getting started with DevStack as an OpenStack playground environment on Ubuntu

I was recently looking to deploy a minimal installation of OpenStack to use as a development environment, one I could build and tear down with minimal effort and without the complexity of installing a complete OpenStack environment. This is where DevStack comes into play; to quote:

DevStack is a series of extensible scripts used to quickly bring up a complete OpenStack environment based on the latest versions of everything from git master. It is used interactively as a development environment and as the basis for much of the OpenStack project’s functional testing.

For my minimal installation of DevStack I am using a new installation of Ubuntu Server 16.10 (a clean and isolated instance is recommended, as the scripts make significant changes to the system). In terms of hardware configuration, the following recommendations are provided. However, this all depends on your use case for DevStack and what you need to achieve; DevStack may be installed without satisfying the criteria below, but you may experience performance or stability issues.

  • Processor – at least 2 cores
  • Memory – at least 8GB
  • Hard Drive – at least 60GB

First of all we will create a user account named ‘stack’ to install DevStack, and grant the user sudo privileges with the NOPASSWD option. It is important in this step that we do not complete the installation as the root user.

sudo adduser stack
# append as root, since /etc/sudoers is not writable by a normal user
echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee -a /etc/sudoers

Now log out as the current user, reconnect as the ‘stack’ user created in the previous step and clone the repository. In this example I am changing the directory to ‘/var’ to create my local copy.

cd /var 
sudo git clone git://github.com/openstack-dev/devstack.git

We will now need to create a configuration file (/var/devstack/local.conf) to specify user configuration variables to use when the installation script (/var/devstack/stack.sh) is executed in a subsequent step. The installation will complete without this file, but you would then be prompted to provide values for the required passwords during the run.

The example below contains only password variables, so that you are not required to input the values at installation time. For a more detailed configuration file containing additional parameters, check out the sample from the repository.

cd /var/devstack
sudo vi local.conf 
[[local|localrc]]
ADMIN_PASSWORD=openstack
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
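
Beyond the passwords, two variables worth considering (both standard DevStack settings, although the values below are placeholders to adjust for your own environment) are HOST_IP, which pins the address services bind to, and LOGFILE, which keeps a copy of the stack.sh output:

HOST_IP=192.168.1.10
LOGFILE=/opt/stack/logs/stack.sh.log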

Once the configuration file has been created, we can execute the installation script (stack.sh). The installation process will take approximately 15-20 minutes to complete, during which you will receive console output. Once the installation has completed you will receive an installation summary, URLs, accounts and passwords.

./stack.sh

Once the installation has completed successfully you should be able to browse to the Horizon dashboard at http://{ip address}/dashboard and authenticate with the admin credentials configured during installation.

[Screenshot: Horizon dashboard login page]

If you need to remove the installation of DevStack there is a script included in the repository (./clean.sh) which will remove DevStack and its dependencies and then clean up the directories touched by the installation. For a detailed list of files and directories impacted during the installation refer to this link.

cd /var/devstack
./clean.sh
rm -rf /opt/stack
rm -rf /usr/local/bin

On my initial attempt at installation I encountered a number of issues which appeared to be permission related; I believe this was due to not cloning the repository as the ‘stack’ user account used for the installation. To resolve this you could run the below, or alternatively clean up and repeat the installation.

sudo chown -R stack:stack /var/devstack 
sudo chmod 770 /var/devstack

Updates to the AWS Management Console and Provisioned IOPS

After being away from the office for a few days, I noticed today that there is an updated user interface to the Amazon Web Services console.

In addition to the updated user interface a number of updates have been included for the Launch Instance Wizard.

Configuring Instance VPC and Subnet Details

On configuring your instance you may now select your VPC and subnet details from an existing configuration or, if required, create a new VPC and/or subnet from the wizard.

[Screenshot: configuring instance VPC and subnet details]

Search for snapshot when adding a new EBS volume

On adding storage, you may now search for a snapshot from which to create the EBS volume.

[Screenshot: searching for a snapshot when adding a new EBS volume]

Search for EBS tags and values 

As you type, you may now search for existing tags and values and select from a list.

[Screenshot: searching for existing EBS tags and values]

Security Groups 

You may now view the rules for a particular security group before assigning it to your instance, as well as copy the rule set of an existing security group and, if required, modify the rules.

[Screenshot: viewing and copying security group rules]

One other notable change which occurred in the past week has been to provisioned IOPS and the ratio of IOPS to capacity. Previously this was configured as 10:1; the ratio has now been increased to 30:1. So where a 100 GB EBS volume could previously only provide 1,000 IOPS, you can now obtain 3,000 IOPS from the same size volume.

 

Using Best Practices checks with CloudCheckr

In the next couple of articles I am going to be looking at the Resource Control, Cost Optimisation and Best Practice features provided by CloudCheckr (cloudcheckr.com), and how I hope to use them in managing a deployment in Amazon Web Services.

Once you have registered for your CloudCheckr account (see https://deangrant.wordpress.com/2013/09/13/receive-extended-free-trial-of-cloudcheckr-pro/ for an extended trial) and performed the initial creation of a project and collection of your deployment, you can start to take advantage of the various reports generated from your project.

The first report I will be looking at is the Best Practice report, which takes a detailed look at your deployment to ensure your infrastructure is configured as per each best practice check and provides a status.

The results are categorised into four sections: Availability, Cost, Security and Usage, and may be filtered by importance and/or tag.

[Screenshot: Best Practice report sections and filters]

The items in your report are categorised with icons and colours according to the severity of the status, as below:

  • Red = High
  • Orange = Medium
  • Yellow = Low
  • Blue = Informational
  • Green = No issues found

[Screenshot: Best Practice report severity icons]

Each issue generated allows you to export the information to a CSV file, ignore the issue (maybe I have no DynamoDB clients and do not wish to report this status) or check the details of that particular alert.

Checking the details of an issue is really useful and provides a drill-through interface to view each item for that particular alert. For example, for EBS volumes without a snapshot I can not only view each volume reporting this status but further investigate the Volume ID to view details such as Status, Cost Per Month, Availability Zone and the Instance it is attached to.

There are over 100 best practice checks performed and therefore far too many for me to mention in this article.

Also, a feature I do like is the email notification service, which will notify you of any new issues discovered since the previous day and provide those details. Previous best practice tools I have used generally produce static information with no history or comparison.

So which alerts did I find most useful?

Ultimately, the question I get asked is how much is this costing and how can we reduce the monthly spend? If you are using your deployment for development it can become quite easy to accumulate a number of instances and volumes that become idle or unused. The best practices report will identify these resources and produce a 30 day trend for those idle instances.

Another common mistake is creating under-utilised resources. The usage report will help to identify any under-utilised EBS volumes and EC2 instances, and allows you to drill through to the resource and display a number of metrics as well as cost, to help you determine the correct resource type to select.

The security checks provide a detailed list of best practices to follow, detailing information such as security groups that allow traffic from any IP address, all ports, and potentially dangerous ports exposed. There are also a number of IAM security checks, such as password policies and multi-factor authentication configuration issues.

Availability alerts can provide a list of EBS volumes that currently have no snapshots (previous 7 days), which in my case can highlight volumes that have not been configured correctly, as snapshots are performed based on tagging of the resource. Also, if you protect instances with termination protection, those instances without this option enabled will be highlighted.

In terms of providing high availability within your deployment a number of availability alerts can help to identify issues such as unhealthy instances within Elastic Load Balancers and Multi Availability Zone distribution.

For more detailed information on using the best practice reports, see http://support.cloudcheckr.com/best-practices-detail-report/.

Patch Management with Windows Update Server and WuInstall

I was recently looking to schedule approved updates from Windows Update Server with finer granularity than the schedules provided by the Windows Update Services client. I discovered a command line utility called WuInstall (http://www.wuinstall.com/index.php/en) which allows updates to be installed on demand.

As part of this solution, I still required updates to be approved by my Windows Update Server and to be automatically downloaded by the Windows Update Services client on a daily basis.

I created a group policy object and linked to the organisational unit containing the clients and set the following group policy object settings:

  • Configure Automatic Updates – Enabled, option 3 (Auto download and notify for install): specifies that the instance will download approved updates from the WSUS server, but will only notify for install.
  • Specify intranet Microsoft Update Service location – Enabled, http://<name of WSUS server>: specifies the WSUS server as the intranet server to host updates.
  • Allow non-administrators to receive update notifications – Disabled: specifies that only administrators receive update notifications.
  • Enable client-side targeting – Enabled, <name of target group>: specifies the target group for the instances to receive updates.
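
To confirm these policies have applied on a client, the WSUS client settings can be checked in the registry. A quick PowerShell sketch (these are the standard Windows Update policy registry values):

# Check the WSUS server and target group applied by the group policy object
Get-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" |
    Select-Object WUServer, TargetGroup
# Check the automatic update behaviour (AUOptions 3 = auto download and notify for install)
Get-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" |
    Select-Object AUOptions, UseWUServer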

Once the approved updates have been downloaded, the command line utility WuInstall will manage the schedule and installation.

As previously mentioned, WuInstall is a command line tool that allows the installation of Windows Updates on demand and can use the internal WSUS server to discover approved updates.

In the case of a single server, the executable can be downloaded and run with a number of command line arguments, in my case the following are to be used:

WuInstall.exe /install /autoaccepteula /reboot_if_needed /logfile <path to log file>

  • /install – Searches for, downloads and installs available updates.
  • /autoaccepteula – Automatically accepts the EULA on every update.
  • /reboot_if_needed – Only restarts the instance if needed.
  • /logfile – Creates a log file at the specified path.
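
To give an idea of how the schedule itself might be managed on a single server (the task name, timing and paths below are only examples, not from my environment), the built-in schtasks.exe can register the command to run under the System account:

# Register a weekly task that runs WuInstall as SYSTEM on Sunday at 03:00
schtasks /Create /TN "WuInstall" /RU SYSTEM /SC WEEKLY /D SUN /ST 03:00 `
    /TR "C:\Tools\WuInstall.exe /install /autoaccepteula /reboot_if_needed /logfile C:\Logs\wuinstall.log"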

However, I was required to run this against a number of servers. WuInstall supports the use of PsExec to run against multiple servers (http://www.wuinstall.com/index.php/faq#psexec), so this was the mechanism used to launch the executable from a central management server and invoke the command on each remote server, with the following command line arguments:

PSExec.exe -c -f -s \\Server\Share\WUInstall.exe /install /autoaccepteula /reboot_if_needed /logfile <path to log file>

  • -c – Copies the specified program (WuInstall.exe) to the remote instance for execution.
  • -f – Forces the copy, overwriting the file if it already exists on the remote system.
  • -s – Runs the remote process in the System account.

Now a further requirement was introduced: updates were required to be installed on all servers in a particular environment, with a snapshot of the root device performed (Amazon Web Services) prior to the updates being installed. All of the above was therefore compiled into a Windows Powershell script, starting with the parameter declaration:

Param ([string] $Environment)

As the script was to be run against a number of environments, it defines a parameter for the environment name, which is passed using the Environment argument. The script also requires the Powershell for Amazon Web Services (http://aws.amazon.com/powershell/) snap-in to be imported into the current Powershell session.

If (-not (Get-Module AWSPowershell -ErrorAction SilentlyContinue))
{
    Import-Module "C:\Program Files (x86)\AWS Tools\Powershell\AWSPowershell\AWSPowershell.psd1" > $null
}

As previously mentioned, I have a collection of AWS EC2 instances (not a particularly large environment) against which I want to install the updates. This is represented as a collection of hashtables containing the information I require: the InstanceID, the VolumeID of the root device (/dev/sda1) and the server name. Below is an example of what this looks like:

$Instances = New-Object 'System.Collections.Generic.List[System.Collections.Hashtable]'
$Instances.Add(@{"instanceid"="i-xxxxxxx1";"volumeid"="vol-xxxxxxx1";"name"="xxxxxxxxDEMxxx"})
$Instances.Add(@{"instanceid"="i-xxxxxxx2";"volumeid"="vol-xxxxxxx2";"name"="xxxxxxxxPRExxx"})
$Instances.Add(@{"instanceid"="i-xxxxxxx3";"volumeid"="vol-xxxxxxx3";"name"="xxxxxxxxPRDxxx"})

Once the instances have been listed as a collection, we run a loop to return each instance name and filter on the environment using the Substring method, where the naming convention places the environment name in characters nine to eleven of the hostname (Substring(8,3), zero-based). For each instance that matches the environment, the following is performed:

  • Stop the EC2 instance.
  • Once stopped, perform an EC2 snapshot of the VolumeID specified in the collection and add the description: <Host Name>: Windows Update on ddMMyyyy_HHmm
  • Once the EC2 snapshot is complete, start the EC2 instance.
  • Once the EC2 instance is running, invoke WuInstall to install the approved updates.

# Timestamp used in the snapshot description
$Date = (Get-Date).ToString('ddMMyyyy_HHmm')

ForEach ($Instance in $Instances)
{
    # Match the environment name held in characters nine to eleven of the hostname
    $String = $Instance.name.ToString().Substring(8,3)
    If ($String -eq $Environment)
    {
        # Stop the EC2 instance and wait for it to report as stopped
        Stop-EC2Instance $Instance.instanceid
        Do {
            $EC2Instance = Get-EC2Instance -Instance $Instance.instanceid
            $State = $EC2Instance.RunningInstance | ForEach-Object {$_.InstanceState.Name}
        } Until ($State -eq "stopped")
        # Snapshot the root device and wait for the snapshot to complete
        New-EC2Snapshot -VolumeID $Instance.volumeid -Description ($Instance.name + ": Windows Update on " + $Date)
        Do {
            $SnapshotStatus = (Get-EC2Snapshot | Where-Object {$_.Description -eq ($Instance.name + ": Windows Update on " + $Date)}).Status
        } Until ($SnapshotStatus -eq "completed")
        # Start the instance and wait for it to report as running
        Start-EC2Instance $Instance.instanceid
        Do {
            $EC2Instance = Get-EC2Instance -Instance $Instance.instanceid
            $State = $EC2Instance.RunningInstance | ForEach-Object {$_.InstanceState.Name}
        } Until ($State -eq "running")
        # Allow five minutes for the instance status checks to complete
        Start-Sleep -Seconds 300
        # Invoke WuInstall remotely via PsExec to install the approved updates
        $Hostname = $Instance.name
        $Command = "& 'C:\Program Files\SysinternalSuite\PsExec.exe' \\$Hostname -c -f -s \\Server\Share\WUInstall.exe /install /autoaccepteula /reboot_if_needed /logfile \\Server\Share\Logs\logfile.log"
        Invoke-Expression $Command
    }
}

A couple of issues I experienced with the script: when the EC2 instance was reported as running, the instance would not yet be contactable as the status checks had not completed, so the script is suspended for a period of five minutes as a workaround.

There is currently a known issue with the Powershell for Amazon Web Services snap-in (3.0.512.0) where the Get-EC2InstanceStatus cmdlet fails to return stopped instances. This requires a workaround where the instance state is returned using the Get-EC2Instance cmdlet and reading the value of the RunningInstance.InstanceState.Name object.
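
To make the workaround clearer, it can be wrapped in a small helper function. Get-InstanceState below is a hypothetical name of my own, not a cmdlet from the snap-in:

# Hypothetical helper: return the instance state via Get-EC2Instance, since
# Get-EC2InstanceStatus (3.0.512.0) fails to return stopped instances
Function Get-InstanceState ([string] $InstanceId)
{
    $EC2Instance = Get-EC2Instance -Instance $InstanceId
    $EC2Instance.RunningInstance | ForEach-Object { $_.InstanceState.Name }
}

# Example: poll until an instance reports as stopped
Do { Start-Sleep -Seconds 15 } Until ((Get-InstanceState "i-xxxxxxx1") -eq "stopped")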

I plan to update this script to remove the dependency on a collection as a hashtable and return this information using the Powershell for Amazon Web Services Tools snap-in. Also, I will hopefully address the issue of reporting the EC2 instance as being in a running status where the status checks have not completed and remove the need to suspend the script.
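
As a rough sketch of that planned change (assuming the hostname is held in the Name tag, the root device is /dev/sda1, and the object model of the snap-in version above), the collection could be built from the API rather than hard-coded:

# Build the instance collection dynamically instead of hard-coding it
$Instances = New-Object 'System.Collections.Generic.List[System.Collections.Hashtable]'
ForEach ($Reservation in Get-EC2Instance)
{
    ForEach ($EC2Instance in $Reservation.RunningInstance)
    {
        # Root device volume and Name tag for each instance
        $Root = $EC2Instance.BlockDeviceMapping | Where-Object { $_.DeviceName -eq "/dev/sda1" }
        $Name = ($EC2Instance.Tag | Where-Object { $_.Key -eq "Name" }).Value
        $Instances.Add(@{"instanceid"=$EC2Instance.InstanceId;"volumeid"=$Root.Ebs.VolumeId;"name"=$Name})
    }
}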

The script requires AWS credentials to run the various cmdlets above; in this process I store the credentials on the local computer where the scheduled task is invoked.
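
For example, assuming a version of the snap-in that supports persisted credentials, the keys can be stored once on the management server under a profile name so they are not embedded in the script:

# Store the credentials in the local credential store under the profile 'default'
Set-AWSCredentials -AccessKey AKIAXXXXXXXXXXXX -SecretKey XXXXXXXXXXXXXXXX -StoreAs default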

The full Windows Powershell script can be downloaded from the below link:

https://app.box.com/s/7t2b5zqgbp4kus34rtwf

Receive extended free trial of CloudCheckr Pro

CloudCheckr provides otherwise unavailable visibility and analytics to remove the complexity from AWS usage, allowing users to quickly and efficiently gain control of their deployment, reduce costs, and optimize infrastructure performance.

[Screenshot: CloudCheckr Pro]

By signing up (no credit card required) with the promotional code “cloud19” you will receive an extended trial of CloudCheckr Pro.

For full details of the analytic features, see http://cloudcheckr.com/home-2/aws-analytic-features/.

In short, CloudCheckr addresses four areas to optimise cloud performance: availability, cost, security and usage.

Monitoring status of AWS EC2 Snapshots within Nagios

I recently wrote a script to automate the creation of snapshots for EBS volumes for Amazon EC2 instances (https://deangrant.wordpress.com/2013/08/06/aws-create-ec2-snapshot-based-on-metadata-tag-value/).

Following on from this I wanted to report the status of snapshots completed and return this status to Nagios. This was to be achieved by comparing the number of EBS volumes that contained a specific metadata tag value to the number of snapshots created on a particular day.

As per usual, this script was to be written in Windows Powershell, importing the Powershell for Amazon Web Services (http://aws.amazon.com/powershell/) snap-in into the current Powershell session.

If (-not (Get-Module AWSPowershell -ErrorAction SilentlyContinue))
{
    Import-Module "C:\Program Files (x86)\AWS Tools\Powershell\AWSPowershell\AWSPowershell.psd1" > $null
}

Once the snap-in has been imported we will need to set our AWS Credentials and AWS Region for this session:

Set-AWSCredentials -AccessKey XXXXXXXXXXXXXXXXXXXXXX -SecretKey XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Set-DefaultAWSRegion eu-west-1

As part of the script I am required to output the date string in two different formats: one for the snapshot description and one for the date passed in the Nagios status information.

$Date = (Get-Date).ToString('ddMMyyyy')
$StatusDate = (Get-Date).ToString('dd/MM/yyyy')

Now we are required to compare the number of EBS volumes which match the metadata tag value against the number of snapshots created. Firstly, we create a filter to return all volumes with the metadata tag value ‘EBS Snapshot: Yes’ and use the Get-EC2Volume cmdlet to return the matching volumes and store them in a variable.

$Filter = (New-Object Amazon.EC2.Model.Filter).WithName("tag:EBS Snapshot").WithValue("Yes")
$Volumes = Get-EC2Volume -Filter $Filter

Now we have returned all our EBS volumes, we need to return all the snapshots created on the current date. This information is stored within the description of the snapshot upon creation, in the format ‘EBS Snapshot created on ddMMyyyy’. The Get-EC2Snapshot cmdlet is used to return all snapshots whose description matches this filter.

$Snapshots = Get-EC2Snapshot | Where-Object {$_.Description -like ("EBS Snapshot Created on " + $Date + "*")}

Now it is time to compare the counts returned; in this case I am only using a warning threshold to generate my return codes.

$Warning = $Volumes.Count - 5
If ($Snapshots.Count -eq $Volumes.Count) {$returncode = 0}
ElseIf (($Snapshots.Count -lt $Volumes.Count) -and ($Snapshots.Count -gt $Warning)) {$returncode = 1}
ElseIf ($Snapshots.Count -le $Warning) {$returncode = 2}

Now all that is left is to exit the script and return the exit code to Nagios. However, before we do so I want to return status information as well, to provide the number of snapshots performed on the date compared to the actual number of EBS volumes.

"Total number of EBS Snapshots performed on " + $StatusDate + ": " + $SnapshotCount + "/" + $Volumes.Count 
exit $returncode

Below, is an example of a formatted Status Information message generated:

Total Number of EBS Snapshots performed on 12/09/2013: 137/137

There are one or two issues with the script: if an EBS volume is created during the day and no snapshot has been performed, this will report that there are more volumes than snapshots, so if six EBS volumes were created this would trigger a warning. This can be negated by running the external script within the service command less frequently; in my case I run this once per day.
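
Alternatively, a sketch of a possible mitigation (assuming the volume objects returned by Get-EC2Volume expose a CreateTime property) would be to exclude volumes created on the current day from the comparison:

# Ignore volumes created today, as they will not have had a snapshot performed yet
$Volumes = $Volumes | Where-Object { $_.CreateTime -lt (Get-Date).Date }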

While the script was created to be executed as an external script within Nagios, it can be run standalone from Windows Powershell. If you are looking to add external scripts such as this one to Nagios, see the below link for more information:

https://deangrant.wordpress.com/2013/09/12/creating-and-running-external-scripts-within-nagios-xi/

The full Windows Powershell script can be downloaded from the below link:

https://app.box.com/s/jm88wcrtosfc7xcisbn7

Eastern Time zone format and creating tasks in Ylastic

After signing up to the Ylastic service (ylastic.com) which provides a unified interface to manage your Amazon Web Services cloud environment, I started to struggle with the simple task of creating scheduled tasks using CRON expressions.

From the initial configuration, it would appear that Ylastic does not support special characters other than the asterisk in CRON expressions. That, however, was not the issue I was running into: once the scheduled task was created, it did not appear to be invoked at the times configured in the CRON expression.

After a little bit of head scratching, I found that the Ylastic service only supports the Eastern Time zone and does not take into consideration the time zone you configure for your account. When creating scheduled tasks you therefore need to specify times in the Eastern Time zone; in my case I got used to adjusting for the time zone offset (-05:00) when creating my tasks.
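
If, like me, you would rather script the conversion than do the mental arithmetic, the standard .NET time zone API can translate a local time to Eastern time (the date below is just an example, and the "Eastern Standard Time" Windows time zone ID also accounts for daylight saving):

# Convert a desired local run time to Eastern time for the Ylastic CRON expression
$RunAt = Get-Date "2013-10-15 09:00"
$Eastern = [System.TimeZoneInfo]::ConvertTimeBySystemTimeZoneId($RunAt, "Eastern Standard Time")
$Eastern.ToString("HH:mm")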