Windows PowerShell cmdlets to secure PSCredential Objects

I have previously discussed securing credentials using Windows PowerShell at Powershell: Securing credentials using the PSCredential class. In this article, I will discuss a number of cmdlets I have created to secure credentials using an Advanced Encryption Standard (AES) encryption key and to retrieve the content from an encrypted standard string.

As I am using an encryption key and storing the information in a content file, I will be using ACLs on the NTFS filesystem to control access. Alternatives would be to store the encryption key in a database or to use a certificate to control access to the item. Also, in practice I will store the encryption key on a remote file server.

Firstly, we need to create the encryption key using the ‘New-EncryptionKey’ cmdlet, which uses the RNGCryptoServiceProvider class to generate a random byte array for the encryption length. By default, the cmdlet uses a 32-byte array to support the AES 256-bit encryption length. The cmdlet also supports 128-bit and 192-bit encryption lengths, which use 16-byte and 24-byte arrays respectively. The random byte array for the specified encryption length is generated and the output is sent to a file, which becomes the encryption key content.

Once the content has been sent to the output file, the random byte array is removed from the current session.

# Creates an AES 256-bit encryption key at the location D:\Output\Keys\mykey.key
New-EncryptionKey -Output D:\Output\Keys\mykey.key 

# Creates an AES 192-bit encryption key at the location D:\Output\Keys\mykey.key
New-EncryptionKey -Bytes 24 -Output D:\Output\Keys\mykey.key
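Under the hood, the key-generation logic can be sketched as follows. This is an illustrative Python sketch rather than the cmdlet itself (the cmdlet uses the .NET RNGCryptoServiceProvider class); the function name and byte-length mapping are assumptions for illustration:

```python
# Illustrative sketch: generate a cryptographically random byte array
# suitable for an AES key, as New-EncryptionKey does.
import os

# Supported AES key lengths and their byte-array sizes.
AES_KEY_BYTES = {128: 16, 192: 24, 256: 32}

def new_encryption_key(bits=256):
    """Return a random byte array for the given AES encryption length."""
    if bits not in AES_KEY_BYTES:
        raise ValueError("AES supports 128-, 192- or 256-bit keys only")
    return os.urandom(AES_KEY_BYTES[bits])

key = new_encryption_key()  # 32 random bytes for AES-256
```

The bytes would then be written to the key file and the variable cleared from the session, mirroring the cmdlet's behaviour.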

Now that we have created an encryption key, we can convert the secure string of a credential object's password using the specified encryption key and send the output to a password file using the ‘New-EncryptedString’ cmdlet. The cmdlet retrieves the content of the specified encryption key file, converts the secure string of the stored credential object's password to an encrypted standard string, sends the output to a file and clears the stored encryption key from the current session.

New-EncryptedString -KeyFile D:\Output\Keys\mykey.key -PasswordFile D:\Output\Passwords\mypassword.txt 

Finally, we want to retrieve the credential object as a variable to pass to a subsequent cmdlet that requires authentication. The content of the password file is retrieved, converted to a secure string using the content of the encryption key, stored as a password variable and passed to the PSCredential class, which represents a set of security credentials and returns the object. In subsequent cmdlets I can use the ‘$Password.GetNetworkCredential().Password’ property value from the PSCredential object for authentication.

$Password = Get-PSCredentialObject -Username administrator -KeyFile D:\Output\Keys\mykey.key -PasswordFile D:\Output\Passwords\mypassword.txt 

The cmdlets are available from the below:

New-EncryptionKey –
New-EncryptedString –
Get-PSCredentialObject –


Creating JetBrains YouTrack issues with Windows PowerShell

YouTrack is a proprietary, commercial browser-based bug tracker, issue tracking system and project management software developed by JetBrains. A REST API is also provided, which allows various actions to be performed programmatically. In this article I will describe the cmdlets I have created to create issues for projects, which leverage the Invoke-WebRequest cmdlet to interact with the REST API using Windows PowerShell.

The cmdlets require a minimum of Windows PowerShell 3.0 and use the response object for HTML content without Document Object Model (DOM) parsing; this is required when Internet Explorer is not installed on the local instance invoking the cmdlets. An example use case for the cmdlets is to configure them as an event handler in an infrastructure monitoring solution, creating issues for alarms raised.

The REST API is enabled by default and you can confirm connection and access permissions by browsing to ‘http://{baseURL}/rest/admin/project’ which should return an XML file with a list of all the projects. A more detailed description of the REST API can be found at

In this example, we are using cookie-based authorization. However, this process can also be adapted to use Hub OAuth 2.0 authentication, which is described at

Firstly, we need to establish a connection to the REST API using specified credentials and store the web request session object in the session variable. This will allow cookie information to be re-used in subsequent web requests. The specified parameters of login and password are required to be used in the POST method, as below:

POST /rest/user/login?{login}&{password}

The cmdlet Connect-YouTrack establishes a connection by specifying the YouTrack Uri, Username and Password parameters and returns the web request session object, which in this example I am storing in the Connection variable.

$Connection = Connect-YouTrack -YouTrackUri http://server1 -Username administrator -Password P@55Word! 
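For reference, the login call the cmdlet wraps can be sketched in Python with the standard library. The URL construction follows the POST method above; the function name and the cookie-handling note are assumptions for illustration:

```python
# Sketch of the cookie-based login request (illustrative, not the cmdlet).
from urllib.parse import urlencode

def build_login_url(base_uri, login, password):
    """Build the URL for POST /rest/user/login?{login}&{password}."""
    return base_uri + "/rest/user/login?" + urlencode(
        {"login": login, "password": password})

url = build_login_url("http://server1", "administrator", "P@55Word!")
# A urllib.request opener built with HTTPCookieProcessor would retain the
# authentication cookie across requests, much like -SessionVariable does
# for Invoke-WebRequest.
```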

Once a connection has been established and the web session object has been returned, we can create an issue in a project, specifying a summary and description. The cmdlet ‘New-YouTrackItem’ will invoke a PUT request to the REST API with the specified parameters of project, summary and description, as below.

PUT /rest/issue?{project}&{summary}&{description}

Once the item has been created, the URI of the new issue is retrieved from the response header, and using a regular expression pattern match we retrieve the item number so that a subsequent web request can update the item with additional information; this is stored as a variable, in this example ‘NewIssue’. The Connection variable returned when establishing a connection to the REST API is passed to the session variable for authentication.

$NewIssue = New-YouTrackItem -YouTrackUri http://server1 -Project YT -Summary "Summary API Rest Method" -Description "Description API Rest Method" -SessionState $Connection
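The regular expression step can be checked standalone. The Location header value below is a hypothetical example of the URI returned for a new issue, and the function name is an assumption:

```python
# Extract the item number (e.g. "YT-38") from the response Location URI.
import re

def issue_id_from_location(location):
    """Return the trailing {project}-{number} issue ID, or None if absent."""
    match = re.search(r"([A-Z]+-\d+)$", location)
    return match.group(1) if match else None

issue = issue_id_from_location("http://server1/rest/issue/YT-38")  # "YT-38"
```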

Finally, we want to apply a command to the created issue to update items with the specified parameters, as below.

POST /rest/issue/{issue}/execute?{command}

The cmdlet ‘Update-YouTrackItem’ specifies a mandatory parameter named ‘ExecuteCommand’ to specify all the items you wish to update in the issue. In this example, I will be invoking a command to set the priority as ‘High’, the type as ‘Service Desk’ and the category as ‘Support’ to update the issue previously created, from the stored ‘NewIssue’ variable.

Update-YouTrackItem -YouTrackUri http://server1 -Item $NewIssue -ExecuteCommand "priority High type Service Desk category Support" -SessionState $Connection
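Similarly, the execute request the cmdlet issues can be sketched; the issue ID below is a hypothetical example and the function name is an assumption:

```python
# Sketch of the URL for POST /rest/issue/{issue}/execute?{command}.
from urllib.parse import urlencode

def build_execute_url(base_uri, issue, command):
    """Build the execute-command URL for an existing issue."""
    return "{0}/rest/issue/{1}/execute?{2}".format(
        base_uri, issue, urlencode({"command": command}))

url = build_execute_url("http://server1", "YT-38",
                        "priority High type Service Desk category Support")
```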

The cmdlets are available from the below:

Connect-YouTrack –
New-YouTrackItem –
Update-YouTrackItem –

Implementing R functionality on Tableau Server

R ( is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows and macOS. In this example, I will be installing and configuring R on Ubuntu 14.04 and enabling R functionality with Tableau Server by installing the Rserve library.

In order to install R and R packages from the ‘Comprehensive R Archive Network’ (CRAN) we will use the Advanced Packaging Tool (APT) and therefore need to add the repository to the list of sources, as well as the public key used to authenticate packages downloaded using APT. This will ensure we install the latest version of both R (r-base) and the CRAN package for Rserve.

sudo sh -c 'echo "deb trusty/" >> /etc/apt/sources.list'
gpg --keyserver --recv-key E084DAB9
gpg -a --export E084DAB9 | sudo apt-key add -
sudo apt-get update
sudo apt-get install r-base

To verify the installation we can enter an interactive R shell session; once loaded, we shall quit the session.


Now we will install the Rserve CRAN package by invoking the install.packages() function in R. In order for the package to be available to all users this is installed as root (su).

sudo su - -c "R -e \"install.packages('Rserve', repos = '')\""

Again, we can verify the installation by entering the R interactive shell session, confirming the Rserve library is available and then quit the session.


By default Rserve only accepts local connections; in order to enable remote connections we will need to modify the configuration file ‘/etc/Rserve.conf’. For a detailed list of other Rserve connection properties that may be set, see

remote enabled

In order to ensure the Rserve process is initialised at startup as a daemon we will need to create the shell script ‘/etc/init.d/’ as below. As no ownership of files is required for the invocation of the Rserve CRAN package, we will use the ‘nobody’ account to start the daemon.

sudo -u nobody R CMD Rserve --vanilla 

In order to execute the shell script we will need to set execute permissions on it and add a link to initialise the shell script at startup.

sudo chmod 755 /etc/init.d/
sudo update-rc.d /etc/init.d/ defaults

To confirm the Rserve process is initialised at startup using the shell script, we can reboot the instance and confirm the process is running:

ps aux | grep Rserve

The next step is optional; in this example I only want to permit inbound connections on TCP service port 6311 (Rserve) from the Tableau Server. By default, rules added to iptables are ephemeral and are removed on restart. In order to save the configuration we will install the ‘iptables-persistent’ package.

sudo apt-get install iptables-persistent 

I will first insert a drop rule for all connections (IPv4) to destination port 6311 (tcp), then insert an accept rule for the Tableau Server ( to destination port 6311 (tcp), and then save the updates to preserve the iptables configuration.

sudo iptables -I INPUT -p tcp -s --dport 6311 -j DROP
sudo iptables -I INPUT -p tcp -s --dport 6311 -j ACCEPT
sudo service iptables-persistent save

The last step is to configure the Tableau Server VizQL Server connection properties for a Rserve host ( and port (6311) to enable R functionality within workbooks. Optional configuration parameters are also available to use a username and password, but in this example access is restricted by firewall rules.

tabadmin stop
tabadmin set 
tabadmin set vizqlserver.rserve.port 6311
tabadmin configure
tabadmin start 

The configuration is now complete. To verify the VizQL Server configuration, invoke ‘tabadmin configure -o ‘ to dump the current configuration to a file and confirm the configuration parameters have been set.

Posting IMDB ratings to Twitter, my first multi-step Zap

I am sure the majority of us reading this article have one way or another automated tasks as part of our day jobs, with services such as Zapier and IFTTT we can easily take this approach for tasks in our personal lives to build integrations between applications we use every day.

In this example, I will be using a ‘Zap’ to automate the task of publishing a tweet containing ratings from my IMDB account for a film or TV series I have recently viewed. On a separate note, I am still not quite sure why this is not integrated as a sharing option in account settings!

What is a Zap? Well lets take the official explanation:

A Zap is a blueprint for a task you want to do over and over. In words, a Zap looks like this:

“When I get a new thing in A, do this other thing in B.”

So in my example: when I post a review to IMDB (A), a message containing my rating should be posted to Twitter (B).

Firstly, I created the trigger, which uses the RSS app by Zapier to discover new feed items from the publicly available URL of the RSS feed of my ratings on IMDB. To retrieve the URL, log in to your IMDB account, select ‘Your Activity > Your Ratings’ and select the RSS icon in the right corner.

Also, in order to post, the list is required to be public: select ‘Change list settings’ and then ‘Make this list a public list visible to all public IMDB users’.


In my example the URL ‘’ will be used for the feed. In order to determine what triggers a new feed item, we will select ‘Different Guid/URL’ which is the default option.

Now that we have our trigger for the multi-step task, our final goal is to post a tweet containing content from items retrieved in the feed. In this step I have created an action using the Twitter app to create a tweet (limited to 10 per hour) and selected the account. We are then provided with a template which will contain the content of the message to post to Twitter; here you can select items returned from the feed trigger to generate the message by selecting a field.

So, lets have a look at the information returned from a new feed item:

            <pubDate>Sat, 13 Feb 2016 00:00:00 GMT</pubDate>
            <title>The Man in the High Castle (2015 TV Series)</title>
            <description>mail-deangrant-689-891137 rated this 7.</description>

In my example, I want to post a message similar to ‘Rated {title} a {rating}. #imdb {link}’. From the above we can see that there is no field item containing only the rating score, only a description item containing the text ‘{username} rated this {rating}’. So let's go back a step: I want to extract the rating score from that text pattern. We can achieve this by creating a step prior to posting the message which uses the Code app to run Python on the information received from the trigger. We provide the description field as the input item (input[‘description’]) and split the string into substrings where the text pattern ‘rated this ’ occurs, returning the item in the array containing the rating score.

string = input['description']
rating = string.split('rated this ',1)[1]

return {
    'rating': rating
}
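Outside the Code app, the split can be verified against the sample description shown earlier:

```python
# Standalone check of the split logic used in the Code step.
description = "mail-deangrant-689-891137 rated this 7."
rating = description.split('rated this ', 1)[1]
# rating is "7." — the trailing full stop conveniently ends the tweet sentence.
```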

Now, back to posting the message to Twitter: we can select the following field items to generate the message as per my requirements. Field items prefixed with Step 1 are items returned from the RSS app, and those prefixed with Step 2 are data returned from the Code app.


When the Zap runs and a new feed item is discovered, the message is posted to Twitter based on the template created above and should read similar to the item below:


Finally, if we look at the task history we can see how the Zap was triggered, and the data in and data out received/returned during invocation of each step.

Step 1 – Found 1 new Item in Feed in RSS.

Data In


Data Out 

mail-deangrant-689-891137 rated this 7.
Sat, 13 Feb 2016 00:00:00 GMT,Sat Feb 13 00:00:00 2016
The Man in the High Castle (2015 TV Series)

Step 2 – Sent 1 new Run Python to Code.

Data In 

mail-deangrant-689-891137 rated this 7.
string = input['description']
rating = string.split('rated this ',1)[1]

return {
    'rating': rating
}

Data Out 


Step 3 – Sent 1 new Tweet to Twitter

Data In 

Rated The Man in the High Castle (2015 TV Series) a 7. #imdb

Data Out 

Identifying applications vulnerable to the Sparkle MiTM attacks

As recently disclosed ( you may already be aware of a vulnerability in Sparkle that exposes a large number of applications to man-in-the-middle (MiTM) attacks over insecure HTTP channels.

In order to identify applications that are susceptible to MiTM attacks installing malicious code via the Sparkle software framework, invoke the below from a terminal window. In the output we are looking for applications where the version string is prior to 1.13.1; these will be vulnerable if set to load over HTTP.

find /Applications -path '*' -exec echo {} \; -exec grep -A1 CFBundleShortVersionString '{}' \; | grep -v CFBundleShortVersionString

The application's ‘Info.plist’ file will have a ‘SUFeedURL’ key, which can identify any assets that are being loaded over unsecured HTTP. Alternatively, you can attempt to update the application and perform a packet capture using a utility such as Wireshark to determine whether the HTTP protocol is being used.
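If you prefer to script the check, a short sketch using Python's standard plistlib module can flag an insecure feed URL. The function name is an assumption, and the path to each application's Info.plist will vary:

```python
# Flag a Sparkle feed that is loaded over plain HTTP (illustrative sketch).
import plistlib
from urllib.parse import urlparse

def insecure_feed_url(info_plist_path):
    """Return the SUFeedURL if served over plain HTTP, otherwise None."""
    with open(info_plist_path, "rb") as f:
        info = plistlib.load(f)
    feed = info.get("SUFeedURL")
    if feed and urlparse(feed).scheme == "http":
        return feed
    return None
```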

A list of applications that are dependent on Sparkle can be found here, though not all of these may be communicating over insecure HTTP.

vCenter Server 5.5 Update 3a: Update fails with Warning 32014.

Recently, during an upgrade of an instance of a vCenter Server system to 5.5 Update 3a, the installation returned an error with the following message:

Warning 32014. A utility for phone home data collector couldn’t be executed successfully. Please see its log file (with name PhoneHome) and vminst.log in the system temporary folder for more details.

Upon restarting the ‘VMware VirtualCenter Server’ service the following error message is returned:

Error 1053: The service did not respond to the start or control request in a timely fashion.

This is a known issue affecting vCenter 5.5 where the installer fails to update the ‘deployPkg.dll’ file during the upgrade process, for which there is currently no resolution.

To work around the issue we have two options. Firstly, we can roll back the instance of the vCenter Server system to a point prior to the upgrade, remove the file from the patch cache (by default located at ‘C:\Windows\Installer\$PatchCache$\Managed\05550F1E83248734780F0115742A159D\5.5.0’) and perform the upgrade.

Alternatively, you can perform the following steps to resolve the issue.

1) Browse to ‘C:\Program Files\VMware\Infrastructure\VirtualCenter Server\’ and remove the file ‘deployPkg.dll’.

2) Download the file ‘‘ from the official VMware site and extract to the location ‘C:\Program Files\VMware\Infrastructure\VirtualCenter Server\’.

3) Start both the ‘VMware VirtualCenter Server Service’ and ‘VMware VirtualCenter Management Webservices’ services.

4) Uninstall the Profile-Driven Storage service by running the following command with elevated privileges.

msiexec.exe /x {7BC9E9D9-3DF6-4040-B4A1-B6A3A8AE75BA} SKIPVCCHECK=1 SUPPRESS_CONFIRM_UNINSTALL="1" /qr

5) Install the vCenter Server 5.5 Update 3a version of the Profile-Driven Storage service by running the following command with elevated privileges. In the below example, I am using ‘vcenter.dean.local’ as the FQDN of my vCenter Server system and the installation media is mounted on D:\.

msiexec.exe /L*V "%temp%\sps_vminst.log" /I "D:\vCenter-Server\Profile-Driven Storage\VMware vSphere Profile-Driven Storage.msi" INSTALLDIR="C:\Program Files\VMware\Infrastructure\" COMPUTER_FQDN=vcenter.dean.local TOMCAT_MAX_MEMORY_OPTION="S" VC_KEYSTORE_TYPE=PKCS12 VC_KEYSTORE_PASSWORD=testpassword VC_SSL_DIR="C:\ProgramData\VMware\VMware VirtualCenter\SSL\" VC_SPS_EXTENSION_DIR="C:\Program Files\VMware\Infrastructure\VirtualCenter Server\extensions\com.vmware.vim.sps\" IS_URL="https://vcenter.domain.local:10443" ARPSYSTEMCOMPONENT=1 SKIPVCCHECK=1 /qr

Now confirm that all your services are running and you are able to access the vCenter Server System.

As this is a known issue, you can prevent the symptom of this prior to performing an upgrade by removing the file from the patch cache prior to running your vCenter Server System upgrade.

{php}IPAM: The API server module and automating IP address reservation

I have recently provided articles regarding the installation and configuration of {php}IPAM. One of my use cases for investigating IP address management systems was to provide an API server module to request an IP address for allocation.

By default, a module is provided to request an IP address by browsing to Administration > IPAM settings > Feature Settings and enabling the ‘IP request module’.


However, this requires user interaction in order to approve the pending request. This is where I discovered a number of PHP scripts by Doug Morris that leverage the API framework. In this scenario, getFreeIP.php provides the functionality to retrieve the first available IP address in a subnet. In addition, the following scripts are available from the repository:

getIPs.php – dump information for a subnet.
removeHost.php – removes IP address information from a subnet for a host.
tokenValid.php – validates API token for requests.

In order to retrieve the first available IP address for a subnet we also need to validate the API token for the request, so the tokenValid.php file is required as well. The files should be placed in the ‘/var/www/phpipam/api’ directory.

cd /tmp 
git clone
cd /tmp/api
cp getFreeIP.php tokenValid.php /var/www/phpipam/api

In order to submit a request to the API we need to browse to Administration > IPAM Settings > Feature Settings and enable the API server module. Then browse to Administration > IPAM Settings > API Management to create an API key to which you will specify an application identifier and for this use case set ‘Read’ application permissions.



In order for the API server module to provide data to client requests, the PHP curl and mcrypt extensions are required to be installed, the mcrypt extension enabled and the Apache web server restarted to apply the changes.

sudo apt-get install php5-curl php5-mcrypt
sudo php5enmod mcrypt
sudo service apache2 restart

Now that we have enabled the API server module, created the API key and placed the script files on the host, we can send an HTTPS request, which requires the following information:

Application Identifier
Application Code

The request will return the first free IP address in the subnet and reserve it. If an IP address has already been reserved for the hostname, that address will be returned. In this example I am submitting a request to reserve an IP address for the hostname ‘server1.dean.local’ in the subnet ‘’ and specifying the owner as ‘deangrant’.

curl "https://ipam.dean.local/api/getFreeIP.php?apiapp=dean&apitoken=30c13f9d33668a5e13e79a5865dc409f&subnet="