Storage – Configure Software iSCSI port binding on an ESXi Host

To configure multipathing for an iSCSI storage device, we can configure software iSCSI port binding on an ESXi host; this allows us to load-balance between paths when all paths are present and to handle the failure of a path at any point between the server and the storage. Without port binding, all iSCSI LUNs are detected using a single path per target. By default, ESXi uses only one vmknic as an egress port to connect to each target, so you are unable to use path failover or to load-balance I/O between the different paths to the iSCSI LUNs.

To enable vmknic-based software iSCSI multipathing, we must perform the following from the vSphere Web Client:

1) Select the ESXi host system on which we wish to configure software iSCSI port binding.

2) Select Manage > Networking.

3) Select Add Host Networking and VMkernel Network Adapter.

4) Create a new Standard Switch (vSS).

5) Assign two physical network adapters, in this example vmnic1 and vmnic2.

6) Specify a network label, in the first instance I will name this vmk-iscsi-1.

7) Assign an IPv4 address to the VMkernel Network Adapter.

8) Select Finish to complete the configuration.

Now, I will repeat Steps 3-8, but this time I will use the existing Standard Switch (vSS) I previously created, assign the network label vmk-iscsi-2 and configure a unique IPv4 address. Once the two VMkernel Network Adapters have been created on the Standard Switch (vSS), we will need to configure the failover order of each one so that each is assigned only a single, unique active physical network adapter.

1) Highlight the VMkernel Network Adapter vmk-iscsi-1 and select Edit.

2) Select Teaming and Failover, enable the Override option and specify vmnic1 as the active adapter and vmnic2 as the unused adapter.

Again, repeat the above steps, but for vmk-iscsi-2 set vmnic2 as the active adapter and vmnic1 as the unused adapter. Finally, we need to configure the network configuration of our Software iSCSI adapter to bind it to the VMkernel Network Adapters created in the above steps.

1) Select Manage > Storage > Storage Adapters.

2) Highlight the Software iSCSI Adapter and select Network Port Binding.

3) Add both vmk-iscsi-1 and vmk-iscsi-2 and rescan the adapter to apply the configuration.
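
The same configuration can also be scripted. The following is a minimal PowerCLI sketch, assuming the software iSCSI adapter is already enabled; the host name, vSwitch name and IPv4 addresses are purely illustrative assumptions, while vmnic1, vmnic2 and the port group names match the example above.

# Minimal PowerCLI sketch of the steps above; host name, vSwitch name and IP addresses are assumptions
$vmhost  = Get-VMHost -Name "esxi01.lab.local"
$vswitch = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch-iSCSI" -Nic vmnic1,vmnic2

# Create the two iSCSI VMkernel Network Adapters, one port group each
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "vmk-iscsi-1" -IP 192.168.10.11 -SubnetMask 255.255.255.0
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "vmk-iscsi-2" -IP 192.168.10.12 -SubnetMask 255.255.255.0

# Override the failover order so each port group has a single active physical adapter
Get-VirtualPortGroup -VMHost $vmhost -Name "vmk-iscsi-1" | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive vmnic1 -MakeNicUnused vmnic2
Get-VirtualPortGroup -VMHost $vmhost -Name "vmk-iscsi-2" | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive vmnic2 -MakeNicUnused vmnic1

# Bind both VMkernel adapters to the software iSCSI adapter and rescan
$esxcli = Get-EsxCli -VMHost $vmhost -V2
$hba    = (Get-VMHostHba -VMHost $vmhost -Type IScsi | Where-Object {$_.Model -match "Software"}).Device
$vmks   = Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel | Where-Object {$_.PortGroupName -match "vmk-iscsi"}
foreach ($vmk in $vmks) { $esxcli.iscsi.networkportal.add.Invoke(@{adapter=$hba; nic=$vmk.Name}) }
Get-VMHostStorage -VMHost $vmhost -RescanAllHba | Out-Null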

http://www.vmware.com/files/pdf/techpaper/vmware-multipathing-configuration-software-iSCSI-port-binding.pdf

Storage – Identify and tag SSD and local devices on an ESXi Host

In my lab environment I was looking to tag a local hard drive as SSD in order to configure and manage vSphere Flash Read Cache. This activity actually ticks two boxes in the VCAP5-DCA blueprint, as to achieve it I was required to use the Pluggable Storage Architecture (PSA) related commands from the esxcli storage namespace.

Firstly, as my ESXi host systems are nested, I created a virtual machine hard disk to be used as a local SSD device and then connected to the ESXi host system using an SSH client. Using the esxcli storage namespace, I will list the available devices to determine the device name for which I need to create a rule to tag the local device as an SSD.

esxcli storage nmp device list
Device Display Name: Local VMware Disk (mpx.vmhba1:C0:T1:L0)

Now, we want to create and add a PSA rule to the device we have discovered and tag this as an SSD device. Once the rule has been created, we will be required to restart the ESXi host system for the changes to apply to the local device.

esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device mpx.vmhba1:C0:T1:L0 --option "enable_local enable_ssd"

After the restart, we can confirm that the local device is now being reported with an SSD drive type.
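
As an alternative to checking in the vSphere Web Client, the drive type can also be confirmed from PowerCLI; a minimal sketch, assuming the host name esxi01.lab.local and the device name discovered earlier:

# Minimal PowerCLI sketch; the host name is an assumption, the device is the one discovered above
Get-VMHost -Name "esxi01.lab.local" |
    Get-ScsiLun -CanonicalName "mpx.vmhba1:C0:T1:L0" |
    Select-Object CanonicalName, IsSsd, IsLocal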

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2013188

VMware vSphere Storage Concepts – Part Three: vCenter Server Storage Filters

In an environment managed by a vCenter Server system, vSphere provides four storage filters that determine the behaviour of the vCenter Server when scanning storage. By default, these filters prevent storage that is currently in use from being presented as available for new use, which I think we can agree is good default behaviour. However, you may have a use case where the default behaviour does not satisfy your requirements.

So what are these filters, and what is their default behaviour?

  • RDM filter – This will filter out LUNs that have already been claimed by an RDM on any host system managed by the vCenter Server.
  • VMFS filter – This will filter out LUNs that have already been formatted with VMFS on any host system managed by the vCenter Server.
  • Host rescan filter – When a VMFS volume is created, all host systems managed by the vCenter Server will perform an automatic rescan.
  • Same host and transport filter – This will filter out LUNs that cannot be used as a VMFS datastore extent due to host or storage incompatibility.

As discussed, the storage filters are enabled by default (setting value ‘True’) and are not listed in the vCenter Server advanced settings. The keys corresponding to each filter are:

  • RDM filter – config.vpxd.filter.rdmFilter
  • VMFS filter – config.vpxd.filter.vmfsFilter
  • Host rescan filter – config.vpxd.filter.hostRescanFilter
  • Same host and transport filter – config.vpxd.filter.SameHostAndTransportsFilter

These settings can be modified using the vSphere Web Client by browsing to Manage > Settings > Advanced Settings and adding the above keys with your corresponding value.

Alternatively, you could connect to the vCenter Server system using PowerCLI and invoke the following cmdlet, where the RDM filter has been configured to be ‘False’. If you need to modify a different storage filter, replace the Name value with the corresponding key described above.

New-AdvancedSetting -Entity $global:DefaultVIServer -Name config.vpxd.filter.rdmFilter -Value False

In what circumstances would we want to modify the behaviour of each filter? Well, here are a couple of use cases:

Microsoft Cluster Server

In order to configure a SCSI-3 quorum disk, the same RDM needs to be presented to multiple host systems, which requires the RDM filter value to be set to ‘False’.

Automating the provisioning of a large number of datastores

In this scenario you may be creating a large number of datastores, and the default behaviour would invoke an automatic rescan of every host system after each datastore is created, which can be considered an inefficient use of resources. In this example, set the Host rescan filter to ‘False’, and make the rescan of each host system the final operation in your automated workflow, or rescan the host systems manually after creation.
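
A minimal PowerCLI sketch of that workflow is shown below; the host name and the LUN selection are purely illustrative assumptions, and the filter is re-enabled before a single rescan at the end.

# Disable the Host rescan filter, create the datastores, then re-enable the filter and rescan once
$vc = $global:DefaultVIServer
New-AdvancedSetting -Entity $vc -Name config.vpxd.filter.hostRescanFilter -Value False -Confirm:$false
$vmhost = Get-VMHost -Name "esxi01.lab.local"
# Illustrative selection only - choose the LUNs intended for the new datastores
$luns = Get-ScsiLun -VmHost $vmhost -LunType disk | Where-Object {$_.CanonicalName -like "naa.*"} | Select-Object -First 5
$i = 1
foreach ($lun in $luns)
   {
   New-Datastore -Vmfs -VMHost $vmhost -Name ("Datastore-{0:D2}" -f $i) -Path $lun.CanonicalName
   $i++
   }
Get-AdvancedSetting -Entity $vc -Name config.vpxd.filter.hostRescanFilter | Set-AdvancedSetting -Value True -Confirm:$false
Get-VMHost | Get-VMHostStorage -RescanAllHba | Out-Null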

VMware vSphere Storage Concepts – Part Two: Raw Device Mapping, N-Port ID Virtualization and VMware Direct Path I/O

We have previously discussed the concept behind the most common approach, VMFS; now we will consider the Raw Device Mapping (RDM), N-Port ID Virtualization (NPIV) and VMware Direct Path I/O concepts.

Raw Device Mapping 

This option for virtual machine hard disks allows the guest to directly utilise a LUN provisioned on the storage array. This may not be the most common approach adopted in the datacenter, but it is beneficial in a number of use cases, some of which I have listed below:

  • Applications that require hardware-specific SCSI commands, such as Microsoft Cluster Server, where the quorum disk requires SCSI-3 commands.
  • Configuring a virtual machine to use N-Port ID Virtualization (NPIV).
  • Enabling a virtual machine to use storage array management software, such as snapshots.
  • Physical-to-virtual migration.
  • Specific I/O requirements.

There are two compatibility modes for RDMs. The default, physical compatibility mode (rdmp), allows SCSI commands to be passed directly from the guest OS to the hardware. The limitation of this mode is that some VMware features which require hypervisor support, such as snapshots, cloning and storage migration, are not available.

RDMs configured with virtual compatibility mode (rdm) pass only a subset of SCSI commands through the hypervisor between the guest OS and the hardware; the limitations of physical compatibility mode therefore no longer apply, as hypervisor support is enabled in this mode.
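
As an illustration, creating an RDM with PowerCLI might look like the following minimal sketch; the virtual machine name and the LUN canonical name are assumptions for the example.

# Minimal sketch: attach a LUN as an RDM in physical compatibility mode
# The VM name and LUN canonical name below are illustrative assumptions
$vm  = Get-VM -Name "mscs-node1"
$lun = Get-ScsiLun -VmHost $vm.VMHost -CanonicalName "naa.600601604550250018ea2d38073cdf11"
New-HardDisk -VM $vm -DiskType RawPhysical -DeviceName $lun.ConsoleDeviceName
# For virtual compatibility mode, use -DiskType RawVirtual instead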

What are the performance considerations when placing a virtual machine hard disk on VMFS, NFS or an RDM? The benefit of an RDM in terms of performance is that you can isolate a virtual machine hard disk to its provisioned LUN; if the virtual machine hard disk were instead located on a VMFS datastore, the available I/O operations would typically be shared between multiple virtual machines. If you had specific I/O requirements, creating an RDM would isolate the I/O operations to this virtual machine hard disk.

However, one counter-argument is that you can easily achieve the above by placing the virtual machine, or just the virtual machine hard disk, on an isolated datastore provisioned from a dedicated LUN. You can also, to an extent, use Storage I/O Control to provide QoS in terms of latency to the datastore, and in future releases the concept of VVOLs can be leveraged with your storage array to provide QoS. In my experience, the performance characteristics of the virtual machine have never been the driver for an RDM (whilst a valid use case); rather, it has been the need for hardware-specific SCSI commands or to enable the use of storage array management software in a guest OS.

N-Port ID Virtualization

N-Port ID Virtualization (NPIV) enables a virtual machine to be assigned an addressable World Wide Port Name (WWPN) on the storage array. In normal operations, a virtual machine uses the WWN of the host system's physical HBAs. NPIV allows storage to be zoned directly to a virtual machine's unique WWN, possibly for QoS or security requirements.

So what are the use cases for NPIV? It provides storage visibility to each virtual machine that has been configured, by leveraging the storage array management software. It can also provide the ability to exceed the configuration maximums for an ESXi host, which at the time of writing are a maximum of 8 HBA adapters and 16 HBA ports.

In order to enable NPIV, you will be required to have created an RDM in physical compatibility mode, and both the HBA and the switch must be NPIV aware.
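
For reference, the WWN assignment itself can be requested through the vSphere API; the following is a minimal sketch from PowerCLI, assuming a powered-off virtual machine that already has a physical compatibility mode RDM, with illustrative values for the VM name and WWN counts.

# Minimal sketch using the vSphere API: ask vCenter to generate NPIV WWNs for the VM
# The VM name and the desired WWN counts are illustrative assumptions
$vm   = Get-VM -Name "mscs-node1"
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.NpivWorldWideNameOp = "generate"
$spec.NpivDesiredNodeWwns = 1
$spec.NpivDesiredPortWwns = 4
$vm.ExtensionData.ReconfigVM($spec)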

VMware Direct Path I/O

Finally, let's discuss VMware Direct Path I/O, which allows a virtual machine to gain direct control of an adapter such as a NIC or HBA. In order to support VMware Direct Path I/O, the host system requires either Intel Virtualization Technology for Directed I/O (VT-d) or AMD I/O Virtualization (AMD-Vi or IOMMU) to be enabled in the BIOS.

An example of a use case for VMware Direct Path I/O is performance: by providing direct control of a NIC to a workload with very high packet rates, it is likely to achieve greater performance through the CPU savings that come from direct access.
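
As a sketch of what that configuration looks like from PowerCLI (the host name, virtual machine name and device filter are illustrative assumptions; the virtual machine must be powered off and requires a full memory reservation):

# Minimal PowerCLI sketch: list host PCI devices available for passthrough and attach one to a VM
# Host name, VM name and the device name filter are illustrative assumptions
$vmhost = Get-VMHost -Name "esxi01.lab.local"
$device = Get-PassthroughDevice -VMHost $vmhost -Type Pci | Where-Object {$_.Name -match "Ethernet"} | Select-Object -First 1
Add-PassthroughDevice -VM (Get-VM -Name "high-pps-vm") -PassthroughDevice $device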

Also, by providing direct control to the virtual machine, you may access devices from the guest OS which may not yet be on the Hardware Compatibility Guide.

Unable to remove connected hosts from Unisphere managed storage array

I was recently bringing a CX4-120 storage array back to life which had a number of connected hosts registered that were no longer active. I was unable to remove these hosts using the Unisphere GUI after invoking the deregister command on the last HBA connection, as the host objects remained.

After a short period of scratching my head I found an article by Jason Boche (@jasonboche), who had experienced the same issue and resolved it by restarting the management server on each storage processor (https://x.x.x.x/setup) and reconnecting using the Unisphere GUI, after which the connected hosts were no longer listed.

The article describing the issue and actions in more detail can be found below:

http://www.boche.net/blog/index.php/2011/11/14/unable-to-remove-stubborn-hosts-from-unisphere-and-the-solution/

Reading archive performance data from EMC storage arrays

For EMC storage arrays you are able to collect and archive performance data and export it as a NAR file using Navisphere/Unisphere.

In order to read the data collected you can convert the archive dump file to CSV format using the Navisphere CLI, by invoking the following command:

C:\Program Files (x86)\EMC\Navisphere CLI\NaviSECCli.exe analyzer -archivedump -data <filename.nar> -out <filename.csv> 

In order to run the above you will require Navisphere CLI installed.

Retrieving Disk Number and SCSI ID for volumes using the Windows PowerShell storage module

I was recently looking at obtaining volume information including the volume name, capacity, free space, disk number and SCSI ID, covering volumes mounted with either a drive letter or a folder path.

For instances running Windows 8 or Windows Server 2012, I was able to do this by combining WMI queries with the Get-Partition cmdlet included in the storage module of these operating systems, and then exporting the results to a comma-separated values (CSV) file.

The requirement was to invoke the above against a collection of server names; in this example I will be using a text file containing server names and looping through each one.

$Servers = Get-Content -Path "C:\Collection\Servers.txt" 
$Storage = ForEach ($Server in $Servers)
   {

I will then invoke the Get-WmiObject cmdlet to query the Win32_Volume class and retrieve all local disks where the label is not equal to ‘System Reserved’.

$Volumes = Get-WmiObject  Win32_Volume -ComputerName $Server | Where-Object {$_.DriveType -eq "3" -and $_.Label -ne "System Reserved"}

I will then loop through each volume returned to retrieve the volume information and select the computer name, caption, capacity and free space to include in the output:

ForEach ($Volume in $Volumes)
   { 
   ""  | Select-Object -Property  @{N="Name";E={$Volume.PSComputerName}},
   @{N="Caption";E={$Volume.Caption}},
   @{N="Capacity (KB)";E={$Volume.Capacity}},
   @{N="Free Space (KB)";E={$Volume.FreeSpace}},

As I am initially using the Win32_Volume class to retrieve the volume information, I am unable to return the disk number. To provide this information I used the Get-Partition cmdlet, filtering the partitions retrieved with the -like comparison operator to return a match where an access path value matches the caption value.

@{N="Disk";E={Invoke-Command -ComputerName $Server -ScriptBlock {(Get-Partition | Where-Object {$_.AccessPaths -like $Using:Volume.Caption}).DiskNumber}}},

The final requirement is to include the SCSI ID, by returning both the SCSI port and target ID of the volume. Again, this information is not available from the Win32_Volume class, so as with the previous item I retrieve the disk number using the Get-Partition cmdlet, pass the disk number to Get-WmiObject to query the Win32_DiskDrive class, return the object whose Index value is equal to the disk number, and then join the SCSI port and target ID values.

@{N="SCSI Id";E={Invoke-Command -ComputerName $Server -ScriptBlock {
$Disk = (Get-Partition | Where-Object {$_.AccessPaths -like $Using:Volume.Caption}).DiskNumber
$SCSI = Get-WmiObject -Class Win32_DiskDrive | Where-Object {$_.Index -eq $Disk}
"$($SCSI.SCSIPort):$($SCSI.SCSITargetID)"}}}

Finally, I will close both loops and export the information for all servers in the collection to a comma-separated values (CSV) file.

   }
   }
$Storage | Export-Csv -NoTypeInformation -UseCulture -Path "C:\Output\VolumeInformation.csv"