Objective 1.1 – Implement Complex Storage Solutions

Determine use cases for and configure VMware DirectPath I/O

The main use case for DirectPath I/O with a NIC is to support extremely network-intensive workloads within a VM, when other options such as a dedicated vSwitch with dedicated physical NICs for that VM are insufficient.

To configure DirectPath I/O, follow these steps:

  • Select Configuration > Hardware > Advanced Settings (on an ESXi Host)
  • Select “Configure Passthrough”

DirectPathIO

  • Select what you want to mark as a passthrough device

DirectPathIO_Device

  • Click OK
  • Right-click the VM and select Edit Settings
  • Add a PCI Device and choose the device you marked for passthrough in the previous steps
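
The attach step can also be scripted with PowerCLI once the device has been marked for passthrough and the host rebooted. This is a minimal sketch using the Get-PassthroughDevice/Add-PassthroughDevice cmdlets; the host name, VM name and the “10GbE” match string are placeholders, not values from this walkthrough.

  # Minimal PowerCLI sketch - host name, VM name and device match string are placeholders
  $esx = Get-VMHost -Name "esx01.lab.local"

  # List the PCI passthrough devices available on the host
  Get-PassthroughDevice -VMHost $esx -Type Pci | Select-Object Name

  # Attach one of those devices to a (powered-off) VM as a PCI device
  $vm  = Get-VM -Name "HeavyIO-VM"
  $dev = Get-PassthroughDevice -VMHost $esx -Type Pci |
         Where-Object { $_.Name -match "10GbE" } | Select-Object -First 1
  Add-PassthroughDevice -VM $vm -PassthroughDevice $dev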

Determine requirements for and configure NPIV

N-Port ID Virtualisation (NPIV) is used if you want a VM to be assigned an addressable World Wide Port Name (WWPN) within a SAN.

NPIV only works if the VM has an RDM attached and both the HBA and the switch are NPIV-aware.

To configure NPIV, follow these steps:

  • Select and edit the VM to be assigned WWPNs
  • Select “Options”
  • Highlight Fibre Channel NPIV
  • Untick the “Temporarily Disable NPIV for this Virtual machine” checkbox
  • Toggle the “Generate new WWNs” radio button and use the drop-downs to configure the number of WWPNs and WWNNs

NPIV

  • Click OK (the next time you edit the VM, the WWPNs/WWNNs will be displayed)
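
For reference, these options map to NPIV properties on the VM’s VirtualMachineConfigSpec, so they can also be set through the API from PowerCLI. A hedged sketch (the VM name is a placeholder and the VM should be powered off):

  # Hedged PowerCLI/API sketch - "SAN-VM" is a placeholder VM name
  $vm   = Get-VM -Name "SAN-VM"
  $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
  $spec.NpivTemporaryDisabled = $false      # the "Temporarily Disable NPIV" checkbox
  $spec.NpivWorldWideNameOp   = "generate"  # the "Generate new WWNs" option
  $spec.NpivDesiredNodeWwns   = 1
  $spec.NpivDesiredPortWwns   = 4
  $vm.ExtensionData.ReconfigVM($spec)

  # The generated WWNNs/WWPNs then show up in the VM's configuration
  (Get-VM -Name "SAN-VM").ExtensionData.Config |
      Select-Object NpivNodeWorldWideName, NpivPortWorldWideName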

Understand use cases for Raw Device Mapping

  • Microsoft Clustering may require a “Quorum” RDM
  • Applications that need to communicate directly with the SAN (SRM)
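
For illustration, an RDM can also be attached from PowerCLI; the VM name and naa ID below are placeholders, and the DiskType parameter controls physical versus virtual compatibility mode:

  # Hedged PowerCLI sketch - VM name and naa ID are placeholders
  $vm  = Get-VM -Name "ClusterNode1"
  $lun = "/vmfs/devices/disks/naa.60003ff44dc75adc0000000000000001"

  # Physical compatibility RDM (typical for Microsoft cluster quorum/data disks)
  New-HardDisk -VM $vm -DiskType RawPhysical -DeviceName $lun

  # Use -DiskType RawVirtual instead if virtual compatibility mode is needed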

Configure vCenter Server storage filters

vSphere provides four storage filters that affect how vCenter behaves when scanning storage. Without these filters, an administrator could create a datastore on a LUN that is already in use as a datastore or an RDM, on a LUN that is still being rescanned by another host (following datastore creation on a different host), or on storage that is not compatible.

These four filters are:

RDM Filter – Filters out LUNs that have been claimed as RDMs. This filter needs to be disabled when setting up a Microsoft cluster and attaching an RDM to multiple VMs.

VMFS Filter – Filters out LUNs that have already been claimed as VMFS datastores.

Host Rescan Filter – By default, when a datastore is created an automatic rescan of all hosts attached to the vCenter is carried out. This filter can be disabled if, for example, you are adding 1,000 new datastores via PowerCLI and want a single rescan to run after all of the storage has been created.

Same Host and Transport Filter – Filters out LUNs that cannot be used as VMFS extents due to host or storage incompatibility. For example, a LUN that is not presented to all hosts in the cluster could not be used as an extent.

To disable these filters follow these steps:

  • In the vSphere Client, browse to Administration > vCenter Server Settings > Advanced Settings

Note: To disable the filter a new advanced setting must be created (they are not listed by default)

  • Enter one or more of the following keys with a value of false:
    • config.vpxd.filter.vmfsFilter – false
    • config.vpxd.filter.rdmFilter – false
    • config.vpxd.filter.SameHostAndTransportsFilter – false
    • config.vpxd.filter.hostRescanFilter – false

Storage_Filter
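
The same keys can also be created from PowerCLI. A sketch, assuming an existing Connect-VIServer session to the vCenter Server (the filter names are those listed above):

  # Hedged PowerCLI sketch - assumes Connect-VIServer has already been run
  # Show any filter settings that already exist (none are defined by default)
  Get-AdvancedSetting -Entity $global:DefaultVIServer -Name "config.vpxd.filter.*"

  # Create a setting to disable, for example, the RDM filter
  New-AdvancedSetting -Entity $global:DefaultVIServer -Name "config.vpxd.filter.rdmFilter" `
      -Value $false -Confirm:$false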

Understand and apply VMFS re-signaturing

Every VMFS volume has a Universally Unique Identifier (UUID) stored in its metadata, which is used to match a LUN to a VMFS datastore. When a copy of a LUN (for example an array-based snapshot or replica) is presented to a host, the UUID in the copy’s metadata no longer matches the device it resides on, so the host treats the volume as a snapshot; it can either be mounted with its existing signature or be resignatured so it can be used alongside the original volume.

Resignature a datastore using esxcli

  • Acquire the list of copies (if resignaturing replicated or duplicate LUNs): esxcli storage vmfs snapshot list
  • esxcli storage vmfs snapshot resignature --volume-label=<label> | --volume-uuid=<id>

Resignature a datastore using vSphere Client

  • Enter the Host view (Ctrl + Shift + H)
  • Click Storage under the Hardware frame
  • Click Add Storage in the right window frame
  • Select Disk/LUN and click Next
  • Select the device to add and click Next
  • Select Assign new signature and click Next
  • Review your changes and then click Finish

Understand and apply LUN masking using PSA-related commands

LUN masking at the ESXi host level can be used to prevent an ESXi host from using a particular path, or to completely block its access to a storage device. Use cases include troubleshooting path communication and avoiding problems when a host permanently loses access to a device.

In vSphere 5, when an ESXi host loses all communication with a device, an All Paths Down (APD) condition is triggered. APD is considered transient, with the possibility that the device may come back online (for example after a cable pull). The newer state, Permanent Device Loss (PDL), is a way for the ESXi host to recognize that I/O should no longer be queued in anticipation of the device returning. vSphere 5 can determine whether paths to a datastore/device are APD or PDL by means of SCSI sense codes: when all paths are down, I/O remains queued until a SCSI sense code officially reports that the device is permanently gone, at which point the path transitions to PDL. This process continues for every path defined for the device, and once all paths are PDL the device itself is considered PDL. In the past, an APD condition could mean indefinite queuing of I/O to the device; this queue would block hostd worker threads and could eventually crash the process and the virtual machine. When a device goes PDL, the queued I/O is failed and the hostd workload is freed.
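
One way to see how a host currently classifies its devices is to query esxcli from PowerCLI. A sketch (the host name is a placeholder; the exact status strings, such as on/off/dead, vary between builds, with a PDL device typically reported as dead):

  # Hedged PowerCLI sketch - host name is a placeholder
  $esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esx01.lab.local") -V2

  # Show each device and its current status
  $esxcli.storage.core.device.list.Invoke() | Select-Object Device, DisplayName, Status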

To mask a LUN using the PSA MASK_PATH plugin commands:

  • Connect to the ESXi host using SSH
  • Verify that the MASK_PATH plugin is in use by running:  esxcli storage core claimrule list

PSA_MASK

  • You should see a file and runtime listed for MASK_PATH
  • From the output of the previous command, identify a rule number that is not already in use (rules 0-100 are reserved for VMware internal use)
  • Identify the storage device you want to mask and record the naa deviceID: esxcfg-scsidevs -l
  • Identify the path information and record the HBA, Target, Channel, and LUN number: esxcfg-mpath -b -d <deviceID>
  • Mask a path using a unique rule number: esxcli storage core claimrule add --rule <number> -t location -A <hba_adapter> -C <channel> -T <target> -L <lun> -P MASK_PATH
  • Verify that the rule was created: esxcli storage core claimrule list
  • Reload the claim rule: esxcli storage core claimrule load
  • Re-verify that you can see both the file and runtime class: esxcli storage core claimrule list
  • Unclaim all device paths: esxcli storage core claiming reclaim -d <deviceID>
  • Verify that the paths are masked by displaying all paths: esxcfg-scsidevs -m
  • Verify that the LUN is no longer active: esxcfg-mpath -l -d <deviceID>

Configure Software iSCSI port binding

  • Select the first host to be configured, then the “Manage” tab > “Storage” > “Storage Adapters”
  • Select the vmhba listed under iSCSI Software Adapters
  • Select “Network Port Binding” and click “Add”

iSCSI_PortBinding

Select the first distributed port group and click “OK”

iscsi_port_binding_1
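
The binding can also be done through esxcli from PowerCLI. A sketch, assuming the VMkernel port sits on a port group with a single active uplink (the host name, adapter and vmk numbers are placeholders):

  # Hedged PowerCLI sketch - host name, adapter and VMkernel port are placeholders
  $esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esx01.lab.local") -V2

  # Bind a VMkernel port to the software iSCSI adapter, then list the bindings
  $esxcli.iscsi.networkportal.add.Invoke(@{adapter = "vmhba33"; nic = "vmk1"})
  $esxcli.iscsi.networkportal.list.Invoke(@{adapter = "vmhba33"})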

Configure and manage vSphere Flash Read Cache

Virtual Flash Read Cache allows you to cache virtual machine read I/O locally on an ESXi host, and even to migrate that virtual machine’s cache to another Virtual Flash-enabled ESXi host.

  • Open the vSphere Web Client
  • Go to the Host
  • Go to “Manage” and then “Settings”
  • All the way at the bottom you should see “Virtual Flash Resource Management”
    • Click “Add Capacity”
    • Select the appropriate SSD and click OK
  • Repeat on all hosts in the cluster

Now that you have enabled vFlash on your hosts you need to enable it on your virtual machine:

  • Right click the virtual machine and select “Edit Settings”
  • Expand the hard disk you want to accelerate
  • Go to “Flash Read Cache” and enter the number of GB you want to reserve as cache
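
For reference, the same per-disk reservation can be set through the vSphere API from PowerCLI. This is a hedged sketch based on the VirtualDisk vFlash cache property introduced with vSphere 5.5; the VM name and the 4 GB reservation are placeholders:

  # Hedged PowerCLI/API sketch - VM name and reservation size are placeholders
  $vm   = Get-VM -Name "CacheTest-VM"
  $disk = Get-HardDisk -VM $vm | Select-Object -First 1

  $spec   = New-Object VMware.Vim.VirtualMachineConfigSpec
  $change = New-Object VMware.Vim.VirtualDeviceConfigSpec
  $change.Operation = "edit"
  $change.Device    = $disk.ExtensionData
  $change.Device.VFlashCacheConfigInfo = New-Object VMware.Vim.VirtualDiskVFlashCacheConfigInfo
  $change.Device.VFlashCacheConfigInfo.ReservationInMB = 4096   # 4 GB of read cache
  $spec.DeviceChange = @($change)
  $vm.ExtensionData.ReconfigVM($spec)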

Configure Datastore Clusters

In the Datastores and Datastore Clusters view in the vSphere client, right-click the Datacenter object and select ‘New Datastore Cluster’:

DSC1

Choose which level of automation SDRS will use:

DSC2

The next screen prompts for the SDRS runtime rules, which includes choosing whether I/O metrics will be used for SDRS recommendations, and setting thresholds for Utilized Space and I/O Latency:

DSC3

Note that enabling I/O metric inclusion will enable SIOC on all datastores in the cluster. You can also set the following advanced options:

DSC4

On the next screen you choose the cluster/hosts that will be part of the datastore cluster:

DSC5

The final step is to choose which datastores will make up the cluster:

DSC6

The next screen gives you a summary of the options you have chosen. Click Finish to create the datastore cluster.
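
The same cluster can also be built with PowerCLI (a sketch; the datacenter, cluster and datastore names are placeholders):

  # Hedged PowerCLI sketch - object names are placeholders
  $dc  = Get-Datacenter -Name "Lab-DC"
  $dsc = New-DatastoreCluster -Name "Gold-DSC" -Location $dc

  # Enable Storage DRS in fully automated mode
  Set-DatastoreCluster -DatastoreCluster $dsc -SdrsAutomationLevel FullyAutomated

  # Add the member datastores
  Get-Datastore -Name "Gold-DS*" | Move-Datastore -Destination $dsc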

Upgrade VMware storage infrastructure

Upgrade VMFS3 to VMFS5 via the vSphere Client

  • In the vSphere client, navigate to the Hosts and Clusters view
  • Select a host on the left and click the Configuration tab on the right > click Storage
  • Click on the datastore you want to upgrade > click the “Upgrade to VMFS-5” hyperlink
  • Click OK to perform the upgrade

Upgrade VMFS3 to VMFS5 via esxcli

  • esxcli storage vmfs upgrade -l <datastore name>
  • Once the command completes you will see that volume reflected as VMFS5 under the Type column of the Datastore Views section within the vSphere client