Objective 3.2 – Implement and manage complex DRS solutions

Explain DRS / storage DRS affinity and anti-affinity rules

Two types of DRS rules exist: VM-Host affinity rules and VM-VM affinity rules (Storage DRS has its own affinity rules, covered below)

  • VM-Host affinity rules
    • Allows you to tie a virtual machine or group of virtual machines to a particular host or set of hosts. Anti-affinity can also be configured for the same objects
    • Before creating a VM-Host affinity rule you need to create a VM DRS group and a host DRS group
    • Decide whether it is a “must” rule or a “should” rule
      • “Must” rules will never be violated by DRS, DPM, or HA
      • “Should” rules are best effort and can be violated
  • VM-VM affinity rules
    • Used to keep virtual machines on the same host or to ensure they do NOT run on the same host. For example, if two servers provide load balancing for an application, it’s a good idea to ensure they aren’t running on the same host (see the sketch following this list)
    • VM-VM affinity rules shouldn’t conflict with each other; that is, you shouldn’t have one rule that separates two virtual machines and another rule that keeps them together. If rules conflict, the older rule wins and the newer rule is disabled
  • Storage DRS affinity and anti-affinity rules
    • Storage DRS affinity rules are similar to DRS affinity rules, but instead of being applied to virtual machines and hosts they are applied on virtual disks and virtual machines when using datastore clusters
    • The three different storage DRS affinity/anti-affinity rules are:
      • Inter-VM anti-affinity allows you to specify which virtual machines should not be kept on the same datastore within a datastore cluster
      • Intra-VM anti-affinity lets you specify that the virtual disks belonging to a particular virtual machine are stored on separate datastores within a datastore cluster
      • Intra-VM affinity will store all of your virtual disks on the same datastore within the datastore cluster (this is the default)
    • Storage DRS affinity rules are invoked during initial placement of the virtual machine and when storage DRS makes its recommendations. A migration initiated by a user will not cause storage DRS to be invoked
    • You can change the default behavior for all virtual machines in a datastore cluster by modifying the Virtual Machine Settings when editing a datastore cluster (this allows you to specify VMDK affinity or VMDK anti-affinity):

dsc7
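
Tying back to the VM-VM rules above, the snippet below is a minimal pyVmomi sketch that creates an anti-affinity rule so two load-balanced VMs never run on the same host. The vCenter address, credentials, cluster name and VM names are placeholders, and pyVmomi is just one way to script this.

```python
# Minimal pyVmomi sketch: create a VM-VM anti-affinity rule so two load-balanced
# VMs never run on the same host. Hostnames, credentials, and object names are
# placeholders -- adjust for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim


def find_obj(content, vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()


si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    cluster = find_obj(content, vim.ClusterComputeResource, "Cluster01")
    vms = [find_obj(content, vim.VirtualMachine, n) for n in ("web01", "web02")]

    # Anti-affinity rule: keep the listed VMs on different hosts.
    rule = vim.cluster.AntiAffinityRuleSpec(name="separate-web-servers",
                                            enabled=True, vm=vms)
    spec = vim.cluster.ConfigSpecEx(
        rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
    cluster.ReconfigureComputeResource_Task(spec, modify=True)
finally:
    Disconnect(si)
```

A rule that keeps virtual machines together would use vim.cluster.AffinityRuleSpec instead of the anti-affinity spec.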

 

Identify required hardware components to support Distributed Power Management (DPM)

DPM uses one of the following methods to bring hosts out of standby:

  • Intelligent Platform Management Interface (IPMI)
  • HP Integrated Lights-Out (HP iLO)
  • Wake on LAN (WOL)

IPMI and HP iLO both require a baseboard management controller (BMC), which allows access to hardware functions from a remote computer over the LAN. The BMC is always powered on, whether the host is or not, enabling it to listen for power-on commands. IPMI that uses MD2 for authentication is not supported (use plaintext or MD5).

To use the WOL feature instead of IPMI or HP iLO the NIC(s) you are using must support WOL. More importantly, the physical NIC that corresponds to the vMotion vmkernel portgroup must be capable of WOL.

In this case you can see that my vMotion VMkernel port is located on vSwitch0, which has vmnic0 as its uplink. If you look at the Network Adapters section (host > configuration > network adapters) you can see that vmnic0 has WOL support (a scripted check is sketched after the screenshot):

drs8
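
If you prefer to check WOL support on every host at once rather than clicking through the client, the following is a minimal pyVmomi sketch that reads each physical NIC’s wakeOnLanSupported flag. The vCenter address and credentials are placeholders.

```python
# Minimal pyVmomi sketch: list each physical NIC on every host and whether it
# reports Wake-on-LAN support (the same information shown under
# Configuration > Network Adapters). Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for pnic in host.config.network.pnic:
            print(f"{host.name} {pnic.device}: "
                  f"WOL supported = {pnic.wakeOnLanSupported}")
    view.Destroy()
finally:
    Disconnect(si)
```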

Identify EVC requirements, baselines and components

  • Enhanced vMotion Compatibility (EVC) is used to mask certain CPU features from virtual machines when hosts in a cluster have slightly different processors than the other hosts in the cluster
  • There are multiple EVC modes so check out the VMware Compatibility Guide to see which mode(s) your CPU can run
  • Enable Intel VT or AMD-V on your hosts
  • Enable the No eXecute (NX) / eXecute Disable (XD) bit
  • CPUs must be of the same vendor

A very good knowledge base article, VMware KB 1005764, answers a lot of questions about EVC.
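
To see which EVC baseline each host in a cluster can support, a sketch like the one below reads every host’s maxEVCModeKey from its summary. This is a minimal pyVmomi example; the vCenter address, credentials and cluster name are placeholders.

```python
# Minimal pyVmomi sketch: print the highest EVC mode each host in a cluster
# reports it can support, which helps pick a baseline every host can run.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Cluster01")
    view.Destroy()
    for host in cluster.host:
        print(f"{host.name}: max EVC mode = {host.summary.maxEVCModeKey}")
finally:
    Disconnect(si)
```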

 

Understand the DRS / storage DRS migration algorithms, the Load Imbalance Metrics, and their impact on migration recommendations

DRS and Storage DRS use different metrics and algorithms, so I’ll talk about each of them separately

  • DRS
    • By default DRS is invoked every 5 minutes (300 seconds). This can be changed by modifying the vpxd configuration file, although changing the default is not recommended
    • Prior to performing load balancing, DRS will first try to correct any constraint violations that exist, such as DRS rule violations
    • DRS then moves on to load-balancing using the following process:
      • Calculates the Current Host Load Standard Deviation (CHLSD)
      • If the CHLSD is less than the Target Host Load Standard Deviation (THLSD) then DRS has no further actions to execute
      • If CHLSD is greater than the THLSD then:
        • DRS executes a “bestmove” calculation which determines which VMs are candidates to be vMotioned in order to balance the cluster. The CHLSD is then calculated again
        • The costs, benefits and risks are then weighed based on that move
        • If the costs and risks do not outweigh the benefits of that move, the migration is added to the recommended migration list
      • Once all migration recommendations have been added to the list, the CHLSD is recalculated by simulating the migrations on the list
    • The tolerance for imbalance is based on the user-defined migration threshold (five levels in total). The more aggressive the threshold, the lower the tolerance for cluster imbalance (a sketch of the CHLSD/THLSD comparison follows this list)
  • Storage DRS
    • There are two types of calculations performed by Storage DRS; initial placement and load-balancing
    • As with DRS, Storage DRS has a default invocation period, however it is much longer – 8 hours is the default interval. Again, it is not recommended that you change the default interval
    • Initial placement takes datastore space and I/O metrics into consideration prior to placing a virtual machine on a datastore. It also prefers to use a datastore that is connected to all hosts in the cluster instead of one that is not
    • Storage DRS Load imbalance
      • Before load-balancing is taken into consideration, corrections to constraints are processed first. Examples of constraints are VMDK affinity and anti-affinity rule violations
      • Once constraint violations have been corrected, load-balancing calculations are processed and recommendations are generated
        • There are two Storage DRS thresholds taken into account when the load-balancing algorithm runs: Utilized Space and I/O Latency. Recommendations for Storage DRS migrations will not be made unless these thresholds are exceeded
        • Additionally, you can set advanced options that specify your tolerance for I/O imbalance and the percentage differential of space between source and destination datastores
          • Example: destination datastore must have more than a 10% utilization difference compared to the source datastore before that destination will be considered
      • Storage DRS also calculates a cost vs. benefits analysis (like DRS) prior to making a recommendation
    • Besides the standard invocation interval, the following will invoke Storage DRS:
      • If you manually click the Run Storage DRS hyperlink
      • When you place a datastore into datastore maintenance mode (the I/O latency metric is ignored during this calculation)
      • When you move a datastore into the datastore cluster
      • If the space threshold for a datastore is exceeded
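
The exact DRS algorithm is internal to vCenter, but the imbalance metric it compares against the threshold is essentially a standard deviation over normalized host loads. The sketch below only illustrates that CHLSD/THLSD comparison with made-up entitlement numbers; it is not VMware’s actual implementation.

```python
# Illustration of the DRS imbalance check (not VMware's implementation):
# the Current Host Load Standard Deviation (CHLSD) is the standard deviation of
# each host's normalized load (sum of VM entitlements / host capacity).
# DRS only evaluates "bestmove" migrations while CHLSD exceeds the target
# (THLSD), which is derived from the migration threshold slider.
from statistics import pstdev

# Hypothetical normalized loads for three hosts.
host_loads = {"esxi01": 0.82, "esxi02": 0.35, "esxi03": 0.55}

chlsd = pstdev(host_loads.values())
thlsd = 0.1  # example target derived from the migration threshold setting

if chlsd <= thlsd:
    print(f"CHLSD {chlsd:.3f} <= THLSD {thlsd}: cluster considered balanced")
else:
    print(f"CHLSD {chlsd:.3f} > THLSD {thlsd}: evaluate 'bestmove' migrations")
```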

Properly configure BIOS and management settings to support DPM

My lab kit does not have DRACs so the following is from Paul Grevink’s Blog

If a host supports multiple protocols, they are used in the order presented below:

  1. Intelligent Platform Management Interface (IPMI)
  2. Hewlett-Packard Integrated Lights-Out (iLO)
  3. Wake-On-LAN (WOL)

If a host does not support any of these protocols it cannot be put into standby mode by vSphere DPM.

Each protocol requires its own hardware support and configuration, hence BIOS and Management Settings will vary depending on the hardware (vendor).

An example: configuring a Dell R710 server with an iDRAC (Dell’s remote access solution) for DPM. A Dell R710 also contains a BMC, which is required as well.

The iDRAC supports IPMI, but out-of-the-box, this feature is disabled.

So, log on to the iDRAC, go to “iDRAC settings”, section “Network Security” and enable IPMI Over LAN.

While we are logged in, also create a user account. Go to the “Users” section and create a user. Make sure you grant enough privileges; in this case, Operator will do.
If you are not sure, read the documentation or do some trial and error, starting with the lowest level.

The remaining configuration steps take place in vCenter; the IPMI/iLO and WOL configuration steps are described below.
For IPMI/iLO follow these steps:

  • The following steps need to be performed on each host that is part of your DRS Cluster.
  • In vCenter, select a Host, go to Configuration, Software and Power Management.
  • Provide the username, password, IP address and MAC address of the BMC (a scripted equivalent is sketched below).
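
For reference, the same BMC details can be registered with a script. The snippet below is a minimal pyVmomi sketch assuming the HostSystem.UpdateIpmi call, which stores the IPMI/iLO details vCenter uses for DPM; the host name, BMC address, MAC and credentials are placeholders.

```python
# Minimal pyVmomi sketch: register a host's BMC (IPMI) details with vCenter,
# equivalent to filling in the Power Management settings for the host.
# The address, MAC, and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi01.example.com")
    view.Destroy()

    ipmi = vim.host.IpmiInfo(bmcIpAddress="192.168.10.50",
                             bmcMacAddress="aa:bb:cc:dd:ee:ff",
                             login="dpm-operator",
                             password="password")
    host.UpdateIpmi(ipmi)  # stores the BMC details used for IPMI power-on
finally:
    Disconnect(si)
```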

Configuration for WOL has a few prerequisites:

  • Each host’s vMotion networking link must be working correctly.
  • The vMotion network should also be a single IP subnet, not multiple subnets separated by routers.
  • The vMotion NIC on each host must support WOL.
    • To check for WOL support, first determine the name of the physical network adapter corresponding to the VMkernel port by selecting the host in the inventory panel of the vSphere Client, selecting the Configuration tab, and clicking Networking.
    • After you have this information, click on Network Adapters and find the entry corresponding to the network adapter.
    • The Wake On LAN Supported column for the relevant adapter should show Yes.

  • The switch port that each WOL-supporting vMotion NIC is plugged into should be set to auto negotiate the link speed, and not set to a fixed speed (for example, 1000 Mb/s). Many NICs support WOL only if they can switch to 100 Mb/s or less when the host is powered off.

The final step is to enable DPM at the cluster level.

Test DPM to verify proper configuration

My lab kit does not have DRACs so the following is from Paul Grevink’s Blog

Put a host into standby by selecting Enter Standby Mode.
The host should power down now.

Try to get the host out of standby by selecting Power On.

If a host fails the procedure, disable the host in the Cluster Settings.

In this example, host ml110g6 succeeded while ml110g5 failed and was therefore disabled for DPM.

Configure appropriate DPM Threshold to meet business requirements

My lab kit does not have DRACs so the following is from Paul Grevink’s Blog

After enabling DPM at the cluster level, you must first choose the automation level:

  • Off: the feature is disabled
  • Manual: recommendations are made, but not executed
  • Automatic: host power operations are automatically executed if the related virtual machine migrations can all be executed automatically

Second, the desired DPM threshold should be selected. Five options are available, ranging from Conservative to Aggressive.

Note: the Conservative threshold generates only power-on recommendations, never power-off recommendations. A scripted example of enabling DPM and setting the threshold follows.
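
Enabling DPM and picking the threshold can also be scripted. The snippet below is a minimal pyVmomi sketch that reconfigures a cluster’s DPM settings; the cluster name and credentials are placeholders, and hostPowerActionRate 3 is simply the middle of the 1–5 range.

```python
# Minimal pyVmomi sketch: enable DPM on a cluster in automatic mode and set the
# DPM threshold. hostPowerActionRate maps to the threshold slider
# (1 = conservative, 5 = aggressive). Names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Cluster01")
    view.Destroy()

    dpm = vim.cluster.DpmConfigInfo(enabled=True,
                                    defaultDpmBehavior="automated",
                                    hostPowerActionRate=3)
    spec = vim.cluster.ConfigSpecEx(dpmConfig=dpm)
    cluster.ReconfigureComputeResource_Task(spec, modify=True)
finally:
    Disconnect(si)
```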

The excellent VMware vSphere 5 Clustering Technical Deepdive presents a thorough explanation of DPM.

In a Nutshell:

  • TargetUtilizationRange = DemandCapacityRatioTarget +/- DemandCapacityRatioToleranceHost
  • DemandCapacityRatioTarget = utilization target of the ESXi host (Default is 63%)
  • DemandCapacityRatioToleranceHost = tolerance around utilization target for each host (Default is 18%)
  • This means DPM attempts to keep each ESXi host’s resource utilization centered at 63%, plus or minus 18% (see the worked example after this list).
  • The values of DemandCapacityRatioTarget and DemandCapacityRatioToleranceHost can be adjusted in the DRS advanced options section
  • There are two kind of recommendations: Power-On and Power-Off.
  • Power-On and Power-Off recommendations are assigned Priorities, ranging from Priority 1 to Priority 5.
  • Priority level ratings are based on the resource utilization of the cluster and the improvement that is expected from the suggested recommendation.
  • Example: a Power-Off recommendation with a higher priority level will result in more power savings. Note that Priority 2 is regarded as higher than Priority 5.
  • Example: a Power-On recommendation with Priority 2 is more urgent than one with Priority 3.
  • Power-On priority ranges from 1-3
  • Power-Off priority ranges from 2-5
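
To make the utilization range concrete, here is a small worked example using the default values listed above; the 30% host utilization figure is made up.

```python
# Worked example of the DPM target utilization range using the defaults
# mentioned above (DemandCapacityRatioTarget = 63, tolerance = 18).
target = 63      # DemandCapacityRatioTarget (%)
tolerance = 18   # DemandCapacityRatioToleranceHost (%)

low, high = target - tolerance, target + tolerance
print(f"Target utilization range: {low}% - {high}%")   # 45% - 81%

host_utilization = 30  # hypothetical host utilization (%)
if host_utilization < low:
    print("Below range: DPM evaluates power-off recommendations")
elif host_utilization > high:
    print("Above range: DPM evaluates power-on recommendations")
else:
    print("Within range: no DPM action expected")
```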

Configure EVC using appropriate baseline

EVC (Enhanced vMotion Compatibility) overcomes incompatibility between a virtual machine’s CPU feature set and the features offered by the destination host. EVC does this by providing a “baseline” feature set for all virtual machines running in a cluster, hiding the differences among the clustered hosts’ CPUs from the virtual machines.

EVC ensures that all hosts in a cluster present the same CPU feature set to virtual machines, even if the actual CPUs on the hosts differ.

EVC is configured on the Cluster level.

When you configure EVC, you configure all host processors in the cluster to present the feature set of a baseline processor. This baseline feature set is called the EVC mode.

 

evc1

The EVC mode must be equivalent to, or a subset of, the feature set of the host with the smallest feature set in the cluster.

To enable EVC in a Cluster, you must meet these Requirements:

  • All virtual machines in the cluster that are running on hosts with a feature set greater than the EVC mode you intend to enable must be powered off or migrated out of the cluster before EVC is enabled.
  • All hosts in the cluster must have CPUs from a single vendor, either AMD or Intel.
  • All hosts in the cluster must be running ESX/ESXi 3.5 Update 2 or later.
  • All hosts in the cluster must be connected to the vCenter Server system.
  • All hosts in the cluster must have advanced CPU features, such as hardware Virtualization support (AMD-V or Intel VT) and AMD No eXecute (NX) or Intel eXecute Disable (XD), enabled in the BIOS if they are available.
  • All hosts in the cluster should be configured for vMotion.

There are two methods to create an EVC cluster:

  • Create an empty cluster, enable EVC, and move hosts into the cluster (recommended method).
  • Enable EVC on an existing cluster. Before enabling it, for each host with virtual machines whose feature set exceeds the intended EVC mode you must either:
    • Power off all the virtual machines on the host, or
    • Migrate the host’s virtual machines to another host using vMotion.

Change the EVC mode on an existing DRS cluster

To raise the EVC mode from a CPU baseline with fewer features to one with more features, you do not need to turn off any running virtual machines in the cluster. However, running virtual machines do not have access to the new features available in the new EVC mode until they are powered off and powered back on. A full power cycle is required; rebooting the guest operating system or suspending and resuming the virtual machine is not sufficient.

To lower the EVC mode from a CPU baseline with more features to one with fewer features, you must first power off any virtual machines in the cluster that are running at a higher EVC mode than the one you intend to enable, and power them back on after the new mode has been enabled. A scripted example of setting the EVC mode follows the screenshot below.

evc2
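
For completeness, the EVC mode can also be changed through the API. The snippet below is a minimal pyVmomi sketch using the cluster’s EVC manager; the cluster name, credentials and the “intel-sandybridge” baseline key are placeholders, so substitute a mode supported by every host in your cluster.

```python
# Minimal pyVmomi sketch: read and set a cluster's EVC mode via its EVC manager.
# Names, credentials, and the EVC mode key are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Cluster01")
    view.Destroy()

    evc = cluster.EvcManager()                      # per-cluster EVC manager
    print("Current EVC mode:", evc.evcState.currentEVCModeKey)
    evc.ConfigureEvcMode_Task("intel-sandybridge")  # apply the new baseline
finally:
    Disconnect(si)
```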

Create DRS and DPM alarms

DRS Alarms: If you want to create DRS-related alarms, go to the General tab when creating a new alarm and select Clusters from the list of available event triggers. On the Triggers tab you can then configure DRS-related triggers.

DPM Alarms:  You can use event-based alarms in vCenter Server to monitor vSphere DPM by creating a new alarm, then selecting Hosts and Specific Events:

drs1

Configure applicable power management settings for ESXi hosts

ESXi 5 offers four different power policies that are based on using the processor’s ACPI performance states, also known as P-states, and the processor’s ACPI power states, also known as C-states. P-states can be used to save power when the workloads running on the system do not require full CPU capacity. C-states can help save energy only when CPUs have significant idle time; for example, when the CPU is waiting for an I/O to complete:

  • High Performance: This power policy maximizes performance, using no power management features. It keeps CPUs in the highest P-state at all times. It uses only the top two C-states (running and halted), not any of the deep states (for example, C3 and C6 on the latest Intel processors). High performance is the default power policy for ESX/ESXi 4.0 and 4.1.
  • Balanced: This power policy is designed to reduce host power consumption while having little or no impact on performance. The balanced policy uses an algorithm that exploits the processor’s P-states. Balanced is the default power policy for ESXi 5.
  • Low Power: This power policy is designed to more aggressively reduce host power consumption, through the use of deep C-states, at the risk of reduced performance.
  • Custom: This power policy starts out the same as Balanced, but it allows individual parameters to be modified.

If the host hardware does not allow the operating system to manage power, only the Not Supported policy is available. (On some systems, only the High Performance policy is available.)

The server BIOS must be configured to allow power management from the ESXi layer. A scripted example of checking and setting the host power policy follows.
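
Once the BIOS hands power management to ESXi, the host power policy can be inspected and changed through the API as well as the client. The snippet below is a minimal pyVmomi sketch; the host name and credentials are placeholders, and the assumption that the shortName “dynamic” corresponds to the Balanced policy should be verified against what your host actually reports.

```python
# Minimal pyVmomi sketch: list the power policies a host exposes and apply one,
# similar to Configuration > Power Management in the client. The host name and
# credentials are placeholders; the "dynamic"/Balanced mapping is an assumption.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi01.example.com")
    view.Destroy()

    power = host.configManager.powerSystem
    for policy in power.capability.availablePolicy:
        print(policy.key, policy.shortName)       # list what the host offers

    balanced = next(p for p in power.capability.availablePolicy
                    if p.shortName == "dynamic")  # assumed to be Balanced
    power.ConfigurePowerPolicy(balanced.key)
finally:
    Disconnect(si)
```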

Properly size virtual machines and clusters for optimal DRS efficiency

You don’t want to size your virtual machines to the cluster; rather, you want to size your clusters based on the virtual machines. Suppose you size a VM with 4 vCPUs and 4GB vRAM but its current workload only needs 1 vCPU and 1GB vRAM. During initial placement, DRS considers the “worst case scenario” for a VM, so in this example DRS will actively attempt to identify a host that can guarantee 4GB of RAM and 4 vCPUs to the VM. This is because historical resource utilization statistics for the VM are unavailable. If DRS cannot find a cluster host able to accommodate the VM, it will be forced to “defragment” the cluster by moving other VMs around to account for the one being powered on. As such, VMs should be sized based on their current workload.

Properly apply virtual machine automation levels based upon application requirements

When creating a DRS cluster you set a virtual machine automation level for the cluster. There might be use cases in which a virtual machine, or a set of virtual machines, requires a different level of automation than the cluster default. You can set automation levels for virtual machines individually.

You may want to do this if you have an application that is constantly changing its memory contents, for example, since you may not want it to move between hosts as often as other virtual machines (a scripted example follows the screenshot below).

drs09
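
Setting a per-VM automation level can also be scripted. The snippet below is a minimal pyVmomi sketch that adds a DRS override for a single VM; the cluster name, VM name and credentials are placeholders.

```python
# Minimal pyVmomi sketch: override the DRS automation level for one VM (for
# example, set it to "manual" while the cluster default stays fully automated).
# Cluster and VM names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    cl_view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in cl_view.view if c.name == "Cluster01")
    cl_view.Destroy()
    vm_view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in vm_view.view if v.name == "memhungry01")
    vm_view.Destroy()

    override = vim.cluster.DrsVmConfigInfo(key=vm, enabled=True,
                                           behavior="manual")
    spec = vim.cluster.ConfigSpecEx(drsVmConfigSpec=[
        vim.cluster.DrsVmConfigSpec(operation="add", info=override)])
    cluster.ReconfigureComputeResource_Task(spec, modify=True)
finally:
    Disconnect(si)
```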

Create and administer ESXi host and Datastore Clusters

Not going to cover how to create these… however here is a reminder of the requirements:

In order for HA, DRS and SDRS to function properly, ESXi host and datastore clusters must be configured.

  • At least two ESXi 5 hosts.
  • All hosts should be configured with static IP addresses.
  • Identical vSwitch configuration among the participating hosts. Dedicated vSS’s or a shared vDS.
  • Consistent port group configuration between participating hosts. Note that port group naming is case sensitive, so ensure all related port groups are configured consistently.
  • CPU compatibility is required between participating hosts. At a minimum, CPUs must be from the same vendor (AMD or Intel) and family (for example Xeon or Opteron), must support the same features, and must have virtualization support enabled if 64-bit guests will be run.
  • Common vMotion VMkernel network between the hosts. Connections must be at least 1 Gbps. Dedicated uplinks are recommended but not required.
  • Shared storage between the hosts. FC, FCoE, iSCSI and NFS supported.
  • Maximum of 32 hosts per cluster.

Administer DRS / Storage DRS

Practice the following:

  • Adding and removing hosts and datastores
  • Cluster validation
  • Creating and maintaining affinity/anti-affinity rules
  • Invoking DRS / SDRS
  • Host maintenance mode
  • Datastore maintenance mode
  • Storage DRS scheduled tasks
  • Applying DRS recommendations (when in manual mode)
  • SDRS scheduling