Objective 2.2 – Implement and manage virtual distributed switch (vDS) networks

Determine use cases for and apply VMware DirectPath I/O

This is covered here.

Migrate a vSS network to a hybrid or full vDS solution

It’s definitely worth taking a lot of care when planning a migration from a vSS to a vDS. More than one method exists, and mine may not be the quickest, but I have always built a new vDS, recreated all of the appropriate port groups from the existing vSS on the new vDS, and then used the migration wizard to move VMs, VMkernel ports and pNICs. Be aware that this is a time-consuming way to migrate if you have a large cluster/estate.

When replacing a vSS with a vDS, ensure that you configure the appropriate vDS port groups to match the settings on the original vSS port groups. Remember to carry across security, traffic shaping, NIC teaming, MTU, VLAN configurations, etc.

If you have spare physical adapters, it is worth thinking about connecting those to the new vDS and initially migrating just the VMs and VMkernel ports. This makes rolling back the vDS move much easier.

A hybrid virtual network solution involves running a vSS and a vDS side by side. In my lab I have left vMotion, IP storage and management traffic on my vSS, and I will migrate all VM traffic onto a new vDS.

Existing vSS Design:

I have six physical uplinks connected to three vSwitches carrying different services.
vSwitch0 – vmnic0 (Active), vmnic1 (Standby) – Management Traffic
vSwitch0 – vmnic0 (Standby), vmnic1 (Active) – vMotion
vSwitch1 – vmnic2 – IP-Storage Traffic
vSwitch1 – vmnic3 – IP-Storage Traffic
vSwitch2 – vmnic4 (Active), vmnic5 (Active) – VM Traffic


Proposed Hybrid Design:

vSwitch0 – vmnic0 (Active), vmnic1 (Standby) – Management Traffic
vSwitch0 – vmnic0 (Standby), vmnic1 (Active) – vMotion
vSwitch1 – vmnic2 (Active), vmnic3 (Active) – IP-Storage Traffic
vDSwitch-Internal – vmnic4 – VM Traffic
vDSwitch-Internal – vmnic5 – VM Traffic

Here are the steps to migrate vSS networks onto a vDS:

  1. Create a new vDS
  2. Create dPortGroups
  3. Rename the Uplinks (Optional)
  4. Configure the Teaming/Failover for each PortGroup (if required)
  5. Add ESXi hosts and migrate VM port groups from the vSS to the vDS

1. Create a New vDS

From Web Client Home > Networking > Right Click on DataCenter > New Distributed Switch

Enter the vDS name “vDSwitch-Internal” -> Click Next


Choose the vDS version according to the lowest ESXi version that will use the vDS. In my case, all of my lab ESXi hosts are running version 5.5, so I selected that and clicked Next (my vCenter and Management cluster are running vSphere 6):


Select the total number of uplinks. It should match the number of physical NICs that you want to migrate. You can uncheck the “Create default Port Group” option, then click Next and then Finish.
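
As an aside, the same switch can be created with PowerCLI. Here is a minimal sketch, assuming you are already connected to vCenter with Connect-VIServer and that the datacenter is called “Lab” (a placeholder name):

# "Lab" is a placeholder datacenter name for this sketch
$dc = Get-Datacenter -Name "Lab"
New-VDSwitch -Name "vDSwitch-Internal" -Location $dc -NumUplinkPorts 2 -Version "5.5.0"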


2. Create dPortGroups

The next step is to create the distributed port group for the vDS. To create a new dPortGroup, right click on the vDS and choose “New Distributed Port Group”


Enter the Port Group name. In my case it is “Lab-VM-Network”. Click Next


Leave the settings as default -> Click Next and then Finish
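
The PowerCLI equivalent is a rough one-liner sketch (add -VlanId if the port group needs a VLAN tag):

$vds = Get-VDSwitch -Name "vDSwitch-Internal"
New-VDPortgroup -VDSwitch $vds -Name "Lab-VM-Network"   # add -VlanId <id> for a tagged port group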


3. Rename the Uplinks (this is optional but can help operations teams identify physical NICs)

Right Click on the vDS and click on “Edit Settings”


In General -> Click on the “Edit Uplink Name” hyperlink


Define the uplink names -> Click “OK” twice.
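
There is no dedicated PowerCLI cmdlet for renaming uplinks in this era, but the rename can be pushed through the vSphere API with a config spec. This is only a sketch; the uplink names are placeholders:

$vds  = Get-VDSwitch -Name "vDSwitch-Internal"
$spec = New-Object VMware.Vim.VMwareDVSConfigSpec
$spec.ConfigVersion = $vds.ExtensionData.Config.ConfigVersion
# Rename the uplinks by supplying the full list of uplink port names
$spec.UplinkPortPolicy = New-Object VMware.Vim.DVSNameArrayUplinkPortPolicy
$spec.UplinkPortPolicy.UplinkPortName = @("Uplink-A", "Uplink-B")   # placeholder names
$vds.ExtensionData.ReconfigureDvs($spec)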


4. Configure the Teaming/Failover for each PortGroup

You may need to change the Teaming/Failover policy for a port group on a vDS. For example, if you were moving iSCSI port groups from a vSS to a vDS, you would have to change each port group’s Teaming/Failover policy so that it uses only its own uplink. The steps to do this are below:

Right Click on the PortGroup -> Click Edit Settings

In the Teaming and Failover settings of the port group, set the “IPStorage” uplink as Active and move the rest of the uplinks to the Unused category -> Click OK
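
The same policy change can be scripted with the PowerCLI VDS teaming cmdlets. A sketch, where the port group name “IPStorage-1” and the uplink names are placeholders:

Get-VDPortgroup -Name "IPStorage-1" |
    Get-VDUplinkTeamingPolicy |
    Set-VDUplinkTeamingPolicy -ActiveUplinkPort "Uplink-A" -UnusedUplinkPort "Uplink-B"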

5. Add ESXi hosts and migrate VMkernel and VM port groups from the vSS to the vDS

Right Click on the vDS -> Click “Add and Manage Hosts”


In the Wizard -> Select “Add Hosts” -> Click Next


Add the hosts to the vDS by clicking on the green “+” sign -> Click Next


Select the hosts that will be part of this vDS


Check the options to migrate physical adapters and VM networking -> Click Next


Next, assign the uplinks (physical adapters) to the vDS. Select “vmnic4” and click on “Assign Uplink” to map it to an uplink, then repeat for “vmnic5” (on the previous screen there is a check box that will treat one host as a “template” if you are configuring multiple hosts).
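
This whole step can also be scripted. Below is a rough PowerCLI sketch; the host name “esxi01.lab.local” and the source port group name “VM Network” are placeholders, and if you have no spare uplinks you should migrate one pNIC at a time to preserve connectivity:

$vds = Get-VDSwitch -Name "vDSwitch-Internal"
$esx = Get-VMHost -Name "esxi01.lab.local"
Add-VDSwitchVMHost -VDSwitch $vds -VMHost $esx

# Move the physical NICs onto the vDS uplinks
$pnics = Get-VMHostNetworkAdapter -VMHost $esx -Physical -Name vmnic4, vmnic5
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $pnics -Confirm:$false

# Re-point the VM network adapters at the new distributed port group
$pg = Get-VDPortgroup -VDSwitch $vds -Name "Lab-VM-Network"
Get-VM -Location $esx | Get-NetworkAdapter |
    Where-Object { $_.NetworkName -eq "VM Network" } |
    Set-NetworkAdapter -Portgroup $pg -Confirm:$false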


Configure vSS and vDS settings using command line tools

A vSS can be completely created, configured, and managed from the command line. I have covered this here.

Because a vDS is created at the vCenter layer, it cannot be created or modified using the ESXi Shell or the vCLI. However, the ESXi Shell and the vCLI can be used to identify and modify how an ESXi host connects to a vDS. The specific namespaces for vSS and vDS are, respectively:

esxcli network vswitch standard
esxcli network vswitch dvs vmware

Here is a summary of the commands available from an ESXi host for a vDS:

[Screenshot: command summary for the esxcli network vswitch dvs vmware namespace]

The command to identify all Distributed switches accessed by the ESXi host is:

# esxcli network vswitch dvs vmware list

[Screenshot: esxcli network vswitch dvs vmware list output]

Analyze command line output to identify vSS and vDS configuration details

Just like the command above, the list option can be run against a standard vSS:

# esxcli network vswitch standard list

[Screenshot: esxcli network vswitch standard list output]
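
The same details can also be pulled remotely with PowerCLI instead of the ESXi Shell. A sketch (the host name is a placeholder):

Get-VirtualSwitch -VMHost (Get-VMHost "esxi01.lab.local") -Standard |
    Select-Object Name, Mtu, Nic
Get-VDSwitch | Select-Object Name, Version, Mtu, NumUplinkPorts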

Configure NetFlow

NetFlow is a networking protocol that collects IP traffic information as records and sends them to a collector such as CA NetQoS for traffic flow analysis. VMware vSphere 5 supports NetFlow v5, which is the most common version supported by network devices. NetFlow capability in the vSphere 5 platform provides visibility into virtual infrastructure traffic that includes:

  • Intrahost virtual machine traffic (virtual machine–to–virtual machine traffic on the same host)
  • Interhost virtual machine traffic (virtual machine–to–virtual machine traffic on different hosts)
  • Virtual machine to physical infrastructure traffic

The figure below shows a Distributed Switch configured to send NetFlow records to a collector that is connected to an external physical network switch. The blue dotted line with the arrow indicates the NetFlow session that is established to send flow records for the collector to analyze.

[Figure: Distributed Switch sending NetFlow records to an external collector]

Usage

NetFlow capability on a Distributed Switch along with a NetFlow collector tool helps monitor application flows and measures flow performance over time. It also helps in capacity planning and ensuring that I/O resources are utilized properly by different applications, based on their needs.

IT administrators who want to monitor the performance of application flows running in the virtualized environment can enable flow monitoring on a Distributed Switch.

Configuration

NetFlow on Distributed Switches can be enabled at the port group level, at an individual port level or at the uplink level. When configuring NetFlow at the port level, administrators should select the NetFlow override tab, which will make sure that flows are monitored even if the port group–level NetFlow is disabled.

The NetFlow configuration screen below shows the different parameters that can be controlled during the setup.

[Screenshot: NetFlow configuration settings on a vDS]

  1. The Collector Settings of IP address and Port should be configured according to the information collected about the collector tool installed in your environment.
  2. The Advanced Settings parameters allow you to control the timeout and sampling rate for the flows. To change the amount of information that is collected for a flow, you can change the sampling rate. For example, a sampling rate of 2 indicates that the VDS will collect data from every other packet. You can also modify the Idle flow export timeout values.
  3. The VDS IP address configuration is useful when you want to see all flow information in the collector tool as part of one VDS IP address and not as a separate host management network IP address. In this example screen shot, because the VDS IP address is not entered, the collector tool will provide flow details under each host’s management network IP address.

You can also monitor only the internal flows of the virtual infrastructure by checking the “Process Internal flows only” box.
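
PowerCLI of this vintage has no native NetFlow cmdlet, but the same settings map onto the IpfixConfig object in the vSphere API. A rough sketch, with a hypothetical collector at 192.168.1.50:2055:

$vds  = Get-VDSwitch -Name "vDSwitch-Internal"
$spec = New-Object VMware.Vim.VMwareDVSConfigSpec
$spec.ConfigVersion = $vds.ExtensionData.Config.ConfigVersion
$spec.IpfixConfig = New-Object VMware.Vim.VMwareIpfixConfig
$spec.IpfixConfig.CollectorIpAddress = "192.168.1.50"   # hypothetical collector IP
$spec.IpfixConfig.CollectorPort      = 2055
$spec.IpfixConfig.ActiveFlowTimeout  = 60               # seconds
$spec.IpfixConfig.IdleFlowTimeout    = 15               # seconds
$spec.IpfixConfig.SamplingRate       = 2                # collect from every other packet
$spec.IpfixConfig.InternalFlowsOnly  = $false
$vds.ExtensionData.ReconfigureDvs($spec)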

I almost always get asked about the CPU impact of enabling the NetFlow feature, so I want to address that while I am on this topic. The answer is that it depends on how many flows exist in your environment and what traffic rate they are operating at. If you have a lot of flows and are concerned about CPU resources, you can use the controls provided in the NetFlow setup to choose which flows get monitored. For example, you can change the sampling rate or choose to monitor only internal flows. You can also selectively enable or disable NetFlow on a port group or a port.

Vyenkatesh Deshpande’s Blog Post

Determine appropriate discovery protocol

Switch discovery protocols allow vSphere administrators to determine which switch port is connected to a given vSphere standard switch or vSphere distributed switch.

vSphere 5.0 supports Cisco Discovery Protocol (CDP) and Link Layer Discovery Protocol (LLDP). CDP is available for vSphere standard switches and vSphere distributed switches connected to Cisco physical switches. LLDP is available for vSphere distributed switches version 5.0.0 and later.

When CDP or LLDP is enabled for a particular vSphere distributed switch or vSphere standard switch, you can view properties of the peer physical switch such as device ID, software version, and timeout from the vSphere Client.
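
Changing the discovery protocol on a vDS is a one-liner in PowerCLI. A sketch that enables LLDP in both listen and advertise modes:

Get-VDSwitch -Name "vDSwitch-Internal" |
    Set-VDSwitch -LinkDiscoveryProtocol LLDP -LinkDiscoveryProtocolOperation Both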

vSphere 5 Documentation Center

Determine use cases for and configure PVLANs

 

Use command line tools to troubleshoot and identify VLAN configurations