Category Archives: VSAN

vSAN Default Storage Policy Per vSAN Datastore

Every day is a school day!!

I recently asked the vSAN vExpert slack channel the following: “Is there a way to set a default storage policy for a specific vSAN cluster? Use case – shared vCenter server with 10 hybrid vSAN clusters and 1 “private” customer with dedicated cluster running AF vSAN. The Private customer wants to use RAID6 but their deployment method just now does not allow the selection of a storage policy. We can’t change the default policy as the other 10 hybrid clusters are using this (and also don’t have a way to select a policy during deployment).”

Slightly embarrassingly, I didn’t know this, but Steve Kaplan (@stvkpln) told me how to do it!

If you browse to the vSAN datastore object, then Manage > General, you can set the default policy for that datastore. Simples!
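If you would rather script it, the same change can be made with PowerCLI’s SPBM cmdlets. A minimal sketch, assuming a datastore named Customer-vsanDatastore and a policy named RAID6-Policy (both names hypothetical):

    # Requires PowerCLI with the VMware.VimAutomation.Storage (SPBM) module
    Connect-VIServer -Server vcenter.example.com

    # Hypothetical names for the private customer's datastore and policy
    $ds     = Get-Datastore -Name 'Customer-vsanDatastore'
    $policy = Get-SpbmStoragePolicy -Name 'RAID6-Policy'

    # Change the default policy for this one vSAN datastore only;
    # the other clusters' datastores keep their existing defaults
    Get-SpbmEntityConfiguration -Datastore $ds |
        Set-SpbmEntityConfiguration -StoragePolicy $policy

That way the RAID6 default applies only to the private customer’s cluster, while the ten hybrid clusters keep their existing default.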

What’s New with VMware Virtual SAN 6.2 – Technical Whitepaper

In case you missed the recent VSAN 6.2 announcement, there is also a PDF that has been released – What’s New with VMware Virtual SAN 6.2. The paper details what has already been announced about VSAN 6.2. Note that I have also written up the details in my post here – VMware VSAN 6.2 Announced – Nearline dedupe, Erasure Coding, QoS ++ .



VM Limit per Host – VSAN

According to the VSAN configuration maximums, there is a 100 VM limit per host in a VSAN 5.5 cluster and 200 in a VSAN 6 cluster. This seems to be a soft limit, as I was recently able to deploy 999 VMs into a 4-node VSAN 5.5 cluster (with one host acting as a dedicated HA node, so not running any compute). I got to ~333 VMs per host before I reached the 3,000 component limit (which is a hard limit) on each host. Below is a screen grab of vsan.check_limits from RVC:

[Screenshot: vsan.check_limits output from RVC showing per-host component counts]
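The numbers line up with some back-of-envelope arithmetic. A sketch, assuming a minimal VM at FTT=1 (VM home namespace, one VMDK and a swap object, each placed as two data components plus a witness):

    # 3 objects per VM x 3 components per object = 9 components per VM
    $vms             = 999
    $componentsPerVm = 3 * 3
    $dataHosts       = 3     # the 4th host was the dedicated HA node

    # ~2,997 components per host - right at the 3,000 hard limit
    $vms * $componentsPerVm / $dataHosts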

Continue reading

VSAN 5.5 vs. VSAN 6.0 – Performance Testing

After upgrading VSAN 5.5 to VSAN 6.0, I thought it would be a good idea to run the same set of tests that I ran previously (VSAN 5.5 Performance Testing) to see how much of a performance increase we could expect.

The test was run using the same IOAnalyzer VMs and test configuration, on the same hardware. The only difference was the vCenter/ESXi/VSAN version.

                   Read Only    Write Only   Real World
VSAN 6 IOPS (Sum)  113,169.88   28,552.85    38,658.47
VSAN 5 IOPS (Sum)   61,964.81    5,666.06    24,228.98
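Worked out as simple ratios from the sums above:

    # VSAN 6 vs VSAN 5 IOPS speedup, computed from the table above
    $read  = 113169.88 / 61964.81   # ~1.8x for reads
    $write = 28552.85  / 5666.06    # ~5.0x for writes
    $mixed = 38658.47  / 24228.98   # ~1.6x for the real-world mix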

The detailed test results for VSAN 6, using a “Real World” test pattern (70% Random / 80% Read), are below:

[Screenshot: IOAnalyzer detailed test results for VSAN 6]

A great increase in IOPS, with writes improving the most!

More VSAN Components than Physical Disks

While running through some VSAN operational readiness tests, I stumbled across an issue when simulating host failures: when there are more VSAN components than physical disks and a host fails, the components will not be rebuilt on the remaining hosts.

Firstly here is some background information about the test cluster:

  • 4 x Dell R730XD Servers
  • 1 Disk Group per server with one 800GB SSD fronting six 4TB Magnetic Disks
  • 1 Test VM with a single 1.98TB VMDK
  • Disks to Stripe set to 1 on the storage policy applied to the VM
  • Failures to Tolerate set to 1 on the storage policy applied to the VM
  • ESXi 5.5 and VSAN 5.5
  • All drivers/firmware on the VSAN HCL

The VMDK object is split into 24 components: 8 x “Primary” components (each ~250GB), 8 x “Copy” components (each ~250GB) and 8 x “Witness” components. This is because VSAN splits any object larger than 255GB into multiple components, so the 1.98TB VMDK yields eight data components per mirror copy.

Note: VSAN does not really have “Primary” and “Copy” components but for the sake of the following diagrams and ease of explanation I’ll group the components this way.
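A quick sketch of that arithmetic (the 255GB maximum component size is standard VSAN behaviour; the VMDK size is from this test):

    # VSAN splits objects larger than 255GB into multiple components
    $vmdkGB         = 1.98 * 1024    # = 2,027.52GB
    $maxComponentGB = 255

    # Ceiling(2027.52 / 255) = 8 data components per mirror copy;
    # with FTT=1 that makes 8 "Primary" + 8 "Copy" + 8 "Witness" = 24
    [math]::Ceiling($vmdkGB / $maxComponentGB)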

As below:

More VSAN Components than Physical Disks – Failure Scenario

Continue reading

VSAN Health Plugin – Broken Web Client

Today I installed the VSAN Health Plugin – VSAN 6 Health Check Plugin. Unfortunately I did not RTFM (Read the… frigging… manual).

When I logged into the web client after restarting the vCenter services this is all I could see:

[Screenshot: the broken Web Client after installing the plugin]

Turns out I didn’t install the MSI using the “run as admin” option… really should have read that manual.

Cormac Hogan to the rescue – VMware Blogs

VSAN Performance Testing

I recently carried out some VSAN performance testing using 3 Dell R730xd servers, each with:

  • Intel Xeon E5-2680
  • 530GB RAM
  • 2 x 10GbE NICs
  • ESXi 5.5, build 2068190
  • 800GB SSD (12Gb/s SAS)
  • 3 x 4TB 7,200RPM SAS disks

On each of these hosts I built an IOAnalyzer appliance (https://labs.vmware.com/flings/io-analyzer) – one with its disks placed on the same host as the VM and the other two with “remote” disks. Something similar to this:

Continue reading

VSAN Network Partition

HA works differently on a VSAN cluster than on a non-VSAN cluster.

  • When HA is turned on in the cluster, FDM agent (HA) traffic uses the VSAN network and not the Management Network. However, when a potential isolation is detected, HA will ping the default gateway (or the specified isolation address) using the Management Network.
  • When enabling VSAN, ensure vSphere HA is disabled. You cannot enable VSAN when HA is already configured. Either configure VSAN during the creation of the cluster or disable vSphere HA temporarily while configuring VSAN.
  • When only VSAN datastores are available within a cluster, Datastore Heartbeating is disabled. HA will never use a VSAN datastore for heartbeating: the VSAN network is already used for network heartbeating, so using the datastore for heartbeating would add nothing.
  • When changes are made to the VSAN network, vSphere HA must be re-configured (a PowerCLI sketch follows below).
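A minimal PowerCLI sketch of that last step – cycling HA off and on after a VSAN network change so the FDM agents re-bind (the cluster name and isolation address are hypothetical):

    # Cycle vSphere HA so FDM picks up the new VSAN network
    $cluster = Get-Cluster -Name 'VSAN-Cluster'
    Set-Cluster -Cluster $cluster -HAEnabled:$false -Confirm:$false
    Set-Cluster -Cluster $cluster -HAEnabled:$true -Confirm:$false

    # Optionally set an explicit isolation address for HA to ping
    New-AdvancedSetting -Entity $cluster -Type ClusterHA `
        -Name 'das.isolationaddress0' -Value '192.168.100.1' -Confirm:$false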

ESXi Isolation – VM with no underlying storage

[Diagram: ESXi isolation – VM with no underlying storage]
Continue reading