After upgrading VSAN 5.5 to VSAN 6.0 I thought it would be a good idea to run the same set of tests that I ran previously (VSAN 5.5 Performance Testing) to see how much of a performance increase we could expect.
The test was run using the same IOAnalyzer VMs and test configuration, on the same hardware. The only difference was the vCenter/ESXi/VSAN version.
|                   | Read Only  | Write Only | Real World |
|-------------------|------------|------------|------------|
| VSAN 6 IOPs (Sum) | 113,169.88 | 28,552.85  | 38,658.47  |
| VSAN 5 IOPs (Sum) | 61,964.81  | 5,666.06   | 24,228.98  |
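For context, the raw sums above work out to the following percentage improvements (a quick sketch; the figures are taken straight from the table):

```python
# Percentage improvement in IOPS from VSAN 5.5 to VSAN 6.0,
# using the summed results from the table above.
vsan5 = {"Read Only": 61964.81, "Write Only": 5666.06, "Real World": 24228.98}
vsan6 = {"Read Only": 113169.88, "Write Only": 28552.85, "Real World": 38658.47}

for test, old in vsan5.items():
    new = vsan6[test]
    pct = (new - old) / old * 100
    print(f"{test}: {pct:.0f}% improvement")
# Read Only: 83% improvement
# Write Only: 404% improvement
# Real World: 60% improvement
```

The write-only result is the standout: better than a 4x improvement on the same hardware.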
The detailed test results for VSAN 6, using a “Real World” test pattern (70% Random / 80% Read), are shown below:
Great increase in IOPs!
When running through some VSAN operational readiness tests I stumbled across an issue when simulating host failures. When there are more VSAN Components than physical disks and a host fails, the components will not be rebuilt on remaining hosts.
First, here is some background information about the test cluster:
- 4 x Dell R730XD Servers
- 1 Disk Group per server with one 800GB SSD fronting six 4TB Magnetic Disks
- 1 Test VM with a single 1.98TB VMDK
- Disks to Stripe set to 1 on the storage policy applied to the VM
- Failure to Tolerate set to 1 on the storage policy applied to the VM
- ESXi 5.5 and VSAN 5.5
- All drivers/firmware on the VSAN HCL
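As a rough sketch of what that layout provides, the raw and usable capacity under an FTT=1 mirroring policy works out as follows (ignoring witness/metadata overhead and the free-space slack VMware recommends keeping):

```python
# Back-of-envelope capacity for the 4-node cluster above (decimal TB).
hosts = 4
disks_per_host = 6
disk_tb = 4.0

raw_tb = hosts * disks_per_host * disk_tb  # total magnetic disk capacity
usable_tb = raw_tb / 2                     # FTT=1 mirrors every object once

print(f"Raw: {raw_tb:.0f}TB, usable at FTT=1: ~{usable_tb:.0f}TB")
# Raw: 96TB, usable at FTT=1: ~48TB
```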
The VMDK object is split into 24 components: 8 x “Primary” components (each 250GB), 8 x “Copy” components (each 250GB) and 8 x “Witness” components.
Note: VSAN does not really have “Primary” and “Copy” components but for the sake of the following diagrams and ease of explanation I’ll group the components this way.
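The 8 + 8 split falls out of VSAN’s maximum component size of 255GB: a 1.98TB object has to be chunked even with Disks to Stripe set to 1, and FTT=1 then mirrors each chunk. A minimal sketch of that arithmetic:

```python
import math

# VSAN caps any single component at 255GB, so large objects are chunked
# regardless of the stripe-width policy setting.
MAX_COMPONENT_GB = 255
vmdk_gb = 1.98 * 1024  # ~2027.5 GB

data_components = math.ceil(vmdk_gb / MAX_COMPONENT_GB)  # chunks of the object
mirror_components = data_components                      # FTT=1 duplicates each chunk

print(f"{data_components} data + {mirror_components} mirror components")
# 8 data + 8 mirror components
```

That leaves each component at roughly 253GB, which matches the ~250GB figure above.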
More VSAN Components than Physical Disks – Failure Scenario
Today I installed the VSAN Health Plugin – VSAN 6 Health Check Plugin. Unfortunately I did not RTFM (Read the… frigging… manual).
When I logged into the web client after restarting the vCenter services this is all I could see:
Turns out I didn’t install the MSI using the “run as admin” option… really should have read that manual.
Cormac Hogan to the rescue: VMware Blogs
After searching around and not finding anything that covers the entire identifier string, I figured I’d share what information I have. I like to label my datastores with their source (array) information, which makes it easy to search and isolate datastores when SAN work has to be performed. The labels make things easy, but I’m relying on gathered information to create them. So this is, if nothing else, a way to validate that the information is being applied to the correct identifier.
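One way to sanity-check a label against the device it describes is to pull the vendor OUI out of the NAA identifier itself: in an naa.6 (IEEE Registered Extended) ID, the six hex digits after the leading 6 are the array vendor’s IEEE OUI. The helper name and example ID below are hypothetical, for illustration only:

```python
# Hypothetical helper: extract the vendor OUI from an NAA type-6
# identifier so a datastore label can be checked against the device
# it points at. The example ID below is made up for illustration.
def naa_vendor_oui(canonical_name: str) -> str:
    """Return the IEEE OUI embedded in an NAA type-6 identifier."""
    ident = canonical_name.removeprefix("naa.")
    if not ident or ident[0] != "6":
        raise ValueError("not an NAA type-6 (IEEE Registered Extended) ID")
    return ident[1:7]  # six hex digits of the vendor's OUI

print(naa_vendor_oui("naa.60060160a0b0c0d0e0f001020304a5b6"))  # → 006016
```

006016, for example, is an OUI registered to EMC, so a datastore labelled as EMC-backed should carry it.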
I recently carried out some VSAN performance testing using 3 Dell R730xd servers:
- Intel Xeon E5-2680
- 530GB RAM
- 2 x 10GbE NICs
- ESXi 5.5, build 2068190
- 800GB SSD (12Gb/s SAS transfer rate)
- 3 x 4TB (7200RPM SAS disks)
On each of these hosts I built an IOAnalyzer appliance (https://labs.vmware.com/flings/io-analyzer) (one with its disks placed on the same host as the VM, and the other two with “remote” disks). Something similar to this: