NSX-T – Physical Requirements

I’ve been really lucky over the last few weeks, getting to do some deep-dive workshops on NSX-T, and I will be blogging a lot about the good, the bad and the ugly over the next few weeks (really good timing for “Blogtober”, right?!)

First things first: the documentation, for the moment at least, is a little on the light side. VMware are obviously working on it, as I am starting to see more become available in the public domain, but it certainly isn’t as well documented as other GA products.

This leads on to my first topic, and I think it’s quite a big one!

I’m going to post in the next few days about the new routing and switching technologies/methodologies used in NSX-T, as they are VERY different from NSX-V, but for now let’s assume there is a need to move away from the well-known and loved Distributed Switch (start looking up the Opaque Switch). Put simply, you can’t run a vSphere Distributed Switch on a KVM host; the price of delivering a hypervisor-agnostic SDN solution is a new type of virtual switch.
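If you want to see what this looks like from the vCenter side, NSX-backed logical switches surface in the vSphere API as OpaqueNetwork objects rather than vDS portgroups. Here is a minimal pyVmomi sketch of that, assuming a lab vCenter at vcenter.lab.local with placeholder credentials (the names are mine, not from any official doc):

```python
# Minimal pyVmomi sketch: list the opaque networks vCenter knows about.
# "vcenter.lab.local" and the credentials are placeholders for a lab setup.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only - skips certificate checks
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# NSX-backed logical switches show up as vim.OpaqueNetwork objects,
# not as vDS portgroups.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.OpaqueNetwork], True)
for net in view.view:
    print(net.name, net.summary.opaqueNetworkType)
view.Destroy()
Disconnect(si)
```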

No big deal right?

Well, almost all modern server deployments I see these days have 2 x 10GbE (or 25GbE, or 40GbE, etc.) physical NICs connecting to 2 x ToR switches. Datacentre Nirvana is also to fully load a rack with as much dense compute as possible, which means in a lot of cases, as infrastructure engineers/designers/architects, we are looking at 40 x 1U servers and 2 ToR switches… just think of all those blinking LEDs!

It’s at this point the penny dropped for me… if you are running ESXi you will have a vSphere Standard or Distributed Switch with your 2 x 10GbE physical NICs carrying your Management, vMotion, vSAN, Replication, iSCSI and all the other kernel ports. You then prep the host for NSX-T and get a new Opaque Switch… what physical NICs do you attach to it?

You need an additional 2 x 10GbE physical NICs.

No big deal right?

A few thoughts on that:

  • What if you physically can’t fit an additional 2 x switches in the Nirvana rack I alluded to earlier, e.g. a 42U rack with 2 ToR switches and 40 servers?
  • What if you are running a leaf/spine fabric? The cabling and the requirement to double the number of leaf switches could ramp up the costs quite quickly.
  • What if your servers don’t physically have spare PCIe slots for more NICs? (See the sketch after this list for a quick way to check what your hosts currently have free.)
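On that last point, before ordering more cards it is worth checking what each host actually has spare. The sketch below is a hypothetical pyVmomi helper (same placeholder vCenter and credentials as the earlier sketch) that lists each host’s physical NICs and which ones are already claimed by a standard, distributed or opaque switch; whatever is left over is what you could hand to the NSX-T host switch:

```python
# Hypothetical pyVmomi helper: for each ESXi host, work out which physical
# NICs are already claimed by a standard/distributed/opaque switch and which
# are free - the free ones are candidates for the NSX-T host switch.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in hosts.view:
    net = host.config.network
    claimed = set()
    for vss in net.vswitch:          # standard switches
        claimed.update(vss.pnic)
    for proxy in net.proxySwitch:    # distributed / opaque switches
        claimed.update(proxy.pnic)
    pnics = {p.key: p.device for p in net.pnic}
    free = [dev for key, dev in pnics.items() if key not in claimed]
    print(f"{host.name}: {len(pnics)} pNICs, unclaimed: {free or 'none'}")
hosts.Destroy()
Disconnect(si)
```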

I understand why the Opaque Switch has to exist; however, the requirement for additional network cards on NSX-T-enabled servers is quite a big ask… my prediction is that a future version of ESXi will run Opaque Switches as standard.

This issue is still very much being worked on by VMware, and if anything changes I’ll be sure to update the post!

I’ve cross-posted this to the Scottish VMUG community blog; use the Contact Us page at www.scottishvmug.com if you are interested in the community blog!
