vRNI 3.3 Cluster Upgrade Process

When upgrading a vRNI cluster, you currently have to follow this KB from VMware: https://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=2149265&sliceId=1&docTypeID=DT_KB_1_1&dialogID=361734145&stateId=1%200%20361736412

This post is just a quick run-through of the process and how it went for me.

The first step is to stop the services running on each platform and proxy appliance. To do this, SSH to the appliance (username: consoleuser, password: ark1nc0ns0l3; these are publicly documented in the CLI guide for vRNI) and run "services stop".
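For reference, this is roughly what that looks like from a workstation; "platform1.lab.local" below is just a placeholder for your own appliance hostname or IP:

    # connect to each platform and proxy appliance in turn (hostname is a placeholder)
    ssh consoleuser@platform1.lab.local

    # then, at the vRNI CLI prompt, stop all services on that node
    services stop

Repeat this on every node in the cluster before moving on.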

A strange one I noticed when stopping the services on the other cluster nodes; let's see if this becomes a "gotcha" or just a "quirk". One of the platform nodes shows "stop: Unknown instance:" (in Upstart terms this usually just means the service in question wasn't running to begin with).

Next we have to copy the upgrade bundle up to the appliance (the version I downloaded was different from the one in the KB's example command, which I annoyingly didn't spot at first):
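Assuming scp access for consoleuser as per the KB, the copy looks something like this; the bundle filename below is illustrative only, so substitute the exact name of the file you actually downloaded:

    # copy the upgrade bundle to the appliance (filename and hostname are placeholders)
    scp VMWare-Network-Insight-3.3.0.upgrade.bundle consoleuser@platform1.lab.local:~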

Next on the hit list is starting the install process (this can take between 30 minutes and "hours"; in my case it took ~15 minutes). Don't just kick off the upgrade and move on to other things, as it requires a y/n answer to go ahead with the upgrade:
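A sketch of what to expect at the appliance CLI, assuming the package-installer upgrade command the KB uses; I'm reproducing the syntax from memory, so check the KB for the exact flags, and remember the filename here is a placeholder:

    # kick off the upgrade (command syntax and prompt wording are illustrative)
    package-installer upgrade --name VMWare-Network-Insight-3.3.0.upgrade.bundle
    Do you want to continue with the upgrade? [y/n]: y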

Eventually the upgrade completes (the proxy appliances took ~5 minutes each to upgrade):

You should run "show-version" to confirm the upgrade was successful:
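Something like the below on each node; the output shape is illustrative, and the important part is that the reported version string is now the new 3.3 build:

    # confirm the node is now on the new version (output format is illustrative)
    show-version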

After all the VMs in the cluster are upgraded, start the services back up on each VM by running the services command:
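As before, SSH to each platform and proxy appliance and run:

    # bring services back up on every node once the whole cluster is upgraded
    services start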

Job done!
