Upgrade ESXi/vCenter to v7

Trials and tribulations of upgrading the homelab to vSphere 7. One of the joys of running older hardware is that the CPUs are unsupported, but there is a way around this!

VMware has recently released version 7 of its ESXi and vCenter software, along with new versions of a whole host of other parts of the product range. I previously blogged that I was unable to use ESXi 6.7 due to my server CPUs being unsupported, so I left them running ESXi 6.5. Anyway, I decided to readdress this situation and see what could be done.

This process is for upgrading to ESXi 7; it’s also a way to get around the ‘CPU is unsupported’ warning. Bypassing this warning could leave you without support, so read up on the devices that are no longer supported by VMware to ensure your hardware will still work.

vCenter 7 is able to manage ESXi 6.5 and 6.7 hosts, so the first task was to upgrade vCenter from 6.7 to 7.0. This was a simple task: downloading the ISO, mounting it on my management workstation and then running through the upgrade wizard. The wizard deploys a new vCenter Server appliance, transfers all the data over, and finally moves the hostname and IP address across from the ‘old’ vCenter server. The upgrade process ran smoothly and, with the installation of a v7 license key, everything was happy.

I did some digging, as I really didn’t want to replace my ESXi blades given that I had only just swapped them for newer G7 blades. They are performing quite happily and have plenty of overhead available. It was during this digging that I came across this blog post, which had a simple way to do the upgrade. Traditionally I would have downloaded the latest ISO and either run a fresh install or an upgrade of the software. That method would prove to be an issue due to the CPUs being unsupported. The blog post above has a nicer way to do it, using the offline bundle. So, armed with this information, I downloaded the offline bundle, uploaded it to a datastore and proceeded to deploy it with the following command:

esxcli software profile update -d /vmfs/volumes/ESXi\ VMs/ISO\ Store/VMware-ESXi-7.0.0-15843807-depot.zip -p ESXi-7.0.0-15843807-standard
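
If you’re not sure of the exact profile name to pass to -p, the depot can be queried first. This is a standard esxcli command; the path below simply mirrors where I uploaded the bundle:

esxcli software sources profile list -d /vmfs/volumes/ESXi\ VMs/ISO\ Store/VMware-ESXi-7.0.0-15843807-depot.zip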

It reported that it was installed successfully and needed a reboot (understandably). I dutifully rebooted the server and, lo and behold, it came back online! Result! I am now running ESXi 7. Look in vCenter and the host is showing up as an ESXi 7 host. Fantastic! But what’s that in the summary… No datastores… No core dump… Ah crap. Take a look on the datastores tab and sure enough, there aren’t any. I am using iSCSI to FreeNAS, so let’s check the networking… all looks good until I get to the Physical Network Adapters tab. Doh! The network adapters assigned to the iSCSI network are missing. So that’s a pain. Check the ESXi 6.5 host to see which drivers are required: it’s the tg3 drivers, which are based on the legacy VMKlinux driver stack that has been removed in ESXi 7.
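
To confirm what the upgraded host could actually see, a couple of standard esxcli commands are handy (the vmnic names and driver output will obviously vary per host):

esxcli network nic list                    # physical NICs the host detected
esxcli software vib list | grep -i tg3     # is a tg3/ntg3 driver VIB present?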

I grabbed the offline bundle for ESXi 6.7, uploaded the net-tg3 VIB, installed it and rebooted. Needless to say, the Emulex onboard card then stopped working. Whoops! Reboot again and the onboard card for the management network starts working again… Phew. But still no tg3-based adapters. I think this could be in part because the forced install removed the native miscellaneous drivers package:

esxcli software vib install -v /tmp/net-tg3/VMW_bootbank_net-tg3_3.131d.v60.4-2vmw.670.0.0.8169922.vib -f
Installation Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: VMW_bootbank_net-tg3_3.131d.v60.4-2vmw.670.0.0.8169922
VIBs Removed: VMware_bootbank_native-misc-drivers_7.0.0-1.0.15843807
VIBs Skipped:
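
A quick note for anyone following along: to back that change out, the VIB can be removed again by name (followed by another reboot):

esxcli software vib remove -n net-tg3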

What are the next steps? Well, I have a few options:

  1. Do away with the split networking and run everything through the onboard NICs and ‘front end’ switch. The advantage is that there is no additional cost.
  2. Purchase new mezzanine cards from eBay. This maintains the current setup, but does have a cost involved.
  3. Keep digging and see if I can find a compatible driver to install and use.

So, a little more digging and I came across the issue preventing the VIB from installing cleanly. Attempting the command without the -f (force) option gives the following:

[DependencyError]
VIB VMW_bootbank_net-tg3_3.131d.v60.4-2vmw.670.0.0.8169922 requires com.vmware.driverAPI-9.2.3.0, but the requirement cannot be satisfied within the ImageProfile.
VIB VMW_bootbank_net-tg3_3.131d.v60.4-2vmw.670.0.0.8169922 requires vmkapi_2_3_0_0, but the requirement cannot be satisfied within the ImageProfile.
Please refer to the log file for more details.

Looking at option 1: because I had both NICs assigned to the vSwitch connecting to the LAN, the iSCSI software adapter didn’t like this (port binding requires a VMkernel adapter with a single active uplink). To combat this, I reduced the uplink to one connection, created a new vSwitch and VMkernel adapter for the iSCSI connection, and set the iSCSI software adapter to use it. This gets the datastores back online, although not in an ideal way. I am sure the error messages will bug me into finding a compatible mezzanine card to replace the existing one.
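
For reference, the equivalent from the command line looks roughly like this. It’s a minimal sketch: vSwitch1, vmnic1, vmk2, the IP details and the vmhba64 adapter name are all placeholders, so substitute your own values:

# new vSwitch with a single uplink dedicated to iSCSI
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic1
esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI
# VMkernel adapter on that port group
esxcli network ip interface add -i vmk2 -p iSCSI
esxcli network ip interface ipv4 set -i vmk2 -I 192.168.10.11 -N 255.255.255.0 -t static
# bind it to the iSCSI software adapter
esxcli iscsi networkportal add -A vmhba64 -n vmk2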

The next issue I ran into was that updates were not applying from vCenter Lifecycle Manager. The baselines show that updates are available/required, but each time I try to run them it reports that it was unable to download them. After much reading of the update logs and various other bits and pieces, I managed to create a new baseline for the Tools update that wouldn’t install, and then manually installed both the MRVL-QLogic-FC_4.1.9.0-1OEM.700.1.0.15525992-signed_component-15809375 and VMW-ESX-7.0.0-lpfc-12.6.147.6-offline_bundle-15281471 packages with the offline installers from HP. This hasn’t brought the other network adapters back into play yet, but with three patches down I have just two outstanding.
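
For anyone doing the same by hand, the two flavours of download install slightly differently on ESXi 7. A rough sketch, assuming the zips were uploaded to the same datastore path as before (the exact file names here are guessed from the package names above):

# ESXi 7 'component' packages install via the component namespace
esxcli software component apply -d /vmfs/volumes/ESXi\ VMs/ISO\ Store/MRVL-QLogic-FC_4.1.9.0-1OEM.700.1.0.15525992-signed_component-15809375.zip
# classic offline bundles install as VIB depots
esxcli software vib install -d /vmfs/volumes/ESXi\ VMs/ISO\ Store/VMW-ESX-7.0.0-lpfc-12.6.147.6-offline_bundle-15281471.zip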

With as many patches applied as I could, I added the server back into the cluster. Unfortunately, my vMotion network was set up to go over the iSCSI network, so vMotion didn’t work. The easiest way around this in the short term was to change the port group that the vMotion interface uses. So, with the first host now in as good a position as I can get it for now, and after vMotioning a guest across to test that all is working, I vMotioned off all the guests and put the second host into Maintenance Mode. Using the same upgrade method from earlier in the post, I updated that host to v7. With the host upgraded and the updates applied, it can be rejoined to the cluster.
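
Moving the vMotion traffic amounts to re-tagging a VMkernel adapter on the right port group. Something like this, where vmk1 and vmk2 are placeholders for the old and new interfaces:

esxcli network ip interface tag remove -i vmk1 -t VMotion   # old interface on the iSCSI network
esxcli network ip interface tag add -i vmk2 -t VMotion      # interface on the new port group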

Next, it’s time to upgrade the guests. VMware Tools is done first, then the virtual hardware. Remember: don’t upgrade the Tools or hardware on appliances. Upgrading the hardware requires a reboot and should be tested with your chosen OS and applications.
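
If you’d rather script it from the host shell than click through vCenter, vim-cmd can drive both steps. A rough sketch: the VM ID (42 here) comes from the getallvms listing, and vmx-17 (ESXi 7.0’s hardware version) is my assumed target; the VM must be powered off for the hardware upgrade:

vim-cmd vmsvc/getallvms                 # find the VM ID
vim-cmd vmsvc/tools.install 42          # start a VMware Tools install/upgrade
vim-cmd vmsvc/upgrade 42 vmx-17         # upgrade the virtual hardware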
