At home, I am lucky: I have an understanding wife, and as a result, a fairly large home lab environment. It consists of an HP C7000 chassis with five BL465c blade servers, plus an HP DL360 1U server and an HP LeftHand SAN, all housed in a nice HP rack. More recently, I have been using Azure to host my lab environment, but that hasn't been easy due to the costs. Recently, I got a chance to play around with VMware ESXi 6.7, along with some other software, and after last using ESXi 5.5, I realised it had come a long way! This, along with (mainly) the cost factor, meant that I wanted to give the lab environment a refresh and get playing again.
I spent a large part of today stripping out the blades, storage and 1U server so that I could move the rack and get access to the back. I wanted to make some changes to the network connectivity. Previously, all traffic just ran through the two onboard NICs: SAN traffic, client traffic, management, you name it, it all went through two 1 Gbps NICs connected to a Cisco blade switch. I have an HP Virtual Connect system that I want to run the SAN through, plus some mezzanine cards to facilitate this. I've made the following notes on what I am going to do with all of it:
- Cisco 3750 switch for all management ports/iLO
- Cisco blade switch for LAN connection
- HP switch for SAN connection
- DL360 is the Domain Controller
- Hosts 1 & 2 are Hyper-V
- Hosts 3 & 4 are ESXi
- Blade server 5 is Quest Rapid Recovery
- Hyper-V, Site A in AD: 192.168.20.x/24 (VLAN 20)
- ESXi, Site B in AD: 192.168.30.x/24 (VLAN 30)
- Physical hosts: 192.168.10.x/24 (VLAN 10)
- SAN connectivity: 192.168.100.x/24 (VLAN 100 / dedicated network)
- Management interfaces: 192.168.5.x/24 (VLAN 5)
- VLAN 1: 'external' network, connected to the router
- p-DC - build the DL360 as Domain Controller, 2019 Core
- p-HV-01 - build Hyper-V host 1, 2019 Core
- p-HV-02 - build Hyper-V host 2, 2019 Core
- p-ESXi-01 - build ESXi host 1, ESXi 6.7
- p-ESXi-02 - build ESXi host 2, ESXi 6.7
- p-QuestRR - build Quest Rapid Recovery backup
- h-DC - Hyper-V hosted Domain Controller, 2019 Core
- h-RootCA - Hyper-V hosted Root CA, 2019 Core
- h-ICA - Hyper-V hosted Issuing Certificate Authority
- h-OCSP - Hyper-V hosted OCSP server, 2019 Core
- h-SCCM - SCCM server, 2019 GUI
- h-Workstation - Windows 10 client with full management tools
- h-AGPM - Advanced Group Policy Management, as per blog
- h-WSUS - WSUS, distribution point for SCCM; SCCM update images using WSUS
- e-DC - ESXi hosted Domain Controller, 2019 Core
- e-VC - ESXi hosted vCenter 6.7 appliance
- e-OSX - ESXi hosted ESX client
- e-Kali - ESXi hosted Kali server
- e-SQL - ESXi hosted Linux server running SQL Server, run as a Docker container, using a data persistence technique
- e-tenable - ESXi hosted Tenable appliance
- e-ELK - ESXi hosted ELK stack for log correlation
- e-Sophos - ESXi hosted Sophos firewall
So, after starting late morning, I got the rack stripped down and out of the corner that it lives in. I added the extra Virtual Connect modules into the back, ran a few cables whilst I was in there, removed a few that were now redundant, and got to work. After fishing out my Cisco console cable, I was able to reset the two switches to factory defaults and add a basic config, including a port channel between the two and VTP to pass VLAN information from the chassis switch to the 3750. With that done, I re-racked the DL360, cabled it up, and powered it on. Problem number 1 arose: a memory mismatch between two banks. Ah well, time to source some new RAM for it.
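For anyone following along, the switch side of that looks roughly like the sketch below. The interface numbers and the VTP domain name are placeholders, not my exact config, so check yours before pasting anything in:

```
! On the 3750 (receiving VLANs as a VTP client) - example config only
vtp domain HOMELAB
vtp mode client
!
! Bundle the two uplinks to the chassis switch into an LACP port channel
interface range GigabitEthernet1/0/1 - 2
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active
!
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk
```

The chassis switch gets the mirror image, with `vtp mode server` and the VLANs defined on it so they propagate down to the 3750.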
So, on to the next task: get Hyper-V deployed. There aren't any DVD drives connected, so I fire up the Windows 7 USB/DVD Download Tool to create a bootable USB drive. Going through the wizard, I select the ISO, select the USB drive, it formats, then it cannot copy the data. I try multiple USB drives, no joy. So, next is logging in to the management console. Only it's using old certificates, and Edge won't connect. Fire up IE, enable the old TLS versions, and get a connection. Progress!
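When the tool fails like that, the manual fallback is usually diskpart plus a straight file copy. The disk number and drive letters below are examples only: run `list disk` and double-check before you `clean` anything, because selecting the wrong disk will wipe it.

```
:: Elevated command prompt. DISK 1, D: and E: are examples - verify yours first!
diskpart
  list disk
  select disk 1
  clean
  create partition primary
  format fs=ntfs quick
  active
  assign letter=E
  exit
:: Mount the ISO (say it appears as D:), copy the contents across,
:: then write the Windows boot code to the stick:
xcopy D:\*.* E:\ /s /e /f
D:\boot\bootsect.exe /nt60 E:
```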
Except no, it's not progress. The BL465c G5s that I have contain AMD Opteron 2300-series CPUs, and it turns out that ESXi stopped supporting these CPUs after 6.5 U3. So I've now downloaded and am installing 6.5, not the 6.7 I wanted. I see this as an excuse to buy some new blade servers 😀 Oh, and installing over iLO 2 is SLOW. Veeeery SLOW! But fingers crossed the desktop PC, which is wired, will get the job done. Eventually.
Well, after leaving the installation of Hyper-V to run overnight, I woke to find it had failed part way through; I think the desktop rebooted. So, day 2 starts with trying to install Windows Server 2019 again. Slowly. The planned ESXi host isn't working yet either, as I was unable to configure the BIOS with the required virtualisation settings. I realised that, while troubleshooting the unresponsive keyboard, I had upgraded the firmware on the iLO, which then locked me out of the console entirely. Downgrading the firmware got me back into the console, and I could finally configure the required BIOS options. With the BIOS configured for virtualisation, I was able to install ESXi 6.5 from an ISO mounted through the iLO card.
It just goes to show: if you're not using this stuff day in, day out, you don't half forget the niggly details!