Day 5
Arrgghhh, things are not going so well at the moment. The second ESXi host won’t boot and neither will the second Hyper-V host. Both of them are failing POST and coming up with a red light on the front. It looks like at least one of them has a faulty mezzanine card. Fortunately, replacement mezzanine cards for connecting to the SAN look fairly cheap to pick up. One of the things I have come to accept with running such old hardware is that getting firmware, drivers and HP software fully working is a nightmare. If ever there was a reason to upgrade, this would be it.
This week Sophos released version 18 of their XG Firewall. So, while the environment is still being prepared, I have taken the opportunity to upgrade the firewall to v18.
Day 6
Still struggling with the hardware for the second Hyper-V host and the second ESXi host. It looks like something on the system board has stopped working. I have another physical host that I was planning to use for backups, but it is stuck in a reboot loop before POST completes.
In a bid to continue getting things up and running, today I have deployed the vCenter 6.5 appliance. I am also deploying the HP OneView appliance with a view to ensuring that the HP software, firmware, BIOS etc. are all up to date. This has highlighted how out of date the HP Virtual Connect switches that provide the SAN connection are; as a result they are now being updated to v4.10 (hopefully).
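Once the appliance was up, I wanted a quick way to confirm vCenter was reachable and could actually see the surviving host. Below is a minimal sketch using VMware's pyVmomi SDK; the hostname and credentials are placeholders, not my real lab values.

```python
# A minimal sketch using VMware's pyVmomi SDK (pip install pyvmomi); the
# hostname and credentials are placeholders. Verification is disabled only
# because the appliance is still on its self-signed certificate.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Walk the inventory and report every ESXi host vCenter can see.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name, host.runtime.connectionState)
    view.Destroy()
finally:
    Disconnect(si)
```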
I have managed to get the Hyper-V based DC up and running, which is a positive step. I am getting fed up with all the certificate errors that keep appearing, so I think the next step might be to set up PKI for issuing certificates. I would use something like Let's Encrypt to get hold of the certificates, but this hardware is so old that managing the renewals would be a pain. I know that browsers are going to start encouraging short-lived certificates, but for now my lab should be fine using a two-year certificate for HTTPS traffic. Other reasons for the internal PKI are so that I can blog about setting it up, and so that I can configure RDP to use these certificates rather than the self-signed ones. PKI will also be used later when I deploy SCCM.
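For reference, a quick way to see exactly which certificate a host is presenting (and why the browser complains) is something like the sketch below. It assumes the 'cryptography' package, and the hostname is just a placeholder.

```python
# A quick sketch for inspecting the certificate a host presents, assuming
# the 'cryptography' package; the hostname below is just a placeholder.
import socket
import ssl
from cryptography import x509

HOST, PORT = "ilo2.lab.local", 443

# Skip validation on purpose -- the point is to look at a self-signed cert.
# (Very old management firmware may also need ctx.minimum_version lowered.)
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
print("Subject:", cert.subject.rfc4514_string())
print("Issuer: ", cert.issuer.rfc4514_string())
print("Expires:", cert.not_valid_after)
```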
Day 7
So, today I have built what will be the offline root CA server. I’ve also been looking at replacement blades to go in the C7000 chassis. The G7 looks about right cost-wise, but those blades only have an iLO 3 card, and after looking at the VMware documentation they will still only support ESXi 6.5 and not 6.7. Maybe I will get a couple with a large amount of RAM to replace the Hyper-V servers, and save up a little more for some G8/G9 servers for the ESXi environment.
As well as the Root CA server, the Issuing CA server has been provisioned along with the OCSP server.
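Jumping ahead slightly: once the responder is configured, I want a way to sanity-check it end to end rather than trusting the management consoles. A rough sketch of what that check could look like is below; it assumes the 'cryptography' and 'requests' packages, and the file names and responder URL are placeholders for whatever the issuing CA actually publishes.

```python
# A rough sketch for querying the OCSP responder directly, assuming the
# 'cryptography' and 'requests' packages; the file names and responder
# URL are placeholders for whatever the issuing CA actually publishes.
import requests
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp

with open("dc01.pem", "rb") as f:        # a cert issued by the lab CA
    cert = x509.load_pem_x509_certificate(f.read())
with open("issuing-ca.pem", "rb") as f:  # the issuing CA's certificate
    issuer = x509.load_pem_x509_certificate(f.read())

# Build a raw OCSP request for the certificate and POST it to the responder.
req = ocsp.OCSPRequestBuilder().add_certificate(
    cert, issuer, hashes.SHA1()).build()
resp = requests.post(
    "http://ocsp.lab.local/ocsp",
    data=req.public_bytes(serialization.Encoding.DER),
    headers={"Content-Type": "application/ocsp-request"},
)

result = ocsp.load_der_ocsp_response(resp.content)
print("Responder status:  ", result.response_status)
if result.response_status == ocsp.OCSPResponseStatus.SUCCESSFUL:
    print("Certificate status:", result.certificate_status)
```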
Day 8
After provisioning the Root CA, Issuing CA and OCSP servers, today they have all been configured ready to issue certificates. The Domain Controllers have already enrolled and received their DC certificates, allowing for LDAPS connectivity, which I have tested with the LDP.exe utility.
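LDP.exe does the job interactively, but for a repeatable check something scriptable is handy. Below is a small sketch using the ldap3 Python package; the server name, CA file and credentials are all placeholders for the lab's own values.

```python
# A scriptable alternative to LDP.exe, assuming the 'ldap3' package; the
# server name, CA file and credentials are placeholders for the lab's own.
import ssl
from ldap3 import Server, Connection, Tls, ALL

# Validate the chain against the new internal root so a bad cert fails loudly.
tls = Tls(validate=ssl.CERT_REQUIRED, ca_certs_file="root-ca.pem")
server = Server("dc01.lab.local", port=636, use_ssl=True, tls=tls, get_info=ALL)

conn = Connection(server, user="LAB\\labadmin", password="changeme",
                  auto_bind=True)
print("Bound over LDAPS:", conn.bound)
print("Authenticated as:", conn.extend.standard.who_am_i())
conn.unbind()
```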
So, a bit of a review of where the lab is up to. I have:
- HP P4500 running FreeNAS
- 1 ESXi host, running:
  - Domain Controller
  - vCenter Server
  - Sophos XG firewall, running firmware v18
  - Windows 10 workstation
- 1 Hyper-V host, running:
  - Domain Controller
  - Offline Root CA
  - Enterprise CA
  - OCSP Server
The HP OneView software doesn’t work with the Virtual Connect modules that are running the SAN, and I have been struggling to get the HP Management Agents working on the Hyper-V server with its iLO 2 card. I think the hardware is just too old to work with the OneView software, so for now I have removed it. The appliance was also fairly resource hungry, requiring 24GB of RAM; when the host has just 32GB, that is simply too much for now.