Rebuilding the home lab – Part 5

Further progress and ramblings on the rebuild of the home lab. It's now in a state where everything works as I would expect it to.

Sorry, this is a bit of a long one, as it has been a while since the last post. We have progress and things are working as I would want them to. I am currently working on proving out some stuff which I might blog about in future.

Day 12

Battling with SCCM and deployments has been fun. I finally managed to pinpoint the issue: the SCCM distribution point needs a workstation certificate of its own. Without it, the deployment simply was not working. It's been a nightmare to track down; so many blog posts suggested reinstalling the PXE component, when all I needed was the correct client certificate, because I am using HTTPS. Every distribution point needs a copy of the Workstation Template certificate, exported with its private key. It shows how complicating things causes nightmares! I will always look to keep things simple. Always. The only reason for not doing that in the lab is that this is the chance to make things overly complex, break them, and then figure them out.
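For anyone fighting the same battle, here is a minimal sketch of exporting that certificate with its private key, ready to import on each DP. The subject filter, file path and password prompt are all placeholders for your own environment rather than the exact names from my lab:

# Find the workstation certificate in the local machine store
# (the "Workstation" subject match is an assumption - adjust to suit)
$cert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*Workstation*" } |
    Select-Object -First 1

# Export it along with its private key as a password-protected PFX
$password = Read-Host -AsSecureString -Prompt "PFX password"
Export-PfxCertificate -Cert $cert -FilePath C:\Temp\dp-workstation.pfx -Password $password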

Day 13

Day 13 was spent having a look at Graylog. I've got it deployed, with the Windows servers reporting in basic data. I still need to play around with it and look at some dashboards and pretty graphs. Two new blades have now arrived: some G7 460s with dual Xeon CPUs, 96GB RAM, a couple of mezzanine cards and a couple of hard drives. As they have internal SD card readers, I have decided to run ESXi on them. The G7 also sports an iLO 3 rather than the iLO 2 in the G5s. They are Intel rather than AMD, but ESXi runs happily on both (though live vMotion between the two vendors isn't supported, so moving VMs between them means a cold migration).
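As an aside, when a server goes quiet in Graylog, the first thing worth checking is that it can actually reach the input. A quick sanity check, assuming a hypothetical host called graylog01 with a GELF TCP input on the default port of 12201:

# Confirm the Windows server can reach the Graylog GELF input
Test-NetConnection -ComputerName graylog01 -Port 12201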

Day 14

There is a part of me that is beginning to wonder if this will ever finish! That said, I have installed one of the new ESXi servers, onto the SD card rather than the hard drive. I did come across the known issue where the HP ISO for ESXi 6.5 purple screens after boot on a G7 server, so to combat this I have deployed the vanilla ESXi image instead. It seems a real shame to have had to do that. Deploying ESXi over the iLO card is slow, to say the least!

I have also updated vCenter to v6.7, as this can manage the 6.5 hosts happily.

Both new Intel ESXi servers are up and running. However, I was unable to add the second newly built ESXi server to vCenter; it kept reporting an issue with the license.

Day 15

More progress. I was able to add the second ESXi server to vCenter; it turns out the clock on the server was some five years out. I reset the time, added the host and then performed updates. I also had to add a second small datastore to store the log files (ESXi itself is hosted on SD cards, which aren't persistent storage) and to act as an HA heartbeat datastore. Now that's all done, everything is happy in the ESXi environment, and VMware Tools are all up to date.
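For reference, that tidy-up can be scripted with PowerCLI. This is a rough sketch rather than exactly what I ran; the vCenter address, host name, datastore name and NTP server are all placeholders:

# Connect to vCenter and grab the host (names are placeholders)
Connect-VIServer vcenter.lab.local
$vmhost = Get-VMHost -Name esx02.lab.local

# Point the host at an NTP source and start the service so the clock stays right
Add-VMHostNtpServer -VMHost $vmhost -NtpServer uk.pool.ntp.org
Get-VMHostService -VMHost $vmhost | Where-Object { $_.Key -eq "ntpd" } | Start-VMHostService

# Redirect scratch and syslog to the small local datastore, as the SD card
# is not persistent storage (the scratch change takes effect after a reboot)
Get-AdvancedSetting -Entity $vmhost -Name "ScratchConfig.ConfiguredScratchLocation" |
    Set-AdvancedSetting -Value "/vmfs/volumes/esx02-local/.locker" -Confirm:$false
Get-AdvancedSetting -Entity $vmhost -Name "Syslog.global.logDir" |
    Set-AdvancedSetting -Value "[esx02-local] /scratch/log" -Confirm:$false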

Now to rebuild the second Hyper-V host so that it is all clustered nicely. I did make the mistake of attempting to put the HP Management Agents on the ESXi host, which caused it to panic and purple screen after boot again, so I had to rebuild that server. But all good now.

The Hyper-V host is installed and I have added the Failover Clustering feature; I just need to reboot the first host before setting up the failover cluster, roughly as sketched below.
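The cluster build itself is only a handful of commands. A minimal sketch, with hypothetical node names, cluster name and address:

# Add the Failover Clustering feature on each node, then reboot
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
Restart-Computer

# From either node once both are back: validate, then create the cluster
Test-Cluster -Node hv01, hv02
New-Cluster -Name hvcluster -Node hv01, hv02 -StaticAddress 192.168.1.50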

Day 16

By default, Windows Server doesn't include the MPIO feature. I suspected this was causing me some performance issues with SAN connectivity, as the SAN side is all set up for MPIO. To add MPIO to the iSCSI connectivity, I had to do the following:

Install-WindowsFeature Multipath-IO
Restart-Computer

Then from a command prompt, execute the following to configure all devices as available for MPIO:

mpclaim -r -i -a ""

Note that this will automatically reboot the server without warning, so make sure you have moved all resources off first.
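Incidentally, on Server 2012 and later the same claim can be done from PowerShell via the MPIO module, which I believe avoids the unannounced reboot (treat that as an assumption and plan a maintenance window regardless):

# Claim only iSCSI-attached devices for MPIO rather than everything
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Set round robin as the default load balance policy across paths
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR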

Day 17

So the above alone did not fix the stalling issues I was having with the SAN. Both the Hyper-V cluster and the ESXi environment would randomly just stop talking to the P4500 G2 running FreeNAS. In the end, I have rebuilt one of the Hyper-V servers as a FreeNAS server, with a storage blade attached to it. After migrating the data off the P4500 and onto the BL465c G5, everything is a lot happier and things perform as I would expect them to. I suspect some firmware on the P4500 is massively out of date, so what I might try is installing Windows Server 2012 along with the HP management agents and then seeing if I can perform NIC, RAID and drive firmware updates. That will have to wait a little while though. In the meantime I will continue using the storage blade; it hasn't got the capacity of the P4500, but it will suffice for now.
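For anyone debugging similar stalls, the quick checks from the Windows side are along these lines (nothing here is specific to my setup):

# Are the iSCSI sessions still connected?
Get-IscsiSession | Select-Object TargetNodeAddress, IsConnected, IsPersistent

# Are all MPIO paths still present? (run from an elevated prompt)
mpclaim -s -d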
