All change in the Home Lab

I started writing this post with a view to just changing the storage over. I was running low on storage space, so after much toing and froing, I decided to build a new storage machine. Although I didn’t go crazy on the spec (AMD Ryzen 5, 16GB RAM, M.2 OS SSD, 1TB SSD and 2 x 2TB HDD), I decided to make use of this new machine as a replacement for my desktop and put the 5TB of storage into my old desktop. I then became really taken with the ASUS PN50 Micro PC. So the blog post starts with storage, then moves on to the rest.

Storage

I have a C7000 blade chassis; as part of this I have a BL460c G5 with an SB40 storage blade attached, holding six SAS drives. I have been debating for a little while what to do with the storage as I have been running low on space. I have been watching a BL460c G7 and a new SB2200 on eBay for a little while. My issue with these is that they use the P410i RAID controller, which doesn’t support HBA mode without flashing the firmware and, in Linux, a custom kernel. I’ve looked into all sorts, including that customisation and replacing the P410i with an LSI HBA, and spent many hours searching.

I am currently running Windows Storage Server 2012 on my blade, with each drive presented as its own single-disk RAID 0. I then use a storage pool with 7 drives, let Windows manage it, split it into two volumes and run an iSCSI target for the Hyper-V and ESXi estate. It ticks along nicely with modest CPU and RAM use, and network traffic hovering around 500Mbps.
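A setup along these lines can be sketched with the iSCSI Target Server cmdlets on Windows Storage Server. The paths, target name and initiator IQN below are placeholders, not my actual values:

```powershell
# Create a VHDX-backed LUN on one of the storage pool volumes
New-IscsiVirtualDisk -Path "E:\iSCSI\hyperv-lun1.vhdx" -SizeBytes 2TB

# Define a target and restrict it to a known initiator (IQN is a placeholder)
New-IscsiServerTarget -TargetName "hyperv-target" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hyperv01.lab.local"

# Map the LUN to the target so the Hyper-V/ESXi hosts can mount it
Add-IscsiVirtualDiskTargetMapping -TargetName "hyperv-target" `
    -Path "E:\iSCSI\hyperv-lun1.vhdx"
```

The hosts then just point their software initiators at the target’s portal address and log in.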

I’ve tried using FreeNAS on this, and on an old HP LeftHand SAN unit. FreeNAS is a lovely-looking piece of software, but on the BL460 I would occasionally get stalls in drive access. The LeftHand is big, noisy and expensive to run.

So, after much deliberation, over the weekend, I decided to put together a PC which will run Windows Server 2019. Alright, it doesn’t have full redundancy within the hardware, but for home, do I really need full redundancy? No, not really. It’s nice having server hardware, playing with ILO cards, setting up server settings etc. But, do I really need it? No.

So the new PC will be based on an AMD Ryzen 5 3400G CPU on an ASUS Prime A320M-K motherboard. It will have 16GB of RAM and a TP-Link PCI Express network card alongside the onboard NIC. The storage will be a 120GB WD Green M.2 SSD for the OS, plus a 1TB Samsung 870 SSD and two 2TB Seagate Barracuda drives. I will combine the Samsung and Seagate drives in a storage pool and make use of Windows storage tiering to keep hot data on the SSD and cold data on the HDDs. I still have one SATA port free on the motherboard if I wish to add another SSD. This will all be housed in an Aerocool Cylon Mini RGB chassis. This case will take all the drives I have included, isn’t too big and it includes RGB LEDs … because why not?!
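The tiered pool can be sketched with the Storage Spaces cmdlets; the pool, tier and volume names are mine for illustration, and the tier sizes are rough rather than exact:

```powershell
# Pool all unallocated disks (the 1TB SSD and the two 2TB HDDs)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "TieredPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# Define the hot (SSD) and cold (HDD) tiers
$ssd = New-StorageTier -StoragePoolFriendlyName "TieredPool" `
    -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "TieredPool" `
    -FriendlyName "HDDTier" -MediaType HDD

# Carve out a tiered volume; sizes here are illustrative
New-Volume -StoragePoolFriendlyName "TieredPool" -FriendlyName "Data" `
    -FileSystem NTFS -StorageTiers $ssd, $hdd `
    -StorageTierSizes 800GB, 3TB
```

Windows then runs a scheduled optimisation task that moves hot data up to the SSD tier and cold data down to the HDDs.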

So, that’s the plan. Hopefully, if I can switch out the C7000 and 24U rack for a PC and some NUCs (yes, I am looking at swapping the four ‘new’ BL460c G7s for NUCs), I can re-purpose the server room as an office. Which would be nice. And reduce the power bills.

Ripping out the C7000 isn’t going to be an easy task. The Cisco switch in the back of the chassis currently handles all the routing, so I will have to change the VMware networking and make it so that the Sophos XG does the inter-VLAN routing. This will be a fun task.

Update

I realise I could have deleted this post and started again, but it was at this point I had a change of heart. I built the PC for myself, and the storage server became an Intel Core i7 based machine instead. Its storage is a 120GB SSD for the OS, plus a 1TB SSD and 2 x 2TB HDDs in a tiered storage pool on Windows Server 2019. The onboard NIC is teamed with a PCIe TP-Link network card to give a 2Gbps aggregate connection to the 24-port Ubiquiti UniFi switch.
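The teaming step can be sketched with the built-in LBFO cmdlets; the adapter names below are placeholders, and the real ones come from Get-NetAdapter:

```powershell
# List adapters to find the onboard and TP-Link NIC names
Get-NetAdapter

# Team the two NICs; switch-independent mode needs no switch-side config
New-NetLbfoTeam -Name "StorageTeam" `
    -TeamMembers "Ethernet", "Ethernet 2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
```

Worth noting: with teaming, a single TCP stream still tops out at 1Gbps; the 2Gbps is the aggregate across multiple flows. If the UniFi switch end is configured for LACP, `-TeamingMode Lacp` is the alternative.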

I have ordered the ASUS PN50 devices, running AMD Ryzen 5 CPUs, each with 64GB of RAM and a 120GB M.2 SSD. To start with I have ordered four of these; I plan to add another two for the hypervisor hosts, and an AMD Ryzen 3 version with 4 or 8GB RAM to run as a physical Domain Controller. These will run Windows Server 2019 perfectly well, but I know I am going to run into an issue with the ESXi estate: these machines use a Realtek network adapter which does not have a ‘native’ ESXi driver. As a result, I will either have to run ESXi 6.7 and convert the VMs back from v7 hardware (looking likely) or make use of a USB network adapter.

The tough part now is that I will lose inter-VLAN routing. I am debating using the Sophos XG firewall for this, or getting a Ubiquiti USG-PRO-4. The Sophos XG works really well and supports the DMZ; the USG would give me a better-looking dashboard, but I would not be able to do the DMZ in the same way.
