Necessity Led Me to My Perfect Home Lab

By Rory Monaghan

In the last 10 years I have moved from Ireland to the US and then back to Ireland again. When moving to the US, I retired my home lab, as it was too costly to bring with me. I had automated builds in MDT, App-V sequencing VMs, multiple Hyper-V hosts and more. I put a lot of time, money and effort into that lab, but in the US I rented small unfurnished apartments at a much higher cost than what I paid in Ireland, which meant I could not afford to rebuild my lab to the same spec I left behind. Just as importantly, living in 1 or 2 bedroom apartments or condos, I didn't have a dedicated space for a lab – particularly a loud lab! Due to these factors, my US lab consisted of 5 Intel NUCs and a cheap PC I built myself.

The Hardware I had

I liked my Intel NUCs; they got me over the hump when living in tiny condos. I even brought them back to Ireland from the US, as they fit easily in my luggage and most NUCs include different plug types as standard, so I could just plug them in and start using them once back in Ireland. However, the majority of Intel NUC models do not support Windows Server OS. To run a Server OS on the NUCs, you have to use generic drivers, which can be problematic: certain hardware components may not work (like audio) and updates can cause issues such as Blue Screen of Death events. I installed VMware ESXi on some of the devices. The experience was only OK, in my opinion. The NUCs ran quiet, which was good, but each could only handle a single SSD and they were pretty limited in memory capacity. What is more, my Intel NUCs died relatively quickly. Some stopped working or started throwing PSODs or BSODs frequently, then completely died within 3-5 years.

The Hardware I Have

When I got down to a single working Intel NUC, I was forced to build a new lab. At this point, I had been living in my own house for a couple of years, so I had more space than in my apartment/condo days, but I do not have a garage or basement, so I cannot house racks or blades. I spent months researching before building. It was not an enjoyable experience, but I settled on building my own high-end PC: one which would allow me to have several SSDs, could handle a lot of memory and had a great processor, but most importantly, one that did not sound like an airplane taking off when powered on.

This is what I built!

A PC with some key components:

CPU: 32-core AMD Ryzen Threadripper PRO

GPU: NVIDIA GeForce RTX 2060 with 6GB VRAM

Hard Drives: 1 x 1TB Intel 670p M.2 NVMe PCIe SSD, 1 x 1TB PCS 2.5″ SATA 6Gb/s SSD and 1 x 256GB PCS 2.5″ SATA 6Gb/s SSD

Memory: 256GB

Other: The power supply is an ultra-quiet model. I opted for a FrostFlow cooling system and an ASUS motherboard with built-in Wi-Fi.

Since I had 5 NUCs, I had spare memory and SSDs just sitting there, so I gave my laptop an upgrade and put some of the other SSDs to new use in a Synology DS918. I still have 1 NUC left, which I re-imaged with Windows 10, but I don't use it too often. I have some other gadgets like Raspberry Pi devices and my laptops, but I'll focus on the core lab, which for me is my PC, NAS and cloud resources.

The Software and Efficient Provisioning

While trying to get by with a couple of NUCs, I decided to install VMware Workstation on the machines and then install ESXi on VMs within Workstation. This allowed me to have Windows installed as the base OS on both physical devices while still using vCenter and ESXi for most of my VM management. It also let me continue provisioning VVols as my storage layer within vSphere, and it allowed me to run multiple different types of hypervisors.
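
One detail worth noting if you want to try this: to run ESXi as a guest, Workstation has to pass hardware virtualization through to the VM. That is the "Virtualize Intel VT-x/EPT or AMD-V/RVI" checkbox in the VM's processor settings, which maps to a one-line .vmx entry. Here is a minimal sketch of setting it from Python; the .vmx path is hypothetical, and the VM should be powered off when you edit it:

```python
from pathlib import Path

# Hypothetical path to a nested ESXi guest's .vmx file.
vmx = Path(r"D:\VMs\esxi-01\esxi-01.vmx")

# Expose VT-x/AMD-V to the guest so ESXi can run its own VMs.
setting = 'vhv.enable = "TRUE"\n'
if setting not in vmx.read_text():
    with vmx.open("a") as f:  # append; only edit while powered off
        f.write(setting)
```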

I had one of the NUCs on a Server OS, set up as my primary domain controller. The other one ran Windows 10. At the time, I was working for ControlUp and wanted to have as many machines as possible for generating load for monitoring purposes. I also specifically wanted the agents on some physical machines, not just virtual ones, and this setup showed me the light.

While this setup was born out of necessity, when I got my big beefy PC built, I applied what I had learned from trying to do a lot with a little and decided to run Windows 10 on my PC and simply install VMware Workstation. Some may read that and think I'm wasting resources. That is what I thought in the past, and it is why I installed ESXi bare metal on my NUCs, but having Windows 10 with VMware Workstation provides several benefits and, to me, it is worth the resource spend of running a full Windows base OS rather than going bare metal.

One benefit is that you can create VMs in Workstation, split the disk between multiple files and have the disk presented at the full size you select while space is only allocated as it is used. For example, if you only have 200GB of space free, you can set the disk larger and it will appear to the guest that more is available; in reality there is not, but if you don't actually need that amount of disk space right away, you can get away with it. This is useful for applications whose pre-requisite checks demand a lot of storage but which, when running in a lab, use far less than the requirements reflect. VMware vCenter, for example, has pretty lofty storage requirements.
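
As an illustration, here is a minimal sketch of creating that kind of growable, split disk with vmware-vdiskmanager, the command-line disk tool that ships with Workstation. The install path, capacity and output path are all hypothetical:

```python
import subprocess

# Type 1 = growable virtual disk split into 2GB files: the guest sees
# the full capacity, but host space is consumed only as data is written.
subprocess.run([
    r"C:\Program Files (x86)\VMware\VMware Workstation\vmware-vdiskmanager.exe",
    "-c",               # create a new virtual disk
    "-s", "400GB",      # capacity presented to the guest
    "-a", "lsilogic",   # virtual SCSI adapter type
    "-t", "1",          # growable, split into 2GB files
    r"D:\VMs\vcenter-lab\vcenter-lab.vmdk",
], check=True)
```

The guest, and any pre-requisite check running inside it, sees the full 400GB, while the host only loses the space the VM actually writes.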

Aside from the storage in my custom-built PC, which can be chopped up and provisioned across various ESXi host VMs running in Workstation (versus my NUCs, where it was 1:1), I also have my old SSDs in my Synology NAS, which I provision to my ESXi hosts as NFS storage. All of this means I have TBs worth of storage available, flexibility from being able to split the VM disks when creating them, and physical storage available across multiple ESXi hosts.
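
For reference, mounting the NAS export on the nested hosts is a one-liner per host with esxcli. A sketch, assuming SSH is enabled on the ESXi hosts; the hostnames, share path and datastore name are hypothetical:

```python
import subprocess

# Mount a Synology NFS export as a datastore on each nested ESXi host.
for host in ["esxi-01.lab.local", "esxi-02.lab.local"]:
    subprocess.run([
        "ssh", f"root@{host}",
        "esxcli storage nfs add"
        " --host nas.lab.local"              # the Synology NAS
        " --share /volume1/esxi-datastore"   # NFS export on the NAS
        " --volume-name synology-nfs",       # datastore name in vSphere
    ], check=True)
```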

Not only does having the base OS with Workstation on top work out great from a storage distribution perspective, it is also useful for mixed use case VMs. I have some VMs that make sense to host with vCenter, e.g. my Citrix lab, my secondary Domain Controller, my Configuration Manager servers, SQL Servers etc. Those are enterprise domain-joined machines by their nature, but I also want non-domain joined VMs with frequent snapshots for testing and packaging work. This works great with my setup.
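
The snapshot workflow on those packaging VMs can also be scripted with vmrun, which ships with Workstation. A sketch, with hypothetical paths and snapshot names:

```python
import subprocess

VMRUN = r"C:\Program Files (x86)\VMware\VMware Workstation\vmrun.exe"
VMX = r"D:\VMs\packaging\packaging.vmx"  # hypothetical packaging VM

# Take a clean baseline before installing anything.
subprocess.run([VMRUN, "-T", "ws", "snapshot", VMX, "clean-base"], check=True)

# ... install and capture the application in the guest ...

# Roll back to the baseline, ready for the next package.
subprocess.run([VMRUN, "-T", "ws", "revertToSnapshot", VMX, "clean-base"],
               check=True)
```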

Keeping Costs Down

I also prefer to have my primary Domain Controller off my ESXi hosts so I can power it up before powering on my hosts and any of my other domain-joined VMs. This approach also helps if I only want to package applications: I can just work with those non-domain joined machines without powering on my entire lab. With the cost of living crisis, this has become a big benefit, and power consumption is another factor to consider when choosing which hardware to buy.
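
That boot ordering is easy to script with vmrun too. A sketch with hypothetical VM paths and a deliberately crude wait for the Domain Controller to come up:

```python
import subprocess
import time

VMRUN = r"C:\Program Files (x86)\VMware\VMware Workstation\vmrun.exe"

def start(vmx: str) -> None:
    # "nogui" starts the VM headless in Workstation.
    subprocess.run([VMRUN, "-T", "ws", "start", vmx, "nogui"], check=True)

start(r"D:\VMs\dc01\dc01.vmx")  # primary Domain Controller first
time.sleep(300)                 # crude wait for AD DS; poll in practice
for vmx in [r"D:\VMs\esxi-01\esxi-01.vmx",
            r"D:\VMs\esxi-02\esxi-02.vmx"]:
    start(vmx)                  # then the nested ESXi host VMs
```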

I chose to run VMware software. After years of using Hyper-V, I made the switch for my lab. It made sense, as most organisations I worked for used vSphere. The great thing about VMware software is that you can avail of licensing from the VMUG Advantage program for $200 a year, and of Microsoft product licensing through Visual Studio subscriptions. Many vendors also offer community licenses or NFR keys upon request. I have also opted for an E5 subscription and a Windows 365 Cloud PC, so I can test Intune and have a Cloud PC to test alongside my Azure Virtual Desktops, which run thanks to my Visual Studio subscription's Azure credits.

I have a lot of friends who run serious home labs with enterprise-grade hardware, but I still feel very lucky and privileged to have the lab that I do.

Photo by Tai Bui on Unsplash
