My home server rack houses a trio of Intel NUCs (among other things). I’ve always set them up as Ubuntu Linux running on bare-metal - my career has been spent entirely at companies who run most of their servers that way, so it feels familiar. One thing that NUCs don’t support, though, is out-of-band management. To reinstall an OS, I have to flash a USB stick, plug it in to my TV, and grab a USB keyboard - a far cry from the “just press a button” tooling that’s possible if you have even basic IPMI support. The pain of even starting a re-install also means I’ve never put in the work to script the install process with preseed - something else I miss from work setups!
I decided to rebuild my NUCs as VM Hypervisors - that way, I could use VM tooling as the automation layer, and hopefully never have to deal with flash drives again! In the end, I’m probably going to run a bunch of identical Linux VMs running Kubernetes, but easier tooling for building machines means I can experiment with beta features without interfering with my “production” cluster (that runs this blog!).
Since I have very little experience with virtualization and there are many options out there, I made sure to figure out my requirements up front:
- I want to run VMs on my home servers instead of using the bare metal.
- The hypervisor needs to be as simple as possible - I want to run close to a “stock” install so I can focus on what’s in the VMs, not on the host. Bonus points if it’s simple in a security-hardened sort of way - I don’t want to stay up at night worrying about whether my hypervisor is running a vulnerable version of some tool I never use.
- It must support Linux well, preferably without needing additional software installed on guests.
- There must be an API for managing VMs - at some point, I’d like to script the setup of my VMs so I can build and tear down clusters at will. It doesn’t need to be a fancy API - a set of CLI tools I can call from bash suffices.
- It needs to be free - not that I am ideologically opposed to paying for useful software, but the market for hypervisors is extremely “enterprise”-oriented (which means large $$$).
- No specialized network or storage requirements - large virtualization installations often involve “network virtualization” (overlay networks) and “storage virtualization” (SANs) - massive overkill for a home setup. I have three servers, they have some local scratch space, it’s fine.
- My hardware must be supported - the machine I was testing with is an Intel NUC6i3SYK with 16 GB of RAM and a 100 GB SSD. This isn’t “enterprise-class” hardware, nor does it have multiple TB of RAM, so I want to watch out for hardware support.
Alternatives and attempts
I didn’t set out to do a bakeoff, but one eventually emerged after several options I tried didn’t work out:
- Microsoft Hyper-V Server: Microsoft of 2017 seems keenly interested in making Linux-compatible products that are actually useful to developers (developers developers developers). Hyper-V Server is a free (?!?!) edition of Windows Server that only runs the Hyper-V hypervisor. Installing it on the NUC was a little tricky (Intel doesn’t sign drivers for the NUC’s network card for Server editions of Windows, so you have to hack it some). I ended up stymied by WinRM for remote management of the hypervisor. I suspect this is less obnoxious in an Active Directory domain environment.
- VMWare vSphere Hypervisor: VMWare’s ESXi is an enterprise workhorse, and the standalone hypervisor is free to use. I had no trouble getting it running on the NUC. I set up a few VMs, but had severe performance issues. The machine wasn’t oversubscribed, I had the guest additions installed, I couldn’t find any obviously wrong configs, but VMs would often hang for several seconds.
- Intel Clear Linux: Intel’s Linux distribution is extremely minimal and focuses on auto-update functionality. My intention was to use it as a KVM host with as few bells and whistles as possible. Bizarrely (for an Intel product) I had driver issues with the NIC - at startup the card would lose link and refuse to work without kicking systemd-udevd and doing the `ip link set eno1 up`/`down` dance. That issue aside, it also apparently doesn’t bundle a version of netcat that is compatible with the `virt-manager` KVM remote management tool, so I moved on.
- oVirt Node: oVirt itself is a broader virtualization management project in the Red Hat sphere of influence. oVirt Node is a Fedora Linux spin that runs KVM and integrates with the oVirt management service. It’s definitely targeted at larger enterprise installs - once I got things working (including having to specify a storage subsystem living on my NAS, because it doesn’t seem to easily support local storage of VM volumes?), it quickly became clear that it was way too complicated for my home setup.
- Proxmox VE: If you search the Internet for what enthusiasts have used for their home setups, you’re bound to hear about Proxmox VE, a Linux distribution based around KVM and a custom web interface. The tone of the documentation, marketing materials, and folks singing its praises made it feel like it was perhaps overly focused on cramming in features (in a sort of “Android 1.6” way) and less on “getting out of your way”, so I didn’t end up trying it.
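For completeness, the Clear Linux NIC workaround mentioned above looked roughly like this (reconstructed from memory - `eno1` is the NUC's onboard interface, and "kicking" udev meant restarting it):

```shell
# When eno1 came up at boot with no link, restarting udev and
# bouncing the interface would bring it back.
sudo systemctl restart systemd-udevd
sudo ip link set eno1 down
sudo ip link set eno1 up
```

Not something I wanted to bake into boot scripts on a machine meant to be boring.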
Meet the new champion, same as the old champion
Where I finally ended up was…running Ubuntu - the same as I had started. A fresh install including KVM tools fit my needs better than any of the others I tried.
I probably should have tried this first - I’m already familiar with the OS. Several friends also recommended “just use normal Linux and KVM” to me when I told them of my project. I had some vague worries about it being “too heavy”, but so far it has worked quite well.
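Getting a stock Ubuntu box to the point of running VMs is only a few packages. This is the 16.04-era package set - names shifted in later releases (`libvirt-bin` was split into `libvirt-daemon-system` and `libvirt-clients`, and the group was renamed from `libvirtd` to `libvirt`):

```shell
# KVM host setup on Ubuntu 16.04
sudo apt install qemu-kvm libvirt-bin virtinst cpu-checker

# Verify the CPU's virtualization extensions are enabled and usable
kvm-ok

# Let my normal user manage VMs without sudo (re-login required)
sudo adduser "$USER" libvirtd
```

That's the whole "hypervisor" - everything else is just a normal Ubuntu server.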
I’m already running a Kubernetes cluster on VMs built this way:
Performance has been good (none of the weird hanging I saw with VMWare), Ubuntu has a large community of users, and there is plenty of scope for me to automate setting up new hypervisors and guests with preseed. Hopefully soon I will also have a more automated configuration of the Kube nodes I run as guests to share with y’all!
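The guest-building side of that automation will likely center on `virt-install`, which can kick off a fully unattended Ubuntu install by injecting a preseed file into the installer's initrd. A sketch - the name, sizes, and preseed path are placeholders, not my actual config:

```shell
# Create a Kubernetes-node guest with an unattended Ubuntu 16.04 install.
virt-install \
  --name kube-node-1 \
  --memory 4096 \
  --vcpus 2 \
  --disk size=20 \
  --location http://archive.ubuntu.com/ubuntu/dists/xenial/main/installer-amd64/ \
  --initrd-inject preseed.cfg \
  --extra-args "auto=true priority=critical console=ttyS0" \
  --os-variant ubuntu16.04 \
  --graphics none \
  --noautoconsole
```

Wrap that in a loop over hostnames and you have a cluster factory - which is exactly the "just press a button" workflow the NUCs' lack of IPMI was denying me.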
> OK I _think_ I have one of my NUCs successfully set up as a kvm host and running a 3 node kube 1.8.5 (CRI-O) cluster
>
> — Sophie Haskins (@sophaskins) December 9, 2017