Server Upgrade

One of the things it appears I do from time to time is to upgrade my server infrastructure.

The motivation to do this is in part to replace a noisy and power-hungry server (a Dell PowerEdge 2950, which sounds like a cross between a jet engine and a washing machine) with something better. It is also in part to increase my storage beyond its current 4TB. And finally, in part, to play with shiny new toys.

In the past, cost has been a major concern, as these projects were essentially done on my pocket money using eBay purchases. Having more-or-less retired, but also doing some consultancy on the side, I decided to route some of this income into the new server project and not to constrain myself to second-hand hardware.

The hardware of this upgrade is essentially identical to that of Brian Moses’ 2017 server build. I used six 8TB WD Red drives, which are designed for 24/7 operation in servers. Total cost: around £3,500.

For software, I started with FreeNAS running from a USB thumb drive (if you’ve got tiny thumbs).

The new server supports IPMI. The manufacturer of the board (Supermicro) provides an IPMI client program that runs on my Windows 10 machine. The killer feature is the KVM (keyboard/video/mouse) connection, which provides a window showing whatever is on the host machine’s VGA adapter, and accepts input. It also supports virtual storage, which is how the installation DVD image is connected.

Much later I discovered that all the features supported over IPMI are also available through a browser GUI at the BMC’s IP address, so the IPMI client is not strictly needed. I didn’t know this at the time of the install, so it changed nothing.

The FreeNAS 11.1 software installed flawlessly and effortlessly allowed me to create NFS and Samba shares. My old NAS was a FreeNAS 11.1 virtual machine on my old server, and ZFS send/recv let me move my major datasets across. The disks of the old VMs were logical volumes managed by Linux LVM on the PowerEdge. I created snapshots of the running volumes (lvcreate --snapshot) and copied them to the new server, where they became zvols, by piping the output of dd into ssh running dd to write the device. This worked flawlessly; the transfer is sketched below.
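
A minimal sketch of one such LV-to-zvol copy, assuming a volume group vg0, a target dataset under p3/vm and a host called newserver (all names here are mine, and the zvol size must match the original volume):

lvcreate --snapshot --size 5G --name vm1-snap /dev/vg0/vm1      # snapshot the running LV
ssh newserver zfs create -V 20G p3/vm/vm1                       # a zvol the same size as the LV
dd if=/dev/vg0/vm1-snap bs=1M | ssh newserver "dd of=/dev/zvol/p3/vm/vm1 bs=1M"
lvremove -y /dev/vg0/vm1-snap                                   # discard the snapshot afterwards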

Along the way I replaced the 100Mbps Ethernet switch near the server with a 1Gbps one, so that the old and new servers could exchange data at that speed, and I did indeed observe transfers going at this speed. I also discovered that the wiring to the servers (in the garage) from the house was perfectly capable of supporting 1Gbps. So if I’d done this £25 upgrade years ago, I could have had 10x the throughput to my file server from the house. Oh well, at least I discovered it now.

The new hardware is hopelessly over-specified when it comes to networking. It has two 10Gb and two 1Gb Ethernet ports, of which I use just one, at 1Gb. I did look at a lower-cost Xeon single-board computer, but decided to follow Brian’s configuration exactly, as it was known to work. If I upgrade to 10Gb in the future, perhaps to add a compute server, at least this hardware will support it.

The big question for me was whether FreeNAS could support the virtual machines I want to run, or whether I needed to create a separate hardware host for them. I hoped for the former, because I didn’t want to shell out any more money for a virtual machine host.

FreeNAS 11.1 supports virtual machines using bhyve. Support is rudimentary: for example, there is no pause, save or restore. You can only specify a VNC console for a UEFI VM, which is a pain because there is then no way to debug a BIOS-booted VM. This constraint created a problem for me because all my running VMs were BIOS/MBR boots. Needless to say, the images ported over didn’t do anything useful.

I followed the instructions by Oded Lazar to move my network infrastructure server (Fedora 22, running BIND for DNS, plus LDAP and OpenSSH) to UEFI, and had it running effortlessly.
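
The gist of such a conversion, as a rough sketch rather than Oded Lazar’s exact recipe (the device name, partition number and Fedora package choices are my assumptions):

gdisk /dev/vda             # gdisk converts the MBR table to GPT on write;
                           # add a ~200MB EF00 (EFI System) partition while there
mkfs.vfat /dev/vda3        # format the new ESP (partition number assumed)
mkdir -p /boot/efi && mount /dev/vda3 /boot/efi
dnf install grub2-efi shim efibootmgr
grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg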

I then tried to move my FreePBX image, and found no obvious way to port it to UEFI. So I rebuilt a UEFI-booted VM using the latest 64-bit FreePBX image. All kinds of things appeared to go wrong: a fresh install showed (on the FreePBX control panel) that it couldn’t talk to the Asterisk server. I installed three different recent releases, and they all showed something similar.

I had no confidence in the new FreePBX installation. I don’t know if this was a problem with FreePBX itself, or some kind of interaction with the bhyve environment; a later install of the same software on a virtual machine under QEMU worked fine. Anyhow, it made the decision for me to ditch FreeNAS as a VM host. So the next question was whether to keep FreeNAS and add a compute server, or to replace FreeNAS. Having spent enough on this project, I decided to replace it.

Because the OS runs from a USB flash drive, it is trivial to pull it and keep it safe. So I installed Ubuntu 18.04 LTS onto a flash drive and used this to host the new machine.

It is easy to find instructions on how to set up ZFS on Ubuntu. The man pages are comprehensive, and I’ve a reasonable amount of experience using it. I don’t need the shiny FreeNAS user interface, because once set up, my configuration of shares is not going to change very much.
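
For what it’s worth, bringing the pool and shares up amounts to very little. A sketch, with an illustrative dataset name (and note that sharesmb relies on Samba’s usershare support being in place):

apt install zfsutils-linux      # ZFS is in Ubuntu 18.04's main archive
zpool import -f p3              # import the pool created under FreeNAS
zfs set sharenfs=on p3/media    # shares are just dataset properties
zfs set sharesmb=on p3/media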

I did install zfs-auto-snapshot, which does exactly what it sounds like.
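
For anyone unfamiliar with it: the package installs cron jobs that maintain a rolling set of frequent, hourly, daily, weekly and monthly snapshots, and individual datasets can opt out via a property (the dataset name below is illustrative):

apt install zfs-auto-snapshot
zfs list -t snapshot | grep zfs-auto-snap        # see what it has taken so far
zfs set com.sun:auto-snapshot=false p3/scratch   # exclude a dataset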

I installed virt-manager and qemu-kvm, and set up bridged networking. The images copied from the old server ran without a hitch.
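
On Ubuntu 18.04 the bridge is netplan’s job. A minimal sketch, assuming the NIC is called eno1 and that the bridge takes its address by DHCP:

apt install qemu-kvm libvirt-daemon-system virt-manager
cat > /etc/netplan/01-br0.yaml <<'EOF'
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: false
  bridges:
    br0:
      interfaces: [eno1]
      dhcp4: true
EOF
netplan apply

Guests attached to br0 in virt-manager then appear as first-class machines on the LAN.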

Ubuntu was running from the flash drive. The next job was to get the server booting and running from ZFS. This took some messing around, but I eventually got it working. See Phillip Heckel’s description.

Summary of the process to get a native boot from ZFS (a command-level sketch follows the list):

1. Using gdisk, repartition the disks so that each has a 1MB partition of type EF02 (“BIOS boot partition”).
2. Create the ZFS pool p3.
3. Create p3/root, not mounted. (p3 is my pool name; you can guess what p1 and p2 were.)
4. Create p3/root/ubuntu with canmount=on and mountpoint=none, and manually mount it at /root. (This was where I had an issue with various instructions: it seems that mountpoint=none is required for the initramfs to work. I didn’t bother to research further.)
5. Create p3/boot, mounted at /boot.
6. Copy everything from the running system’s root to /root, excluding dev, sys, proc and boot, which are mounted --bind.
7. chroot into /root.
8. update-initramfs -u -k all
9. update-grub
10. For each disk in the ZFS array: grub-install /dev/sd<x> --modules="zfs part_gpt" (the --modules parameter is probably unnecessary).
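
For concreteness, here is roughly what those steps look like as commands. This is a sketch under assumptions (six pool disks sda through sdf, partition number 9 for the BIOS boot partition, and rsync for the copy), not a verbatim transcript:

for d in /dev/sd{a..f}; do sgdisk --new=9:0:+1M --typecode=9:EF02 "$d"; done   # step 1
zfs create -o canmount=off -o mountpoint=none p3/root                          # step 3
zfs create -o canmount=on -o mountpoint=none p3/root/ubuntu                    # step 4
mount -t zfs -o zfsutil p3/root/ubuntu /root    # zfsutil permits manually mounting
                                                # a mountpoint=none dataset
zfs create -o mountpoint=legacy p3/boot                                        # step 5
mkdir -p /root/boot && mount -t zfs p3/boot /root/boot
rsync -aHAX --exclude=/dev --exclude=/proc --exclude=/sys --exclude=/boot / /root/  # step 6
cp -a /boot/. /root/boot/
for fs in dev proc sys; do mount --bind /$fs /root/$fs; done
chroot /root /bin/bash                          # steps 7-10 run inside the chroot
update-initramfs -u -k all
update-grub
for d in /dev/sd{a..f}; do grub-install --modules="zfs part_gpt" "$d"; done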

That’s it. If the root file system gets toasted somehow, I can boot Ubuntu off a flash drive and use the ZFS tools to roll back to one of the snapshots.
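
The recovery path, for the record, is something along these lines from the live environment (the snapshot name is illustrative):

apt install zfsutils-linux               # on the live Ubuntu system
zpool import -f -R /mnt p3               # import under an alternate root
zfs list -t snapshot -r p3/root/ubuntu   # pick a snapshot to return to
zfs rollback -r p3/root/ubuntu@zfs-auto-snap_daily-2018-07-01-0000
zpool export p3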