The page was originally written in ~2001, when having a home network was unusual, and so was I. Now with cheap broadband, and even cheaper components made in China, having a home network is nothing to write about.
But that won’t stop me. And I’m still unusual.
The Stephens home has a small network consisting of the following items:
- Cable modem (200 Mb/s down, 8 Mb/s up) connected to the Virgin Media cable service
- Multiple wireless access points
- Wired 1000 Mb/s Ethernet in the house, garage and shed
- Server in the shed (16-thread Xeon, 64 GB memory; 6 × 8 TB WD Red disks on ZFS forming a redundant slow pool, and a 2 TB Samsung Evo Pro forming a non-redundant fast pool) running Ubuntu 18.04 LTS and hosting a number of virtual machines: pfSense firewall, web server, Plex media server and Asterisk (FreePBX) PBX
Applications accessed over HTTP:
- WordPress websites (this one plus a community website)
- Piwigo (photo album)
- Some password-protected directories via Nextcloud
- Various password-protected SVN repositories
Dynamic IP Address Translation
The network sits on a single dynamic IP address granted by ntlworld to the firewall. The firewall is a DHCP client to the outside world, does NAT, and provides DHCP services to the rest of the network. A free dynamic DNS service (ZoneEdit) resolves a hostname to this address. The firewall periodically checks whether the external IP address has changed and updates ZoneEdit when it has, so there can be a short delay between a change of IP address and updated name resolution. In practice, the dynamic IP address assigned by Virgin Media seems very stable.
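The update itself is little more than an authenticated HTTP GET. Here is a minimal sketch of a cron-driven updater; the ZoneEdit endpoint is as I understand their dynamic API, and the credentials, hostname and state-file path are all invented, so check the provider's current documentation before relying on it:

```sh
#!/bin/sh
# Sketch of a dynamic DNS updater. Credentials, hostname and the
# endpoint details are placeholder assumptions -- verify against
# your dynamic DNS provider's current API.
set -eu

DDNS_USER="example-user"        # hypothetical account name
DDNS_PASS="example-pass"        # hypothetical password
DDNS_HOST="home.example.org"    # hostname to keep pointed at us
STATE_FILE="/tmp/ddns-last-ip"  # remembers the last address we pushed

# Build the update URL for a given hostname and address.
build_update_url() {
    host="$1"; ip="$2"
    echo "https://dynamic.zoneedit.com/auth/dynamic.html?host=${host}&dnsto=${ip}"
}

# Only contact the provider when the external address has changed.
update_if_changed() {
    current_ip="$1"
    last_ip=$(cat "$STATE_FILE" 2>/dev/null || echo "")
    if [ "$current_ip" != "$last_ip" ]; then
        url=$(build_update_url "$DDNS_HOST" "$current_ip")
        curl -s -u "${DDNS_USER}:${DDNS_PASS}" "$url"
        echo "$current_ip" > "$STATE_FILE"
    fi
}
```

Called from cron every few minutes with the firewall's current WAN address, this keeps provider traffic to the rare occasions the address actually moves.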
Setting up the Linux Server
Setting up the Linux server was a lot of fun – but not something for those who don’t want to learn some command-line runes.
- 2005: The server started off running Red Hat 7.3 on a 300 MHz Compaq Deskpro SFF (now obsolete), and was later upgraded to Red Hat 9. I then moved the server to a new machine, a Dell Optiplex 170L bought for £70 on eBay, and upgraded to Fedora Core 3. At the same time I restructured the file system so that /backup, /home, /var/mail and /var/www/html are separate filesystems, and changed all filesystems to reiserfs. These live on block devices provided by LVM on a single 200 GB hard disk, with room for spare copies of the root partition. A subsequent upgrade to Fedora Core 4 went smoothly; the only difficulty was the LDAP database getting hosed.
- 2011: The Optiplex was replaced by a nameless dual-core 3 GHz system with 1 GB of RAM, just upgraded from Fedora 10 to Fedora 15, which was a lot of work. There’s no alternative but to rebuild from scratch, then copy config files across from the old system one by one and bring the services up. Two mirrored 1 TB disks contained the website and a backup of our internal documents/images/music. The NAS server in the garage provides an iSCSI target that is mounted and backs up the backup. The system and boot disks are RAID 1 mirrored onto partitions on one of the 1 TB disks. A cron job periodically connects the mirror and then disconnects it, giving me a bootable standby if the system SSD goes down while still allowing me to spin down the hard disks. The backup and html directories are automatically dismounted after 30 seconds of inactivity, and the disks are spun down after 30 minutes of inactivity.
- 2013: The Optiplex was retired and a second-hand IBM 2U server was installed in the garage, where it constantly makes a noise like a jet engine taking off. It has a 5 TB software RAID (the hardware RAID that comes with the server is kludgy, unreliable and slow). A bunch of virtual machines run the various servers and provide a place to experiment. The system boots off LVM over RAID 5.
- 2014: The IBM server was retired and replaced with a Dell PowerEdge 2950. It is just as loud, but the hardware RAID is much better, and so is CPU performance.
- 2018: I replaced the Dell PowerEdge 2950 with a home-built system using new components, the first time I’ve had a brand-new server. It has two two-disk mirrors across 4 × 8 TB disks, plus a spare. Probably overkill for my needs, but at least it should last me forever (yes, I know, 640 KB is enough for anybody :0). More details here: https://chezstephens.org.uk/server-upgrade
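The connect-resync-disconnect mirror trick from the 2011 entry can be driven entirely from cron. A minimal sketch, assuming the array is /dev/md0 and the standby partition is /dev/sdb2 (both names invented); with DRY_RUN=1 (the default) it only prints what it would do:

```sh
#!/bin/sh
# Sketch of the "intermittent mirror" cron job: re-attach the standby
# partition to the RAID 1 array, wait for the resync to finish, then
# detach it again so the disk can spin down. Device names are examples.
set -eu

MD_DEV="/dev/md0"        # hypothetical RAID 1 array
STANDBY="/dev/sdb2"      # hypothetical standby partition
DRY_RUN="${DRY_RUN:-1}"  # default to printing, not doing

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "$@"        # show the command instead of running it
    else
        "$@"
    fi
}

refresh_standby() {
    run mdadm "$MD_DEV" --re-add "$STANDBY"    # reconnect the mirror half
    # Poll until the kernel reports the resync is complete.
    run sh -c "while grep -q resync /proc/mdstat; do sleep 60; done"
    run mdadm "$MD_DEV" --fail "$STANDBY"      # mark it failed...
    run mdadm "$MD_DEV" --remove "$STANDBY"    # ...and detach it again
}
```

Run weekly from cron (with DRY_RUN=0), this leaves a freshly synced, bootable copy of the system disk sitting idle the rest of the time.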
The mail system
Pre 2018: The mail system was configured to collect email from a number of IMAP accounts. IMAPS/SMTPS were used from outside the home network to access the local mail service. An LDAP server provided centralized storage of email addresses for Outlook and Outlook Express clients. One of the annoyances of having an ntlworld dynamic IP address is that some SMTP servers (e.g. AOL’s) refuse to accept mail from such addresses. Postfix can be configured to route SMTP email for specific domains via some other SMTP server, so all my outgoing mail was sent via my ISP (ntlworld).
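The Postfix side of this is a couple of configuration lines. A sketch, assuming the ISP smarthost is smtp.ntlworld.com (the historical ntlworld server name; verify against your ISP's documentation):

```
# /etc/postfix/main.cf -- relay ALL outgoing mail via the ISP smarthost:
relayhost = [smtp.ntlworld.com]

# Or route only troublesome destination domains, leaving the rest direct:
transport_maps = hash:/etc/postfix/transport

# /etc/postfix/transport -- per-domain routing
# (run "postmap /etc/postfix/transport" after editing)
aol.com    smtp:[smtp.ntlworld.com]
```

The square brackets tell Postfix to use the name as-is rather than looking up MX records for it.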
All other external access into the home network was via ssh tunnels. This is much more secure than opening various ports.
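For example, IMAP can stay firewalled off while a remote laptop reaches it through a forwarded port. A sketch of an ~/.ssh/config entry (hostnames and ports are invented):

```
# ~/.ssh/config -- forward a local port to the internal IMAP server
Host home
    HostName home.example.org       # firewall's public (dynamic DNS) name
    User me
    LocalForward 8143 mail.lan:143  # local 8143 -> IMAP inside the LAN
```

After `ssh -N home`, a mail client pointed at localhost:8143 talks to the internal server over the encrypted tunnel, with only the ssh port exposed to the internet.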
Post 2019: I still run a local email server, but it is less important. It is essentially a place to gather email from various external accounts and hold archives. Outgoing email is sent via one of the external accounts, because certain email systems silently discard email from dynamic IP addresses, regardless of valid DKIM and SPF settings.
I was also running email reflector lists for a charity using my ISP as an SMTP server. It worked, but it caused Virgin Media to flag me as a spam sender and send me a periodic letter to that effect. I then discovered that Google G Suite was available free of charge to bona fide charities in the UK, so the charity I was supporting switched their email to G Suite, and my nasty letters went away.
Backup
As someone who’s been working with computer media all my working life (paper tape and cards at university, 8″ floppies, RK05, RL01 and RM03 disk packs, 5.25″ floppies, Travan tape, Iomega disks, and more hard disks than you can shake a stick at), backup is important to me.
Probably the most important thing on this network is our digital photos, of which I keep multiple copies spread across multiple machines. I download the cameras onto the “playroom” computer, which gets backed up periodically to the Linux server. I also upload the photos to my gallery system (the public view of the photos, available at www.chezstephens.org.uk/piwigo) at the end of each month. The Linux server hosting this backs itself up using rsync over ssh to my W2K machine, which has Cygwin plus rsync and ssh installed. The W2K machine backs itself up using SyncBack to the Linux box. The playroom computer also periodically grabs a copy of the gallery from the Linux server over an SMB mount using SyncBack.
The hard part of all of this was avoiding backing up the backups held on the machine doing the backing up; otherwise the backup size would grow forever and exceed the capacity of the hard disks. So each machine has a /backup/<machine-name> directory for each of the other two machines, and these directories are excluded from backups.
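With rsync the exclusion is a single flag. A sketch of the kind of invocation each machine might run (the peer hostname and paths are made up; the script prints the command so it can be inspected before being run for real):

```sh
#!/bin/sh
# Sketch: back up / to a peer's /backup/<our-name> directory, while
# excluding our own /backup tree so backups of backups never nest.
# Peer hostname and layout are examples.
set -eu

ME="$(hostname -s 2>/dev/null || echo thishost)"
PEER="peer.lan"

# Build the rsync command as a printable string so it can be logged
# or reviewed before execution.
backup_cmd() {
    echo "rsync -a --delete --exclude=/backup/ / ${PEER}:/backup/${ME}/"
}

backup_cmd   # print it; pipe to sh (or drop the echo) to actually run
```

Because `--exclude=/backup/` is anchored at the transfer root, each machine ships everything except the copies it holds of its peers, which is exactly the non-nesting property described above.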
2014: Backup is now a Fedora 19 desktop that wakes at 3:00 am every day and uses rdisk to mirror the /backup directory on the file server. The NAS server (a FreeNAS virtual machine running on the production server) also snapshots its datasets daily, so in theory everything is go-backable.
2018: I moved to ZFS production and backup servers. Previously everything had been backed up into a single filesystem. Now the production server frequently snapshots multiple datasets, and the backup server incrementally copies these datasets (and their snapshots) once a day. The backup server is configured to run some essential virtual machines (firewall, web server, email server) if the production server is hosed. See: https://chezstephens.org.uk/how-to-make-a-home-zfs-backup-server/
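The incremental copy boils down to a `zfs send`/`zfs receive` pipeline between matching snapshots. A hedged sketch (pool, dataset, snapshot and host names are all invented); the function composes the pipeline as a string so the example is inspectable:

```sh
#!/bin/sh
# Sketch of one incremental ZFS backup step: send everything between
# the last snapshot the backup server already has and the newest one
# on the production server. All names here are examples.
set -eu

# Compose the send/receive pipeline for a dataset and two snapshots.
# -I sends all intermediate snapshots too; -F rolls the destination
# back to its last common snapshot before receiving.
incr_backup_cmd() {
    dataset="$1"; from_snap="$2"; to_snap="$3"; dest_host="$4"; dest_ds="$5"
    echo "zfs send -I ${dataset}@${from_snap} ${dataset}@${to_snap}" \
         "| ssh ${dest_host} zfs receive -F ${dest_ds}"
}

# Example: push fast/photos from yesterday's snapshot up to today's.
incr_backup_cmd fast/photos daily-2020-05-01 daily-2020-05-02 \
                backup.lan backup/photos
```

Because `-I` carries every intermediate snapshot across, the backup server ends up with the same snapshot history as production, which is what makes the standby able to take over.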
2020: The server now has slow (non-redundant) and fast pools. The datasets on the fast pool are incrementally mirrored (using zrep) to the slow pool and a separate backup server. The datasets on the slow pool that are not mirrors of the fast pool are also mirrored to the backup server. The backup server is woken once a day using Ethernet Wake-on-LAN packets, and the backup is pushed to it.
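The nightly sequence is little more than a wake-up packet, a wait, and the replication push. A sketch of the cron-driven script, assuming the `wakeonlan` tool and zrep's `sync all` mode (the MAC address and hostname are placeholders; with DRY_RUN=1, the default, it only prints the commands):

```sh
#!/bin/sh
# Sketch of the nightly backup push: wake the backup server, wait for
# ssh to answer, run the replication, then let the backup server power
# itself back down. MAC address and hostname are placeholders.
set -eu

BACKUP_MAC="00:11:22:33:44:55"   # backup server's NIC (example)
BACKUP_HOST="backup.lan"         # backup server's name (example)
DRY_RUN="${DRY_RUN:-1}"          # default to printing, not doing

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "$@"                # show the command instead of running it
    else
        "$@"
    fi
}

nightly_push() {
    run wakeonlan "$BACKUP_MAC"  # send the Wake-on-LAN magic packet
    # Poll until the backup server's sshd is up.
    run sh -c "until ssh -o ConnectTimeout=5 $BACKUP_HOST true; do sleep 10; done"
    run zrep sync all            # push all pending snapshots
}
```

Pushing (rather than having the backup server pull) keeps the backup server's credentials off the production machine, so a compromised production server cannot quietly rewrite its own backups' history.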