The motivation for this action was that I had a fully-loaded (6-drive) NAS using RAIDZ2 across the 6 x WD Red 8TB disks. I wanted to optimize performance and noticed that a common thread on the internet is that this is a poor choice for performance. Studying the documentation shows that RAIDZ2 is also a poor choice for flexibility – i.e., you cannot easily take a disk out of such an array and repurpose it.
So I decided to bite the bullet and move my zpool contents to a new mirror-based pool.
My NAS build is fairly recent, so all the data will fit on a single disk. It's also an opportunity
to throw away stuff I don't need to keep.
For the initial ZFS setup, I followed much of the following guide: https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS
This is what I did to move to the new pool:
- Checked my backup was up to date and complete.
- Offlined one of the devices of the RAIDZ2 array. This left the original pool in a degraded, but functional state.
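A minimal sketch of this step, assuming the old pool is called `rpool` and using a hypothetical disk id (substitute your own from `/dev/disk/by-id/`):

```shell
# Identify the device to remove from the RAIDZ2 vdev.
zpool status rpool

# Take one member offline; RAIDZ2 tolerates two failures, so the
# pool stays usable in a DEGRADED state.
zpool offline rpool ata-example-disk-part1

# Confirm the pool is DEGRADED but still ONLINE overall.
zpool status rpool
```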
- Cleared the disk’s partition table with gdisk (the zap option in its expert menu)
- Note, I tried re-using the disk (partition 2) as a new pool. ZFS was too clever for that, and knew it was part of an existing pool (even with -f). Even when I’d overwritten the start of the partition with zeros, zfs still knew. I think it must record the disk’s identity (e.g. its serial number) in the original pool’s metadata.
- The way I bludgeoned zfs into submission was to overwrite the partition table, and physically move the device to a new physical interface (in my case to a USB3/SATA bridge). I don’t know if both of these were necessary.
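The wiping described above can be done non-interactively; this sketch assumes a hypothetical disk id and DESTROYS everything on that disk:

```shell
# Clear the ZFS labels from the old pool partition.
zpool labelclear -f /dev/disk/by-id/ata-example-disk-part1

# Destroy the GPT and MBR partition tables entirely
# (sgdisk ships with gdisk; interactively this is gdisk's x then z).
sgdisk --zap-all /dev/disk/by-id/ata-example-disk
```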
- Created a new pool on the disk. Note, zfs wouldn’t create it on the partition I’d set up, but would on the whole disk. Some of the work in the previous two steps might have been unnecessary.
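A sketch of the pool creation, with a hypothetical pool name and device id; the options follow the Ubuntu Root-on-ZFS guide's general pattern rather than my exact invocation:

```shell
# Create a single-disk pool on the whole device, mounted under /mnt
# for now so it doesn't collide with the running system.
zpool create -o ashift=12 \
    -O compression=lz4 -O atime=off \
    -O mountpoint=/ -R /mnt \
    newpool /dev/disk/by-id/usb-example-bridge
```

`ashift=12` aligns the pool to 4 KiB sectors, which suits these WD Red drives.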
- Copied each dataset I wanted to keep onto the new disk using zfs send | zfs recv. Note that this loses the snapshot history.
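Per dataset, that looks roughly like this (hypothetical dataset names):

```shell
# Snapshot the source dataset, then stream it to the new pool.
zfs snapshot rpool/ROOT/ubuntu@migrate
zfs send rpool/ROOT/ubuntu@migrate | zfs recv newpool/ROOT/ubuntu

# To keep the full snapshot history instead, send a replication
# stream with -R rather than a single snapshot:
# zfs send -R rpool/ROOT/ubuntu@migrate | zfs recv newpool/ROOT/ubuntu
```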
- Set the new root dataset canmount=off, mountpoint=none. If you don’t do this, the boot into the new root will fail, but you can recover from this failure by adjusting the mountpoint from the recovery console and continuing.
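Concretely (hypothetical pool name):

```shell
# The pool's top-level dataset must not try to mount itself at boot;
# the actual root filesystem lives in a child dataset.
zfs set canmount=off newpool
zfs set mountpoint=none newpool
```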
- In the new copied root, use the mount --rbind procedure to provide proc, sys, dev and boot. Chroot to the new root. (see https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS). In the chroot:
- update-initramfs updates the new /boot.
- update-grub to create the new /boot/grub/grub.cfg file.
- I edited this to change the name of the menuentry to “Ubuntu <pool-name>” so that I could be sure I was booting the right pool.
- grub-install on one of the old disks
- The zfs pool on the whole disk means there is nowhere for the grub boot loader on the new pool disk. As the old pool disks were physically present and did have the necessary reserved space, I could use one of them to store the updated boot loader.
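The chroot preparation and boot-loader steps above, sketched with hypothetical device names and the new pool mounted at /mnt:

```shell
# Bind the virtual filesystems into the new root, then enter it.
mount --rbind /dev  /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys  /mnt/sys
chroot /mnt /bin/bash --login

# Inside the chroot: rebuild the initramfs and GRUB config for
# the new pool, then edit the menuentry name in grub.cfg.
update-initramfs -u -k all
update-grub

# Install GRUB to one of the old-pool disks, which still has the
# reserved space before its first partition that GRUB needs.
grub-install /dev/disk/by-id/ata-old-pool-disk
```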
- reboot, select BIOS boot menu, boot off the hard disk with the new grub installation
- This should boot off the new pool.
- Export the old pool.
- You might need to stop services such as samba and nfs that are dependent on the old pool first.
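For example (service names vary by distribution):

```shell
# Stop anything holding files open on the old pool, then export it.
systemctl stop smbd nmbd nfs-kernel-server
zpool export rpool
```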
- Set up mountpoints from the new datasets
- Edit any other dependencies on the new pool name (e.g. virtual machine volumes)
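Re-pointing the copied datasets looks something like this (hypothetical dataset names and paths):

```shell
# Give each migrated dataset the mountpoint its services expect.
zfs set mountpoint=/srv/media newpool/media
zfs set mountpoint=/home      newpool/home

# Re-enable any ZFS-managed shares on the new pool.
zfs set sharenfs=on newpool/media
```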
- Now you are running on the new pool.
- Once you are absolutely sure you have everything you need from the old pool, import it and then destroy it.
- Do a zpool labelclear on each of the partitions to be used in the new new pool.
- Create the new new pool using the disks from the old pool.
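A sketch of these three steps, with hypothetical names; the destroy and labelclear lines irreversibly erase the old pool:

```shell
# Only after verifying everything migrated: bring the old pool back
# and destroy it.
zpool import rpool
zpool destroy rpool

# Clear the ZFS labels on each former member partition
# (repeat for every disk from the old pool).
zpool labelclear -f /dev/disk/by-id/ata-old-disk-1-part1

# Build the new mirror-based pool from pairs of the old disks.
zpool create -o ashift=12 newnewpool \
    mirror ata-old-disk-1 ata-old-disk-2 \
    mirror ata-old-disk-3 ata-old-disk-4
```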
- Repeat the migration steps above (from creating the pool through rebooting into it) to move everything to the new new pool. Except: install the grub bootloader on all the disks used by the new new pool, excluding the one booting the new pool.
- This means you can boot into the new pool if the new new pool is broken.
- Export the new pool.
- Keep the disk for a while in case you left anything valuable behind on the move to the new new pool.
- Eventually, when you are happy that nothing was left behind, import, destroy and labelclear the disk/partition used for the new pool, then add that disk/partition to the new new pool. Also update the boot loader on the disk that was booting the new pool.
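The final step might look like this, assuming hypothetical names and that the freed disk becomes the second half of an existing single-disk vdev:

```shell
# Retire the interim single-disk pool.
zpool import newpool
zpool destroy newpool
zpool labelclear -f /dev/disk/by-id/usb-example-bridge-part1

# Attach the freed disk to an existing device in the mirror pool,
# turning that vdev into (or widening) a mirror.
zpool attach newnewpool ata-old-disk-1 ata-freed-disk

# Refresh the boot loader on the disk that was booting the new pool.
grub-install /dev/disk/by-id/ata-freed-disk
```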