
Commit 5c08c20 (1 parent: a6ac329)

installation/guides/zfs.md: new root-on-ZFS guide

3 files changed: +186 −0 lines

src/SUMMARY.md (1 addition, 0 deletions)

```diff
@@ -13,6 +13,7 @@
 - [Installation via chroot
 (x86/x86_64/aarch64)](./installation/guides/chroot.md)
 - [Full Disk Encryption](./installation/guides/fde.md)
+- [Root on ZFS](./installation/guides/zfs.md)
 - [ARM Devices](./installation/guides/arm-devices/index.md)
 - [Supported Platforms](./installation/guides/arm-devices/platforms.md)
 - [musl](./installation/musl.md)
```

src/installation/guides/index.md (1 addition, 0 deletions)

```diff
@@ -6,4 +6,5 @@ This section contains guides for more specific or complex use-cases.
 
 - [Installing Void via chroot (x86 or x86_64)](./chroot.md)
 - [Installing Void with Full Disk Encryption](./fde.md)
+- [Installing Void on a ZFS Root](./zfs.md)
 - [ARM Devices](./arm-devices/index.md)
```

src/installation/guides/zfs.md (new file, 184 additions, 0 deletions)
# Installing Void on a ZFS Root

Because the Void installer does not support ZFS, it is necessary to install via chroot. Aside from a few caveats regarding bootloader and initramfs support, installing Void on a ZFS root filesystem is not significantly different from any other advanced installation. [ZFSBootMenu](https://zfsbootmenu.org) is a bootloader designed from the ground up to support booting Linux distributions directly from a ZFS pool. However, it is also possible to use traditional bootloaders with a ZFS root.

## ZFSBootMenu

Although it will boot (and can be run atop) a wide variety of distributions, ZFSBootMenu officially considers Void a first-class distribution. ZFSBootMenu supports native ZFS encryption, offers a convenient recovery environment that can be used to clone prior snapshots or perform advanced manipulation in a pre-boot environment, and will boot from any pool that is importable by modern ZFS drivers. The [ZFSBootMenu wiki](https://github.com/zbm-dev/zfsbootmenu/wiki) offers, among other content, several step-by-step guides for installing a Void system from scratch. The [UEFI guide](https://github.com/zbm-dev/zfsbootmenu/wiki/Void-Linux---Single-disk-UEFI) describes the procedure for bootstrapping a Void system on modern hardware. For legacy BIOS systems, the [syslinux guide](https://github.com/zbm-dev/zfsbootmenu/wiki/Void-Linux----Single-disk-syslinux-MBR) provides comparable instructions.

## Traditional bootloaders

For those who wish to forego ZFSBootMenu, it is possible to bootstrap a Void system with another bootloader. To avoid unnecessary complexity, systems that use bootloaders other than ZFSBootMenu should plan to use a separate `/boot` located on an ext4 or XFS filesystem.

### Installation media

Installing Void to a ZFS root requires an installation medium with ZFS drivers. It is possible to build a custom image from the official [void-mklive](https://github.com/void-linux/void-mklive) repository by passing the command-line option `-p zfs` to the `mklive.sh` script. However, for `x86_64` systems, it may be more convenient to fetch a pre-built [hrmpf](https://github.com/leahneukirchen/hrmpf/releases) image. These images, maintained by a Void team member, are extensions of the standard Void live images that include pre-compiled ZFS modules in addition to other useful tools.
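The build-it-yourself route can be sketched as a short command sequence. The repository URL and the `-p zfs` option come from the text above; the `make` step and the need to run `mklive.sh` as root are assumptions based on the repository's usual workflow:

```
$ git clone https://github.com/void-linux/void-mklive.git
$ cd void-mklive
$ make                 # build the mklive tooling (assumed prerequisite)
# ./mklive.sh -p zfs   # as root: produce a live image that includes the zfs package
```

Write the resulting image to a USB drive as you would any other Void live image.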
### Partition disks

After booting a live image with ZFS support, [partition your disks](../live-images/partitions.md). The considerations in the partitioning guide apply to ZFS installations as well, except that:

- The boot partition should be considered necessary unless you intend to use `gummiboot`, which expects that your EFI system partition will be mounted at `/boot`. (This alternative configuration will not be discussed here.)
- Aside from any EFI system partition, GRUB BIOS boot partition, swap or boot partitions, the remainder of the disk should typically be a single partition with type code `BF00` that will be dedicated to a single ZFS pool. There is no benefit to creating separate ZFS pools on a single disk.
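As an illustration of the second point, one possible GPT layout for a UEFI system could be created with sgdisk(8); the disk name `/dev/sda` and the partition sizes are placeholders, not recommendations:

```
# sgdisk -n1:1M:+512M -t1:EF00 /dev/sda    # EFI system partition
# sgdisk -n2:0:+1G    -t2:8300 /dev/sda    # ext4 or XFS /boot
# sgdisk -n3:0:0      -t3:BF00 /dev/sda    # rest of the disk: ZFS pool
```

A swap partition, if desired, would be added the same way before the final `BF00` partition.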
As needed, format the EFI system partition using [mkfs.vfat(8)](https://man.voidlinux.org/mkfs.vfat.8) and the boot partition using [mke2fs(8)](https://man.voidlinux.org/mke2fs.8) or [mkfs.xfs(8)](https://man.voidlinux.org/mkfs.xfs.8). Initialize any swap space using [mkswap(8)](https://man.voidlinux.org/mkswap.8).
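For example, with the ESP on `/dev/sda1` and the boot partition on `/dev/sda2` (device names are illustrative, chosen to match the mount examples later in this guide):

```
# mkfs.vfat -F 32 /dev/sda1
# mke2fs -t ext4 /dev/sda2
```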
> It is possible to put Linux swap space on a ZFS zvol, although there may be a risk of deadlocking the kernel when under high memory pressure. This guide takes no position on the matter of swap space on a zvol. However, if you wish to use suspend-to-disk (hibernation), note that the kernel is not capable of resuming from memory images stored on a zvol. You will need a dedicated swap partition to use hibernation. Apart from this caveat, there are no special considerations required to resume a suspended image when using a ZFS root.
### Create a ZFS pool

Create a ZFS pool on the partition created for it using [zpool(8)](https://man.voidlinux.org/zpool.8). For example, to create a pool on `/dev/disk/by-id/wwn-0x5000c500deadbeef-part3`:

```
# zpool create -f -o ashift=12 \
    -O compression=lz4 \
    -O acltype=posixacl \
    -O xattr=sa \
    -O relatime=on \
    -o autotrim=on \
    -m none zroot /dev/disk/by-id/wwn-0x5000c500deadbeef-part3
```

Adjust the pool (`-o`) and filesystem (`-O`) options as desired, and replace the partition identifier `wwn-0x5000c500deadbeef-part3` with that of the actual partition to be used.
> When adding disks or partitions to ZFS pools, it is generally advisable to refer to them by the symbolic links created in `/dev/disk/by-id` or (on UEFI systems) `/dev/disk/by-partuuid` so that ZFS will identify the right partitions even if disk naming should change at some point. Using traditional device nodes like `/dev/sda3` may cause intermittent import failures.

Next, export and re-import the pool with a temporary, alternate root path:

```
# zpool export zroot
# zpool import -N -R /mnt zroot
```
### Create initial filesystems

The filesystem layout on your ZFS pool is flexible. However, it is customary to put operating system root filesystems ("boot environments") under a `ROOT` parent:

```
# zfs create -o mountpoint=none zroot/ROOT
# zfs create -o mountpoint=/ -o canmount=noauto zroot/ROOT/void
```

Setting `canmount=noauto` on filesystems with `mountpoint=/` is useful because it permits the creation of multiple boot environments (which may be clones of a common Void installation or contain completely separate distributions) without fear that ZFS auto-mounting will attempt to mount one over another.
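For instance, a second boot environment can be cloned from a snapshot of the first without any risk of it being auto-mounted over the running root; the snapshot and clone names here are illustrative:

```
# zfs snapshot zroot/ROOT/void@before-upgrade
# zfs clone -o mountpoint=/ -o canmount=noauto \
    zroot/ROOT/void@before-upgrade zroot/ROOT/void-alt
```

Because both filesystems carry `canmount=noauto`, each must be mounted explicitly, and neither will be touched by `zfs mount -a`.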
To separate user data from the operating system, create a filesystem to store home directories:

```
# zfs create -o mountpoint=/home zroot/home
```

Other filesystems may be created as desired.
### Mount the ZFS hierarchy

All ZFS filesystems should be mounted under the `/mnt` alternate root established by the earlier re-import. Mount the manual-only root filesystem before allowing ZFS to automatically mount everything else:

```
# zfs mount zroot/ROOT/void
# zfs mount -a
```

At this point, the entire ZFS hierarchy should be mounted and ready for installation. To improve boot-time import speed, it is useful to record the current pool configuration in a cache file that Void will use to avoid walking the entire device hierarchy to identify importable pools:

```
# mkdir -p /mnt/etc/zfs
# zpool set cachefile=/mnt/etc/zfs/zpool.cache zroot
```
Mount non-ZFS filesystems at the appropriate places. For example, if `/dev/sda2` holds an ext4 filesystem that should be mounted at `/boot` and `/dev/sda1` is the EFI system partition:

```
# mkdir -p /mnt/boot
# mount /dev/sda2 /mnt/boot
# mkdir -p /mnt/boot/efi
# mount /dev/sda1 /mnt/boot/efi
```
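When you later populate the new system's `/etc/fstab` (as described in the chroot guide), these non-ZFS filesystems will need entries; a sketch for the example devices above:

```
/dev/sda2  /boot      ext4  defaults  0 2
/dev/sda1  /boot/efi  vfat  defaults  0 2
```

ZFS filesystems are mounted by ZFS itself and generally do not need `/etc/fstab` entries.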
### Installation

At this point, ordinary installation can proceed from the ["Base Installation" section](https://docs.voidlinux.org/installation/guides/chroot.html#base-installation) of the standard chroot installation guide. However, before following the ["Finalization" instructions](https://docs.voidlinux.org/installation/guides/chroot.html#finalization), make sure that the `zfs` package has been installed and `dracut` is configured to identify a ZFS root filesystem:

```
(chroot) # mkdir -p /etc/dracut.conf.d
(chroot) # cat > /etc/dracut.conf.d/zol.conf <<EOF
nofsck="yes"
add_dracutmodules+=" zfs "
omit_dracutmodules+=" btrfs resume "
EOF
(chroot) # xbps-install zfs
```

Finally, follow the "Finalization" instructions and reboot into your new system.
