How to set up a disposable VM for running the ZFS test suite on bleeding-edge kernels
Why This Matters
OpenZFS supports a wide range of Linux kernels, but regressions can slip through on newer ones. Arch Linux ships the latest stable kernels (6.18+ at the time of writing), making it an ideal platform for catching issues before they hit other distributions.
The ZFS test suite is the project’s primary quality gate — it exercises thousands of scenarios across pool creation, send/receive, snapshots, encryption, scrub, and more. Running it on your kernel version and reporting results is one of the most valuable contributions you can make, even without writing any code.
Why a VM, Not Docker?
This is the key architectural decision. ZFS is a kernel module — the test suite needs to:
- Load and unload the `spl.ko` and `zfs.ko` kernel modules
- Create and destroy loopback block devices for test zpools
- Exercise kernel-level filesystem operations (mount, unmount, I/O)
- Potentially crash the kernel if a bug is triggered
Docker containers share the host kernel. If you load ZFS modules inside a container, they affect your entire host system. A crashing test could take down your workstation. With a QEMU/KVM virtual machine, you get a fully isolated kernel — crashes stay inside the VM, and you can just reboot it.
```
┌─────────────────────────────────────────────────┐
│ HOST (your workstation)                         │
│ Arch Linux · Kernel 6.18.8 · Your ZFS pools     │
│                                                 │
│  ┌───────────────────────────────────────────┐  │
│  │ QEMU/KVM VM                               │  │
│  │ Arch Linux · Kernel 6.18.7                │  │
│  │                                           │  │
│  │ ┌─────────────┐  ┌───────────────────┐    │  │
│  │ │ spl.ko      │  │ ZFS Test Suite    │    │  │
│  │ │ zfs.ko      │  │ (file-backed      │    │  │
│  │ │ (from src)  │  │ loopback vdevs)   │    │  │
│  │ └─────────────┘  └───────────────────┘    │  │
│  │                                           │  │
│  │ If something crashes → only VM affected   │  │
│  └──────────────────────────────────┬────────┘  │
│                       SSH :2222 ←───┘           │
└─────────────────────────────────────────────────┘
```
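You can see the kernel-sharing point for yourself. A container has no kernel of its own, so it reports the host's version; this sketch assumes Docker and the `alpine` image are available for the second part, and skips it otherwise:

```shell
# The host's kernel version
uname -r

# If Docker is available, a container reports the exact same kernel version,
# because containers share the host kernel -- there is no isolation for modules
if command -v docker >/dev/null 2>&1; then
  docker run --rm alpine uname -r || echo "docker present but could not run a container"
fi
```

A VM, by contrast, boots its own kernel, which is exactly what the test suite needs.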
What Is the Arch Linux Cloud Image?
We use the official Arch Linux cloud image — a minimal, pre-built qcow2 disk image maintained by the Arch Linux project. It’s designed for cloud/VM environments and includes:
- A minimal Arch Linux installation (no GUI, no bloat)
- cloud-init support for automated provisioning (user creation, SSH keys, hostname)
- A growable root filesystem (we resize it to 40G)
- systemd-networkd for automatic DHCP networking
This is NOT the “archzfs” project (archzfs.com provides prebuilt ZFS packages). We named our VM hostname “archzfs” for convenience, but we build ZFS entirely from source.
The cloud-init seed image is a tiny ISO that tells cloud-init how to configure the VM on first boot — what user to create, what password to set, what hostname to use. On a real cloud provider, this comes from the metadata service; for local QEMU, we create it manually.
Step-by-Step Setup
Prerequisites (Host)
```shell
# Install QEMU and tools
sudo pacman -S qemu-full cdrtools

# Optional: virt-manager for GUI management
sudo pacman -S virt-manager libvirt dnsmasq
sudo systemctl enable --now libvirtd
sudo usermod -aG libvirt $USER
```
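Before booting anything, it's worth confirming the host actually supports hardware virtualization, since `-enable-kvm` depends on it. A minimal check, reading only `/proc/cpuinfo` and `/dev/kvm`:

```shell
# Check for hardware virtualization support before using -enable-kvm
if grep -qE 'vmx|svm' /proc/cpuinfo; then
  echo "CPU supports KVM (vmx/svm flag present)"
else
  echo "No vmx/svm flag found: -enable-kvm will not work" >&2
fi

# /dev/kvm must exist and be accessible to your user
ls -l /dev/kvm 2>/dev/null || echo "/dev/kvm not present (kvm module loaded?)" >&2
```

If the flag is present but `/dev/kvm` is missing, virtualization is likely disabled in the BIOS/UEFI firmware.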
1. Download and Prepare the Cloud Image
```shell
mkdir ~/zfs-testvm && cd ~/zfs-testvm

# Download the latest Arch Linux cloud image
wget https://geo.mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg.qcow2

# Resize to 40G (ZFS tests need space for file-backed vdevs)
qemu-img resize Arch-Linux-x86_64-cloudimg.qcow2 40G
```
2. Create the Cloud-Init Seed
```shell
mkdir -p /tmp/seed

# User configuration
cat > /tmp/seed/user-data << 'EOF'
#cloud-config
hostname: archzfs
users:
  - name: arch
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    lock_passwd: false
    plain_text_passwd: test123
ssh_pwauth: true
EOF

# Instance metadata
cat > /tmp/seed/meta-data << 'EOF'
instance-id: archzfs-001
local-hostname: archzfs
EOF

# Build the seed ISO
mkisofs -output seed.img -volid cidata -joliet -rock /tmp/seed/
```
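One pitfall worth guarding against: cloud-init silently ignores user-data whose first line is not exactly `#cloud-config`. A quick sanity check before building the ISO (a stub file is recreated here so the snippet stands alone):

```shell
# Recreate a stub user-data so this check stands alone
# (in practice this is the file written in the step above)
mkdir -p /tmp/seed
printf '#cloud-config\nhostname: archzfs\n' > /tmp/seed/user-data

# cloud-init ignores user-data that does not start with the #cloud-config marker
if head -n1 /tmp/seed/user-data | grep -qx '#cloud-config'; then
  echo "user-data header OK"
else
  echo "user-data is missing the #cloud-config header" >&2
fi
```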
3. Boot the VM
```shell
qemu-system-x86_64 \
  -enable-kvm \
  -m 8G \
  -smp 8 \
  -drive file=Arch-Linux-x86_64-cloudimg.qcow2,if=virtio \
  -drive file=seed.img,if=virtio,format=raw \
  -nic user,hostfwd=tcp::2222-:22 \
  -nographic
```
What each flag does:
| Flag | Purpose |
|---|---|
| `-enable-kvm` | Use hardware virtualization (huge performance gain) |
| `-m 8G` | 8 GB RAM (the ZFS ARC cache benefits from more) |
| `-smp 8` | 8 virtual CPUs (adjust to your host) |
| `-drive ...qcow2,if=virtio` | Boot disk with virtio for best I/O |
| `-drive ...seed.img` | Cloud-init configuration |
| `-nic user,hostfwd=...` | User-mode networking with SSH port forward |
| `-nographic` | Serial console (no GUI window needed) |
The login prompt appears on the serial console. Credentials: `arch` / `test123`.
You can also SSH from another terminal:
```shell
ssh -p 2222 arch@localhost
```
4. Install Build Dependencies (Inside VM)
```shell
sudo pacman -Syu --noconfirm \
  base-devel git autoconf automake libtool python \
  linux-headers libelf libaio openssl zlib \
  ksh bc cpio fio inetutils sysstat jq pax rsync \
  nfs-utils lsscsi xfsprogs parted perf
```
5. Clone and Build ZFS
```shell
# Clone YOUR fork (replace with your GitHub username)
git clone https://github.com/YOUR_USERNAME/zfs.git
cd zfs

# Build everything
./autogen.sh
./configure --enable-debug
make -j$(nproc)
```
The build compiles:
- Kernel modules (`spl.ko`, `zfs.ko`) against the running kernel's headers
- Userspace tools (`zpool`, `zfs`, `zdb`, etc.)
- Test binaries and test scripts
Build time: ~5-10 minutes with 8 vCPUs.
Note: You’ll see many objtool warnings about spl_panic() and luaD_throw() missing __noreturn. These are known issues on newer kernels and don’t affect functionality.
6. Load Modules and Run Tests
```shell
# Load the ZFS kernel modules
sudo scripts/zfs.sh

# Verify modules are loaded
lsmod | grep zfs

# Run the FULL test suite (4-8 hours)
scripts/zfs-tests.sh -v 2>&1 | tee /tmp/zts-full.txt

# Or run a single test (for quick validation)
scripts/zfs-tests.sh -v \
  -t /home/arch/zfs/tests/zfs-tests/tests/functional/cli_root/zpool_create/zpool_create_001_pos.ksh
```
Important notes on zfs-tests.sh:
- Do NOT run as root — the script uses sudo internally
- The `-t` flag requires absolute paths to individual `.ksh` test files
- The `net` and `pamtester` utilities may be missing — only the NFS/PAM tests will skip
- The “Permission denied” warning at startup is harmless
7. Extract and Analyze Results
From your host machine:
```shell
# Copy the summary log
scp -P 2222 arch@localhost:/tmp/zts-full.txt ~/zts-full.txt

# Copy detailed per-test logs
scp -r -P 2222 arch@localhost:/var/tmp/test_results/ ~/zfs-test-results/
```
Understanding the Results
The test results summary looks like:
```
Results Summary
PASS   2847
FAIL     12
SKIP     43

Running Time:   05:23:17
```
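A quick way to turn those counts into a pass rate, sketched with awk. The file path and the numbers below are stand-ins; in practice you'd point this at the summary lines in your actual log:

```shell
# Stand-in summary counts; in practice, grab these lines from /tmp/zts-full.txt
cat > /tmp/zts-summary.txt << 'EOF'
PASS 2847
FAIL 12
SKIP 43
EOF

# Pass rate over executed tests (SKIPs excluded from the denominator)
awk '/^PASS/ {p=$2} /^FAIL/ {f=$2} /^SKIP/ {s=$2}
     END { printf "total=%d pass_rate=%.1f%%\n", p+f+s, 100 * p / (p + f) }' /tmp/zts-summary.txt
# → total=2902 pass_rate=99.6%
```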
What to look for:
- Compare against known failures — check the ZFS Test Suite Failures wiki
- Identify NEW failures — any FAIL not on the known list for your kernel version
- Check the detailed logs — under `/var/tmp/test_results/<timestamp>/`, each test has stdout/stderr output
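To isolate new failures, it helps to extract just the FAIL lines and diff them against a known-failures list you maintain. A sketch using hypothetical per-test result lines (the real output format may differ slightly between releases):

```shell
# Hypothetical excerpt of per-test result lines; the real format may vary
cat > /tmp/zts-lines.txt << 'EOF'
Test: .../functional/cli_root/zpool_create/zpool_create_001_pos (run as root) [00:01] [PASS]
Test: .../functional/rsend/send_encrypted_files (run as root) [00:12] [FAIL]
Test: .../functional/nfs/nfs_001_pos (run as root) [00:00] [SKIP]
EOF

# Pull out only the failures, ready to diff against a known-failures list
grep '\[FAIL\]' /tmp/zts-lines.txt
```

Feeding the output through `sort` and `diff` against the same extraction from a previous run makes regressions stand out immediately.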
Reporting Results
If you find new failures, file a GitHub issue at openzfs/zfs with:
```
Title: Test failure: <test_name> on Linux 6.18.7 (Arch Linux)

**Environment:**
- OS: Arch Linux (cloud image)
- Kernel: 6.18.7-arch1-1
- ZFS: built from master (commit <hash>)
- VM: QEMU/KVM, 8 vCPU, 8GB RAM

**Failed test:**
<test name and path>

**Test output:**
<paste relevant log output>

**Expected behavior:**
Test should PASS (passes on kernel X.Y.Z / other distro)
```
Tips and Tricks
Snapshot the VM after setup to avoid repeating the build:
```shell
# On host, after VM is set up and ZFS is built
qemu-img snapshot -c "zfs-built" Arch-Linux-x86_64-cloudimg.qcow2

# Restore later
qemu-img snapshot -a "zfs-built" Arch-Linux-x86_64-cloudimg.qcow2
```
Run a subset of tests by test group:
```shell
# List all zpool test scripts
for t in /home/arch/zfs/tests/zfs-tests/tests/functional/cli_root/zpool_*/*.ksh; do
  echo "$t"
done

# Find tests matching a pattern
find /home/arch/zfs/tests/zfs-tests/tests/functional -name "*.ksh" | grep snapshot | head -5
```
Increase disk space if tests fail with ENOSPC:
```shell
# On host (VM must be stopped)
qemu-img resize Arch-Linux-x86_64-cloudimg.qcow2 +20G

# Inside VM after reboot
sudo growpart /dev/vda 3   # or whichever partition
sudo resize2fs /dev/vda3
```
Suppress floppy drive errors (the harmless I/O error, dev fd0 messages):
```shell
# Add to the QEMU command line:
-fda none
```
This guide was written while setting up an OpenZFS test environment for kernel 6.18.7 on Arch Linux. The same approach works for any Linux distribution that provides cloud images — just swap the base image and package manager commands.
Setup Flow

1. **Get Cloud Image.** Download the official Arch cloud image; resize the qcow2 to 40G with `qemu-img resize`.
2. **Create Cloud-Init.** Write the `user-data` + `meta-data` YAML; build the ISO seed with `mkisofs`.
3. **Boot VM.** `qemu-system-x86_64 -enable-kvm -m 8G -smp 8` with an SSH forward on port 2222.
4. **Install Deps.** `pacman -S base-devel git ksh bc fio linux-headers` and the remaining test dependencies.
5. **Build ZFS.** Clone your fork, then `./autogen.sh`, `./configure --enable-debug`, `make -j8`.
6. **Load & Test.** `scripts/zfs.sh` loads the modules; `zfs-tests.sh -v` runs the suite (4-8 h).
7. **Extract Results.** SCP the logs to the host, compare against known failures, and report regressions on GitHub.