Testing OpenZFS on Arch Linux with QEMU/KVM: A Contributor’s Guide

How to set up a disposable VM for running the ZFS test suite on bleeding-edge kernels


Why This Matters

OpenZFS supports a wide range of Linux kernels, but regressions can slip through on newer ones. Arch Linux ships the latest stable kernels (6.18+ at the time of writing), making it an ideal platform for catching issues before they hit other distributions.

The ZFS test suite is the project’s primary quality gate — it exercises thousands of scenarios across pool creation, send/receive, snapshots, encryption, scrub, and more. Running it on your kernel version and reporting results is one of the most valuable contributions you can make, even without writing any code.

Why a VM, Not Docker?

This is the key architectural decision. ZFS is a kernel module — the test suite needs to:

  • Load and unload spl.ko and zfs.ko kernel modules
  • Create and destroy loopback block devices for test zpools
  • Exercise kernel-level filesystem operations (mount, unmount, I/O)
  • Potentially crash the kernel if a bug is triggered

Docker containers share the host kernel. If you load ZFS modules inside a container, they affect your entire host system. A crashing test could take down your workstation. With a QEMU/KVM virtual machine, you get a fully isolated kernel — crashes stay inside the VM, and you can just reboot it.

┌───────────────────────────────────────────────────┐
│ HOST (your workstation)                           │
│ Arch Linux · Kernel 6.18.8 · Your ZFS pools       │
│                                                   │
│  ┌─────────────────────────────────────────┐      │
│  │ QEMU/KVM VM                             │      │
│  │ Arch Linux · Kernel 6.18.7              │      │
│  │                                         │      │
│  │ ┌────────────┐ ┌─────────────────┐      │      │
│  │ │ spl.ko     │ │ ZFS Test Suite  │      │      │
│  │ │ zfs.ko     │ │ (file-backed    │      │      │
│  │ │ (from src) │ │ loopback vdevs) │      │      │
│  │ └────────────┘ └─────────────────┘      │      │
│  │                                         │      │
│  │ If something crashes → only VM affected │      │
│  └──────────────────────────────────┬──────┘      │
│                          SSH :2222 ←┘             │
└───────────────────────────────────────────────────┘

What Is the Arch Linux Cloud Image?

We use the official Arch Linux cloud image — a minimal, pre-built qcow2 disk image maintained by the Arch Linux project. It’s designed for cloud/VM environments and includes:

  • A minimal Arch Linux installation (no GUI, no bloat)
  • cloud-init support for automated provisioning (user creation, SSH keys, hostname)
  • A growable root filesystem (we resize it to 40G)
  • systemd-networkd for automatic DHCP networking

This is NOT the “archzfs” project (archzfs.com provides prebuilt ZFS packages). We named our VM hostname “archzfs” for convenience, but we build ZFS entirely from source.

The cloud-init seed image is a tiny ISO that tells cloud-init how to configure the VM on first boot — what user to create, what password to set, what hostname to use. On a real cloud provider, this comes from the metadata service; for local QEMU, we create it manually.

Step-by-Step Setup

Prerequisites (Host)

# Install QEMU and tools
sudo pacman -S qemu-full cdrtools
# Optional: virt-manager for GUI management
sudo pacman -S virt-manager libvirt dnsmasq
sudo systemctl enable --now libvirtd
sudo usermod -aG libvirt $USER
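Before booting anything, it is worth confirming that KVM acceleration is actually usable — without /dev/kvm, QEMU silently falls back to slow TCG emulation. A minimal check (the messages are just illustrative):

```shell
#!/bin/sh
# Verify KVM is usable before launching the VM.
if [ -e /dev/kvm ]; then
    echo "KVM: available"
else
    echo "KVM: not available (check BIOS/UEFI virtualization settings)"
fi
# Count logical CPUs advertising VT-x (vmx) or AMD-V (svm); 0 means no hardware virt
grep -cE 'vmx|svm' /proc/cpuinfo || true
```

If /dev/kvm exists but is not accessible, adding your user to the kvm group usually resolves it.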

1. Download and Prepare the Cloud Image

mkdir ~/zfs-testvm && cd ~/zfs-testvm
# Download the latest Arch Linux cloud image
wget https://geo.mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg.qcow2
# Resize to 40G (ZFS tests need space for file-backed vdevs)
qemu-img resize Arch-Linux-x86_64-cloudimg.qcow2 40G

2. Create the Cloud-Init Seed

mkdir -p /tmp/seed
# User configuration
cat > /tmp/seed/user-data << 'EOF'
#cloud-config
hostname: archzfs
users:
  - name: arch
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    lock_passwd: false
    plain_text_passwd: test123
ssh_pwauth: true
EOF
# Instance metadata
cat > /tmp/seed/meta-data << 'EOF'
instance-id: archzfs-001
local-hostname: archzfs
EOF
# Build the seed ISO
mkisofs -output seed.img -volid cidata -joliet -rock /tmp/seed/
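One cloud-init gotcha worth guarding against: user-data is silently ignored unless its very first line is exactly #cloud-config. A cheap check before (re)building the seed ISO:

```shell
#!/bin/sh
# Guard against a silently ignored seed: the first line of user-data
# must be exactly '#cloud-config'.
if head -n1 /tmp/seed/user-data 2>/dev/null | grep -qx '#cloud-config'; then
    echo "user-data header OK"
else
    echo "user-data header missing or wrong" >&2
fi
```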

3. Boot the VM

qemu-system-x86_64 \
-enable-kvm \
-m 8G \
-smp 8 \
-drive file=Arch-Linux-x86_64-cloudimg.qcow2,if=virtio \
-drive file=seed.img,if=virtio,format=raw \
-nic user,hostfwd=tcp::2222-:22 \
-nographic

What each flag does:

  • -enable-kvm: use hardware virtualization (huge performance gain)
  • -m 8G: 8 GB RAM (the ZFS ARC cache benefits from more)
  • -smp 8: 8 virtual CPUs (adjust to your host)
  • -drive ...qcow2,if=virtio: boot disk, with virtio for best I/O
  • -drive ...seed.img: cloud-init configuration
  • -nic user,hostfwd=...: user-mode networking with an SSH port forward
  • -nographic: serial console (no GUI window needed)

A login prompt appears on the serial console. Credentials: arch / test123.

You can also SSH from another terminal:

ssh -p 2222 arch@localhost
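To skip retyping the port and user, you can add a host alias to ~/.ssh/config (the alias name zfsvm is arbitrary):

```
Host zfsvm
    HostName localhost
    Port 2222
    User arch
    # The VM is disposable and its host key changes on every rebuild:
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
```

After that, ssh zfsvm and scp zfsvm:... just work.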

4. Install Build Dependencies (Inside VM)

sudo pacman -Syu --noconfirm \
base-devel git autoconf automake libtool python \
linux-headers libelf libaio openssl zlib \
ksh bc cpio fio inetutils sysstat jq pax rsync \
nfs-utils lsscsi xfsprogs parted perf

5. Clone and Build ZFS

# Clone YOUR fork (replace with your GitHub username)
git clone https://github.com/YOUR_USERNAME/zfs.git
cd zfs
# Build everything
./autogen.sh
./configure --enable-debug
make -j$(nproc)

The build compiles:

  • Kernel modules (spl.ko, zfs.ko) against the running kernel headers
  • Userspace tools (zpool, zfs, zdb, etc.)
  • Test binaries and test scripts

Build time: ~5-10 minutes with 8 vCPUs.

Note: You’ll see many objtool warnings about spl_panic() and luaD_throw() missing __noreturn. These are known issues on newer kernels and don’t affect functionality.

6. Load Modules and Run Tests

# Load the ZFS kernel modules
sudo scripts/zfs.sh
# Verify modules are loaded
lsmod | grep zfs
# Run the FULL test suite (4-8 hours)
scripts/zfs-tests.sh -v 2>&1 | tee /tmp/zts-full.txt
# Or run a single test (for quick validation)
scripts/zfs-tests.sh -v \
-t /home/arch/zfs/tests/zfs-tests/tests/functional/cli_root/zpool_create/zpool_create_001_pos.ksh

Important notes on zfs-tests.sh:

  • Do NOT run as root — the script uses sudo internally
  • The -t flag requires absolute paths to individual .ksh test files
  • Missing utilities net and pamtester are okay — only NFS/PAM tests will skip
  • The “Permission denied” warning at startup is harmless

7. Extract and Analyze Results

From your host machine:

# Copy the summary log
scp -P 2222 arch@localhost:/tmp/zts-full.txt ~/zts-full.txt
# Copy detailed per-test logs
scp -r -P 2222 arch@localhost:/var/tmp/test_results/ ~/zfs-test-results/

Understanding the Results

The test results summary looks like:

Results Summary
PASS 2847
FAIL 12
SKIP 43
Running Time: 05:23:17

What to look for:

  1. Compare against known failures — check the ZFS Test Suite Failures wiki
  2. Identify NEW failures — any FAIL not on the known list for your kernel version
  3. Check the detailed logs — in /var/tmp/test_results/<timestamp>/ each test has stdout/stderr output
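To pull just the failures out of the big summary log, grep for the status tag. This assumes the per-test lines end with [PASS]/[FAIL]/[SKIP] markers, as recent zfs-tests.sh output does; the heredoc below is demo data standing in for /tmp/zts-full.txt (paths shortened):

```shell
#!/bin/sh
# Demo log standing in for /tmp/zts-full.txt.
cat > /tmp/zts-demo.txt << 'EOF'
Test: functional/cli_root/zpool_create/zpool_create_001_pos [00:02] [PASS]
Test: functional/snapshot/snapshot_017_pos [00:41] [FAIL]
Test: functional/pam/setup [00:00] [SKIP]
EOF
# Print only the failing tests:
grep '\[FAIL\]' /tmp/zts-demo.txt
```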

Reporting Results

If you find new failures, file a GitHub issue at openzfs/zfs with:

Title: Test failure: <test_name> on Linux 6.18.7 (Arch Linux)
**Environment:**
- OS: Arch Linux (cloud image)
- Kernel: 6.18.7-arch1-1
- ZFS: built from master (commit <hash>)
- VM: QEMU/KVM, 8 vCPU, 8GB RAM
**Failed test:**
<test name and path>
**Test output:**
<paste relevant log output>
**Expected behavior:**
Test should PASS (passes on kernel X.Y.Z / other distro)
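Most of that environment block can be gathered in one go. Run this inside the VM from the zfs checkout; the report path is arbitrary, and the commit falls back to "unknown" outside a git repository:

```shell
#!/bin/sh
# Collect the environment details the report template asks for.
{
    echo "Kernel: $(uname -r)"
    echo "ZFS commit: $(git rev-parse --short HEAD 2>/dev/null || echo unknown)"
    echo "CPUs: $(nproc)"
    awk '/^MemTotal/ {print "RAM: " $2 " " $3}' /proc/meminfo
} | tee /tmp/env-report.txt
```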

Tips and Tricks

Snapshot the VM after setup to avoid repeating the build:

# On host, after VM is set up and ZFS is built
qemu-img snapshot -c "zfs-built" Arch-Linux-x86_64-cloudimg.qcow2
# Restore later
qemu-img snapshot -a "zfs-built" Arch-Linux-x86_64-cloudimg.qcow2

Run a subset of tests by test group:

# List all zpool tests (each path can then be passed to zfs-tests.sh -t)
for t in /home/arch/zfs/tests/zfs-tests/tests/functional/cli_root/zpool_*/*.ksh; do
echo "$t"
done
# Run tests matching a pattern
find /home/arch/zfs/tests/zfs-tests/tests/functional -name "*.ksh" | grep snapshot | head -5
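Building on the find pipeline above, here is a sketch that dry-runs one zfs-tests.sh invocation per matching test, which avoids assuming the script accepts repeated -t flags. Drop the echo to actually execute (note each invocation carries suite setup overhead):

```shell
#!/bin/sh
# Dry-run: print one zfs-tests.sh command per snapshot-related test.
BASE=/home/arch/zfs/tests/zfs-tests/tests/functional
find "$BASE" -name '*.ksh' 2>/dev/null | grep snapshot | while read -r t; do
    echo scripts/zfs-tests.sh -v -t "$t"
done
```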

Increase disk space if tests fail with ENOSPC:

# On host (VM must be stopped)
qemu-img resize Arch-Linux-x86_64-cloudimg.qcow2 +20G
# Inside VM after reboot
sudo growpart /dev/vda 3 # or whichever partition
sudo resize2fs /dev/vda3

Suppress floppy drive errors (the harmless I/O error, dev fd0 messages):

# Inside the VM: blacklist the floppy driver so the guest kernel stops probing fd0
echo "blacklist floppy" | sudo tee /etc/modprobe.d/no-floppy.conf
sudo rmmod floppy 2>/dev/null || true
# (Alternatively, boot QEMU with '-machine q35', which has no floppy controller.)

This guide was written while setting up an OpenZFS test environment for kernel 6.18.7 on Arch Linux. The same approach works for any Linux distribution that provides cloud images — just swap the base image and package manager commands.

[Infographic: “OpenZFS Test VM Architecture: QEMU/KVM + Arch Linux Cloud Image + ZFS from Source.” It recaps the host/guest layout (host kernel 6.18.8, guest kernel 6.18.7, 8 vCPU / 8 GB RAM, SSH forward on :2222), the build-and-test pipeline inside the VM, and the seven setup steps described above.]
