/root

  • FreeBSD 14.4 BETA 2

    February 15th, 2026
    A summary of changes since BETA1 includes:
    
    o Multiple bug fixes in diff(1)
    
    o Several updates to qlnxe(4)
    
    o Updates to the blocklist (aka blacklist) system
    
    o Compatibility in the ipfw(8) userland tools with FreeBSD 15 kernels

    https://lists.freebsd.org/archives/freebsd-stable/2026-February/003848.html

  • How a TrueNAS Nightly Update Bug Left My Server Unbootable

    February 14th, 2026

    How a failed nightly update left my TrueNAS server booting into an empty filesystem — and the two bugs responsible.

    I run TrueNAS Scale on an Aoostar WTR Max as my homelab server, with dozens of Docker containers for everything from Immich to Jellyfin. I like to stay on the nightly builds to get early access to new features and contribute bug reports when things go wrong. Today, things went very wrong.

    The Update Failure

    It started innocently enough. I kicked off the nightly update from the TrueNAS UI, updating from 26.04.0-MASTER-20260210-020233 to the latest 20260213 build. Instead of a smooth update, I got this:

    [EFAULT] Error: Command ['zfs', 'destroy', '-r',
      'boot-pool/ROOT/26.04.0-MASTER-20260213-020146-1']
      failed with exit code 1:
      cannot unmount '/tmp/tmpo8dbr91e': pool or dataset is busy

    The update process was trying to clean up a previous boot environment but couldn’t unmount a temporary directory it had created. No big deal, I thought — I’ll just clean it up manually.

    Down the Rabbit Hole

    I checked what was holding the mount open:

    $ fuser -m /tmp/tmpo8dbr91e    # nothing
    $ lsof +D /tmp/tmpo8dbr91e     # nothing (just Docker overlay warnings)

    Nothing was using it. A force unmount also failed:

    $ sudo umount -f /tmp/tmpo8dbr91e
    umount: /tmp/tmpo8dbr91e: target is busy.

    Only a lazy unmount worked:

    $ sudo umount -l /tmp/tmpo8dbr91e

    So I unmounted it and destroyed the stale boot environment manually. Then I retried the update. Same error, different temp path. Unmount, destroy, retry. Same error again. Each attempt, the updater would mount a new temporary directory, fail to unmount it, and bail out.

    I even tried stopping Docker before the update, thinking the overlay mounts might be interfering. No luck.

    The Real Problem

    Frustrated, I rebooted the server thinking a clean slate might help. The server didn’t come back. After 10 minutes of pinging with no response, I plugged in a monitor and saw this:

    Mounting 'boot-pool/ROOT/26.04.0-MASTER-20260213-020146' on '/root/' ... done.
    Begin: Running /scripts/local-bottom ... done.
    Begin: Running /scripts/nfs-bottom ... done.
    run-init: can't execute '/sbin/init': No such file or directory
    Target filesystem doesn't have requested /sbin/init.
    run-init: can't execute '/etc/init': No such file or directory
    run-init: can't execute '/bin/init': No such file or directory
    run-init: can't execute '/bin/sh': No such file or directory
    No init found. Try passing init= bootarg.
    
    BusyBox v1.37.0 (Debian 1:1.37.0-6+b3) built-in shell (ash)
    Enter 'help' for a list of built-in commands.
    
    (initramfs)

    The system had booted into the incomplete boot environment from the failed update — an empty shell with no operating system in it. The update process had set this as the default boot environment before it was fully built.

    The Recovery

    Fortunately, ZFS boot environments make this recoverable. I rebooted again, caught the GRUB menu, and selected my previous working boot environment (20260210-020233). After booting successfully, I locked in the correct boot environment as the default:

    $ sudo zpool set bootfs=boot-pool/ROOT/26.04.0-MASTER-20260210-020233 boot-pool

    Then cleaned up the broken environment:

    $ sudo zfs destroy -r boot-pool/ROOT/26.04.0-MASTER-20260213-020146

    Server back to normal.
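
    To confirm the recovery stuck, a quick check along these lines is worth running (dataset names match my pool layout; adjust to yours):

    $ zpool get bootfs boot-pool                # should point at the 20260210 environment
    $ zfs list -H -o name -r boot-pool/ROOT     # the broken 20260213 BE should be gone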

    Two Bugs, One Update

    There are actually two separate bugs here:

    Bug 1 — Stale Mount Cleanup: The update process mounts the boot environment into a temp directory but can’t clean it up when something fails. umount -f doesn’t work; only umount -l does. And since each retry creates a new temp mount, the problem is self-perpetuating.
    Bug 2 — Premature Bootfs Switch (Critical): This is the dangerous one. The updater sets the new boot environment as the GRUB default before it’s fully populated. When the update fails mid-way, you’re left with a system that will boot into an empty filesystem on the next reboot. If you don’t have physical console access and a keyboard handy, you could be in serious trouble.

    What Happens During a Failed Update

    Update starts → Sets new bootfs → Build fails → Reboot = initramfs

    The Fix Should Be Simple

    The updater should only set the new boot environment as the default after the update is verified complete. And it should use umount -l as a fallback when umount -f fails, since the standard force unmount clearly isn’t sufficient here.
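
    The ordering is easy to sketch in shell. This is only an illustration of the intended logic, not actual TrueNAS updater code, and the helper name and variable are made up:

    # Illustrative sketch only; not TrueNAS code.
    cleanup_be_mount() {
        local mp="$1"
        umount "$mp" 2>/dev/null \
            || umount -f "$mp" 2>/dev/null \
            || umount -l "$mp"        # lazy unmount as the last-resort fallback
    }

    # ... build and verify the new boot environment first ...
    # Only after verification should the default be switched:
    zpool set bootfs="boot-pool/ROOT/${NEW_BE}" boot-pool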

    I’ve filed this as NAS-139794 on the TrueNAS Jira. If you’re running nightly builds, be aware of this issue — and make sure you have console access to your server in case you need to select a different boot environment from GRUB.

    Lessons Learned

    Running nightly builds is inherently risky, and I accept that. But an update failure should never leave a system unbootable. The whole point of ZFS boot environments is to provide a safety net — but that net has a hole when the updater switches the default before the new environment is ready.

    In the meantime, keep a monitor and keyboard accessible for your TrueNAS box, and remember: if you ever drop to an initramfs shell after an update, your data is fine. Just reboot into GRUB and pick the previous boot environment.

    • TrueNAS
    • ZFS
    • Homelab
    • Boot Environments
    • Bug Report
  • ArchZFS – Arch Linux official ZFS Repository

    February 14th, 2026

    The ArchZFS project has moved its official package repository from archzfs.com to GitHub Releases. Here’s how to migrate — and why this matters for Arch Linux ZFS users.

    If you run ZFS on Arch Linux, you almost certainly depend on the ArchZFS project for your kernel modules. The project has been the go-to source for prebuilt ZFS packages on Arch for years, saving users from the pain of building DKMS modules on every kernel update.

    The old archzfs.com repository has gone stale, and the project has migrated to serving packages directly from GitHub Releases. The packages are built the same way and the repository provides the same set as before — the only differences are a new PGP signing key and the repository URL.

    How to Migrate

    If you’re currently using the old archzfs.com server in your /etc/pacman.conf, you need to update it. There are two options depending on your trust model.

    Option 1: Without PGP Verification

    The PGP signing system is still being finalized, so if you just want it working right away, you can skip signature verification for now:

    # /etc/pacman.conf
    [archzfs]
    SigLevel = Never
    Server = https://github.com/archzfs/archzfs/releases/download/experimental

    Option 2: With PGP Verification (Recommended)

    For proper package verification, import the new signing key first:

    # pacman-key --init
    # pacman-key --recv-keys 3A9917BF0DED5C13F69AC68FABEC0A1208037BE9
    # pacman-key --lsign-key 3A9917BF0DED5C13F69AC68FABEC0A1208037BE9

    Then set the repo to require signatures:

    # /etc/pacman.conf
    [archzfs]
    SigLevel = Required
    Server = https://github.com/archzfs/archzfs/releases/download/experimental

    After updating your config, sync and refresh:

    # pacman -Sy

    What’s Available

    The repository provides the same package groups as before, targeting different kernels:

    Package Group           Kernel            Use Case
    archzfs-linux           linux (default)   Best for most users, latest stable OpenZFS
    archzfs-linux-lts       linux-lts         LTS kernel, better compatibility
    archzfs-linux-zen       linux-zen         Zen kernel with extra features
    archzfs-linux-hardened  linux-hardened    Security-focused kernel
    archzfs-dkms            Any kernel        Auto-rebuilds on kernel update, works with any kernel
    Note on DKMS vs Prebuilt: Prebuilt packages are tied to a specific kernel version — if the Arch repos push a newer kernel than ArchZFS has built for, you’ll be blocked from updating until ArchZFS catches up. The DKMS packages avoid this by compiling locally, at the cost of longer update times. Choose based on your tolerance for build times vs. update delays.
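
    For example, switching to the prebuilt packages for the default kernel looks something like this (assuming the archzfs-linux group name is unchanged after the migration):

    # pacman -Sy
    # pacman -S archzfs-linux      # prebuilt module and userland for the default kernel
    # pacman -S archzfs-dkms       # or the DKMS variant instead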

    Why GitHub Releases?

    Hosting a pacman repository on GitHub Releases is a clever approach. GitHub handles the CDN, availability, and bandwidth — no more worrying about a single server going down and blocking ZFS users from updating. The build pipeline uses GitHub Actions, so packages are built automatically and transparently. You can even inspect the build scripts in the repository itself.

    The trade-off is that the URL is a bit unwieldy compared to the old archzfs.com/$repo/$arch, but that’s a minor cosmetic issue.

    A Note of Caution

    The project labels this as experimental and advises starting with non-critical systems. In practice, the packages are the same ones the community has been using — the “experimental” label applies to the new distribution method, not the packages themselves. Still, the PGP signing system is being reworked, so you may want to revisit your SigLevel setting once that’s finalized.

    If You’re Using the Old Repository: The old archzfs.com repository is stale and will not receive updates. If you haven’t migrated yet, do it now — before your next pacman -Syu pulls a kernel that your current ZFS modules don’t support, leaving you unable to import your pools after reboot.

    Quick Migration Checklist

    Edit pacman.conf → Import new PGP key → pacman -Sy → pacman -Syu

    For full details and ongoing updates, check the ArchZFS wiki and the release page.

    • Arch Linux
    • ZFS
    • OpenZFS
    • pacman
    • ArchZFS
  • Contributing Device-Specific Error Reporting to OpenZFS

    February 13th, 2026

    A kernel-to-userspace patch that replaces a vague zpool create error with one that names the exact device and pool causing the problem. Here’s how it works, from the ioctl layer to the formatted error message.

    The problem

    If you’ve managed ZFS pools with more than a handful of disks, you’ve almost certainly hit this error:

    $ sudo zpool create tank mirror /dev/sda /dev/sdb /dev/sdc /dev/sdd
    cannot create 'tank': one or more vdevs refer to the same device,
    or one of the devices is part of an active md or lvm device

    Which device? What pool? The error gives you nothing. In a 12-disk server you’re left checking each device one by one until you find the culprit.
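
    The usual manual hunt goes something like this: scan every disk for a stray ZFS signature, then inspect the suspect (device names here are placeholders):

    $ lsblk -o NAME,FSTYPE,LABEL | grep zfs_member
    $ sudo wipefs /dev/sdX       # with no options, just prints any signatures found on a suspect disk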

    I’d been working on a previous PR (#18184) improving zpool create error messages when Brian Behlendorf suggested a follow-up: pass device-specific error information from the kernel back to userspace, following the existing ZPOOL_CONFIG_LOAD_INFO pattern that zpool import already uses.

    So I built it. The result is PR #18213:

    Before: cannot create 'tank': one or more vdevs refer to the same device
    After:  cannot create 'tank': device '/dev/sdb1' is part of active pool 'rpool'

    Why this is harder than it looks

    The obvious approach would be: when zpool create fails, walk the vdev tree, find the device with the error, and report it. But there’s a timing problem in the kernel that makes this impossible.

    When spa_create() fails, the error cleanup path calls vdev_close() on all vdevs. This function unconditionally resets vd->vdev_stat.vs_aux to VDEV_AUX_NONE on every device in the tree. By the time the error code reaches the ioctl handler, all evidence of which device failed and why has been wiped clean.

    Key Insight: The error information must be captured at the exact moment of failure, inside vdev_label_init(), before the cleanup path destroys it. And it must be stored somewhere that survives the cleanup — the spa_t struct, which represents the pool itself.

    The only errno that travels back through the ioctl is an integer like EBUSY. No context about which device, no pool name, nothing. The entire design challenge is getting two strings (a device path and a pool name) from a kernel function that runs during vdev initialization all the way back to the userspace zpool command.

    Architecture: the data flow

    The solution follows the same mechanism that zpool import already uses to return rich error information: an nvlist (ZFS’s key-value dictionary, like a JSON object) packed into the ioctl output buffer under a well-known key.

    vdev_label_init() (detect conflict, read label)
      → spa->errlist (vdev + pool name)
      → spa_create() (hand off errlist)
      → ioc_pool_create() (wrap → put_nvlist)
      → ioctl (kernel → user)
      → zpool_create() (unpack → format)

    Four touch points, each doing one small thing. Let’s walk through them.

    Implementation

    1. Capture the error at the moment of failure

    This is the heart of the change. Inside vdev_label_init(), when vdev_inuse() returns true, we build an nvlist with the device path, then read the on-disk label to extract the pool name:

    module/zfs/vdev_label.c
    /*
     * Determine if the vdev is in use.
     */
    if (reason != VDEV_LABEL_REMOVE && reason != VDEV_LABEL_SPLIT &&
        vdev_inuse(vd, crtxg, reason, &spare_guid, &l2cache_guid)) {
            if (spa->spa_create_errlist == NULL) {
                    nvlist_t *nv = fnvlist_alloc();
                    nvlist_t *cfg;
    
                    if (vd->vdev_path != NULL)
                            fnvlist_add_string(nv,
                                ZPOOL_CREATE_INFO_VDEV, vd->vdev_path);
    
                    cfg = vdev_label_read_config(vd, -1ULL);
                    if (cfg != NULL) {
                            const char *pname;
                            if (nvlist_lookup_string(cfg,
                                ZPOOL_CONFIG_POOL_NAME, &pname) == 0)
                                    fnvlist_add_string(nv,
                                        ZPOOL_CREATE_INFO_POOL, pname);
                            nvlist_free(cfg);
                    }
    
                    spa->spa_create_errlist = nv;
            }
            return (SET_ERROR(EBUSY));
    }

    The NULL check on spa_create_errlist ensures we only record the first failing device. If there are multiple conflicts, the first one is what you need to fix anyway. fnvlist_alloc() and fnvlist_add_string() are the “fatal” nvlist functions that panic on allocation failure — appropriate here since we’re in a code path where memory should be available.

    2. Hand the errlist to the caller

    On error, spa_create() transfers ownership of the errlist via the new errinfo output parameter:

    module/zfs/spa.c
    if (error != 0) {
            if (errinfo != NULL) {
                    *errinfo = spa->spa_create_errlist;
                    spa->spa_create_errlist = NULL;
            }
            spa_unload(spa);
            spa_deactivate(spa);
            spa_remove(spa);
            ...

    Setting spa_create_errlist to NULL after the handoff prevents spa_deactivate() from freeing it — ownership transfers to the caller.

    3. Wrap and pack into the ioctl output

    The ioctl handler wraps the errlist under a ZPOOL_CONFIG_CREATE_INFO key, mirroring how zpool import uses ZPOOL_CONFIG_LOAD_INFO:

    module/zfs/zfs_ioctl.c
    error = spa_create(zc->zc_name, config, props, zplprops, dcp,
        &errinfo);
    if (errinfo != NULL) {
            nvlist_t *outnv = fnvlist_alloc();
            fnvlist_add_nvlist(outnv,
                ZPOOL_CONFIG_CREATE_INFO, errinfo);
            (void) put_nvlist(zc, outnv);
            nvlist_free(outnv);
            nvlist_free(errinfo);
    }

    put_nvlist() serializes the nvlist into zc->zc_nvlist_dst, which is a shared buffer between kernel and userspace.

    4. Unpack and format in userspace

    In libzfs, after the ioctl fails, we unpack the buffer, extract the device and pool name, and format the error:

    lib/libzfs/libzfs_pool.c
    nvlist_t *outnv = NULL;
    if (zc.zc_nvlist_dst_size > 0 &&
        nvlist_unpack((void *)(uintptr_t)zc.zc_nvlist_dst,
        zc.zc_nvlist_dst_size, &outnv, 0) == 0 &&
        outnv != NULL) {
            nvlist_t *errinfo = NULL;
            if (nvlist_lookup_nvlist(outnv,
                ZPOOL_CONFIG_CREATE_INFO, &errinfo) == 0) {
                    const char *vdev = NULL;
                    const char *pname = NULL;
                    (void) nvlist_lookup_string(errinfo,
                        ZPOOL_CREATE_INFO_VDEV, &vdev);
                    (void) nvlist_lookup_string(errinfo,
                        ZPOOL_CREATE_INFO_POOL, &pname);
                    if (vdev != NULL) {
                            if (pname != NULL)
                                    zfs_error_aux(hdl,
                                        dgettext(TEXT_DOMAIN,
                                        "device '%s' is part of "
                                        "active pool '%s'"),
                                        vdev, pname);
                            else
                                    zfs_error_aux(hdl,
                                        dgettext(TEXT_DOMAIN,
                                        "device '%s' is in use"),
                                        vdev);
                            ...
                    }
            }
    }

    If both values are available, you get: device ‘/dev/sdb1’ is part of active pool ‘rpool’. If only the path is available (label can’t be read), you get: device ‘/dev/sdb1’ is in use. If no errinfo came back at all, the existing generic error handling kicks in unchanged.

    What changed

    File                                           +     −
    module/zfs/vdev_label.c                       +23    -1
    lib/libzfs/libzfs_pool.c                      +41
    module/zfs/zfs_ioctl.c                        +12    -1
    module/zfs/spa.c                              +10    -1
    cmd/ztest.c                                    +5    -5
    include/sys/fs/zfs.h                           +3
    include/sys/spa.h                              +1    -1
    include/sys/spa_impl.h                         +1
    tests/.../zpool_create_errinfo_001_neg.ksh    +99
    11 files total                               +195   -10

    93 lines of feature code across 8 C files, plus a 99-line ZTS test. The cmd/ztest.c changes are mechanical — just adding a NULL parameter to each spa_create() call to match the new signature.

    Testing

    I tested on an Arch Linux VM running kernel 6.18.9-arch1-2 with ZFS built from source. The test environment used loopback devices, which is the standard approach in the ZFS Test Suite — the kernel code path is identical regardless of the underlying block device.

    Duplicate device — device-specific error

    $ truncate -s 128M /tmp/vdev1
    $ sudo losetup /dev/loop10 /tmp/vdev1
    $ sudo losetup /dev/loop12 /tmp/vdev1   # same backing file
    $ sudo zpool create testpool1 mirror /dev/loop10 /dev/loop12
    cannot create 'testpool1': device '/dev/loop12' is part of active pool 'testpool1'

    Normal creation — no regression

    $ truncate -s 128M /tmp/vdev1 /tmp/vdev2
    $ sudo zpool create testpool1 mirror /tmp/vdev1 /tmp/vdev2
    $ sudo zpool status testpool1
      pool: testpool1
     state: ONLINE
    config:
    
            NAME            STATE     READ WRITE CKSUM
            testpool1       ONLINE       0     0     0
              mirror-0      ONLINE       0     0     0
                /tmp/vdev1  ONLINE       0     0     0
                /tmp/vdev2  ONLINE       0     0     0

    ZTS test

    A new negative test (zpool_create_errinfo_001_neg) creates two loopback devices backed by the same file and attempts a mirror pool creation. It verifies three things: the command fails, the error names the specific device, and the error mentions the active pool.

    $ zfs-tests.sh -vx -t cli_root/zpool_create/zpool_create_errinfo_001_neg
    
    Test: zpool_create_errinfo_001_neg (run as root) [00:00] [PASS]
    
    Results Summary
    PASS       1
    Running Time:  00:00:00
    Percent passed: 100.0%

    CI checkstyle passes on all platforms (Ubuntu 22/24, Debian 12/13, CentOS Stream 9, AlmaLinux 8/10, FreeBSD 14). Clean build with no compiler warnings.

    Design trade-offs

    Only the first failing device is recorded. If multiple vdevs conflict, only the first one goes into spa_create_errlist. You need to fix the first problem before you can see the next one anyway, and it keeps the implementation simple.

    The label is read twice. vdev_inuse() already reads the on-disk label and frees it before returning. We read it again with vdev_label_read_config() to extract the pool name. Modifying vdev_inuse() to optionally return the label would avoid this, but changing that function signature affects many callers — a much larger change for a follow-up.

    The errlist field lives on spa_t permanently. It’s only used during spa_create(), but the field exists on every pool in memory. This costs 8 bytes per pool (one pointer, always NULL during normal operation) — negligible.

    Only one error path is covered. The mechanism only fires for the vdev_inuse() EBUSY case inside vdev_label_init(). Other failures (open errors, size mismatches) still produce generic messages. The spa_create_errlist infrastructure is there for future extension.

    What’s next

    This is a focused first step. The spa_create_errlist mechanism could be extended to cover more error paths — vdev_open() failures, size mismatches, GUID conflicts. The infrastructure is in place; it just needs more callsites.

    The PR is at openzfs/zfs #18213. Feedback welcome.

    • openzfs
    • zfs
    • kernel
    • c
    • linux
    • storage
    • open-source
    • nvlist
    • ioctl
  • How to Contribute to the Immich EXIF Dataset – Help Improve Open Source Photo Management

    February 11th, 2026

    Immich, the popular open-source, self-hosted photo and video management solution, has launched a community-driven initiative to improve its metadata handling capabilities. Through the new EXIF Dataset project, users can contribute their photos to help train and improve Immich’s EXIF parsing and metadata extraction features. I recently contributed some of my own photos to the project, […]

  • ArchLinux on QEMU for dev testing

    February 10th, 2026

    Tutorial · February 2026 · 15 min read

    QEMU on Arch Linux: A Practical Guide to Virtual Machine Testing

    From cloud images and package building to kernel module debugging and cross-platform validation — all from the command line.

    Contents

    01 Why QEMU?

    02 Spinning Up Arch Linux Cloud Images

    03 Running FreeBSD in QEMU

    04 Testing OpenZFS with QEMU

    05 Sharing Files Between Host and Guest

    06 Networking Options

    07 Testing Real Hardware Drivers

    08 Quick Reference

    Why QEMU?

    QEMU combined with KVM turns your Linux host into a bare-metal hypervisor. Unlike VirtualBox or VMware, QEMU offers direct access to hardware emulation options, PCI passthrough, and granular control over every aspect of the virtual machine. On Arch Linux, setup is minimal.

    $ sudo pacman -S qemu-full
    
    # Verify KVM support
    $ lsmod | grep kvm
    kvm_amd               200704  0
    kvm                  1302528  1 kvm_amd

    You should see kvm_amd or kvm_intel loaded. That’s it — you’re ready to run VMs at near-native performance.
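
    If neither module shows up, it's worth confirming that the CPU exposes virtualization extensions and that /dev/kvm exists:

    $ grep -Ec '(vmx|svm)' /proc/cpuinfo    # non-zero means VT-x / AMD-V is available
    $ ls -l /dev/kvm                        # appears once the kvm module is loaded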

    Spinning Up Arch Linux Cloud Images

    The fastest path to a working Arch Linux VM is the official cloud image — a pre-built qcow2 disk designed for automated provisioning with cloud-init.

    Download and Prepare

    $ curl -LO https://geo.mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg.qcow2
    $ qemu-img resize Arch-Linux-x86_64-cloudimg.qcow2 20G

    The image ships at a minimal size. Resizing to 20G gives room for package building, compilation, and development work.

    Cloud-Init Configuration

    Cloud images expect a cloud-init seed to configure users, packages, and system settings on first boot. Install cloud-utils on your host:

    $ sudo pacman -S cloud-utils

    Create a user-data file. Note the unquoted heredoc — this ensures shell variables expand correctly:

    SSH_KEY=$(cat ~/.ssh/id_ed25519.pub 2>/dev/null || cat ~/.ssh/id_rsa.pub)
    cat > user-data <<EOF
    #cloud-config
    users:
      - name: chris
        sudo: ALL=(ALL) NOPASSWD:ALL
        shell: /bin/bash
        lock_passwd: false
        plain_text_passwd: changeme
        ssh_authorized_keys:
          - ${SSH_KEY}
    packages:
      - base-devel
      - git
      - vim
      - devtools
      - namcap
    growpart:
      mode: auto
      devices: ['/']
    EOF

    ⚠ Common Pitfall

    Using 'EOF' (single-quoted) prevents variable expansion, so ${SSH_KEY} becomes a literal string. Always use unquoted EOF when you need variable substitution.

    Generate the seed ISO and launch:

    $ cloud-localds seed.iso user-data
    
    $ qemu-system-x86_64 \
      -enable-kvm \
      -m 4G \
      -smp 4 \
      -drive file=Arch-Linux-x86_64-cloudimg.qcow2,if=virtio \
      -drive file=seed.iso,format=raw,if=virtio \
      -nographic

    Cloud-Init Runs Once

    Cloud-init marks itself as complete after the first boot. If you modify user-data and rebuild seed.iso, the existing image ignores it. You must download a fresh qcow2 image before applying new configuration.

    Use Ctrl+A, X to kill the VM.

    Running FreeBSD in QEMU

    FreeBSD provides pre-built VM images in qcow2 format. FreeBSD 15.0-RELEASE (December 2025) is the latest stable release, while 16.0-CURRENT snapshots are available for testing bleeding-edge features.

    Download

    # FreeBSD 15.0 stable
    $ curl -LO https://download.freebsd.org/releases/VM-IMAGES/15.0-RELEASE/amd64/Latest/FreeBSD-15.0-RELEASE-amd64-ufs.qcow2.xz
    $ xz -d FreeBSD-15.0-RELEASE-amd64-ufs.qcow2.xz
    
    # FreeBSD 16.0-CURRENT (development snapshot)
    $ curl -LO https://download.freebsd.org/snapshots/VM-IMAGES/16.0-CURRENT/amd64/Latest/FreeBSD-16.0-CURRENT-amd64-ufs.qcow2.xz
    $ xz -d FreeBSD-16.0-CURRENT-amd64-ufs.qcow2.xz
    
    $ qemu-img resize FreeBSD-15.0-RELEASE-amd64-ufs.qcow2 20G

    The Serial Console Challenge

    Unlike Linux cloud images, FreeBSD VM images default to VGA console output. Launching with -nographic appears to hang — the system is actually booting, but sending output to the emulated display.

    Boot with VGA first to configure serial:

    $ qemu-system-x86_64 \
      -enable-kvm \
      -m 4G \
      -smp 4 \
      -hda FreeBSD-15.0-RELEASE-amd64-ufs.qcow2 \
      -vga std

    Login as root (no password), then enable serial console permanently:

    # echo 'console="comconsole"' >> /boot/loader.conf
    # poweroff

    All subsequent boots work with -nographic. Alternatively, at the FreeBSD boot menu, press 3 to escape to the loader prompt and type set console=comconsole then boot.

    Disk Interface Note

    If FreeBSD fails to boot with if=virtio, fall back to IDE emulation using -hda instead. IDE is universally supported.

    Testing OpenZFS with QEMU

    One of the most powerful uses of QEMU on Arch Linux is building and testing OpenZFS against new kernels. Arch’s rolling release model means kernel updates arrive frequently, and out-of-tree modules like ZFS need validation after every update.

    Build Environment

    $ git clone https://github.com/openzfs/zfs.git
    $ cd zfs
    $ ./autogen.sh
    $ ./configure --enable-debug
    $ make -j$(nproc)
    $ sudo make install
    $ sudo ldconfig
    $ sudo modprobe zfs
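
    A quick sanity check before running anything confirms that the loaded module is the one you just built:

    $ lsmod | grep zfs          # zfs and spl should be listed
    $ zfs version               # userland and kernel module versions should match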

    Running the ZFS Test Suite

    Before running the test suite, a critical and often-missed step — install the test helpers:

    $ sudo ~/zfs/scripts/zfs-helpers.sh -i
    
    # Create loop devices for virtual disks
    for i in $(seq 0 15); do
      sudo mknod -m 0660 /dev/loop$i b 7 $i 2>/dev/null
    done
    
    # Run sanity tests
    $ ~/zfs/scripts/zfs-tests.sh -v -r sanity

    Real-World Debugging: From 18% to 97.6%

    Testing OpenZFS 2.4.99 on kernel 6.18.8-arch2-1 revealed two cascading issues that dropped the pass rate dramatically. Here’s what happened and how to fix it.

    Pass rate: 18% before the fixes → 97.6% after (808 tests passed)

    Problem 1: Permission denied for ephemeral users. The test suite creates temporary users (staff1, staff2) for permission testing. If your ZFS source directory is under a home directory with restrictive permissions, these users can’t traverse the path:

    env: 'ksh': Permission denied
    staff2 doesn't have permissions on /home/arch/zfs/tests/zfs-tests/bin
    $ chmod o+x /home/arch
    $ chmod -R o+rx /home/arch/zfs
    $ sudo chmod o+rw /dev/zfs

    Problem 2: Leftover test pools cascade failures. If a previous test run left a ZFS pool mounted, every subsequent setup script fails with “Device or resource busy”:

    $ sudo zfs destroy -r testpool/testfs
    $ sudo zpool destroy testpool
    $ rm -rf /var/tmp/testdir

    ✓ Result

    After fixing both issues, the sanity suite completed in 15 minutes: 808 PASS, 6 FAIL, 14 SKIP. The remaining 6 failures were all environment-related (missing packages) — zero kernel compatibility regressions.

    Sharing Files Between Host and Guest

    QEMU’s 9p virtfs protocol allows sharing a host directory with the guest without network configuration — ideal for an edit-on-host, build-in-guest workflow:

    $ qemu-system-x86_64 \
      -enable-kvm \
      -m 4G \
      -smp 4 \
      -drive file=Arch-Linux-x86_64-cloudimg.qcow2,if=virtio \
      -virtfs local,path=/home/chris/shared,mount_tag=host_share,security_model=mapped-xattr,id=host_share \
      -nographic

    Inside the guest:

    $ sudo mount -t 9p -o trans=virtio host_share /mnt/shared
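
    To make the share survive guest reboots, an /etc/fstab entry along these lines should work (using the mount_tag from the launch command above):

    host_share  /mnt/shared  9p  trans=virtio,version=9p2000.L,rw  0  0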

    Networking Options

    QEMU’s user-mode networking (-nic user) is the simplest setup — it provides NAT-based internet access and port forwarding without any host configuration:

    # Forward host port 2222 to guest SSH
    -nic user,hostfwd=tcp::2222-:22

    This is sufficient for most development and testing workflows. For bridged or TAP networking, consult the QEMU documentation.
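
    With that flag added to the launch command, SSH in from the host; the user is whatever cloud-init created (chris in the earlier example):

    $ ssh -p 2222 chris@localhost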

    Testing Real Hardware Drivers

    QEMU emulates standard hardware (e1000 NICs, emulated VGA), not your actual devices. If you need to test drivers against real hardware — such as a Realtek Ethernet controller or an AMD GPU — you have two options:

    PCI Passthrough (VFIO): Bind a real PCI device to the vfio-pci driver and pass it directly to the VM. This requires IOMMU support (amd_iommu=on in the kernel command line) and removes the device from the host for the duration.
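
    A rough sketch of what passthrough involves, using a made-up PCI address 0000:03:00.0 (find yours with lspci -nn); the device has to be bound to vfio-pci before launching QEMU:

    # Confirm the IOMMU is active and inspect the device's isolation group
    $ sudo dmesg | grep -iE 'dmar|iommu'
    $ ls /sys/bus/pci/devices/0000:03:00.0/iommu_group/devices/

    # Then hand the device to the VM at launch
    -device vfio-pci,host=0000:03:00.0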

    Native Boot from USB: Write a live image to a USB stick and boot your physical machine directly. For driver testing, this is almost always the better choice:

    $ sudo dd if=FreeBSD-16.0-CURRENT-amd64-memstick.img of=/dev/sdX bs=4M status=progress

    Quick Reference

    Task                    Command
    Start Arch VM           qemu-system-x86_64 -enable-kvm -m 4G -smp 4 -drive file=arch.qcow2,if=virtio -drive file=seed.iso,format=raw,if=virtio -nographic
    Start FreeBSD (VGA)     qemu-system-x86_64 -enable-kvm -m 4G -smp 4 -hda freebsd.qcow2 -vga std
    Start FreeBSD (serial)  qemu-system-x86_64 -enable-kvm -m 4G -smp 4 -hda freebsd.qcow2 -nographic
    Kill VM                 Ctrl+A, X
    Resize disk             qemu-img resize image.qcow2 20G
    Create seed ISO         cloud-localds seed.iso user-data

    • QEMU
    • Arch Linux
    • FreeBSD
    • OpenZFS
    • KVM

    Written from real-world testing on AMD Ryzen 9 9900X · Arch Linux · Kernel 6.18.8

  • WordPress and <style> tags

    February 9th, 2026

    WordPress.com strips <style> tags from posts. Here’s how to work around that and create beautifully styled technical articles with custom typography, code blocks, and layout components — without a self-hosted installation.

    If you’ve ever tried to write a technical blog post on WordPress.com and found the default styling lacking — ugly code blocks, no control over fonts, tables that look like spreadsheets from 2005 — you’ve hit the platform’s biggest limitation. WordPress.com sanitizes post HTML aggressively, stripping out <style> and <link> tags for security reasons.

    This post documents the approach I use to get full control over article styling on a WordPress.com Premium plan, without needing a self-hosted WordPress installation.

    The Problem

    WordPress.com’s block editor (Gutenberg) gives you paragraphs, headings, images, and code blocks. But the built-in styling is generic — it inherits your theme’s defaults, which are designed for broad appeal, not for technical writing. Specifically:

    Code blocks use your theme’s monospace font with minimal contrast. No syntax highlighting, no language labels, no dark background that signals “this is code” to a scanning reader.

    Tables get basic browser defaults — no header styling, inconsistent padding, no visual hierarchy between header and data rows.

    Callout boxes don’t exist natively. You can use a Quote block, but it looks like a quote, not like a tip or warning. There’s no way to add a colored left border with a label.

    Typography is locked to your theme. If your theme uses a system font stack, every post looks like a Google Doc.

    The obvious fix — adding a <style> block to your post HTML — doesn’t work. WordPress.com strips it on save.

    The Solution: Additional CSS + Custom HTML Blocks

    The approach splits styling from content across two places:

    Additional CSS (site-wide styles) + Custom HTML Block (post content with classes) = Styled Post (fonts, colors, layout)

    Additional CSS lives in the WordPress Customizer (Appearance → Customize → Additional CSS, or via the Site Editor on block themes). It’s injected into every page’s <head> as a legitimate <style> block. WordPress.com allows this on paid plans (Personal and above) because it’s a controlled environment — you’re editing a dedicated CSS field, not injecting arbitrary HTML into post content.

    Custom HTML blocks in the post editor accept raw HTML with class attributes. WordPress.com doesn’t strip class from elements, so your post HTML can reference any class defined in Additional CSS.

    The result: your CSS lives in one place and applies to all posts. Your post content is clean, semantic HTML with descriptive class names. No inline styles, no duplication, no fighting the sanitizer.

    Setting Up the CSS

    I scope everything under a single wrapper class — .rss-post — to avoid polluting the rest of the site. Every selector starts with .rss-post, so the styles only apply inside posts that use the wrapper div.

    Fonts

    The CSS imports three fonts from Google Fonts via @import:

    @import url('https://fonts.googleapis.com/css2?family=Newsreader:ital,opsz,wght@0,6..72,400;0,6..72,600;1,6..72,400&family=JetBrains+Mono:wght@400;500&family=DM+Sans:wght@400;500;600&display=swap');

    Newsreader is an optical-size serif that works beautifully for body text — it adjusts its weight and contrast based on font size, so headings and body text both look sharp without manual tweaking. JetBrains Mono is a purpose-built coding font with ligatures and distinct characters for 0/O and 1/l/I. DM Sans handles UI elements like labels, table headers, and info box titles — places where a clean sans-serif reads better than a serif.

    The accent system

    A single accent color (#e07a2f, a warm amber) ties the design together. It appears in four places: the left border on headings, the left border on info boxes, the info box label text, and link hover states. This creates visual consistency without overwhelming the page.

    .rss-post h2 {
      position: relative;
      padding-left: 1.1rem;
    }
    .rss-post h2::before {
      content: '';
      position: absolute;
      left: 0;
      top: 0.15em;
      bottom: 0.15em;
      width: 3.5px;
      background: #e07a2f;
      border-radius: 2px;
    }

    The ::before pseudo-element creates the accent bar. This is one of the things you can’t do with inline styles — pseudo-elements only work in stylesheets, which is exactly why Additional CSS is necessary.

    Code blocks

    The default WordPress code block is functional but visually flat. The custom CSS gives code blocks a dark background (#1e1e2e, matching the Catppuccin Mocha palette), a subtle border, and generous padding. A floating language label in the top-right corner uses a <span class="label"> inside the <pre> block:

    .rss-post pre {
      background: #1e1e2e;
      color: #cdd6f4;
      font-family: 'JetBrains Mono', monospace;
      font-size: 0.82rem;
      line-height: 1.7;
      padding: 1.4rem 1.6rem;
      border-radius: 8px;
      border: 1px solid #313244;
    }
    
    .rss-post pre .label {
      position: absolute;
      top: 0; right: 0;
      font-family: 'DM Sans', sans-serif;
      font-size: 0.62rem;
      text-transform: uppercase;
      color: #6c7086;
      background: #313244;
      padding: 0.2em 0.8em;
      border-radius: 0 8px 0 6px;
    }

    Inline code gets a light warm background (#edebe6) that’s visible without being distracting.

    Info boxes

    Tips, warnings, and gotchas use the .infobox class — a light background with an amber left border and an uppercase label:

    Example: This is what an info box looks like. The label is a <strong> element styled with uppercase text and the accent color. The background is a warm off-white that distinguishes it from the main content without creating harsh contrast.

    The HTML for this is minimal:

    <div class="infobox">
      <strong>Tip</strong>
      Your message here.
    </div>

    Flow diagrams

    For simple architecture or process diagrams, the .flow class creates a horizontal flex layout with dark boxes and arrow separators:

    Step 1 → Step 2 → Step 3

    The .accent modifier highlights one box in amber. On mobile, the flex container wraps naturally.

    Writing a Post

    The workflow for creating a styled post is:

    1. Create a new post in the WordPress editor.

    2. Add a Custom HTML block (not a Paragraph, not a Code block). Click the + button, search for “Custom HTML”.

    3. Paste your HTML wrapped in <div class="rss-post">. Use standard HTML tags — <h2>, <p>, <pre><code>, <table> — with the custom classes where needed (.infobox, .flow, .label, .lead).

    4. Preview and publish. The Additional CSS applies automatically.

    Important: Use a single Custom HTML block for the entire post, not multiple blocks. If you mix Custom HTML blocks with regular Paragraph or Heading blocks, the regular blocks won’t be inside the .rss-post wrapper and won’t get the custom styling.

    Why Not Use the Block Editor Natively?

    A reasonable question. Gutenberg’s blocks do offer some styling — you can set colors, font sizes, and spacing per block. But there are real limitations:

    No custom fonts. You’re limited to what your theme provides plus WordPress.com’s font library. Want JetBrains Mono for code? Not an option through the block editor.

    No pseudo-elements. The accent bar on headings uses ::before. There’s no block editor control for that.

    No reusable component patterns. An info box with a colored border, background, and styled label would need manual per-block styling every time. With CSS classes, it’s one <div class="infobox">.

    No code block theming. The built-in Code block doesn’t support dark themes, language labels, or custom fonts.

    Consistency. When all styling comes from a single CSS file, every post looks consistent. Per-block styling drifts over time.

    Available Components

    Here’s a quick reference for the CSS classes available in the current stylesheet:

    Class              Element   Purpose
    .rss-post          <div>     Wrapper — all styles are scoped under this
    .lead              <p>       Subtitle / intro paragraph in muted gray
    .infobox           <div>     Tip / warning / note callout box
    .flow              <div>     Horizontal flow diagram container
    .flow-box          <div>     Individual box in a flow diagram
    .flow-box.accent   <div>     Highlighted (amber) flow box
    .flow-arrow        <span>    Arrow between flow boxes
    .label             <span>    Language label inside <pre> blocks
    .tag-list          <ul>      Horizontal tag/category pills

    The Tradeoff

    This approach is not without downsides. You’re writing raw HTML instead of using the visual editor, which is slower and more error-prone. The post editor’s preview won’t show the custom styles (you need to use the site preview or publish as draft). And if you ever change themes, the Additional CSS carries over but may need adjustments to avoid conflicts with the new theme’s styles.

    For me, the tradeoff is worth it. I write technical content with code blocks, tables, and diagrams. The default WordPress.com styling doesn’t serve that content well, and this approach gives me full control without needing to self-host WordPress or pay for a Business plan with plugin access.

    One CSS file. Clean HTML with classes. Posts that actually look the way you want them to.

    • WordPress
    • CSS
    • Web Design
    • Blogging
    • Technical Writing
  • Reclaiming RSS: Self-Hosting FreshRSS with RSS‑Bridge on TrueNAS

    February 9th, 2026

    How to bring back RSS feeds for sites that removed them, scrape full article content, and unify everything in a single self-hosted reader.

    RSS isn’t dead — it’s just been abandoned by publishers chasing engagement metrics and walled gardens. Websites that once offered clean XML feeds now force you into newsletters, push notifications, or algorithmic timelines. But with a bit of self-hosting, you can take that control back.

    This post walks through my setup: FreshRSS as the reader, RSS-Bridge as the scraper for sites that killed their feeds, all running on TrueNAS Scale with Docker and exposed through Tailscale for secure remote access.

    The Architecture

    The data flow is straightforward:

    Website (no RSS) → RSS-Bridge (scrapes & generates feed) → FreshRSS (polls & displays) → You (browser / app)

    For sites that still offer RSS, FreshRSS subscribes directly. For sites that removed their feeds, RSS-Bridge sits in between — it loads the page, parses the HTML with CSS selectors, and generates a standard Atom feed that FreshRSS can consume like any other subscription.

    Why RSS-Bridge Over Alternatives

    There are several tools that can generate feeds from websites. I chose RSS-Bridge for a few reasons:

    Lightweight. RSS-Bridge is PHP-based and runs in about 50 MB of RAM. Compare that with RSSHub (Node.js, 300 MB+) or Huginn (Ruby, even heavier). On a NAS where every container counts, this matters.

    FreshRSS integration. There’s a native FreshRSS extension (xExtension-RssBridge) if you want tight integration, though the simpler approach — just subscribing to the generated feed URL — works perfectly and survives app updates.

    CssSelectorBridge. This built-in bridge is incredibly flexible. Give it a URL, tell it which CSS selectors match your articles, and it produces a feed. No coding required, no custom JavaScript routes to maintain.

    Deploying RSS-Bridge on TrueNAS

    I run RSS-Bridge as a Docker container through Portainer. First, create the config directory and enable all bridges:

    # Create config directory on ZFS
    sudo mkdir -p /mnt/zfs_tank/docker/rss-bridge
    
    # Enable all bridges
    sudo tee /mnt/zfs_tank/docker/rss-bridge/config.ini.php << 'EOF'
    [system]
    enabled_bridges[] = *
    EOF

    Then deploy the stack in Portainer:

    # docker-compose.yml
    version: "3"
    services:
      rss-bridge:
        image: rssbridge/rss-bridge:latest
        container_name: rss-bridge
        restart: unless-stopped
        ports:
          - "3001:80"
        volumes:
          - /mnt/zfs_tank/docker/rss-bridge:/config

    RSS-Bridge is now accessible at http://<truenas-ip>:3001.
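
    A quick smoke test from another machine on the LAN confirms the container is answering (substitute your NAS IP):

    curl -sI http://192.168.0.13:3001/ | head -1    # expect an HTTP 200 status line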

    Remote Access with Tailscale Serve

    If you already run a Tailscale container on your TrueNAS box, you can expose RSS-Bridge through it:

    docker exec ix-tailscale-tailscale-1 tailscale serve --bg --https 3001 http://localhost:3001

    This makes RSS-Bridge available at https://your-machine.tailnet-name.ts.net:3001/ from any device on your tailnet. Use a non-443 port to avoid overwriting your TrueNAS GUI’s Tailscale Serve config.
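
    To see exactly what the container is serving (and confirm the GUI's existing config is untouched), check the serve status from the same container:

    docker exec ix-tailscale-tailscale-1 tailscale serve status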

    Tip: When adding feed URLs to FreshRSS, use the local IP (e.g. http://192.168.0.13:3001/...) rather than the Tailscale hostname. Both services run on the same box, so going through the LAN is faster and more reliable — and the FreshRSS container may not have Tailscale DNS available.

    Scraping a Site: A Real Example

    The Greek tech blog techblog.gr removed its RSS feed during a 2025 site redesign. Here’s how I brought it back.

    Step 1 — Identify the selectors

    Open the site, right-click an article title, and choose Inspect. On techblog.gr, each article title is an <a> inside an <h3>. On article pages, the content lives inside div.article-content.

    Step 2 — Configure CssSelectorBridge

    In the RSS-Bridge web UI, find CSS Selector and fill in:

    Field                       Value
    Site URL                    https://techblog.gr/
    Selector for article links  h3 a
    URL pattern                 (empty)
    Expand article content      .article-content
    Content cleanup             (empty)
    Title cleanup               | Techblog.gr
    Limit                       20

    Step 3 — Generate and subscribe

    Click Generate feed, right-click the Atom button and copy the link. In FreshRSS, go to Subscription management → Add a feed and paste the URL. Done — full article content in your reader, from a site with no RSS feed.

    Finding the Right CSS Selectors

    For the article link selector: On the homepage, right-click an article title → Inspect. Look at the tag structure. Common patterns are h2 a, h3 a, or .post-title a. If the site uses generic <a> tags everywhere, combine with a URL pattern to filter (e.g. /blog/202 to match only blog post URLs).

    For the content selector: Open any individual article, right-click the body text → Inspect. Look at the parent <div> wrapping all the paragraphs. WordPress sites typically use .entry-content or .article-content. Drupal sites often use .field-name-body or .node-content.

    Gotcha (Iframes): Some sites (especially job boards) load content inside iframes. RSS-Bridge can only parse the main page HTML — if the content is in an iframe, you’re limited to titles and links. Check your browser’s inspector for <iframe> elements if the content selector doesn’t seem to work.

    Setting Sensible Limits

    The Limit field controls how many items RSS-Bridge returns per request. Since FreshRSS remembers articles it has already seen, you only need enough to cover new posts between polling intervals:

    Feed type     Limit   Reasoning
    News sites    20      High frequency, many posts per day
    Blogs         10      Weekly or monthly posts
    Job boards    10      Few listings, slow turnover

    What About Paywalled Sites?

    RSS-Bridge has limits. If a site blocks automated requests (returning 403 errors) or loads content via JavaScript that requires authentication, RSS-Bridge can’t help. This applies to most academic journals and some major news outlets.

    For journals like NEJM, the publisher’s RSS feed is your only option — and it often contains just titles and volume/page references, no abstracts. A useful workaround for medical journals: use PubMed’s RSS feeds instead. PubMed indexes the same articles and includes full abstracts. Search for a journal, save the search, and create an RSS feed from the results.

    Unifying Multiple Feed Readers

    If you’re migrating from a desktop reader like Akregator to a self-hosted FreshRSS instance, both support OPML import/export. Export from both, then compare the feed URLs to identify:

    Feeds in both — already synced, nothing to do.

    Feeds only in the old reader — evaluate whether to add them to FreshRSS or drop them.

    Feeds only in FreshRSS — typically your newer RSS-Bridge feeds replacing broken native feeds.

    Watch for feeds that exist in both but with different URLs — same source, different CDN, or an old Politepol/feed proxy URL that you’ve since replaced with RSS-Bridge.
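
    A rough way to do that comparison from the command line is to pull the xmlUrl attributes out of each OPML export and diff the sorted lists (the file names are placeholders):

    grep -o 'xmlUrl="[^"]*"' akregator.opml | sort -u > old-feeds.txt
    grep -o 'xmlUrl="[^"]*"' freshrss.opml  | sort -u > new-feeds.txt
    comm -23 old-feeds.txt new-feeds.txt    # feeds only in the old reader
    comm -13 old-feeds.txt new-feeds.txt    # feeds only in FreshRSS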

    Closing Thoughts

    This setup takes about 30 minutes to deploy and configure. What you get in return is a single, self-hosted interface for consuming content from any website — with or without their cooperation. No algorithms deciding what you see, no newsletters cluttering your inbox, no tracking pixels following you around.

    RSS never died. It just needs a little infrastructure.

    • FreshRSS
    • RSS-Bridge
    • TrueNAS
    • Docker
    • Self-Hosting
    • Tailscale
    • RSS
  • Testing OpenZFS on Arch Linux with QEMU/KVM: A Contributor’s Guide

    February 9th, 2026

    How to set up a disposable VM for running the ZFS test suite on bleeding-edge kernels


    Why This Matters

    OpenZFS supports a wide range of Linux kernels, but regressions can slip through on newer ones. Arch Linux ships the latest stable kernels (6.18+ at the time of writing), making it an ideal platform for catching issues before they hit other distributions.

    The ZFS test suite is the project’s primary quality gate — it exercises thousands of scenarios across pool creation, send/receive, snapshots, encryption, scrub, and more. Running it on your kernel version and reporting results is one of the most valuable contributions you can make, even without writing any code.

    Why a VM, Not Docker?

    This is the key architectural decision. ZFS is a kernel module — the test suite needs to:

    • Load and unload spl.ko and zfs.ko kernel modules
    • Create and destroy loopback block devices for test zpools
    • Exercise kernel-level filesystem operations (mount, unmount, I/O)
    • Potentially crash the kernel if a bug is triggered

    Docker containers share the host kernel. If you load ZFS modules inside a container, they affect your entire host system. A crashing test could take down your workstation. With a QEMU/KVM virtual machine, you get a fully isolated kernel — crashes stay inside the VM, and you can just reboot it.

    ┌────────────────────────────────────────────────────┐
    │ HOST (your workstation)                            │
    │ Arch Linux · Kernel 6.18.8 · Your ZFS pools        │
    │                                                    │
    │  ┌──────────────────────────────────────────────┐  │
    │  │ QEMU/KVM VM                                  │  │
    │  │ Arch Linux · Kernel 6.18.7                   │  │
    │  │                                              │  │
    │  │ spl.ko + zfs.ko  (from src)                  │  │
    │  │ ZFS Test Suite  (file-backed loopback vdevs) │  │
    │  │                                              │  │
    │  │ If something crashes → only VM affected      │  │
    │  └─────────────────────────────────┬────────────┘  │
    │                         SSH :2222 ←┘               │
    └────────────────────────────────────────────────────┘

    What Is the Arch Linux Cloud Image?

    We use the official Arch Linux cloud image — a minimal, pre-built qcow2 disk image maintained by the Arch Linux project. It’s designed for cloud/VM environments and includes:

    • A minimal Arch Linux installation (no GUI, no bloat)
    • cloud-init support for automated provisioning (user creation, SSH keys, hostname)
    • A growable root filesystem (we resize it to 40G)
    • systemd-networkd for automatic DHCP networking

    This is NOT the “archzfs” project (archzfs.com provides prebuilt ZFS packages). We named our VM hostname “archzfs” for convenience, but we build ZFS entirely from source.

    The cloud-init seed image is a tiny ISO that tells cloud-init how to configure the VM on first boot — what user to create, what password to set, what hostname to use. On a real cloud provider, this comes from the metadata service; for local QEMU, we create it manually.

    Step-by-Step Setup

    Prerequisites (Host)

    # Install QEMU and tools
    sudo pacman -S qemu-full cdrtools
    # Optional: virt-manager for GUI management
    sudo pacman -S virt-manager libvirt dnsmasq
    sudo systemctl enable --now libvirtd
    sudo usermod -aG libvirt $USER

    1. Download and Prepare the Cloud Image

    mkdir ~/zfs-testvm && cd ~/zfs-testvm
    # Download the latest Arch Linux cloud image
    wget https://geo.mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg.qcow2
    # Resize to 40G (ZFS tests need space for file-backed vdevs)
    qemu-img resize Arch-Linux-x86_64-cloudimg.qcow2 40G

    2. Create the Cloud-Init Seed

    mkdir -p /tmp/seed
    # User configuration
    cat > /tmp/seed/user-data << 'EOF'
    #cloud-config
    hostname: archzfs
    ssh_pwauth: true
    users:
      - name: arch
        shell: /bin/bash
        sudo: ALL=(ALL) NOPASSWD:ALL
        lock_passwd: false
        plain_text_passwd: test123
    EOF
    # Instance metadata
    cat > /tmp/seed/meta-data << 'EOF'
    instance-id: archzfs-001
    local-hostname: archzfs
    EOF
    # Build the seed ISO
    mkisofs -output seed.img -volid cidata -joliet -rock /tmp/seed/

    3. Boot the VM

    qemu-system-x86_64 \
      -enable-kvm \
      -m 8G \
      -smp 8 \
      -drive file=Arch-Linux-x86_64-cloudimg.qcow2,if=virtio \
      -drive file=seed.img,if=virtio,format=raw \
      -nic user,hostfwd=tcp::2222-:22 \
      -nographic

    What each flag does:

    Flag                         Purpose
    -enable-kvm                  Use hardware virtualization (huge performance gain)
    -m 8G                        8GB RAM (ZFS ARC cache benefits from more)
    -smp 8                       8 virtual CPUs (adjust to your host)
    -drive ...qcow2,if=virtio    Boot disk with virtio for best I/O
    -drive ...seed.img           Cloud-init configuration
    -nic user,hostfwd=...        User-mode networking with SSH port forward
    -nographic                   Serial console (no GUI window needed)

    Login will appear on the serial console. Credentials: arch / test123.

    You can also SSH from another terminal:

    ssh -p 2222 arch@localhost

    4. Install Build Dependencies (Inside VM)

    sudo pacman -Syu --noconfirm \
    base-devel git autoconf automake libtool python \
    linux-headers libelf libaio openssl zlib \
    ksh bc cpio fio inetutils sysstat jq pax rsync \
    nfs-utils lsscsi xfsprogs parted perf

    5. Clone and Build ZFS

    # Clone YOUR fork (replace with your GitHub username)
    git clone https://github.com/YOUR_USERNAME/zfs.git
    cd zfs
    # Build everything
    ./autogen.sh
    ./configure --enable-debug
    make -j$(nproc)

    The build compiles:

    • Kernel modules (spl.ko, zfs.ko) against the running kernel headers
    • Userspace tools (zpool, zfs, zdb, etc.)
    • Test binaries and test scripts

    Build time: ~5-10 minutes with 8 vCPUs.

    Note: You’ll see many objtool warnings about spl_panic() and luaD_throw() missing __noreturn. These are known issues on newer kernels and don’t affect functionality.

    6. Load Modules and Run Tests

    # Load the ZFS kernel modules
    sudo scripts/zfs.sh
    # Verify modules are loaded
    lsmod | grep zfs
    # Run the FULL test suite (4-8 hours)
    scripts/zfs-tests.sh -v 2>&1 | tee /tmp/zts-full.txt
    # Or run a single test (for quick validation)
    scripts/zfs-tests.sh -v \
    -t /home/arch/zfs/tests/zfs-tests/tests/functional/cli_root/zpool_create/zpool_create_001_pos.ksh

    Important notes on zfs-tests.sh:

    • Do NOT run as root — the script uses sudo internally
    • The -t flag requires absolute paths to individual .ksh test files
    • Missing utilities net and pamtester are okay — only NFS/PAM tests will skip
    • The “Permission denied” warning at startup is harmless

    7. Extract and Analyze Results

    From your host machine:

    # Copy the summary log
    scp -P 2222 arch@localhost:/tmp/zts-full.txt ~/zts-full.txt
    # Copy detailed per-test logs
    scp -r -P 2222 arch@localhost:/var/tmp/test_results/ ~/zfs-test-results/

    Understanding the Results

    The test results summary looks like:

    Results Summary
    PASS 2847
    FAIL 12
    SKIP 43
    Running Time: 05:23:17

    What to look for:

    1. Compare against known failures — check the ZFS Test Suite Failures wiki
    2. Identify NEW failures — any FAIL not on the known list for your kernel version (a quick grep, shown below, pulls these out)
    3. Check the detailed logs — in /var/tmp/test_results/<timestamp>/ each test has stdout/stderr output
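
    For step 2, a quick grep pulls the failing tests out of the summary log, since each per-test result line carries its status in brackets:

    grep '\[FAIL\]' ~/zts-full.txt        # list the failing tests
    grep -c '\[FAIL\]' ~/zts-full.txt     # count them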

    Reporting Results

    If you find new failures, file a GitHub issue at openzfs/zfs with:

    Title: Test failure: <test_name> on Linux 6.18.7 (Arch Linux)
    **Environment:**
    - OS: Arch Linux (cloud image)
    - Kernel: 6.18.7-arch1-1
    - ZFS: built from master (commit <hash>)
    - VM: QEMU/KVM, 8 vCPU, 8GB RAM
    **Failed test:**
    <test name and path>
    **Test output:**
    <paste relevant log output>
    **Expected behavior:**
    Test should PASS (passes on kernel X.Y.Z / other distro)
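
    Most of those environment fields can be read straight from the VM (standard commands; /sys/module/zfs/version only exists while the module is loaded):

    uname -r                                # kernel version
    git -C ~/zfs rev-parse --short HEAD     # commit the build came from
    cat /sys/module/zfs/version             # version string of the loaded module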

    Tips and Tricks

    Snapshot the VM after setup to avoid repeating the build:

    # On host, after VM is set up and ZFS is built
    qemu-img snapshot -c "zfs-built" Arch-Linux-x86_64-cloudimg.qcow2
    # Restore later
    qemu-img snapshot -a "zfs-built" Arch-Linux-x86_64-cloudimg.qcow2
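
    Internal qcow2 snapshots should only be created or applied while the VM is powered off; touching the image from the host while QEMU has it open risks corruption. To list the snapshots a disk already contains:

    qemu-img snapshot -l Arch-Linux-x86_64-cloudimg.qcow2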

    Run a subset of tests by test group:

    # Run all zpool tests one at a time (each path goes to the documented -t flag)
    for t in /home/arch/zfs/tests/zfs-tests/tests/functional/cli_root/zpool_*/*.ksh; do
        scripts/zfs-tests.sh -v -t "$t"
    done
    # List tests matching a pattern, then pass the paths you want to -t
    find /home/arch/zfs/tests/zfs-tests/tests/functional -name "*.ksh" | grep snapshot | head -5

    Increase disk space if tests fail with ENOSPC:

    # On host (VM must be stopped)
    qemu-img resize Arch-Linux-x86_64-cloudimg.qcow2 +20G
    # Inside VM after reboot
    sudo growpart /dev/vda 3 # or whichever partition
    sudo resize2fs /dev/vda3
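
    Before resizing, confirm which partition actually holds the root filesystem and how full it is, since the partition number passed to growpart depends on the image layout:

    lsblk /dev/vda       # partition layout inside the VM
    df -h / /var/tmp     # the test suite writes heavily under /var/tmp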

    Suppress floppy drive errors (the harmless I/O error, dev fd0 messages):

    # Add to QEMU command line:
    -fda none

    This guide was written while setting up an OpenZFS test environment for kernel 6.18.7 on Arch Linux. The same approach works for any Linux distribution that provides cloud images — just swap the base image and package manager commands.

    OpenZFS Test VM Architecture

    QEMU/KVM + Arch Linux Cloud Image + ZFS from Source

    Host Machine
    • Hardware: Arch Linux · Kernel 6.18.8 · 24 cores
    • Hypervisor: QEMU 9.x + KVM (hardware virtualization)
    • VM disk: Arch-Linux-x86_64-cloudimg.qcow2 (resized to 40G)
    • Cloud-init seed: seed.img (ISO9660) → user, password, hostname
    • Network: user-mode networking · hostfwd :2222 → :22
    • Get results: scp -P 2222 arch@localhost:/var/tmp/test_results/ .

    The host reaches the guest over SSH on :2222 and over the serial console (ttyS0).

    QEMU VM (archzfs)
    • Guest OS: Arch Linux · Kernel 6.18.7 · 8 vCPU · 8GB RAM
    • Cloud-init: user arch · password test123 · NOPASSWD sudo
    • ZFS source (from fork): git clone github.com/YOUR_USER/zfs, then ./autogen.sh → ./configure --enable-debug → make -j8
    • ZFS kernel modules: scripts/zfs.sh → loads spl.ko + zfs.ko
    • ZFS test suite: scripts/zfs-tests.sh -v · uses loopback devices (file-vdev0..2)
    • Test results: /var/tmp/test_results/YYYYMMDDTHHMMSS/ · per-test logs with pass/fail/skip

    ⚠ Why a VM instead of Docker?

    ZFS tests need to load and unload kernel modules (spl.ko, zfs.ko). Docker containers share the host kernel — loading ZFS modules in a container affects your host system and could crash it. A QEMU/KVM VM has its own isolated kernel, so module crashes stay contained. The VM also provides loopback block devices for creating test zpools, which Docker can’t safely offer.
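
    To make the loopback point concrete: the suite's file-backed vdevs are just sparse files attached as loop block devices, something a container cannot do without reaching into the host's device namespace. A minimal illustration inside the VM (the filename is an arbitrary example, not one of the suite's own file-vdev paths):

    truncate -s 2G /var/tmp/vdev-demo
    sudo losetup -f --show /var/tmp/vdev-demo   # attach the file and print the /dev/loopN it got
    sudo losetup -a                             # list attached loop devices
    sudo losetup -d /dev/loop0                  # detach when done (use the device printed above)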

    Setup Flow

    1. Get cloud image: download the official Arch cloud image and resize the qcow2 to 40G with qemu-img resize.
    2. Create cloud-init: write the user-data + meta-data YAML and build the ISO seed with mkisofs.
    3. Boot VM: qemu-system-x86_64 -enable-kvm -m 8G -smp 8 with SSH forwarded on port 2222.
    4. Install deps: pacman -S base-devel git ksh bc fio linux-headers and the other test dependencies.
    5. Build ZFS: clone your fork → autogen.sh → configure → make -j8.
    6. Load & test: scripts/zfs.sh loads the modules; zfs-tests.sh -v runs the suite (4-8 h).
    7. Extract results: SCP the results to the host, compare against known failures, report regressions on GitHub.

  • FreeBSD 14.4 BETA 1 available!

    February 8th, 2026
  • Git 2.53 Released: Performance Boosts and Rust Now Enabled by Default

    February 8th, 2026

    Git 2.53 has officially landed, bringing another round of performance optimizations, improved error messages, and bug fixes — all while inching closer to the anticipated Git 3.0 release, tentatively expected around the end of 2026.

    What’s New in Git 2.53

    This latest feature release delivers performance improvements across various Git sub-commands and operations, along with polished error messages and enhancements to several sub-commands. As with recent releases, the focus remains on refining the developer experience while laying the groundwork for the major 3.0 milestone.

    Rust Is Now Default-Enabled

    The most notable change in Git 2.53 is that both the Makefile and Meson build systems now enable Rust support by default. This means builds will fail out of the box if Rust isn’t available on the host — though developers can still explicitly disable it via build flags for the time being.

    This follows a deliberate three-stage rollout:

    1. Git 2.52 — Rust support was auto-detected by Meson and disabled in the Makefile, allowing the project to establish the initial infrastructure.
    2. Git 2.53 (current) — Both build systems default-enable Rust. Builds break without it unless explicitly disabled.
    3. Git 3.0 — Rust becomes mandatory. The opt-out build flags will be removed entirely.

    Why Rust?

    The push toward Rust in Git mirrors a broader trend across foundational open-source projects (the Linux kernel, Coreutils, zlib) that are adopting Rust for its memory safety guarantees and strong performance characteristics. For a tool as universally relied upon as Git, reducing the surface area for memory-related bugs is a significant long-term investment.

    Looking Ahead

    With Git 3.0 on the horizon, expect Rust to become a hard build requirement. If you maintain custom Git builds or packaging pipelines, now is the time to ensure your toolchains include a supported Rust compiler. The transition window is closing.

    Full release details are available in the official Git mailing list announcement.

  • TrueNAS Plans for 2026

    February 8th, 2026
    Summary · February 4, 2026

    TrueNAS Plans for 2026

    iXsystems lays out its roadmap for the year — an annual release cadence, cloud-style fleet management, and hardware pushing 1 PB per rack unit.

    At a glance: ~500K systems deployed · 60%+ Fortune 500 usage · 1 PB of NVMe per 1U

    📍 Where TrueNAS Stands Today

    25.10 “Goldeye” is the recommended version for new deployments, now at GA. 25.04 “Fangtooth” remains best for mission-critical stability. 24.x & 13.0 are end-of-life — no further updates.

    🚀 TrueNAS 26 — Annual Releases, No More Fish

    A shift to annual releases with simple version numbers (26.1, 26.2…) instead of fish code names. Beta arrives in April 2026 with an extended development cycle for more thorough testing and predictable upgrades.

    Highlights: OpenZFS 2.4 · Hybrid Pools · Ransomware Detection · LXC Containers · Webshare Search · Kernel 6.18 LTS

    ☁️ TrueNAS Connect — Cloud-Style Fleet Management

    Unified management for multiple TrueNAS systems, data stays on-prem. Three tiers rolling out through the year:

    • Foundation (free): headless setup & config.
    • Plus (Q1, subscription): replication, Webshare, ransomware protection.
    • Business (Q2): HA systems, large fleets, MSPs.

    Early adopters get 50% off the first year.

    ⚡ Hardware — Terabit Networking & Petabyte Density

    The R60 brings 5th-gen hardware with 400GbE and RDMA for AI, video editing, and data science. H-Series hybrid systems mix NVMe and HDDs at 80% lower cost per TB than all-flash.

    OpenZFS 2.4 adds intelligent tiering — hot data pinned to flash, cold data on spinning disk. With 122TB SSDs now available, a single 1U can house over 1 PB of NVMe storage.

    🎯 The Bottom Line

    The theme is clear: own your data. Predictable costs, no vendor lock-in, open-source foundations you can verify. TrueNAS 26 simplifies the release model, Connect simplifies fleet management, and the hardware lineup covers everything from edge deployments to petabyte-scale AI workloads.

    Original article: TrueNAS Plans for 2026: Building on Your Success — truenas.com
  • TrueNAS 26

    February 7th, 2026
  • Anki Sync Server Enhanced — Now Available as a TrueNAS App

    February 7th, 2026
    🎴 Open Source · Self-Hosted

    Take full control of your Anki flashcard syncing. A self-hosted sync server with user management, TLS, backups, and metrics — packaged for one-click install on TrueNAS.

    Why Self-Host Your Anki Sync?

    Anki is the gold standard for spaced repetition learning — used by medical students, language learners, and lifelong learners worldwide. By default, Anki syncs through AnkiWeb, Anki’s official cloud service. But there are good reasons to run your own sync server: full ownership of your data, no upload limits, the ability to share a server with a study group, and the peace of mind that comes with keeping everything on your own hardware.

    Anki Sync Server Enhanced wraps the official Anki sync binary in a production-ready Docker image with features you’d expect from a proper self-hosted service — and it’s now submitted to the TrueNAS Community App Catalog for one-click deployment.

    What’s Included

    🔐 User Management

    Create sync accounts via environment variables. No database setup required.

    🔒 Optional TLS

    Built-in Caddy reverse proxy for automatic HTTPS with Let’s Encrypt or custom certs.

    💾 Automated Backups

    Scheduled backups with configurable retention and S3-compatible storage support.

    📊 Metrics & Dashboard

    Prometheus-compatible metrics endpoint and optional web dashboard for monitoring.

    🐳 Docker Native

    Lightweight Debian-based image. Runs as non-root. Healthcheck included.

    ⚡ TrueNAS Ready

    Submitted to the Community App Catalog. Persistent storage, configurable ports, resource limits.

    How It Works

    Anki Desktop / Mobile → Anki Sync Server Enhanced → TrueNAS Storage

    Your Anki clients sync directly to your TrueNAS server over your local network or via Tailscale/WireGuard.

    The server runs the official anki-sync-server Rust binary — the same code that powers AnkiWeb — inside a hardened container. Point your Anki desktop or mobile app at your server’s URL, and syncing works exactly like it does with AnkiWeb, just on your own infrastructure.

    TrueNAS Installation

    Once the app is accepted into the Community train, installation is straightforward from the TrueNAS UI. In the meantime, you can deploy it as a Custom App using the Docker image directly.

    PR Status: The app has been submitted to the TrueNAS Community App Catalog via PR #4282 and is awaiting review. Track progress on the app request issue #4281.

    To deploy as a Custom App right now, use these settings:

    Setting                    Value
    Image                      chrislongros/anki-sync-server-enhanced
    Tag                        25.09.2-1
    Port                       8080 (or any available port)
    Environment: SYNC_USER1    yourname:yourpassword
    Environment: SYNC_PORT     Must match your chosen port
    Storage: /data             Host path or dataset for persistent data
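
    Outside the TrueNAS UI, the same settings map onto a plain docker run (a sketch built from the values in the table above; the host path /mnt/tank/anki-data and the credentials are placeholders to replace):

    docker run -d \
      --name anki-sync \
      -p 8080:8080 \
      -e SYNC_USER1=yourname:yourpassword \
      -e SYNC_PORT=8080 \
      -v /mnt/tank/anki-data:/data \
      chrislongros/anki-sync-server-enhanced:25.09.2-1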

    Connecting Your Anki Client

    After the server is running, configure your Anki client to use it. In Anki Desktop, go to Tools → Preferences → Syncing and set the custom sync URL to your server address, for example http://your-truenas-ip:8080. On AnkiDroid, the setting is under Settings → Sync → Custom sync server. On AnkiMobile (iOS), look under Settings → Syncing → Custom Server.

    Then simply sync as usual — your Anki client will talk to your self-hosted server instead of AnkiWeb.
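
    A quick reachability check from any machine on the network (the server has no landing page, so an HTTP 404 here is the expected answer from a healthy server, as noted in the packaging section below):

    curl -s -o /dev/null -w '%{http_code}\n' http://your-truenas-ip:8080/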

    Building It: Lessons from TrueNAS App Development

    Packaging a Docker image as a TrueNAS app turned out to involve a few surprises worth sharing for anyone considering contributing to the catalog.

    TrueNAS apps use a Jinja2 templating system backed by a Python rendering library — not raw docker-compose files. Your template calls methods like Render(values), c1.add_port(), and c1.healthcheck.set_test() which generate a validated compose file at deploy time. This means you get built-in support for permissions init containers, resource limits, and security hardening for free.

    One gotcha: TrueNAS runs containers as UID/GID 568 (the apps user), not root. If your entrypoint writes to files owned by a different user, it will fail silently or crash. We hit this with a start_time.txt write and had to make it non-fatal. Another: the Anki sync server returns a 404 on / (it has no landing page), so the default curl --fail healthcheck marks the container as unhealthy. Switching to a TCP healthcheck solved it cleanly.

    The TrueNAS CI tooling is solid — a single ci.py script renders your template, validates the compose output, spins up containers, and checks health status. If the healthcheck fails, it dumps full container logs and inspect data, making debugging fast.

    Get Involved

    Ready to Self-Host Your Anki Sync?

    Deploy it on TrueNAS today or star the project on GitHub to follow development.

    GitHub Repository · Docker Hub · TrueNAS PR #4282
    Tags: Anki · TrueNAS · Self-Hosted · Docker · Spaced Repetition · Open Source · Homelab