Tag: TrueNAS
-
TrueNAS 25.10.2 Released: What’s New
iXsystems has released TrueNAS 25.10.2, a maintenance update to the 25.10 branch. If you’re running TrueNAS Scale on the Early Adopter channel, this is a recommended update — it fixes several critical issues including an upgrade path bug that could leave systems unbootable.
Critical Fixes
Upgrade failure fix (NAS-139541). Some systems upgrading from TrueNAS 25.04 to 25.10 encountered a “No space left on device” error during boot variable preparation, leaving the system unbootable after the failed attempt. This is fixed in 25.10.2.
SMB service startup after upgrade (NAS-139076). Systems with legacy ACL configurations from older TrueNAS versions could not start the SMB service after upgrading to 25.10.1. The update now automatically converts legacy permission formats during service initialization.
Disk replacement validation (NAS-138678). A frustrating bug rejected replacement drives with identical capacity to the failed drive, showing a “device is too small” error. Fixed — identical capacity replacements now work correctly.
Performance Improvements
NFS performance for NFSv4 clients (NAS-139128). Support for STATX_CHANGE_COOKIE has been added, surfacing ZFS sequence numbers to NFS clients via knfsd. Previously, the system synthesized change IDs based on ctime, which could fail to increment consistently due to kernel timer coarseness. This improves client attribute cache invalidation and reduces unnecessary server requests.
ZFS pool import performance (NAS-138879). Async destroy operations — which can run during pool import — now have a time limit per transaction group. Pool imports that previously stalled due to prolonged async destroy operations will complete significantly faster.
Containerized app CPU usage (NAS-139089). Background CPU usage from Docker stats collection and YAML processing has been reduced by optimizing asyncio_loop operations that were holding the Global Interpreter Lock during repeated container inspections.
Networking
Network configuration lockout fix (NAS-139575). Invalid IPv6 route entries in the routing table could block access to network settings, app management, and bug reporting. The system now handles invalid route entries gracefully.
Network bridge creation fix (NAS-139196). Pydantic validation errors were preventing bridge creation through the standard workflow of removing IPs from an interface, creating a bridge, and reassigning those IPs.
IPv6 Kerberos fix (NAS-139734). Active Directory authentication failed when using IPv6 addresses for Kerberos Key Distribution Centers. IPv6 addresses are now properly formatted with square brackets in krb5.conf.
SMB Hosts Allow/Deny controls (NAS-138814). IP-based access restrictions are now available for SMB shares across all relevant purpose presets. This update also adds the ability to synchronize Kerberos keytab SPNs with Active Directory updates.
UI and Cloud
Dashboard storage widget (NAS-138705). Secondary storage pools were showing “Unknown” for used and free space in the Dashboard widget. Fixed.
Cloud Sync tasks invisible after CORE → SCALE upgrade (NAS-138886). Tasks were functional via CLI but invisible in the web UI due to a data inconsistency where the bwlimit field contained empty objects instead of empty arrays.
S3 endpoint validation (NAS-138903). Cloud Sync tasks now validate that S3 endpoints include the required https:// protocol prefix upfront, with a clear error message instead of the unhelpful “Invalid endpoint” response.
Session expiry fix (NAS-138467). Users were being unexpectedly logged out during active operations despite configured session timeout settings. Page refresh (F5) was also triggering the login screen during active sessions. Both are now fixed.
Error notifications showing placeholder text (NAS-139010). Error notifications were displaying “%(err)s Warning” instead of actual error messages.
Users page now shows Directory Services users by default (NAS-139073). Directory Services users now appear in the default view without requiring a manual filter change.
SSH access removal fix (NAS-139130). Clearing the SSH Access option appeared to save successfully, but the SSH indicator persisted in the user list. SSH access is now properly disabled through the UI.
Certificate management for large DNs (NAS-139056). Certificates with Distinguished Names exceeding 1024 characters — typically those with many Subject Alternative Names — can now be properly imported and managed.
Notable Security Change
The root account’s group membership is now locked to builtin_administrators and cannot be modified through the UI. This prevents accidental removal of privileges that could break scheduled tasks, cloud sync, and cron jobs. To disable root UI access, use the Disable Password option in Credentials → Local Users instead.
Upgrade
Update via System → Update in the web UI, or download from truenas.com. Full release notes and changelog are available at the TrueNAS Documentation Hub.



https://forums.truenas.com/t/truenas-25-10-2-is-now-available/63778
-
How a failed nightly update left my TrueNAS server booting into an empty filesystem — and the two bugs responsible.
I run TrueNAS Scale on an Aoostar WTR Max as my homelab server, with dozens of Docker containers for everything from Immich to Jellyfin. I like to stay on the nightly builds to get early access to new features and contribute bug reports when things go wrong. Today, things went very wrong.
The Update Failure
It started innocently enough. I kicked off the nightly update from the TrueNAS UI, updating from 26.04.0-MASTER-20260210-020233 to the latest 20260213 build. Instead of a smooth update, I got this:
error
[EFAULT] Error: Command ['zfs', 'destroy', '-r', 'boot-pool/ROOT/26.04.0-MASTER-20260213-020146-1'] failed with exit code 1: cannot unmount '/tmp/tmpo8dbr91e': pool or dataset is busy
The update process was trying to clean up a previous boot environment but couldn’t unmount a temporary directory it had created. No big deal, I thought — I’ll just clean it up manually.
Down the Rabbit Hole
I checked what was holding the mount open:
bash
$ fuser -m /tmp/tmpo8dbr91e      # nothing
$ lsof +D /tmp/tmpo8dbr91e       # nothing (just Docker overlay warnings)
Nothing was using it. A force unmount also failed:
bash
$ sudo umount -f /tmp/tmpo8dbr91e
umount: /tmp/tmpo8dbr91e: target is busy.
Only a lazy unmount worked:
bash
$ sudo umount -l /tmp/tmpo8dbr91e
So I unmounted it and destroyed the stale boot environment manually. Then I retried the update. Same error, different temp path. Unmount, destroy, retry. Same error again. On each attempt, the updater would mount a new temporary directory, fail to unmount it, and bail out.
I even tried stopping Docker before the update, thinking the overlay mounts might be interfering. No luck.
The Real Problem
Frustrated, I rebooted the server thinking a clean slate might help. The server didn’t come back. After 10 minutes of pinging with no response, I plugged in a monitor and saw this:
console
Mounting 'boot-pool/ROOT/26.04.0-MASTER-20260213-020146' on '/root/' ... done.
Begin: Running /scripts/local-bottom ... done.
Begin: Running /scripts/nfs-bottom ... done.
run-init: can't execute '/sbin/init': No such file or directory
Target filesystem doesn't have requested /sbin/init.
run-init: can't execute '/etc/init': No such file or directory
run-init: can't execute '/bin/init': No such file or directory
run-init: can't execute '/bin/sh': No such file or directory
No init found. Try passing init= bootarg.

BusyBox v1.37.0 (Debian 1:1.37.0-6+b3) built-in shell (ash)
Enter 'help' for a list of built-in commands.

(initramfs)
The system had booted into the incomplete boot environment from the failed update — an empty shell with no operating system in it. The update process had set this as the default boot environment before it was fully built.
The Recovery
Fortunately, ZFS boot environments make this recoverable. I rebooted again, caught the GRUB menu, and selected my previous working boot environment (20260210-020233). After booting successfully, I locked in the correct boot environment as the default:
bash
$ sudo zpool set bootfs=boot-pool/ROOT/26.04.0-MASTER-20260210-020233 boot-pool
Then cleaned up the broken environment:
bash
$ sudo zfs destroy -r boot-pool/ROOT/26.04.0-MASTER-20260213-020146
Server back to normal.
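If you want to double-check what the system will boot next time, the pool property and the list of boot environments tell you everything (dataset names here match this post; yours will differ):

```bash
# Show the boot environment GRUB will use by default
zpool get -H -o value bootfs boot-pool

# List all boot environments on the boot pool with their space usage
sudo zfs list -r -o name,used,mountpoint boot-pool/ROOT
```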
Two Bugs, One Update
There are actually two separate bugs here:
Bug 1 — Stale Mount Cleanup. The update process mounts the boot environment into a temp directory but can’t clean it up when something fails. umount -f doesn’t work; only umount -l does. And since each retry creates a new temp mount, the problem is self-perpetuating.
Bug 2 — Premature Bootfs Switch (Critical). This is the dangerous one. The updater sets the new boot environment as the GRUB default before it’s fully populated. When the update fails mid-way, you’re left with a system that will boot into an empty filesystem on the next reboot. If you don’t have physical console access and a keyboard handy, you could be in serious trouble.
What Happens During a Failed Update
Update starts → Sets new bootfs → Build fails → Reboot = initramfs
The Fix Should Be Simple
The updater should only set the new boot environment as the default after the update is verified complete. And it should use umount -l as a fallback when umount -f fails, since the standard force unmount clearly isn’t sufficient here.
I’ve filed this as NAS-139794 on the TrueNAS Jira. If you’re running nightly builds, be aware of this issue — and make sure you have console access to your server in case you need to select a different boot environment from GRUB.
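As a rough illustration of that fallback (the mountpoint name is just an example; the updater generates a random /tmp path each run):

```bash
# Try a force unmount first, fall back to a lazy unmount if the target stays busy
MNT="/tmp/tmpo8dbr91e"   # example stale mountpoint left behind by a failed update
sudo umount -f "$MNT" 2>/dev/null || sudo umount -l "$MNT"
```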
Lessons Learned
Running nightly builds is inherently risky, and I accept that. But an update failure should never leave a system unbootable. The whole point of ZFS boot environments is to provide a safety net — but that net has a hole when the updater switches the default before the new environment is ready.
In the meantime, keep a monitor and keyboard accessible for your TrueNAS box, and remember: if you ever drop to an initramfs shell after an update, your data is fine. Just reboot into GRUB and pick the previous boot environment.
-
How to set up a disposable VM for running the ZFS test suite on bleeding-edge kernels
Why This Matters
OpenZFS supports a wide range of Linux kernels, but regressions can slip through on newer ones. Arch Linux ships the latest stable kernels (6.18+ at the time of writing), making it an ideal platform for catching issues before they hit other distributions.
The ZFS test suite is the project’s primary quality gate — it exercises thousands of scenarios across pool creation, send/receive, snapshots, encryption, scrub, and more. Running it on your kernel version and reporting results is one of the most valuable contributions you can make, even without writing any code.
Why a VM, Not Docker?
This is the key architectural decision. ZFS is a kernel module — the test suite needs to:
- Load and unload spl.ko and zfs.ko kernel modules
- Create and destroy loopback block devices for test zpools
- Exercise kernel-level filesystem operations (mount, unmount, I/O)
- Potentially crash the kernel if a bug is triggered
Docker containers share the host kernel. If you load ZFS modules inside a container, they affect your entire host system. A crashing test could take down your workstation. With a QEMU/KVM virtual machine, you get a fully isolated kernel — crashes stay inside the VM, and you can just reboot it.
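A quick sanity check before loading test modules: make sure you are actually inside the guest, not on the host. Both commands below are standard systemd/coreutils tools:

```bash
# Prints "kvm" inside a QEMU/KVM guest, "none" on bare metal
systemd-detect-virt

# Should show the guest kernel (e.g. 6.18.7-arch1-1), not the host's
uname -r
```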
HOST (your workstation)
  Arch Linux · Kernel 6.18.8 · Your ZFS pools
  │
  └── QEMU/KVM VM (reachable via SSH :2222)
        Arch Linux · Kernel 6.18.7
        ├── spl.ko + zfs.ko (built from source)
        └── ZFS Test Suite (file-backed loopback vdevs)
        If something crashes → only the VM is affected
What Is the Arch Linux Cloud Image?
We use the official Arch Linux cloud image — a minimal, pre-built qcow2 disk image maintained by the Arch Linux project. It’s designed for cloud/VM environments and includes:
- A minimal Arch Linux installation (no GUI, no bloat)
- cloud-init support for automated provisioning (user creation, SSH keys, hostname)
- A growable root filesystem (we resize it to 40G)
- systemd-networkd for automatic DHCP networking
This is NOT the “archzfs” project (archzfs.com provides prebuilt ZFS packages). We named our VM hostname “archzfs” for convenience, but we build ZFS entirely from source.
The cloud-init seed image is a tiny ISO that tells cloud-init how to configure the VM on first boot — what user to create, what password to set, what hostname to use. On a real cloud provider, this comes from the metadata service; for local QEMU, we create it manually.
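After the first boot you can confirm that cloud-init actually consumed the seed (run inside the VM; the hostname comes from the meta-data we write in step 2):

```bash
# Overall cloud-init result for this boot
cloud-init status --long

# Hostname should match the seed's local-hostname (archzfs in this guide)
hostnamectl
```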
Step-by-Step Setup
Prerequisites (Host)
# Install QEMU and tools
sudo pacman -S qemu-full cdrtools
# Optional: virt-manager for GUI management
sudo pacman -S virt-manager libvirt dnsmasq
sudo systemctl enable --now libvirtd
sudo usermod -aG libvirt $USER
1. Download and Prepare the Cloud Image
mkdir ~/zfs-testvm && cd ~/zfs-testvm
# Download the latest Arch Linux cloud image
wget https://geo.mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg.qcow2
# Resize to 40G (ZFS tests need space for file-backed vdevs)
qemu-img resize Arch-Linux-x86_64-cloudimg.qcow2 40G
2. Create the Cloud-Init Seed
mkdir -p /tmp/seed

# User configuration
cat > /tmp/seed/user-data << 'EOF'
#cloud-config
hostname: archzfs
users:
  - name: arch
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    lock_passwd: false
    plain_text_passwd: test123
ssh_pwauth: true
EOF

# Instance metadata
cat > /tmp/seed/meta-data << 'EOF'
instance-id: archzfs-001
local-hostname: archzfs
EOF

# Build the seed ISO
mkisofs -output seed.img -volid cidata -joliet -rock /tmp/seed/
3. Boot the VM
qemu-system-x86_64 \
  -enable-kvm \
  -m 8G \
  -smp 8 \
  -drive file=Arch-Linux-x86_64-cloudimg.qcow2,if=virtio \
  -drive file=seed.img,if=virtio,format=raw \
  -nic user,hostfwd=tcp::2222-:22 \
  -nographic
What each flag does:
| Flag | Purpose |
| --- | --- |
| -enable-kvm | Use hardware virtualization (huge performance gain) |
| -m 8G | 8GB RAM (ZFS ARC cache benefits from more) |
| -smp 8 | 8 virtual CPUs (adjust to your host) |
| -drive ...qcow2,if=virtio | Boot disk with virtio for best I/O |
| -drive ...seed.img | Cloud-init configuration |
| -nic user,hostfwd=... | User-mode networking with SSH port forward |
| -nographic | Serial console (no GUI window needed) |

Login will appear on the serial console. Credentials: arch / test123.
You can also SSH from another terminal:
ssh -p 2222 arch@localhost
4. Install Build Dependencies (Inside VM)
sudo pacman -Syu --noconfirm \
  base-devel git autoconf automake libtool python \
  linux-headers libelf libaio openssl zlib \
  ksh bc cpio fio inetutils sysstat jq pax rsync \
  nfs-utils lsscsi xfsprogs parted perf
5. Clone and Build ZFS
# Clone YOUR fork (replace with your GitHub username)
git clone https://github.com/YOUR_USERNAME/zfs.git
cd zfs

# Build everything
./autogen.sh
./configure --enable-debug
make -j$(nproc)
The build compiles:
- Kernel modules (spl.ko, zfs.ko) against the running kernel headers
- Userspace tools (zpool, zfs, zdb, etc.)
- Test binaries and test scripts
Build time: ~5-10 minutes with 8 vCPUs.
Note: You’ll see many objtool warnings about spl_panic() and luaD_throw() missing __noreturn. These are known issues on newer kernels and don’t affect functionality.
6. Load Modules and Run Tests
# Load the ZFS kernel modules
sudo scripts/zfs.sh

# Verify modules are loaded
lsmod | grep zfs

# Run the FULL test suite (4-8 hours)
scripts/zfs-tests.sh -v 2>&1 | tee /tmp/zts-full.txt

# Or run a single test (for quick validation)
scripts/zfs-tests.sh -v \
  -t /home/arch/zfs/tests/zfs-tests/tests/functional/cli_root/zpool_create/zpool_create_001_pos.ksh
Important notes on zfs-tests.sh:
- Do NOT run as root — the script uses sudo internally
- The -t flag requires absolute paths to individual .ksh test files
- Missing utilities net and pamtester are okay — only NFS/PAM tests will skip
- The “Permission denied” warning at startup is harmless
7. Extract and Analyze Results
From your host machine:
# Copy the summary log
scp -P 2222 arch@localhost:/tmp/zts-full.txt ~/zts-full.txt

# Copy detailed per-test logs
scp -r -P 2222 arch@localhost:/var/tmp/test_results/ ~/zfs-test-results/
Understanding the Results
The test results summary looks like:
Results Summary
PASS   2847
FAIL     12
SKIP     43
Running Time: 05:23:17
What to look for:
- Compare against known failures — check the ZFS Test Suite Failures wiki
- Identify NEW failures — any FAIL not on the known list for your kernel version
- Check the detailed logs — in /var/tmp/test_results/<timestamp>/ each test has stdout/stderr output (see the quick grep below)
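A quick, admittedly crude way to pull failures out of the summary log (the exact log format can vary slightly between ZFS releases):

```bash
# List failing tests from the captured summary log
grep 'FAIL' /tmp/zts-full.txt | sort | uniq -c | sort -rn | head

# Then drill into the matching per-test logs
ls /var/tmp/test_results/*/
```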
Reporting Results
If you find new failures, file a GitHub issue at openzfs/zfs with:
Title: Test failure: <test_name> on Linux 6.18.7 (Arch Linux)

**Environment:**
- OS: Arch Linux (cloud image)
- Kernel: 6.18.7-arch1-1
- ZFS: built from master (commit <hash>)
- VM: QEMU/KVM, 8 vCPU, 8GB RAM

**Failed test:**
<test name and path>

**Test output:**
<paste relevant log output>

**Expected behavior:**
Test should PASS (passes on kernel X.Y.Z / other distro)
Snapshot the VM after setup to avoid repeating the build:
# On host, after VM is set up and ZFS is built
qemu-img snapshot -c "zfs-built" Arch-Linux-x86_64-cloudimg.qcow2

# Restore later
qemu-img snapshot -a "zfs-built" Arch-Linux-x86_64-cloudimg.qcow2
Run a subset of tests by test group:
# All zpool tests
for t in /home/arch/zfs/tests/zfs-tests/tests/functional/cli_root/zpool_*/*.ksh; do
  echo "$t"
done

# Run tests matching a pattern
find /home/arch/zfs/tests/zfs-tests/tests/functional -name "*.ksh" | grep snapshot | head -5
Increase disk space if tests fail with ENOSPC:
# On host (VM must be stopped)
qemu-img resize Arch-Linux-x86_64-cloudimg.qcow2 +20G

# Inside VM after reboot
sudo growpart /dev/vda 3   # or whichever partition
sudo resize2fs /dev/vda3
Suppress floppy drive errors (the harmless I/O error, dev fd0 messages):
# Add to QEMU command line:
-fda none
This guide was written while setting up an OpenZFS test environment for kernel 6.18.7 on Arch Linux. The same approach works for any Linux distribution that provides cloud images — just swap the base image and package manager commands.
OpenZFS Test VM Architecture
QEMU/KVM + Arch Linux Cloud Image + ZFS from Source
Host Machine
- Hardware: Arch Linux · Kernel 6.18.8 · 24 cores
- Hypervisor: QEMU 9.x + KVM (hardware virtualization)
- VM Disk: Arch-Linux-x86_64-cloudimg.qcow2 (resized 40G)
- Cloud-Init Seed: seed.img (ISO9660) → user, password, hostname
- Network: User-mode networking · hostfwd :2222→:22
- Get Results: scp -P 2222 arch@localhost:/var/tmp/test_results/ .

SSH :2222 ⇄ serial ttyS0

QEMU VM (archzfs)
- Guest OS: Arch Linux · Kernel 6.18.7 · 8 vCPU · 8GB RAM
- Cloud-Init: User: arch · Pass: test123 · NOPASSWD sudo
- ZFS Source (from fork): git clone github.com/YOUR_USER/zfs
  ./autogen.sh → ./configure --enable-debug → make -j8
- ZFS Kernel Modules: scripts/zfs.sh → loads spl.ko + zfs.ko
- ZFS Test Suite: scripts/zfs-tests.sh -v
  Uses loopback devices (file-vdev0..2)
- Test Results: /var/tmp/test_results/YYYYMMDDTHHMMSS/
  Per-test logs with pass/fail/skip

⚠ Why a VM instead of Docker?
ZFS tests need to load and unload kernel modules (spl.ko, zfs.ko). Docker containers share the host kernel — loading ZFS modules in a container affects your host system and could crash it. A QEMU/KVM VM has its own isolated kernel, so module crashes stay contained. The VM also provides loopback block devices for creating test zpools, which Docker can’t safely offer.
Setup Flow
1. Get Cloud Image — Download official Arch cloud image. Resize qcow2 to 40G with qemu-img resize.
2. Create Cloud-Init — Write user-data + meta-data YAML. Build ISO seed with mkisofs.
3. Boot VM — qemu-system-x86_64 -enable-kvm -m 8G -smp 8 with SSH forward on 2222.
4. Install Deps — pacman -S base-devel git ksh bc fio linux-headers and test dependencies.
5. Build ZFS — Clone fork → autogen.sh → configure → make -j8
6. Load & Test — scripts/zfs.sh loads modules. zfs-tests.sh -v runs the suite (4-8h).
7. Extract Results — SCP results to host. Compare against known failures. Report regressions on GitHub.
-
I recently ran into a performance issue on my TrueNAS SCALE 25.10.1 system where the server felt sluggish despite low CPU usage. The system was running Docker-based applications, and at first glance nothing obvious looked wrong. The real problem turned out to be high iowait.
What iowait actually means
In Linux, iowait represents the percentage of time the CPU is idle while waiting for I/O operations (usually disk). High iowait doesn’t mean the CPU is busy — it means the CPU is stuck waiting on storage.
In top, this appears as wa:
%Cpu(s): 1.8 us, 1.7 sy, 0.0 ni, 95.5 id, 0.2 wa, 0.0 hi, 0.8 si, 0.0 st
Under normal conditions, iowait should stay very low (usually under 1–2%). When it starts climbing higher, the system can feel slow even if CPU usage looks fine.
Confirming the issue with iostat
To get a clearer picture, I used iostat, which shows per-disk activity and latency:
iostat -x 1
This immediately showed the problem. One or more disks had:
- Very high %util (near or at 100%)
- Elevated await times
- Consistent read/write pressure
At that point it was clear the bottleneck was storage I/O, not CPU or memory.
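To see which processes are actually generating the I/O, per-process disk statistics help. pidstat comes from the same sysstat package as iostat; iotop is an alternative if you have it installed:

```bash
# Per-process read/write throughput, one sample per second
pidstat -d 1

# One 5-second snapshot (easier to read than a scrolling stream)
pidstat -d 5 1
```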
Tracking it down to Docker services
This system runs several Docker-based services. Using top alongside iostat, I noticed disk activity drop immediately when certain services were stopped.
In particular, high I/O was coming from applications that:
- Continuously read/write large files
- Perform frequent metadata operations
- Maintain large active datasets
Examples included downloaders, media managers, and backup-related containers.
Stopping services to confirm
To confirm the cause, I stopped Docker services one at a time and watched disk metrics:
iostat -x 1
Each time a heavy I/O service was stopped, iowait dropped immediately. Once the worst offender was stopped, iowait returned to normal levels and the system became responsive again.
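If you want to make that process a bit more systematic, a small loop works too; the container names below are just examples, so substitute your own apps:

```bash
# Stop one container at a time, sample disk load, then start it again
for c in qbittorrent sonarr immich; do
  echo "== stopping $c =="
  docker stop "$c" >/dev/null
  iostat -x 1 5        # watch %util and await for a few seconds
  docker start "$c" >/dev/null
done
```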
Why the system looked “fine” at first
This was tricky because:
- CPU usage was low
- Memory usage looked reasonable
- The web UI was reachable but sluggish
Without checking iostat, it would have been easy to misdiagnose this as a CPU or RAM issue.
Lessons learned
- High iowait can cripple performance even when CPU is idle
- top alone is not enough — use iostat -x
- Docker workloads can silently saturate disks
- Stopping services one by one is an effective diagnostic technique
Final takeaway
On TrueNAS SCALE 25.10.1 with Docker, high iowait was the real cause of my performance issues. The fix wasn’t a reboot, more CPU, or more RAM — it was identifying and controlling disk-heavy services.
If your TrueNAS server feels slow but CPU usage looks fine, check iowait and run iostat. The disk may be the real bottleneck.
-
New version that is “battlefield tested” on my home server!

https://github.com/chrislongros/docker-tailscale-serve-preserve/releases/tag/v1.1.0
https://github.com/chrislongros/docker-tailscale-serve-preserve/tree/main
-
I would like to share an AI-generated script that I successfully used to automate my TrueNAS certificates with Tailscale.
This guide shows how to automatically use a Tailscale HTTPS certificate for the TrueNAS SCALE Web UI, when Tailscale runs inside a Docker container.
Overview
What this does
- Runs tailscale cert inside a Docker container
- Writes the cert/key to a host bind-mount
- Imports the cert into TrueNAS
- Applies it to the Web UI
- Restarts the UI
- Runs automatically via cron
Requirements
- TrueNAS SCALE
- Docker
- A running Tailscale container (tailscaled)
- A host directory bind-mounted into the container at /certs
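For reference, a Tailscale container with such a bind mount could be started roughly like this (a minimal sketch only, with placeholder auth key and pool name; the TrueNAS Tailscale app normally manages the container for you, and Steps 1–2 below cover the mount itself):

```bash
# Minimal sketch — adjust names, auth key, and paths to your setup
docker run -d --name tailscaled \
  --cap-add NET_ADMIN --device /dev/net/tun \
  -e TS_AUTHKEY="tskey-auth-XXXXXXXXXXXX" \
  -e TS_STATE_DIR=/var/lib/tailscale \
  -v tailscale-state:/var/lib/tailscale \
  -v /mnt/<pool>/Applications/tailscale-certs:/certs \
  tailscale/tailscale:latest
```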
Step 1 – Create a cert directory on the host
Create a dataset or folder on your pool (example):
mkdir -p /mnt/<pool>/Applications/tailscale-certs
chmod 700 /mnt/<pool>/Applications/tailscale-certs
Step 2 – Bind-mount it into the Tailscale container
Your Tailscale container must mount the host directory to /certs.
Example (conceptually):
Host path: /mnt/<pool>/Applications/tailscale-certs
Container: /certs
This is required for tailscale cert to write files that TrueNAS can read.
Step 3 – Create the automation script (generic)
Save this as:
/mnt/<pool>/scripts/import_tailscale_cert.sh
Script:
#!/bin/bash
set -euo pipefail

# =========================
# USER CONFIG (REQUIRED)
# =========================
CONTAINER_NAME="TAILSCALE_CONTAINER_NAME"
TS_HOSTNAME="TAILSCALE_DNS_NAME"
HOST_CERT_DIR="HOST_CERT_DIR"
LOG_FILE="LOG_FILE"
TRUENAS_CERT_NAME="TRUENAS_CERT_NAME"
# =========================

CRT="${HOST_CERT_DIR}/ts.crt"
KEY="${HOST_CERT_DIR}/ts.key"

export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

mkdir -p "$(dirname "$LOG_FILE")"
touch "$LOG_FILE"
exec >>"$LOG_FILE" 2>&1
echo "----- $(date -Is) starting Tailscale cert import -----"

command -v docker >/dev/null || { echo "ERROR: docker not found"; exit 2; }
command -v jq     >/dev/null || { echo "ERROR: jq not found"; exit 2; }
command -v midclt >/dev/null || { echo "ERROR: midclt not found"; exit 2; }

docker ps --format '{{.Names}}' | grep -qx "$CONTAINER_NAME" || {
  echo "ERROR: container not running: $CONTAINER_NAME"
  exit 2
}

docker exec "$CONTAINER_NAME" sh -lc 'test -d /certs' || {
  echo "ERROR: /certs not mounted in container"
  exit 2
}

docker exec "$CONTAINER_NAME" sh -lc \
  "tailscale cert --cert-file /certs/ts.crt --key-file /certs/ts.key \"$TS_HOSTNAME\""

[[ -s "$CRT" && -s "$KEY" ]] || {
  echo "ERROR: certificate files missing"
  exit 2
}

midclt call certificate.create "$(jq -n \
  --arg n "$TRUENAS_CERT_NAME" \
  --rawfile c "$CRT" \
  --rawfile k "$KEY" \
  '{name:$n, create_type:"CERTIFICATE_CREATE_IMPORTED", certificate:$c, privatekey:$k}')" >/dev/null || true

CERT_ID="$(midclt call certificate.query | jq -r \
  --arg n "$TRUENAS_CERT_NAME" '.[] | select(.name==$n) | .id' | tail -n 1)"

[[ -n "$CERT_ID" ]] || {
  echo "ERROR: failed to locate imported certificate"
  exit 2
}

midclt call system.general.update "$(jq -n --argjson id "$CERT_ID" \
  '{ui_certificate:$id, ui_restart_delay:1}')" >/dev/null
midclt call system.general.ui_restart >/dev/null

echo "SUCCESS: Web UI certificate updated"
Step 4 – Make it executable
chmod 700 /mnt/<pool>/scripts/import_tailscale_cert.sh
Step 5 – Run once manually
/usr/bin/bash /mnt/<pool>/scripts/import_tailscale_cert.sh
You will briefly disconnect from the Web UI — this is expected.
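If you prefer the shell over the UI, you can also confirm which certificate the Web UI is now using; midclt ships with TrueNAS and jq is already a dependency of the script (the exact field layout may differ slightly between releases):

```bash
# Show the id and name of the certificate currently assigned to the Web UI
midclt call system.general.config | jq '.ui_certificate | {id, name}'
```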
Step 6 – Verify certificate in UI
Go to:
System Settings → Certificates
Confirm the new certificate exists and uses your Tailscale hostname.
Also check:
System Settings → General → GUI → Web Interface HTTPS Certificate
Step 7 – Create the cron job
TrueNAS UI → System Settings → Advanced → Cron Jobs → Add
/usr/bin/bash /mnt/<pool>/scripts/import_tailscale_cert.sh
You can find the script on my Github repository:
https://github.com/chrislongros/truenas-tailscale-cert-automation
-
I use Watchtower to automatically update my Docker applications on TrueNAS SCALE. At the same time, I use Tailscale Serve as a reverse proxy to provide secure HTTPS access to my home lab services.
This setup works well most of the time — except during updates.
The problem
When Watchtower updates a container, it stops and recreates it.
If the container exposes ports such as 80 or 443, the restart can fail because Tailscale Serve is already bound to those ports.
The result is:
- failed container restarts,
- services going offline,
- and manual intervention required.
The solution
The solution is to temporarily disable Tailscale Serve, run Watchtower once, and then restore Tailscale Serve afterward.
On TrueNAS SCALE, Tailscale runs inside its own Docker container (for example: ix-tailscale-tailscale-1). This makes it possible to control Serve using docker exec (see the quick check after the list below).
The script below does exactly that:
- Backs up the current Tailscale Serve configuration
- Stops all Tailscale Serve listeners (freeing ports)
- Runs Watchtower in --run-once mode
- Restores Tailscale Serve safely
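Before wiring this into a script, you can verify that Serve is controllable from the host with a quick manual check (container name shown is the TrueNAS default used throughout this post):

```bash
# Show the current Tailscale Serve configuration from the host
docker exec ix-tailscale-tailscale-1 tailscale serve status
```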
Public Script: Pause Tailscale Serve During Watchtower Updates
Save as
/mnt/zfs_tank/scripts/watchtower-with-tailscale-serve.sh
(Adjust the pool name if yours is not zfs_tank.)
#!/usr/bin/env bash
set -euo pipefail

# ------------------------------------------------------------------------------
# watchtower-with-tailscale-serve.sh
#
# Purpose:
#   Prevent port conflicts between Watchtower and Tailscale Serve by:
#     1. Backing up the current Tailscale Serve configuration
#     2. Temporarily disabling Tailscale Serve
#     3. Running Watchtower once
#     4. Restoring Tailscale Serve
#
# Designed for:
#   - TrueNAS SCALE
#   - Tailscale running in a Docker container (TrueNAS app)
# ------------------------------------------------------------------------------

# =========================
# CONFIGURATION
# =========================

# Name of the Tailscale container (TrueNAS default shown here)
TS_CONTAINER_NAME="ix-tailscale-tailscale-1"

# Persistent directory for backups (must survive reboots/updates)
STATE_DIR="/mnt/zfs_tank/scripts/state"

# Watchtower image
WATCHTOWER_IMAGE="nickfedor/watchtower"

# Watchtower environment variables
WATCHTOWER_ENV=(
  "-e" "TZ=Europe/Berlin"
  "-e" "WATCHTOWER_CLEANUP=true"
  "-e" "WATCHTOWER_INCLUDE_STOPPED=true"
)

mkdir -p "$STATE_DIR"
SERVE_JSON="${STATE_DIR}/tailscale-serve.json"

# =========================
# FUNCTIONS
# =========================

ts() {
  docker exec "$TS_CONTAINER_NAME" tailscale "$@"
}

# =========================
# MAIN
# =========================

echo "==> Using Tailscale container: $TS_CONTAINER_NAME"

# Ensure Tailscale container exists
docker inspect "$TS_CONTAINER_NAME" >/dev/null

# 1) Backup current Serve configuration (CLI-managed Serve)
echo "==> Backing up Tailscale Serve configuration"
if ts serve status --json > "${SERVE_JSON}.tmp" 2>/dev/null; then
  mv "${SERVE_JSON}.tmp" "$SERVE_JSON"
else
  rm -f "${SERVE_JSON}.tmp" || true
  echo "WARN: No Serve configuration exported (may be file-managed or empty)."
fi

# 2) Stop all Serve listeners
echo "==> Stopping Tailscale Serve"
ts serve reset || true

# 3) Run Watchtower once
echo "==> Running Watchtower"
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  "${WATCHTOWER_ENV[@]}" \
  "$WATCHTOWER_IMAGE" --run-once

# 4) Restore Serve configuration (if present and non-empty)
echo "==> Restoring Tailscale Serve"
if [[ -s "$SERVE_JSON" ]] && [[ "$(cat "$SERVE_JSON")" != "{}" ]]; then
  docker exec -i "$TS_CONTAINER_NAME" tailscale serve set-raw < "$SERVE_JSON" || true
else
  echo "INFO: No Serve configuration to restore."
fi

echo "==> Done"
Make the script executable
chmod +x /mnt/zfs_tank/scripts/watchtower-with-tailscale-serve.sh
How to run it manually
sudo /mnt/zfs_tank/scripts/watchtower-with-tailscale-serve.sh
Scheduling (recommended)
In the TrueNAS UI:
- Go to System Settings → Advanced → Cron Jobs
- Command: /mnt/zfs_tank/scripts/watchtower-with-tailscale-serve.sh
- User: root
- Schedule: daily (for example, 03:00)
- Disable Watchtower’s internal schedule (WATCHTOWER_SCHEDULE) to avoid conflicts
The repository containing the code:
https://github.com/chrislongros/watchtower-with-tailscale-serve
-
It is quite annoying getting notifications for unencrypted http connections on my home servers. One solution for Synology is to set a scheduled task with the following command:
tailscale configure synology-cert
This command issues a Let’s Encrypt certificate with a 90-day expiration that gets automatically renewed depending on the task frequency.
For my TrueNAS I use tailscale serve to securely expose my services and apps (Immich etc.) through an HTTPS website in my tailnet. Enabling HTTPS via the Tailscale admin panel is required.
The next step is to execute: tailscale serve --https=port_number localhost:port_number
You can execute the command in the background with --bg, or run it in the foreground and interrupt it with Ctrl+C.
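As a concrete example (the port numbers are illustrative; 2283 is Immich's default web port):

```bash
# Expose a local service over HTTPS on the tailnet, running in the background
tailscale serve --bg --https=443 localhost:2283

# Check what is currently being served, or clear it
tailscale serve status
tailscale serve reset
```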

-

It is important to note that diskover indexes file metadata and does not access file contents! sist2 is another solution that uses Elasticsearch while also indexing file contents.
The docker compose configuration I use on my TrueNAS server in Portainer:
version: '2'
services:
  diskover:
    image: lscr.io/linuxserver/diskover
    container_name: diskover
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Berlin
      - ES_HOST=elasticsearch
      - ES_PORT=9200
    volumes:
      - /mnt/zfs_tank/Applications/diskover/config/:/config
      - /mnt/zfs_tank/:/data
    ports:
      - 8085:80
    mem_limit: 4096m
    restart: unless-stopped
    depends_on:
      - elasticsearch
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.22
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /mnt/zfs_tank/Applications/diskover/data/:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    depends_on:
      - elasticsearch-helper
    restart: unless-stopped
  elasticsearch-helper:
    image: alpine
    command: sh -c "sysctl -w vm.max_map_count=262144"
    privileged: true
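Once the stack is up, a quick way to confirm Elasticsearch is reachable before the first diskover crawl (port 9200 as mapped above; jq is optional and only used for pretty-printing):

```bash
# Elasticsearch cluster health: expect "green" or "yellow" on a single-node setup
curl -s http://localhost:9200/_cluster/health | jq .
```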










