Category: TrueNAS
-
If you’re using Tailscale with Mullvad VPN (either via the native Tailscale integration or standalone) and Firefox’s DNS over HTTPS (DoH), you might suddenly find yourself unable to access your Tailscale services via their `*.ts.net` hostnames—even though everything worked fine before. The symptoms are frustrating: `tailscale ping` works, `dig` resolves the hostname correctly, but Firefox just refuses to connect.

Why This Happens

When you enable DNS over HTTPS in Firefox (especially with “Max Protection” mode), Firefox bypasses your system’s DNS resolver entirely and sends all DNS queries directly to your chosen DoH provider—in this case, Mullvad’s DNS server at `https://base.dns.mullvad.net/dns-query`.

The problem? Mullvad’s public DNS server has no idea what `my-server.my-tailnet.ts.net` is. That’s a private hostname that only Tailscale’s MagicDNS (running at `100.100.100.100`) knows how to resolve. So while your system can resolve the hostname just fine:

```
$ dig my-server.my-tailnet.ts.net

;; ANSWER SECTION:
my-server.my-tailnet.ts.net. 600 IN A 100.x.x.x

;; SERVER: 100.100.100.100#53(100.100.100.100) (UDP)
```

Firefox completely ignores this and asks Mullvad instead, which returns nothing.
The Solution
Firefox provides a way to exclude specific domains from DoH, forcing it to fall back to system DNS for those domains. Here’s how to set it up:
- Open Firefox and navigate to `about:config`
- Search for `network.trr.excluded-domains`
- Add `ts.net` to the list (comma-separated if there are existing entries)

For example:

`ts.net`

Or if you have other exclusions:

`example.local, ts.net`

This tells Firefox: “For any domain ending in `.ts.net`, use the system DNS resolver instead of DoH.” Since your system DNS is controlled by Tailscale’s MagicDNS, the hostname will resolve correctly.
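If you manage Firefox preferences from a file rather than clicking through `about:config`, the same setting can be applied via a `user.js` in your Firefox profile. This is a standard Firefox mechanism; the profile directory name below is just an illustration (find yours under `about:profiles`):

```bash
# Append the exclusion to user.js in your Firefox profile.
# The profile directory name here is hypothetical.
cat >> ~/.mozilla/firefox/xxxxxxxx.default-release/user.js <<'EOF'
user_pref("network.trr.excluded-domains", "ts.net");
EOF
```

Restart Firefox afterwards: `user.js` is only read at startup, and it will re-apply this value on every launch, overriding changes made in `about:config`.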
The Gotcha: Old Tailnet Names

Here’s a subtle issue that can trip you up: if you previously had a different Tailscale account or renamed your tailnet, you might have an old, specific exclusion that no longer applies.
For example, you might have:

`my-nas.old-tailnet.ts.net`

But your current tailnet is `new-tailnet.ts.net`. The old exclusion does nothing for your new tailnet!

The fix is simple: instead of excluding specific tailnet hostnames, just exclude the entire `ts.net` domain. This covers all Tailscale hostnames, regardless of your tailnet name, now and in the future.

Verifying the Fix
After making the change, you can verify everything is working:
- Test Tailscale connectivity (should already work): `tailscale ping your-machine-name`
- Test DNS resolution from the command line: `dig your-machine-name.your-tailnet.ts.net`
- Test in Firefox: Navigate to your Tailscale hostname—it should now load.
Summary
If you’re combining Firefox DoH with Tailscale:
- Firefox’s DoH bypasses Tailscale’s MagicDNS
- Add `ts.net` to `network.trr.excluded-domains` in `about:config`
- Use `ts.net` (not a specific tailnet name) to future-proof the setting
This gives you the best of both worlds: private DNS for general browsing via Mullvad, and working hostname resolution for your Tailscale network.
-
I recently ran into a performance issue on my TrueNAS SCALE 25.10.1 system where the server felt sluggish despite low CPU usage. The system was running Docker-based applications, and at first glance nothing obvious looked wrong. The real problem turned out to be high iowait.
What iowait actually means
In Linux, `iowait` represents the percentage of time the CPU is idle while waiting for I/O operations (usually disk). High iowait doesn’t mean the CPU is busy — it means the CPU is stuck waiting on storage.

In `top`, this appears as `wa`:

```
%Cpu(s): 1.8 us, 1.7 sy, 0.0 ni, 95.5 id, 0.2 wa, 0.0 hi, 0.8 si, 0.0 st
```

Under normal conditions, iowait should stay very low (usually under 1–2%). When it starts climbing higher, the system can feel slow even if CPU usage looks fine.
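If you don’t want to sit in interactive `top`, a few standard commands give one-shot or sampled readings of the same number (assuming the usual procps and sysstat tools, which this system clearly has since `iostat` is used below):

```bash
# One-shot snapshot of the aggregate CPU line, including 'wa'
top -bn1 | grep '%Cpu'

# CPU-only view from sysstat: 5 samples, 2 seconds apart; watch %iowait
iostat -c 2 5

# vmstat's 'wa' column is the same metric
vmstat 2 5
```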
Confirming the issue with iostat
To get a clearer picture, I used `iostat`, which shows per-disk activity and latency:

```
iostat -x 1
```

This immediately showed the problem. One or more disks had:
- Very high `%util` (near or at 100%)
- Elevated `await` times
- Consistent read/write pressure
At that point it was clear the bottleneck was storage I/O, not CPU or memory.
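As a rough filter while watching, something like the following highlights saturated devices. It’s a quick sketch that assumes a recent sysstat where `%util` is the last column of `iostat -x`:

```bash
# Print only devices whose %util exceeds 90 (last column of `iostat -x`).
# NF > 10 skips the avg-cpu summary lines, which have far fewer columns.
iostat -x 1 | awk 'NF > 10 && $NF+0 > 90 {print $1, "%util=" $NF}'
```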
Tracking it down to Docker services
This system runs several Docker-based services. Using `top` alongside `iostat`, I noticed disk activity drop immediately when certain services were stopped. In particular, high I/O was coming from applications that:
- Continuously read/write large files
- Perform frequent metadata operations
- Maintain large active datasets
Examples included downloaders, media managers, and backup-related containers.
Stopping services to confirm
To confirm the cause, I stopped Docker services one at a time and watched disk metrics:
```
iostat -x 1
```

Each time a heavy I/O service was stopped, iowait dropped immediately. Once the worst offender was stopped, iowait returned to normal levels and the system became responsive again.
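To make that round-robin less manual, here is a minimal sketch of the same idea. The container names are placeholders, not the actual services from this setup:

```bash
#!/bin/bash
# Hypothetical helper: stop suspect containers one at a time and sample
# %iowait after each stop. Container names below are placeholders.
for c in downloader media-manager backup; do
  echo "=== stopping $c ==="
  docker stop "$c" >/dev/null

  # Five 2-second samples of the CPU summary; a drop in %iowait here
  # suggests this container was the one hammering the disks.
  iostat -c 2 5 | awk '/^avg-cpu/ {getline; print "%iowait:", $4}'

  docker start "$c" >/dev/null
done
```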
Why the system looked “fine” at first
This was tricky because:
- CPU usage was low
- Memory usage looked reasonable
- The web UI was responsive but sluggish
Without checking `iostat`, it would have been easy to misdiagnose this as a CPU or RAM issue.

Lessons learned
- High iowait can cripple performance even when CPU is idle
- `top` alone is not enough — use `iostat -x`
- Docker workloads can silently saturate disks
- Stopping services one by one is an effective diagnostic technique
Final takeaway
On TrueNAS SCALE 25.10.1 with Docker, high iowait was the real cause of my performance issues. The fix wasn’t a reboot, more CPU, or more RAM — it was identifying and controlling disk-heavy services.
If your TrueNAS server feels slow but CPU usage looks fine, check iowait and run `iostat`. The disk may be the real bottleneck.
-
New version that is “battlefield tested” on my home server!

https://github.com/chrislongros/docker-tailscale-serve-preserve/releases/tag/v1.1.0
https://github.com/chrislongros/docker-tailscale-serve-preserve/tree/main
-
I would like to share an AI-generated script that I successfully used to automate my TrueNAS certificates with Tailscale.
This guide shows how to automatically use a Tailscale HTTPS certificate for the TrueNAS SCALE Web UI, when Tailscale runs inside a Docker container.
Overview
What this does
- Runs `tailscale cert` inside a Docker container
- Writes the cert/key to a host bind-mount
- Imports the cert into TrueNAS
- Applies it to the Web UI
- Restarts the UI
- Runs automatically via cron
Requirements
- TrueNAS SCALE
- Docker
- A running Tailscale container (`tailscaled`)
- A host directory bind-mounted into the container at `/certs`
Step 1 – Create a cert directory on the host
Create a dataset or folder on your pool (example):
```
mkdir -p /mnt/<pool>/Applications/tailscale-certs
chmod 700 /mnt/<pool>/Applications/tailscale-certs
```

Step 2 – Bind-mount it into the Tailscale container
Your Tailscale container must mount the host directory to `/certs`. Example (conceptually):

```
Host path: /mnt/<pool>/Applications/tailscale-certs
Container: /certs
```

This is required for `tailscale cert` to write files that TrueNAS can read.
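For reference, here is a minimal sketch of what that mount could look like with plain `docker run`. The container name, hostname, and state volume are illustrative assumptions, not the original configuration:

```bash
# Hypothetical example; adjust names, paths, and auth to your setup.
docker run -d --name tailscaled \
  --hostname truenas \
  --cap-add NET_ADMIN --device /dev/net/tun \
  -e TS_STATE_DIR=/var/lib/tailscale \
  -v tailscale-state:/var/lib/tailscale \
  -v /mnt/<pool>/Applications/tailscale-certs:/certs \
  tailscale/tailscale:latest
```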
Step 3 – Create the automation script (generic)

Save this as:
/mnt/<pool>/scripts/import_tailscale_cert.sh
Script:
```bash
#!/bin/bash
set -euo pipefail

# =========================
# USER CONFIG (REQUIRED)
# =========================
CONTAINER_NAME="TAILSCALE_CONTAINER_NAME"
TS_HOSTNAME="TAILSCALE_DNS_NAME"
HOST_CERT_DIR="HOST_CERT_DIR"
LOG_FILE="LOG_FILE"
TRUENAS_CERT_NAME="TRUENAS_CERT_NAME"
# =========================

CRT="${HOST_CERT_DIR}/ts.crt"
KEY="${HOST_CERT_DIR}/ts.key"

export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

mkdir -p "$(dirname "$LOG_FILE")"
touch "$LOG_FILE"
exec >>"$LOG_FILE" 2>&1
echo "----- $(date -Is) starting Tailscale cert import -----"

# Sanity checks: required tools, a running container, and the /certs mount
command -v docker >/dev/null || { echo "ERROR: docker not found"; exit 2; }
command -v jq >/dev/null || { echo "ERROR: jq not found"; exit 2; }
command -v midclt >/dev/null || { echo "ERROR: midclt not found"; exit 2; }

docker ps --format '{{.Names}}' | grep -qx "$CONTAINER_NAME" || {
  echo "ERROR: container not running: $CONTAINER_NAME"
  exit 2
}

docker exec "$CONTAINER_NAME" sh -lc 'test -d /certs' || {
  echo "ERROR: /certs not mounted in container"
  exit 2
}

# Issue/renew the certificate inside the container; the files land on the
# host via the bind mount
docker exec "$CONTAINER_NAME" sh -lc \
  "tailscale cert --cert-file /certs/ts.crt --key-file /certs/ts.key \"$TS_HOSTNAME\""

[[ -s "$CRT" && -s "$KEY" ]] || {
  echo "ERROR: certificate files missing"
  exit 2
}

# Import into TrueNAS (ignore the error if the name already exists)
midclt call certificate.create "$(jq -n \
  --arg n "$TRUENAS_CERT_NAME" \
  --rawfile c "$CRT" \
  --rawfile k "$KEY" \
  '{name:$n, create_type:"CERTIFICATE_CREATE_IMPORTED", certificate:$c, privatekey:$k}')" >/dev/null || true

CERT_ID="$(midclt call certificate.query | jq -r \
  --arg n "$TRUENAS_CERT_NAME" '.[] | select(.name==$n) | .id' | tail -n 1)"

[[ -n "$CERT_ID" ]] || {
  echo "ERROR: failed to locate imported certificate"
  exit 2
}

# Point the Web UI at the certificate and restart the UI
midclt call system.general.update "$(jq -n --argjson id "$CERT_ID" \
  '{ui_certificate:$id, ui_restart_delay:1}')" >/dev/null
midclt call system.general.ui_restart >/dev/null

echo "SUCCESS: Web UI certificate updated"
```
Step 4 – Make it executable
```
chmod 700 /mnt/<pool>/scripts/import_tailscale_cert.sh
```
Step 5 – Run once manually
```
/usr/bin/bash /mnt/<pool>/scripts/import_tailscale_cert.sh
```
You will briefly disconnect from the Web UI — this is expected.
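You can also confirm from a shell which certificate the middleware is now serving, using the same `midclt` and `jq` tools the script already depends on:

```bash
# Should print the TRUENAS_CERT_NAME you configured in the script
midclt call system.general.config | jq -r '.ui_certificate.name'
```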
Step 6 – Verify certificate in UI
Go to:
System Settings → Certificates
Confirm the new certificate exists and uses your Tailscale hostname.
Also check:
System Settings → General → GUI
→ Web Interface HTTPS Certificate

Step 7 – Create the cron job
TrueNAS UI → System Settings → Advanced → Cron Jobs → Add
```
/usr/bin/bash /mnt/<pool>/scripts/import_tailscale_cert.sh
```
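Tailscale certificates are valid for 90 days, so any comfortably shorter interval works. The weekly schedule below is my own suggestion, not part of the original guide:

```
# Example schedule: every Monday at 04:00
0 4 * * 1 /usr/bin/bash /mnt/<pool>/scripts/import_tailscale_cert.sh
```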
You can find the script on my Github repository:
https://github.com/chrislongros/truenas-tailscale-cert-automation
-

It is important to note that diskover indexes file metadata and does not access file contents! sist2 is another solution that provides Elasticsearch-backed indexing while also accessing file contents.

My Docker Compose configuration, which I use on my TrueNAS server in Portainer:
```yaml
version: '2'
services:
  diskover:
    image: lscr.io/linuxserver/diskover
    container_name: diskover
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Berlin
      - ES_HOST=elasticsearch
      - ES_PORT=9200
    volumes:
      - /mnt/zfs_tank/Applications/diskover/config/:/config
      - /mnt/zfs_tank/:/data
    ports:
      - 8085:80
    mem_limit: 4096m
    restart: unless-stopped
    depends_on:
      - elasticsearch
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.22
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /mnt/zfs_tank/Applications/diskover/data/:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    depends_on:
      - elasticsearch-helper
    restart: unless-stopped
  elasticsearch-helper:
    image: alpine
    command: sh -c "sysctl -w vm.max_map_count=262144"
    privileged: true
```
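The `elasticsearch-helper` container exists only to raise `vm.max_map_count` on the host, which Elasticsearch requires at startup. Once the stack is up (`docker compose up -d`), a couple of quick checks confirm Elasticsearch is healthy and that diskover's indices appear after the first crawl; this sketch assumes the port mapping from the compose file above and that `curl` and `jq` are available:

```bash
# Cluster health: expect "green" or "yellow" on a single-node setup
curl -s http://localhost:9200/_cluster/health | jq -r .status

# diskover index names show up here after the first scan completes
curl -s 'http://localhost:9200/_cat/indices?v'
```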