Category: TrueNAS
-
Running TrueNAS in VirtualBox is a great way to test configurations, experiment with ZFS pools, or learn the TrueNAS UI before deploying on real hardware. As of February 2026, the latest stable version is TrueNAS 25.10.2.1 (Goldeye), with TrueNAS 26 beta planned for April 2026.
VM Settings
- Type: Linux, Debian (64-bit) — TrueNAS 25.x is Debian-based; choose BSD/FreeBSD only for the legacy CORE line
- RAM: 8 GB minimum (ZFS needs memory)
- CPU: 2+ cores
- Disk 1: 16 GB (boot drive)
- Disk 2-4: Create additional virtual disks for your ZFS pool (e.g., 3x 20 GB for a RAIDZ1)
- Network: Bridged adapter (so TrueNAS gets its own IP on your LAN)
Important VirtualBox Settings
Under System > Processor, make sure to enable PAE/NX. Under System > Acceleration, enable VT-x/AMD-V and Nested Paging.
For the disk controller, use AHCI (not IDE) for better performance and compatibility.
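If you prefer the command line, the whole VM can be created with VBoxManage instead of clicking through the wizard. A sketch matching the settings above — the VM name, disk filenames, and the bridged adapter name `eth0` are placeholders; adjust for your host:

```shell
# Create and register the VM
VBoxManage createvm --name TrueNAS --ostype Debian_64 --register

# System settings: RAM, CPUs, PAE/NX, nested paging, bridged networking
VBoxManage modifyvm TrueNAS --memory 8192 --cpus 2 --pae on \
  --nestedpaging on --nic1 bridged --bridgeadapter1 eth0

# AHCI (SATA) controller instead of IDE
VBoxManage storagectl TrueNAS --name SATA --add sata --controller IntelAhci

# 16 GB boot disk plus three 20 GB disks for the ZFS pool
VBoxManage createmedium disk --filename TrueNAS-boot.vdi --size 16384
VBoxManage storageattach TrueNAS --storagectl SATA --port 0 --device 0 \
  --type hdd --medium TrueNAS-boot.vdi
for i in 1 2 3; do
  VBoxManage createmedium disk --filename "TrueNAS-pool$i.vdi" --size 20480
  VBoxManage storageattach TrueNAS --storagectl SATA --port "$i" --device 0 \
    --type hdd --medium "TrueNAS-pool$i.vdi"
done
```

Attach the ISO to an optical drive afterwards (Settings → Storage in the GUI works fine for that step).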
Note: If you’re on an AMD system and get a VERR_SVM_IN_USE error, you may need to unload the KVM modules first — see my post on VirtualBox AMD-V fix.
Installation
- Download the TrueNAS 25.10 ISO from truenas.com
- Mount the ISO in VirtualBox’s optical drive
- Boot the VM and follow the installer
- Install to the 16 GB boot disk
- Remove the ISO and reboot
After Installation
Once TrueNAS boots, it will display the web UI address on the console. Open it in your browser and create your ZFS pool using the additional virtual disks.
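The Storage → Create Pool flow in the UI is the recommended path, but for reference the equivalent ZFS commands look like this. A sketch only: `tank` and the `sdb`/`sdc`/`sdd` device names are placeholders for the three extra virtual disks, and creating pools from the shell bypasses TrueNAS middleware bookkeeping, so prefer the UI on a real install:

```shell
# List block devices to find the three 20 GB virtual disks
lsblk -d -o NAME,SIZE,MODEL

# Create a RAIDZ1 pool across them, then verify
sudo zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd
sudo zpool status tank
```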

This setup is perfect for testing pool configurations, snapshots, replication, and apps before committing to production hardware.
-
The latest episode of TrueNAS Tech Talk (T3) — Episode 56 — dropped on March 6, 2026, and it’s packed with news that every TrueNAS homelab enthusiast and sysadmin will want to hear. Hosts Kris Moore and Chris Peredun (the TrueNAS HoneyBadger) cover the upcoming TrueNAS 26 release schedule, a deep dive into the new dataset tiering feature, and tackle eight viewer questions.
TrueNAS 26: A (Tentative!) Release Timeline
The big headline this week is that Kris and Chris finally lay out the tentative roadmap from the first TrueNAS 26 BETA release all the way through to the .0 general availability. If you’ve been waiting to know when you can get your hands on the next generation of TrueNAS software, this episode gives you the clearest picture yet. No more codenames, no more decimal versioning — as the team confirmed back in Ep. 52, TrueNAS is moving to a clean annual release cycle, and 26 is the first major fruit of that shift.
Dataset Tiering: Hybrid Storage Gets Smarter
One of the standout features coming to TrueNAS 26 is dataset tiering — the ability to mix fast flash and spinning-disk pools and automatically tier datasets (or shares) between them. This is an Enterprise-tier feature, meaning it won’t land in the Community Edition, but the architecture is fascinating for anyone interested in how ZFS and TrueNAS manage data placement at scale. Since this is implemented at the TrueNAS layer rather than directly in OpenZFS, pools remain compatible with standard OpenZFS if you ever need to migrate away, though some caveats may apply.
For those of us running pure Community Edition homelabs — Docker stacks, S3-compatible storage, and all — it’s still a great signal of the direction TrueNAS engineering is heading.
Eight Viewer Questions
As always, Kris and Chris close out the episode with a batch of community questions — likely touching on storage configuration, upgrade paths, and follow-up on ZFS AnyRaid and Spotlight search (truesearch) from recent episodes.
Why This Episode Matters for Homelab Users
If you’re self-hosting on TrueNAS Scale — running Docker containers, managing snapshots over Tailscale, or experimenting with S3-compatible backends like RustFS or Garage — TrueNAS 26 is a significant milestone. The annual cadence promises more predictable upgrade windows, and features like dataset tiering give a window into where the platform’s storage smarts are heading.
Watch the full episode on the TrueNAS blog or on YouTube.
T3 Tech Talk is a weekly podcast from the TrueNAS team. New episodes drop every Thursday.
So …
- Beta 1 at the end of March
- Beta 2 at the end of May
- RC in July
- Official release in September
-

Notable changes:
- Improves NFS performance for NFSv4 clients (NAS-139128). Adds support for STATX_CHANGE_COOKIE to properly surface ZFS sequence numbers to NFS clients via knfsd. The NFS change_info4 structure now accurately tracks directory and file changes, which reduces unnecessary server requests. Client attribute cache invalidation is also improved. Previously, the system synthesized change IDs based on ctime, which could fail to increment consistently due to kernel timer coarseness.
- Fixes NIC bonding configuration disrupted after a system update (NAS-139889). Resolves an issue where network interface bond configurations could break after a TrueNAS 25.10.2 update. Affected systems could lose network connectivity on bonded interfaces.
- Fixes SMB Legacy Share validation errors that broke share management UI forms (NAS-139892). Resolves an issue where SMB shares using the Legacy Share preset with certain `path_suffix` variable substitutions failed middleware validation. The SMB share configuration forms became unusable in the web interface as a result.
- Fixes API result serialization failures caused by unhandled validation errors (NAS-139896). Resolves an issue where certain Pydantic validation errors were not caught during API result serialization. This caused unexpected errors to appear in the web interface instead of proper error messages.
- Fixes SSL certificate connection failure error handling (NAS-139938). Resolves an AttributeError that occurred when an HTTPS connection failed due to a certificate error. Cloud sync tasks, replication, or other SSL-dependent network operations could surface a secondary AttributeError instead of the original connection failure message.
https://forums.truenas.com/t/truenas-25-10-2-1-is-now-available/63998
-
How the Model Context Protocol turns your NAS into a conversational system
What is MCP?
The Model Context Protocol (MCP) is an open standard developed by Anthropic that allows AI assistants like Claude to connect to external tools, services, and data sources. Think of it as a universal plugin system for AI — instead of copy-pasting terminal output into a chat window, you give the AI a live, structured connection to your systems so it can query and act on them directly.
MCP servers are small programs that speak a standardized JSON-RPC protocol. The AI client (Claude Desktop, Claude Code, etc.) spawns the server process and communicates with it over stdio. The server translates AI requests into real API calls — in this case, against the TrueNAS middleware WebSocket API.
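Concretely, a tool invocation over stdio is just a JSON-RPC 2.0 message. A sketch of what a client might send to an MCP server — the `tools/call` method name comes from the MCP specification, but the tool name `pool_status` and its arguments are hypothetical illustrations, not necessarily what the TrueNAS connector exposes:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "pool_status",
    "arguments": { "pool": "tank" }
  }
}
```

The server replies with a result object on the same stdio channel, which the AI client feeds back into the model's context.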
The TrueNAS MCP Connector
TrueNAS Research Labs recently released an official MCP server for TrueNAS systems. It is a single native Go binary that runs on your desktop or workstation, connects to your TrueNAS over an encrypted WebSocket (`wss://`), authenticates with an API key, and exposes the full TrueNAS middleware API to any MCP-compatible AI client.

Crucially, nothing is installed on the NAS itself. The binary runs entirely on your local machine.
What it can do
The connector covers essentially the full surface area of TrueNAS management:
Storage — query pool health, list datasets with utilization, manage snapshots, configure SMB/NFS/iSCSI shares. Ask “which datasets are above 80% quota?” and get a direct answer.
System monitoring — real-time CPU, memory, disk I/O, and network metrics. Active alerts, system version, hardware info. The kind of overview that normally requires clicking through several pages of the web UI.
Maintenance — check for available updates, scrub status, boot environment management, last backup timestamps.
Application management — list, install, upgrade, and monitor the status of TrueNAS applications (Docker containers on SCALE).
Virtual machines — full VM lifecycle: create, start, stop, monitor resource usage.
Capacity planning — utilization trends, forecasting, and recommendations. Ask “how long until my main pool is full at current growth rate?” and get a reasoned answer.
Directory services — Active Directory, LDAP, and FreeIPA integration status and management.
Safety features
The connector includes a dry-run mode that previews any destructive operation before executing it, showing estimated execution time and a diff of what would change. Built-in validation blocks dangerous operations automatically. Long-running tasks (scrubs, migrations, upgrades) are tracked in the background with progress updates.
Why This Matters
Traditional NAS management is a context-switching problem. You have a question — “why is this pool degraded?” — and answering it means opening the web UI, navigating to storage, cross-referencing the alert log, checking disk SMART data, and reading documentation. Each step is manual.
With MCP, the AI holds all of that context simultaneously. A single question like “my pool has an error, what should I do?” triggers the AI to query pool status, check SMART data, look at recent alerts, and synthesize a diagnosis — in one response, with no tab-switching.
This is especially powerful for complex homelab setups with many datasets, containers, and services. Instead of maintaining mental models of your storage layout, you can just ask.
Getting Started
The setup takes about five minutes:
- Download the binary from the GitHub releases page and place it in your PATH.
- Generate an API key in TrueNAS under System Settings → API Keys.
- Configure your MCP client — Claude Desktop (`~/.config/claude/claude_desktop_config.json`) or Claude Code (`claude mcp add ...`).
- Restart the client and start asking questions.
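For reference, the Claude Desktop entry is a standard MCP server definition. A sketch only — the binary name `truenas-mcp`, the flag spelling, and the URL format are assumptions; check the project README for the exact invocation:

```json
{
  "mcpServers": {
    "truenas": {
      "command": "truenas-mcp",
      "args": ["-url", "wss://192.168.0.13/api/current", "-insecure"],
      "env": { "TRUENAS_API_KEY": "1-xxxxxxxx" }
    }
  }
}
```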
The binary supports self-signed certificates (pass `-insecure` for typical TrueNAS setups) and works over Tailscale or any network path to your NAS.

Example queries you can use right away
- “What is the health status of all my pools?”
- “Show me all datasets and their current usage”
- “Are there any active alerts I should know about?”
- “Which of my containers are not running?”
- “Preview creating a new dataset for backups with lz4 compression”
- “When was the last scrub on my main pool, and did it find errors?”
- “What TrueNAS version am I running and are updates available?”
Current Status
The TrueNAS MCP connector is a research preview (currently v0.0.4). It is functional and comprehensive, but not yet recommended for production-critical automation. It is well-suited for monitoring, querying, and exploratory management. Treat destructive operations (dataset deletion, VM reconfiguration) with the same care you would in the web UI — use dry-run mode first.
The project is open source and actively developed. Given that this is an official TrueNAS Labs project, it is likely to become a supported feature in future TrueNAS releases.
Broader Implications
The TrueNAS MCP connector is an early example of a pattern that will become common: infrastructure that exposes a semantic API layer for AI consumption, not just a REST API for human-written scripts. The difference is significant. A REST API tells you what the data looks like. An MCP server tells the AI what operations are possible, what they mean, and how to chain them safely.
As more homelab and enterprise tools adopt MCP, the practical vision of a conversational infrastructure layer — where you describe intent and the AI handles execution — becomes genuinely achievable, not just a demo.
The TrueNAS MCP connector is available at github.com/truenas/truenas-mcp. Setup documentation is at the TrueNAS Research Labs page.
-
TrueNAS 25.10.2 Released: What’s New
iXsystems has released TrueNAS 25.10.2, a maintenance update to the 25.10 branch. If you’re running TrueNAS Scale on the Early Adopter channel, this is a recommended update — it fixes several critical issues including an upgrade path bug that could leave systems unbootable.
Critical Fixes
Upgrade failure fix (NAS-139541). Some systems upgrading from TrueNAS 25.04 to 25.10 encountered a “No space left on device” error during boot variable preparation, leaving the system unbootable after the failed attempt. This is fixed in 25.10.2.
SMB service startup after upgrade (NAS-139076). Systems with legacy ACL configurations from older TrueNAS versions could not start the SMB service after upgrading to 25.10.1. The update now automatically converts legacy permission formats during service initialization.
Disk replacement validation (NAS-138678). A frustrating bug rejected replacement drives with identical capacity to the failed drive, showing a “device is too small” error. Fixed — identical capacity replacements now work correctly.
Performance Improvements
NFS performance for NFSv4 clients (NAS-139128). Support for `STATX_CHANGE_COOKIE` has been added, surfacing ZFS sequence numbers to NFS clients via knfsd. Previously, the system synthesized change IDs based on ctime, which could fail to increment consistently due to kernel timer coarseness. This improves client attribute cache invalidation and reduces unnecessary server requests.

ZFS pool import performance (NAS-138879). Async destroy operations — which can run during pool import — now have a time limit per transaction group. Pool imports that previously stalled due to prolonged async destroy operations will complete significantly faster.
Containerized app CPU usage (NAS-139089). Background CPU usage from Docker stats collection and YAML processing has been reduced by optimizing asyncio_loop operations that were holding the Global Interpreter Lock during repeated container inspections.
Networking
Network configuration lockout fix (NAS-139575). Invalid IPv6 route entries in the routing table could block access to network settings, app management, and bug reporting. The system now handles invalid route entries gracefully.
Network bridge creation fix (NAS-139196). Pydantic validation errors were preventing bridge creation through the standard workflow of removing IPs from an interface, creating a bridge, and reassigning those IPs.
IPv6 Kerberos fix (NAS-139734). Active Directory authentication failed when using IPv6 addresses for Kerberos Key Distribution Centers (KDCs). IPv6 addresses are now properly formatted with square brackets in `krb5.conf`.

SMB Hosts Allow/Deny controls (NAS-138814). IP-based access restrictions are now available for SMB shares across all relevant purpose presets. Also adds the ability to synchronize Kerberos keytab SPNs with Active Directory updates.
UI and Cloud
Dashboard storage widget (NAS-138705). Secondary storage pools were showing “Unknown” for used and free space in the Dashboard widget. Fixed.
Cloud Sync tasks invisible after CORE → SCALE upgrade (NAS-138886). Tasks were functional via CLI but invisible in the web UI due to a data inconsistency where the `bwlimit` field contained empty objects instead of empty arrays.

S3 endpoint validation (NAS-138903). Cloud Sync tasks now validate that S3 endpoints include the required `https://` protocol prefix upfront, with a clear error message instead of the unhelpful “Invalid endpoint” response.

Session expiry fix (NAS-138467). Users were being unexpectedly logged out during active operations despite configured session timeout settings. Page refresh (F5) was also triggering the login screen during active sessions. Both are now fixed.
Error notifications showing placeholder text (NAS-139010). Error notifications were displaying “%(err)s Warning” instead of actual error messages.
Users page now shows Directory Services users by default (NAS-139073). Directory Services users now appear in the default view without requiring a manual filter change.
SSH access removal fix (NAS-139130). Clearing the SSH Access option appeared to save successfully but the SSH indicator persisted in the user list. Now properly disabled through the UI.
Certificate management for large DNs (NAS-139056). Certificates with Distinguished Names exceeding 1024 characters — typically those with many Subject Alternative Names — can now be properly imported and managed.
Notable Security Change
The root account’s group membership is now locked to `builtin_administrators` and cannot be modified through the UI. This prevents accidental removal of privileges that could break scheduled tasks, cloud sync, and cron jobs. To disable root UI access, use the Disable Password option in Credentials → Local Users instead.

Upgrade
Update via System → Update in the web UI, or download from truenas.com. Full release notes and changelog are available at the TrueNAS Documentation Hub.



https://forums.truenas.com/t/truenas-25-10-2-is-now-available/63778
-
How a failed nightly update left my TrueNAS server booting into an empty filesystem — and the two bugs responsible.
I run TrueNAS Scale on an Aoostar WTR Max as my homelab server, with dozens of Docker containers for everything from Immich to Jellyfin. I like to stay on the nightly builds to get early access to new features and contribute bug reports when things go wrong. Today, things went very wrong.
The Update Failure
It started innocently enough. I kicked off the nightly update from the TrueNAS UI, updating from `26.04.0-MASTER-20260210-020233` to the latest `20260213` build. Instead of a smooth update, I got this error:

```
[EFAULT] Error: Command ['zfs', 'destroy', '-r', 'boot-pool/ROOT/26.04.0-MASTER-20260213-020146-1'] failed with exit code 1: cannot unmount '/tmp/tmpo8dbr91e': pool or dataset is busy
```

The update process was trying to clean up a previous boot environment but couldn’t unmount a temporary directory it had created. No big deal, I thought — I’ll just clean it up manually.
Down the Rabbit Hole
I checked what was holding the mount open:
```bash
$ fuser -m /tmp/tmpo8dbr91e   # nothing
$ lsof +D /tmp/tmpo8dbr91e    # nothing (just Docker overlay warnings)
```

Nothing was using it. A force unmount also failed:

```bash
$ sudo umount -f /tmp/tmpo8dbr91e
umount: /tmp/tmpo8dbr91e: target is busy.
```

Only a lazy unmount worked:

```bash
$ sudo umount -l /tmp/tmpo8dbr91e
```

So I unmounted it and destroyed the stale boot environment manually. Then I retried the update. Same error, different temp path. Unmount, destroy, retry. Same error again. Each attempt, the updater would mount a new temporary directory, fail to unmount it, and bail out.
I even tried stopping Docker before the update, thinking the overlay mounts might be interfering. No luck.
The Real Problem
Frustrated, I rebooted the server thinking a clean slate might help. The server didn’t come back. After 10 minutes of pinging with no response, I plugged in a monitor and saw this:
```console
Mounting 'boot-pool/ROOT/26.04.0-MASTER-20260213-020146' on '/root/' ... done.
Begin: Running /scripts/local-bottom ... done.
Begin: Running /scripts/nfs-bottom ... done.
run-init: can't execute '/sbin/init': No such file or directory
Target filesystem doesn't have requested /sbin/init.
run-init: can't execute '/etc/init': No such file or directory
run-init: can't execute '/bin/init': No such file or directory
run-init: can't execute '/bin/sh': No such file or directory
No init found. Try passing init= bootarg.

BusyBox v1.37.0 (Debian 1:1.37.0-6+b3) built-in shell (ash)
Enter 'help' for a list of built-in commands.

(initramfs)
```

The system had booted into the incomplete boot environment from the failed update — an empty shell with no operating system in it. The update process had set this as the default boot environment before it was fully built.
The Recovery
Fortunately, ZFS boot environments make this recoverable. I rebooted again, caught the GRUB menu, and selected my previous working boot environment (`20260210-020233`). After booting successfully, I locked in the correct boot environment as the default:

```bash
$ sudo zpool set bootfs=boot-pool/ROOT/26.04.0-MASTER-20260210-020233 boot-pool
```

Then cleaned up the broken environment:

```bash
$ sudo zfs destroy -r boot-pool/ROOT/26.04.0-MASTER-20260213-020146
```

Server back to normal.
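To confirm the recovery actually stuck, it is worth a quick sanity check before trusting the next reboot. A small sketch using standard ZFS commands:

```shell
# Confirm which boot environment the pool will boot from
zpool get bootfs boot-pool

# List the remaining boot environments under the boot pool
zfs list -r boot-pool/ROOT
```

The `bootfs` value should name the known-good environment, and the broken one should no longer appear in the dataset list.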
Two Bugs, One Update
There are actually two separate bugs here:
Bug 1 — Stale Mount Cleanup
The update process mounts the boot environment into a temp directory but can’t clean it up when something fails. `umount -f` doesn’t work; only `umount -l` does. And since each retry creates a new temp mount, the problem is self-perpetuating.

Bug 2 — Premature Bootfs Switch (Critical)
This is the dangerous one. The updater sets the new boot environment as the GRUB default before it’s fully populated. When the update fails mid-way, you’re left with a system that will boot into an empty filesystem on the next reboot. If you don’t have physical console access and a keyboard handy, you could be in serious trouble.

What Happens During a Failed Update
Update starts → Sets new bootfs → Build fails → Reboot = initramfs

The Fix Should Be Simple
The updater should only set the new boot environment as the default after the update is verified complete. And it should use `umount -l` as a fallback when `umount -f` fails, since the standard force unmount clearly isn’t sufficient here.

I’ve filed this as NAS-139794 on the TrueNAS Jira. If you’re running nightly builds, be aware of this issue — and make sure you have console access to your server in case you need to select a different boot environment from GRUB.
Lessons Learned
Running nightly builds is inherently risky, and I accept that. But an update failure should never leave a system unbootable. The whole point of ZFS boot environments is to provide a safety net — but that net has a hole when the updater switches the default before the new environment is ready.
In the meantime, keep a monitor and keyboard accessible for your TrueNAS box, and remember: if you ever drop to an initramfs shell after an update, your data is fine. Just reboot into GRUB and pick the previous boot environment.
-
How to bring back RSS feeds for sites that removed them, scrape full article content, and unify everything in a single self-hosted reader.
RSS isn’t dead — it’s just been abandoned by publishers chasing engagement metrics and walled gardens. Websites that once offered clean XML feeds now force you into newsletters, push notifications, or algorithmic timelines. But with a bit of self-hosting, you can take that control back.
This post walks through my setup: FreshRSS as the reader, RSS-Bridge as the scraper for sites that killed their feeds, all running on TrueNAS Scale with Docker and exposed through Tailscale for secure remote access.
The Architecture
The data flow is straightforward:
Website → (no RSS) → RSS-Bridge → (scrapes & generates feed) → FreshRSS → (polls & displays) → You (browser / app)

For sites that still offer RSS, FreshRSS subscribes directly. For sites that removed their feeds, RSS-Bridge sits in between — it loads the page, parses the HTML with CSS selectors, and generates a standard Atom feed that FreshRSS can consume like any other subscription.
Why RSS-Bridge Over Alternatives
There are several tools that can generate feeds from websites. I chose RSS-Bridge for a few reasons:
Lightweight. RSS-Bridge is PHP-based and runs in about 50 MB of RAM. Compare that with RSSHub (Node.js, 300 MB+) or Huginn (Ruby, even heavier). On a NAS where every container counts, this matters.
FreshRSS integration. There’s a native FreshRSS extension (`xExtension-RssBridge`) if you want tight integration, though the simpler approach — just subscribing to the generated feed URL — works perfectly and survives app updates.

CssSelectorBridge. This built-in bridge is incredibly flexible. Give it a URL, tell it which CSS selectors match your articles, and it produces a feed. No coding required, no custom JavaScript routes to maintain.
Deploying RSS-Bridge on TrueNAS
I run RSS-Bridge as a Docker container through Portainer. First, create the config directory and enable all bridges:
```bash
# Create config directory on ZFS
sudo mkdir -p /mnt/zfs_tank/docker/rss-bridge

# Enable all bridges
sudo tee /mnt/zfs_tank/docker/rss-bridge/config.ini.php << 'EOF'
[system]
enabled_bridges[] = *
EOF
```

Then deploy the stack in Portainer:

```yaml
version: "3"
services:
  rss-bridge:
    image: rssbridge/rss-bridge:latest
    container_name: rss-bridge
    restart: unless-stopped
    ports:
      - "3001:80"
    volumes:
      - /mnt/zfs_tank/docker/rss-bridge:/config
```

RSS-Bridge is now accessible at `http://<truenas-ip>:3001`.

Remote Access with Tailscale Serve
If you already run a Tailscale container on your TrueNAS box, you can expose RSS-Bridge through it:
```bash
docker exec ix-tailscale-tailscale-1 tailscale serve --bg --https 3001 http://localhost:3001
```

This makes RSS-Bridge available at `https://your-machine.tailnet-name.ts.net:3001/` from any device on your tailnet. Use a non-443 port to avoid overwriting your TrueNAS GUI’s Tailscale Serve config.

Tip: When adding feed URLs to FreshRSS, use the local IP (e.g. `http://192.168.0.13:3001/...`) rather than the Tailscale hostname. Both services run on the same box, so going through the LAN is faster and more reliable — and the FreshRSS container may not have Tailscale DNS available.

Scraping a Site: A Real Example
The Greek tech blog techblog.gr removed its RSS feed during a 2025 site redesign. Here’s how I brought it back.
Step 1 — Identify the selectors
Open the site, right-click an article title, and choose Inspect. On techblog.gr, each article title is an `<a>` inside an `<h3>`. On article pages, the content lives inside `div.article-content`.

Step 2 — Configure CssSelectorBridge
In the RSS-Bridge web UI, find CSS Selector and fill in:
| Field | Value |
| --- | --- |
| Site URL | https://techblog.gr/ |
| Selector for article links | h3 a |
| URL pattern | (empty) |
| Expand article content | .article-content |
| Content cleanup | (empty) |
| Title cleanup | \| Techblog.gr |
| Limit | 20 |

Step 3 — Generate and subscribe
Click Generate feed, right-click the Atom button and copy the link. In FreshRSS, go to Subscription management → Add a feed and paste the URL. Done — full article content in your reader, from a site with no RSS feed.
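The subscription URL that FreshRSS ends up polling is just the bridge plus its settings in the query string. A sketch of fetching it with curl — the parameter names here are illustrative only; copy the exact URL from the RSS-Bridge UI rather than hand-building it:

```
# The URL copied from RSS-Bridge's "Atom" button looks roughly like this
curl -s "http://192.168.0.13:3001/?action=display&bridge=CssSelectorBridge&home_page=https://techblog.gr/&url_selector=h3+a&content_selector=.article-content&limit=20&format=Atom" | head -n 20
```

If the output is a valid Atom document, that same URL is what you paste into FreshRSS.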
Finding the Right CSS Selectors
For the article link selector: On the homepage, right-click an article title → Inspect. Look at the tag structure. Common patterns are `h2 a`, `h3 a`, or `.post-title a`. If the site uses generic `<a>` tags everywhere, combine with a URL pattern to filter (e.g. `/blog/202` to match only blog post URLs).

For the content selector: Open any individual article, right-click the body text → Inspect. Look at the parent `<div>` wrapping all the paragraphs. WordPress sites typically use `.entry-content` or `.article-content`. Drupal sites often use `.field-name-body` or `.node-content`.

Gotcha: Iframes. Some sites (especially job boards) load content inside iframes. RSS-Bridge can only parse the main page HTML — if the content is in an iframe, you’re limited to titles and links. Check your browser’s inspector for `<iframe>` elements if the content selector doesn’t seem to work.

Setting Sensible Limits
The Limit field controls how many items RSS-Bridge returns per request. Since FreshRSS remembers articles it has already seen, you only need enough to cover new posts between polling intervals:
| Feed type | Limit | Reasoning |
| --- | --- | --- |
| News sites | 20 | High frequency, many posts per day |
| Blogs | 10 | Weekly or monthly posts |
| Job boards | 10 | Few listings, slow turnover |

What About Paywalled Sites?
RSS-Bridge has limits. If a site blocks automated requests (returning 403 errors) or loads content via JavaScript that requires authentication, RSS-Bridge can’t help. This applies to most academic journals and some major news outlets.
For journals like NEJM, the publisher’s RSS feed is your only option — and it often contains just titles and volume/page references, no abstracts. A useful workaround for medical journals: use PubMed’s RSS feeds instead. PubMed indexes the same articles and includes full abstracts. Search for a journal, save the search, and create an RSS feed from the results.
Unifying Multiple Feed Readers
If you’re migrating from a desktop reader like Akregator to a self-hosted FreshRSS instance, both support OPML import/export. Export from both, then compare the feed URLs to identify:
Feeds in both — already synced, nothing to do.
Feeds only in the old reader — evaluate whether to add them to FreshRSS or drop them.
Feeds only in FreshRSS — typically your newer RSS-Bridge feeds replacing broken native feeds.
Watch for feeds that exist in both but with different URLs — same source, different CDN, or an old Politepol/feed proxy URL that you’ve since replaced with RSS-Bridge.
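The URL comparison itself can be scripted with standard tools. A minimal sketch using `grep`, `sed`, and `comm`, with two tiny stand-in OPML files in place of your real Akregator and FreshRSS exports:

```shell
# Stand-in OPML exports (replace with your real akregator.opml / freshrss.opml)
cat > old.opml <<'EOF'
<opml version="2.0"><body>
<outline text="A" xmlUrl="https://example.com/a.xml"/>
<outline text="B" xmlUrl="https://example.com/b.xml"/>
</body></opml>
EOF
cat > new.opml <<'EOF'
<opml version="2.0"><body>
<outline text="A" xmlUrl="https://example.com/a.xml"/>
<outline text="C" xmlUrl="https://example.com/c.xml"/>
</body></opml>
EOF

# Pull out the xmlUrl attributes and sort them (comm requires sorted input)
extract() { grep -o 'xmlUrl="[^"]*"' "$1" | sed 's/^xmlUrl="//;s/"$//' | sort; }
extract old.opml > old.txt
extract new.opml > new.txt

comm -12 old.txt new.txt   # feeds in both readers
comm -23 old.txt new.txt   # only in the old reader
comm -13 old.txt new.txt   # only in FreshRSS
```

The three `comm` invocations map directly onto the three buckets above, so triaging even a large feed list takes seconds.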
Closing Thoughts
This setup takes about 30 minutes to deploy and configure. What you get in return is a single, self-hosted interface for consuming content from any website — with or without their cooperation. No algorithms deciding what you see, no newsletters cluttering your inbox, no tracking pixels following you around.
RSS never died. It just needs a little infrastructure.