Earlier this year, I started moving the content from my site into my PDS. My site has taken a bit of a back seat this year as I’ve focused on @atexplore.social and a newer project, but that shift gave me an opportunity to rethink how and where my content lives.
Rather than treating my site as something backed by a traditional CMS, I’ve been moving toward a model where my PDS is the source of truth and my site is effectively a read-optimized view over that data.
I’m pretty close to having all the content on my site pulling from my PDS. Earlier this week, I switched my blog over to Leaflet, and I’ve just made changes to pull the about section from my PDS as well.
Why store content in a PDS
Keeping my content in my PDS has several advantages over using a standard CMS.
The most immediate benefit is reduced operational overhead. Previously, I ran my blog on Ghost, which meant running and maintaining infrastructure myself and paying the associated costs. By contrast, storing content in my PDS removes the need to operate a separate CMS stack.
There’s also a strong ownership aspect. I own these records directly, and they aren’t tied to a specific third-party service. While my account still runs on Bluesky’s servers today, I plan to migrate fully to my own PDS. Once that happens, both the data and the infrastructure are entirely under my control.
How content is stored
Most of the content on my site now lives in my PDS, either through my own custom lexicons or existing service lexicons. In the past, I used Sanity as a CMS and pulled all site content from there. That dependency is now mostly gone.
Instead, my site pulls data directly from the AT Protocol via my PDS. The PDS acts as the canonical store for everything the site needs: pages, metadata, and blog post references.
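To give a rough idea of what that looks like, fetching a record from the PDS is a single XRPC call. This is just a sketch; the PDS host, DID, collection NSID, and record key below are placeholders rather than my actual lexicons:

```typescript
// Sketch: fetch one record from a PDS over XRPC.
// The PDS host, DID, collection NSID, and rkey are placeholders.
const PDS_URL = "https://example-pds.host";
const DID = "did:plc:example";

async function getRecord(collection: string, rkey: string): Promise<unknown> {
  const params = new URLSearchParams({ repo: DID, collection, rkey });
  const res = await fetch(`${PDS_URL}/xrpc/com.atproto.repo.getRecord?${params}`);
  if (!res.ok) throw new Error(`getRecord failed: ${res.status}`);
  // The response is { uri, cid, value }; `value` is the record itself.
  const { value } = await res.json();
  return value;
}

// e.g. the record backing the about page (hypothetical lexicon):
// const about = await getRecord("com.example.site.about", "self");
```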
Caching and cache invalidation
To avoid querying the PDS on every request, I cache all site content in Redis. Redis acts as a read-optimized layer in front of the PDS. On an initial request, after a cache restart, or when an entry has expired, content is fetched from the PDS and written back to Redis.
Each cached entry also has a 24-hour TTL. This acts as a safety net to ensure the cache eventually self-heals even if an invalidation event is missed.
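The read path is a standard cache-aside lookup. Here’s a simplified sketch using ioredis; the helper and key naming are illustrative, not my exact implementation:

```typescript
import Redis from "ioredis";

const redis = new Redis(); // defaults to localhost:6379
const TTL_SECONDS = 24 * 60 * 60; // 24-hour safety net

// Cache-aside read: serve from Redis when possible, otherwise fetch from
// the PDS and write the result back with a TTL.
async function getCached<T>(key: string, fetchFromPds: () => Promise<T>): Promise<T> {
  const hit = await redis.get(key);
  if (hit !== null) return JSON.parse(hit) as T;

  const fresh = await fetchFromPds();
  await redis.set(key, JSON.stringify(fresh), "EX", TTL_SECONDS);
  return fresh;
}

// Usage (hypothetical key and fetcher):
// const about = await getCached("site:about", () => getRecord("com.example.site.about", "self"));
```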
In addition to TTL-based expiration, I run a separate service that consumes the AT Protocol firehose. This service listens for events related to my DID and filters for changes on the lexicons that back my site’s content. When a relevant record is created, updated, or deleted, the service proactively refreshes or invalidates the corresponding entries in Redis.
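The invalidation service is conceptually small. Here’s a simplified sketch of the idea using Jetstream, a JSON-friendly view over the firehose; the endpoint, collection list, and key scheme are illustrative rather than what I actually run:

```typescript
import WebSocket from "ws";
import Redis from "ioredis";

const redis = new Redis();
const MY_DID = "did:plc:example"; // placeholder
const COLLECTIONS = ["com.example.site.about", "pub.leaflet.document"]; // illustrative

// Subscribe only to commits for my DID and the collections backing the site.
const params = new URLSearchParams();
params.append("wantedDids", MY_DID);
for (const c of COLLECTIONS) params.append("wantedCollections", c);

const ws = new WebSocket(`wss://jetstream2.us-east.bsky.network/subscribe?${params}`);

ws.on("message", async (data) => {
  const event = JSON.parse(data.toString());
  if (event.kind !== "commit") return;

  const { collection, rkey, operation } = event.commit;
  const cacheKey = `site:${collection}:${rkey}`; // example key scheme

  // Drop the stale entry; the next request repopulates it from the PDS.
  // Creates and updates could proactively re-fetch here instead.
  await redis.del(cacheKey);
  console.log(`invalidated ${cacheKey} after ${operation}`);
});
```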
In steady state, most reads are served directly from Redis. Content updates propagate through the firehose, and the cache is refreshed immediately when changes occur, while the TTL provides a fallback mechanism rather than the primary update strategy.
Blog migration
As part of this work, I migrated my blog from Ghost to Leaflet. Posts are now pulled into my site through my PDS, with links back to Leaflet for the full content. Over time, I may add functionality to render the full content natively on my site instead of redirecting.
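At the data level, listing posts is just an enumeration of records in the relevant collection of my repo. A rough sketch is below, with the caveat that the collection NSID and the `title` field are assumptions for illustration, not something to rely on:

```typescript
// Sketch: list blog post records from my repo. PDS_URL and DID are the same
// placeholders as in the earlier sketch; the collection NSID is an assumed
// Leaflet lexicon.
async function listPosts() {
  const params = new URLSearchParams({
    repo: DID,
    collection: "pub.leaflet.document", // assumption, for illustration only
    limit: "50",
  });
  const res = await fetch(`${PDS_URL}/xrpc/com.atproto.repo.listRecords?${params}`);
  if (!res.ok) throw new Error(`listRecords failed: ${res.status}`);
  const { records } = await res.json();

  // Each record carries enough metadata to render a listing entry and link
  // out to Leaflet for the full post.
  return records.map((r: { uri: string; value: { title?: string } }) => ({
    title: r.value.title ?? "Untitled",
    rkey: r.uri.split("/").pop(),
    uri: r.uri,
  }));
}
```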
One of the key benefits of this setup is durability. If Leaflet were to shut down for any reason, the posts would still exist as records in my PDS. The site wouldn’t lose its data; at worst, I’d need to change how that data is rendered.
Where this leaves things
At this point, my PDS acts as the backbone of my site. Content is stored once, owned by me, distributed via the AT Protocol, and served efficiently through a Redis-backed cache that stays in sync via the firehose.
There’s still more I want to build on top of this—particularly rendering more content natively—but the core architecture feels like a solid foundation moving forward.