If you’ve read my blogs recently, you’d notice something recurring: me always saying I want to write more. But now I’ve put the structures in place that make that possible. My first blog post was because of HNG in 2024. They said we should write an article as our stage 0 task. I loved the experience of writing on Hashnode so much, but I didn’t write another article till December 2024.
Hashnode had all I wanted, but I wanted my blog on my website. That led me to research, and eventually I found out I could use it as a headless CMS to manage, write, and store all my articles. This blog post, written on Hashnode, gives a high-level explanation of how I did it and how you can too.
Architecture Overview
I already had my website up and running, so my work was simple: fetch posts from the endpoint Hashnode provides and render them on the website.

How the Content Flows
So here’s how this thing actually works in practice.
I write and publish all my posts on Hashnode. I don’t use it as a “blog platform” in the traditional sense; it’s just the place where content lives. Hashnode handles drafts, publishing, and storing the content, and it also serves that content from its own edge servers, so reads are already geographically close and fast.
My site itself never serves content directly from Hashnode though.
When someone opens a blog post on my site, the request hits my server, not Hashnode. From there, an Astro API Route kicks in (same idea as a Next.js API Route). That API route runs on the server and makes a GraphQL request to Hashnode to fetch the post data.
Important detail: this fetch is server-side only. The browser never talks to Hashnode. No public GraphQL calls from the client, no exposed tokens, no client-side fetching.
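To make that concrete, here’s a minimal sketch of what the server-side fetch can look like. The endpoint is Hashnode’s public GraphQL API at gql.hashnode.com; the query shape follows their docs, but double-check the exact field names in their API playground, and the publication host below is a placeholder, not mine.

```ts
// src/lib/hashnode.ts (assumed path) — server-only helper, never imported by client code.
const HASHNODE_ENDPOINT = "https://gql.hashnode.com";

const POST_QUERY = /* GraphQL */ `
  query PostBySlug($host: String!, $slug: String!) {
    publication(host: $host) {
      post(slug: $slug) {
        title
        brief
        coverImage { url }
        content { html }
      }
    }
  }
`;

export interface HashnodePost {
  title: string;
  brief: string;
  coverImage?: { url: string } | null;
  content: { html: string };
}

export async function getPost(slug: string): Promise<HashnodePost | null> {
  const res = await fetch(HASHNODE_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: POST_QUERY,
      // "myblog.hashnode.dev" is a placeholder — use your own publication host.
      variables: { host: "myblog.hashnode.dev", slug },
    }),
  });

  if (!res.ok) {
    throw new Error(`Hashnode request failed with status ${res.status}`);
  }

  const { data } = await res.json();
  return data?.publication?.post ?? null;
}
```

Because this module only ever runs on the server, the GraphQL call (and any token you might add later) stays out of the browser entirely.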
Once the data comes back, the Astro server renders the full HTML for that post on the spot. That’s what server-side rendering means here: the server literally builds the final HTML after you request the page, not at build time, and not in the browser with JavaScript.
The response that gets sent back is already complete HTML. The browser just displays it. There’s no “loading state”, no client-side render step required just to read a post.
To avoid hammering Hashnode and to avoid paying the SSR cost on every request, the rendered HTML is cached on Vercel’s server layer for 24 hours. So the first request after the cache expires does the fetch + render, and everything after that just gets the cached HTML.
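This isn’t my exact code, but a hedged sketch of how the endpoint and the cache header could fit together looks something like this. The file path and the bare-bones HTML shell are purely illustrative, and the import refers to the helper sketched above (assumed path); the Cache-Control header is one common way to get Vercel’s edge to hold a rendered response, since it honours s-maxage (86400 seconds = 24 hours).

```ts
// src/pages/blog/[slug].ts (illustrative path) — an Astro server endpoint.
import type { APIRoute } from "astro";
import { getPost } from "../../lib/hashnode"; // the helper sketched earlier (assumed path)

export const GET: APIRoute = async ({ params }) => {
  const post = await getPost(params.slug!);
  if (!post) return new Response("Not found", { status: 404 });

  // Bare-bones HTML shell just to show the idea — a real page would render a full layout.
  const html = `<!doctype html>
<html>
  <head><title>${post.title}</title></head>
  <body><article>${post.content.html}</article></body>
</html>`;

  return new Response(html, {
    headers: {
      "Content-Type": "text/html; charset=utf-8",
      // Cache the rendered HTML at Vercel's edge for 24 hours; after that, serve a
      // stale copy while it revalidates in the background.
      "Cache-Control": "public, s-maxage=86400, stale-while-revalidate=3600",
    },
  });
};
```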
Benefits of This Approach
I was able to optimize heavily for SEO here with dynamic meta tags, Open Graph images, and content-specific descriptions generated directly from each blog post. All of that HTML is present before the response ever reaches the browser.
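For illustration, those tags can be built straight from the same post data before the HTML ever leaves the server. This is a hypothetical helper, not my actual code; the field names mirror the query sketched earlier, and the Open Graph image here simply reuses Hashnode’s cover image.

```ts
// Hypothetical helper: turn fetched post data into SEO/Open Graph tags for the <head>.
interface PostMeta {
  title: string;
  brief: string;
  coverImage?: { url: string } | null;
}

export function buildMetaTags(post: PostMeta, canonicalUrl: string): string {
  // Note: a real implementation should HTML-escape these values first.
  return [
    `<title>${post.title}</title>`,
    `<meta name="description" content="${post.brief}">`,
    `<link rel="canonical" href="${canonicalUrl}">`,
    `<meta property="og:title" content="${post.title}">`,
    `<meta property="og:description" content="${post.brief}">`,
    post.coverImage ? `<meta property="og:image" content="${post.coverImage.url}">` : "",
  ]
    .filter(Boolean)
    .join("\n");
}
```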
This would not have worked well with a client-side rendered approach. In CSR, JavaScript has to load, execute, and fetch data before meaningful HTML exists. Search engines can execute JavaScript now, but they do it later, inconsistently, and with stricter resource limits. You’re basically hoping the crawler sticks around long enough to render your page. That’s not a reliable SEO strategy.
With SSR, there’s no guessing. The crawler gets exactly the same HTML a user gets.
Another benefit is predictable performance. The browser doesn’t need to download a JS bundle just to read text. The page is readable immediately. JavaScript becomes optional instead of mandatory, which matters a lot on slow devices or bad networks.
Caching makes this approach practical. The expensive part, fetching from Hashnode and rendering HTML, only happens on cache misses. Even without caching, Hashnode’s edge servers are fast enough and the API limits are generous enough that this would still work just fine in practice, as long as there are no bad actors (coughs, hackers) or I don’t become a celebrity. One day.
Everything else is just serving static HTML from Vercel’s edge, which is cheap and fast. Without caching, every request would pay the SSR cost, which is unnecessary overhead.
Drawbacks of This Approach
As there are advantages, there are also disadvantages. The most prominent one is that everything is coupled to Hashnode’s API. If their API is slow or unreachable and the cache is cold, the page doesn’t render. Caching hides this most of the time (because I don’t think any serious platform would be down for 24 hours), but the dependency is still there.
The cache itself is a tradeoff. A 24-hour time to live (TTL) means content can be stale. If I update a post or fix a typo, users will see the old version until the cache expires or is manually invalidated. That’s acceptable for a blog, but not great for time-sensitive content.
From a security perspective, SSR reduces exposure but doesn’t eliminate risk. Because each request can trigger backend work, the system is still vulnerable to traffic-based abuse. Without proper rate limiting, someone can force repeated cache misses and amplify load on both your server and Hashnode’s API.
Finally, this setup is very opinionated. It works because the problem is simple: read-only blog content. The moment you add users, comments, personalization, or real-time features, this architecture stops being the right tool. Will I add all these features?? I DON'T KNOW, but they are fun to think about.
Conclusion
If you stayed till the end, thanks a lot. Fun fact: this will be my 2nd technical article, and I will be writing more deep dives about things I love and things I’m working on, so stay tuned (not like I made that possible lmao). Till we meet again.