We Benchmarked Our SSR Framework Against Next.js — Here's What We Found
We built Pareto, a lightweight streaming-first React SSR framework on Vite. Claims are cheap — so we built an automated benchmark suite that runs in CI on every PR, comparing Pareto against Next.js, React Router (Remix), and TanStack Start on identical hardware.
## What We Tested

Four scenarios covering the most common SSR workloads:

- **Static SSR** — Page with inline data, no async loader. Pure SSR throughput.
- **Data Loading** — Loader with a simulated 10ms DB query. SSR + data-fetching overhead.
- **Streaming SSR** — `defer()` + Suspense with 200ms-delayed data. Streaming pipeline efficiency.
- **API / JSON** — Pure JSON endpoint. Routing + serialization overhead.
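To make the scenarios concrete, here is a minimal sketch of what the Data Loading and Streaming SSR loaders look like. The names and data shapes are illustrative, not the benchmark's actual code:

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Data Loading: the loader awaits a simulated 10ms DB query
// before the page can render at all.
async function dataLoadingLoader() {
  await sleep(10);
  return { user: { name: 'Ada' } };
}

// Streaming SSR: the loader returns immediately. Critical data is
// available synchronously so the shell can flush right away, while
// the 200ms promise is deferred and streams in behind a Suspense boundary.
function streamingLoader() {
  return {
    critical: { title: 'Dashboard' },
    deferred: sleep(200).then(() => ({ stats: [1, 2, 3] })),
  };
}
```

The difference matters for the numbers below: in the blocking scenario every request pays the full delay before the first byte, while in the streaming scenario only the deferred chunk does.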
All benchmarks run on GitHub Actions (Ubuntu, Node 22, 4 CPUs), using autocannon with 100 connections for 30 seconds.
## Throughput: Requests Per Second

| Scenario | Pareto | Next.js | React Router | TanStack Start |
| --- | --- | --- | --- | --- |
| Static SSR | 2,224/s | 3,328/s | 997/s | 2,009/s |
| Data Loading | 2,733/s | 293/s | 955/s | 1,386/s |
| Streaming SSR | 247/s | 236/s | 247/s | 247/s |
| API / JSON | 3,675/s | 2,212/s | 1,950/s | — |
Next.js wins on static SSR. But the moment a loader is involved, Pareto handles 9.3x more requests than Next.js and 2.9x more than React Router.
## Load Capacity: Max Sustainable QPS

We ran a ramp-up test from 1 to 1,000 concurrent connections, measuring the max QPS each framework sustains while keeping p99 latency under 500ms.

| Scenario | Pareto | Next.js | React Router | TanStack Start |
| --- | --- | --- | --- | --- |
| Static SSR | 2,281/s | 2,203/s | 1,098/s | 1,515/s |
| Data Loading | 2,735/s | 331/s | 1,044/s | 1,458/s |
| Streaming SSR | 2,022/s | 310/s | 807/s | 960/s |
| API / JSON | 3,556/s | 1,419/s | 1,912/s | — |
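The selection logic behind "max sustainable QPS" can be sketched in a few lines: of all ramp-up samples whose p99 stays under the budget, take the highest throughput. The sample numbers below are hypothetical, not the benchmark's raw output:

```javascript
// Max sustainable QPS: highest throughput observed while p99 latency
// stays under the 500ms budget. Samples over budget are excluded.
function maxSustainableQps(samples, p99BudgetMs = 500) {
  return samples
    .filter((s) => s.p99Ms < p99BudgetMs)
    .reduce((max, s) => Math.max(max, s.qps), 0);
}

const samples = [
  { connections: 10, qps: 800, p99Ms: 40 },
  { connections: 100, qps: 2000, p99Ms: 180 },
  { connections: 500, qps: 2100, p99Ms: 450 },
  { connections: 1000, qps: 1900, p99Ms: 1200 }, // over budget: excluded
];
console.log(maxSustainableQps(samples)); // 2100
```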
Under streaming SSR load, Pareto sustains 2,022 req/s — that's 6.5x Next.js and 2.5x React Router.
What this looks like in practice: say your product page needs to serve 2,000 req/s at peak. With Pareto, that's a single server. With Next.js sustaining 331/s, you'd need 7 servers behind a load balancer. For streaming SSR dashboards, it's 1 Pareto instance vs 7 Next.js instances.
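The server counts fall directly out of the sustained-QPS table — divide peak load by per-server capacity and round up:

```javascript
// Servers needed to cover a peak load, given one server's sustained QPS.
function serversNeeded(peakRps, perServerRps) {
  return Math.ceil(peakRps / perServerRps);
}

console.log(serversNeeded(2000, 2022)); // Pareto, streaming SSR → 1
console.log(serversNeeded(2000, 331));  // Next.js, data loading → 7
```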
## Latency

| Scenario | Pareto p50 / p99 | Next.js p50 / p99 | React Router p50 / p99 |
| --- | --- | --- | --- |
| Static SSR | 431ms / 1.35s | 244ms / 326ms | 704ms / 7.16s |
| Data Loading | 350ms / 702ms | 1.42s / 7.82s | 760ms / 7.41s |
| API / JSON | 266ms / 320ms | 283ms / 321ms | 486ms / 2.12s |
Under 100 concurrent connections, Pareto's data loading p99 is 702ms while Next.js spikes to 7.82s. 99% of users get their page in under 700ms with Pareto. With Next.js, 1 in 100 users waits nearly 8 seconds.
## Bundle Size

| Framework | Client JS (gzip) | Total (gzip) |
| --- | --- | --- |
| Pareto | 62 KB | 72 KB |
| Next.js | 233 KB | 409 KB |
| React Router | 100 KB | 102 KB |
| TanStack Start | 101 KB | 272 KB |
62 KB of client JavaScript — roughly 1/4 of Next.js. On 4G mobile (~5 Mbps), that's 100ms to download vs 370ms. On 3G, it's 330ms vs 1.2 seconds before any rendering begins.
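Those download times are straightforward to reproduce; the link speeds are rough assumptions (4G ≈ 5 Mbps, 3G ≈ 1.5 Mbps), not measurements:

```javascript
// Download time for a bundle: size in KB → kilobits, divided by link speed.
// 1 Mbps = 1 kilobit per millisecond, so the division yields milliseconds.
function downloadMs(sizeKB, linkMbps) {
  const kilobits = sizeKB * 8;
  return Math.round(kilobits / linkMbps);
}

console.log(downloadMs(62, 5));    // Pareto on 4G  → 99 ms
console.log(downloadMs(233, 5));   // Next.js on 4G → 373 ms
console.log(downloadMs(62, 1.5));  // Pareto on 3G  → 331 ms
console.log(downloadMs(233, 1.5)); // Next.js on 3G → 1243 ms
```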
## The Cost Difference

Here's a concrete scenario — a SaaS dashboard at 10,000 data-loading req/s peak:

| Framework | Servers needed (4 CPU) | Monthly cost (est.) |
| --- | --- | --- |
| Pareto | 4 | ~$160 |
| TanStack Start | 7 | ~$280 |
| React Router | 10 | ~$400 |
| Next.js | 31 | ~$1,240 |
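The table derives from the sustained data-loading QPS numbers above, assuming 4-CPU instances at roughly $40/month each — an illustrative price, not a cloud-provider quote:

```javascript
// Reproduce the cost table: servers = ceil(peak / per-server QPS),
// cost = servers × an assumed ~$40/month per 4-CPU instance.
function monthlyCost(peakRps, perServerRps, pricePerServer = 40) {
  const servers = Math.ceil(peakRps / perServerRps);
  return { servers, cost: servers * pricePerServer };
}

console.log(monthlyCost(10000, 2735)); // Pareto  → { servers: 4, cost: 160 }
console.log(monthlyCost(10000, 331));  // Next.js → { servers: 31, cost: 1240 }
```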
## How We Keep Benchmarks Honest

- **CI automated** — runs on every PR touching core code
- **System tuning** — ASLR disabled, CPU governor set to performance
- **Median aggregation** — eliminates outlier noise; CV% reported for stability
- **Sequential isolation** — one framework at a time, cooldown between runs
- **Same hardware** — all frameworks on the same GitHub Actions runner
The full suite is open source: github.com/childrentime/pareto/tree/main/benchmarks
```bash
npx create-pareto my-app
cd my-app && npm install && npm run dev
```
Pareto is a lightweight, streaming-first React SSR framework built on Vite. GitHub · Docs