From the Trenches
January 10, 2026
10 min read

If Request B Doesn't Need A, Stop Waiting

#System Design
#Performance

There's a very specific kind of pain you only understand after you've shipped real software:

The app isn't "broken"… but it feels broken.

Everything technically works. No red errors. No crash. No major downtime. But users open a page… and wait. And wait.

And then that little voice starts in their head:

"Is this stuck?"

This is the story of how we took one of our slowest, heaviest pages and made it feel fast, predictable, and honestly… kind of satisfying.

And yes, it started like most great engineering stories: At 3AM.


The Setup: One Page, Four Worlds, Infinite Delay

We had a detail page that looked simple on the surface. But behind the scenes, it was pulling data from four different domains: Complaints, Cases, Users, Chats.

A "normal" page for the user. A mini distributed system for the backend.

And the worst part? The requests were happening in a perfect little chain of suffering:

fetchComplaints()
  .then(fetchCases)
  .then(fetchUsers)
  .then(fetchChats)

So if anything slowed down… everything slowed down.

Result: 4–6 seconds before the page showed anything, zero partial rendering, users staring at a blank screen, questioning life decisions.

And here's the thing people forget: Users don't measure performance in milliseconds. They measure it in "Can I start reading yet?"
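To make the cost of that chain concrete, here's a toy sketch with fake 50ms timers standing in for the real services (the names and delays are illustrative, not our actual endpoints):

```javascript
// Toy illustration: fake 50ms "services", not the real endpoints.
const fakeFetch = (label, ms) =>
  new Promise((resolve) => setTimeout(() => resolve(label), ms));

async function loadSequentially() {
  const start = Date.now();
  await fakeFetch('complaints', 50);
  await fakeFetch('cases', 50);
  await fakeFetch('users', 50);
  await fakeFetch('chats', 50);
  // Four independent 50ms calls in a chain cost ~200ms total:
  // the delays add, they never overlap.
  return Date.now() - start;
}

loadSequentially().then((elapsed) =>
  console.log(`blank screen for ~${elapsed}ms`)
);
```

Four independent calls, and the user pays for the sum of all of them.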


The 3AM Moment: Don't Panic, Don't Nuke Everything

The first time this problem became "real real"… was during a production incident. One of those moments where the system isn't fully down, but it's unstable enough to make you sweat.

And at that time, your brain offers you exactly two solutions: restart everything, or become a monk and quit tech.

But panic is expensive.

So instead, I followed the one rule that saves your system and your sanity:

Restore service first. Find root cause second.

Because "permanent fixes" are useless if your users can't load the page right now.

So at 3AM, my checklist was basic: What's the blast radius? What was the last known good state? What changed recently? What are the metrics saying: CPU, memory, latency, errors, disk?

At this point I wasn't looking for genius. I was looking for signal.


What We Found (The Real Bug): Over-Fetching and Waterfall Loading

Once things stabilized, the real issue became obvious.

Problem 1: We were shipping the entire filing cabinet

Our "All Complaints" admin table was pulling 80+ fields per row. Including history logs, heavy nested objects, things nobody could even see.

We were sending over 1.2MB–1.5MB of JSON to render a table showing like 5 columns.

Bandwidth wasn't even the problem. The browser wasn't "downloading slow". It was parsing, allocating memory, and choking on useless work.

The client was gasping to process data it would never show.

Problem 2: Everything was sequential

Even worse, we made every domain wait for the previous one: Complaint, then Case, then User, then Chat.

Which meant the user got nothing until the slowest dependency finished. This is the classic waterfall loading trap.


Fix Part 1: Lite Endpoints (Ship Only What the User Can See)

First fix was not fancy. It was just… disciplined.

For list views and summary pages, we created "lite" endpoints. And in DB queries, we used projections.

// Before: Heavy Fetch
const complaints = await Complaint.find({});

// After: Lite Projection
const complaints = await Complaint.find({})
  .select('id status title createdAt');

That one change made the system instantly feel less bloated.

Because now list views were lightweight, detail views fetched full data only when needed, and the UI stopped "holding its breath" on initial load.
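The same discipline applies at the serialization layer. Here's a minimal sketch of the "ship only what the list renders" idea as a plain helper; the field names and sample document are illustrative, not our real schema:

```javascript
// Hypothetical helper: keep only the fields the list view actually renders.
const pick = (fields) => (doc) =>
  Object.fromEntries(fields.map((key) => [key, doc[key]]));

const LIST_FIELDS = ['id', 'status', 'title', 'createdAt'];
const toListItem = pick(LIST_FIELDS);

const heavyDoc = {
  id: 'c-101',
  status: 'open',
  title: 'Refund not processed',
  createdAt: '2026-01-08',
  historyLogs: ['...hundreds of entries the table never shows...'],
  nested: { deeply: { useless: 'for a list view' } },
};

// The history logs and nested objects never leave the server.
console.log(toListItem(heavyDoc));
```

The point isn't the helper itself; it's that the heavy fields die on the backend instead of choking the browser.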


Fix Part 2: Stop Making the Frontend Guess Meaning

This one was sneaky.

Some of our frontend code was doing "smart UI logic" like scanning records, inferring states, deciding whether something was finalized, setting icons based on computed rules.

Which sounds harmless… until you do it across 100 rows and your laptop fan becomes a helicopter.

So we moved that expensive logic to the backend and returned computed fields.

Frontend became simple: "if this field says finalized, show the badge," not "loop through history logs and try to understand life."

Basically: Icons should be dumb renderers, not miniature rule engines.
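As a sketch of what "computed fields" means here (the serializer and log shape are hypothetical, assuming history entries carry a type):

```javascript
// Hypothetical server-side serializer: derive display state once, on the backend.
function withComputedFields(complaint) {
  const lastLog = complaint.historyLogs[complaint.historyLogs.length - 1];
  return {
    id: complaint.id,
    title: complaint.title,
    // The UI just reads this flag — no client-side log scanning per row.
    isFinalized: Boolean(lastLog && lastLog.type === 'FINALIZED'),
  };
}

const row = withComputedFields({
  id: 'c-7',
  title: 'Billing dispute',
  historyLogs: [{ type: 'OPENED' }, { type: 'FINALIZED' }],
});
// row.isFinalized is true, so the frontend shows the badge. That's the whole job.
```

One pass on the server replaces a hundred little inference loops on the client.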


Fix Part 3: Waterfall to Stream (Make the Page Feel Instant)

Now for the bigger win.

This wasn't about reducing payload. This was about changing the user experience.

Instead of treating the page like one giant request, we broke it into domain-driven boxes: a Complaint box, a Case box, a User box, a Chat box.

Each box could load independently. Each one could render its skeleton immediately. Each one had its own "ready" moment.

Then the backend stopped acting like a script and started acting like a scheduler:

const [complaint, caseData, user, chats] = await Promise.all([
  getComplaint(),
  getCase(),
  getUser(),
  getChats()
]);

So the new rule became:

If Request B doesn't strictly need Request A, they should never be sequential.
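The rule has a corollary: when B genuinely does need A, chain only that pair and let everything else run in parallel. A sketch with hypothetical stand-in loaders (fake timers instead of the real domain services):

```javascript
// Hypothetical loaders — fake stand-ins for the real domain services.
const fake = (value, ms) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

const getComplaint = (id) => fake({ id, caseId: 'case-9' }, 50);
const getCase = (caseId) => fake({ caseId, status: 'open' }, 50);
const getUser = (id) => fake({ id, name: 'Sam' }, 50);
const getChats = (id) => fake([{ from: 'Sam', text: 'hi' }], 50);

async function loadDetailPage(complaintId) {
  // User and chats need nothing from the complaint — start them immediately.
  const userP = getUser(complaintId);
  const chatsP = getChats(complaintId);

  // Only the case lookup truly depends on the complaint result, so only
  // this pair stays sequential.
  const complaint = await getComplaint(complaintId);
  const caseData = await getCase(complaint.caseId);

  const [user, chats] = await Promise.all([userP, chatsP]);
  return { complaint, caseData, user, chats };
}
```

Total latency is now roughly two hops (complaint, then case), not four, because the user and chat fetches overlap with the chain instead of waiting behind it.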


Fix Part 4: Skeleton Streaming (Users Get Value Immediately)

The final polish was making the page feel responsive. Not by lying. But by showing progress like a good UX citizen.

Using Suspense boundaries and skeleton loading: page shell renders in under 50ms, lightweight domain sections appear quickly, heavy ones (like chats) stream in separately.

This one change does something magical: It turns a page from "broken slow" into "fast and loading more".

Users forgive loading. They don't forgive nothingness.
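Our implementation used React's Suspense boundaries, but the underlying idea is framework-agnostic: paint a placeholder in the first frame, then swap in real content when the data lands. A stripped-down sketch (the `el` object here is a stand-in for a DOM node):

```javascript
// Framework-free sketch of skeleton loading: show something now, fill in later.
function renderSection(el, loadData, renderContent) {
  el.textContent = 'Loading…'; // skeleton appears immediately
  return loadData().then((data) => {
    el.textContent = renderContent(data); // real content streams in when ready
  });
}

// Usage with a fake element and a deliberately slow loader:
const el = { textContent: '' };
const done = renderSection(
  el,
  () => new Promise((resolve) => setTimeout(() => resolve(3), 50)),
  (count) => `${count} chats`
);
// el.textContent reads 'Loading…' right away, and updates once the data arrives.
done.then(() => console.log(el.textContent));
```

Each domain box runs this pattern independently, so a slow chat service never blanks out the complaint details sitting next to it.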


Results (The Only Part That Matters)

Here's what actually improved after the changes:

Payload (list views) dropped from ~1.2MB to ~200KB

Time to Interactive (TTI) dropped ~70% on the detail page

Mobile performance improved massively

Less CPU churn, fewer freezes, smoother scrolling

Page feels predictable, not reactive

And the real win? The app stopped feeling "heavy".


The Senior Dev Lesson Here

This wasn't a rewrite. This wasn't "let's migrate the architecture".

This was a classic "engineering maturity" fix: measure the pain, identify where time is wasted, reduce useless work, ship in smaller chunks, stream what you can, validate results.

And yeah… Sometimes the biggest performance upgrade isn't a new framework. It's just not sending an entire filing cabinet to render five columns.


Rules I'm Keeping Forever

If you're building pages like this, here are my new non-negotiables:

Ship only what the user can see. A summary list is a glance, not a deep dive.

Parallelize anything that isn't dependent. Waterfalls are for tourist spots, not production UI.

Mitigate first, perfect later. Incidents are not your architecture review meeting.

Fast doesn't mean instant. It means progressive. Give the user something quickly, then build the rest around it.

Enjoyed this article?

I write about practical engineering lessons, software architecture, and problem-solving foundations.
