The First Warning Signs
Out of nowhere, my teammates started texting:
"Dashboard is logging me out every 2 minutes."
"The operator panel is completely dead for all users."
I opened the logs. Then the browser console. It was cursed.
Access to XMLHttpRequest at '/api/admin/complaints'
from origin 'http://localhost:3000' has been blocked by CORS policy
GET /api/admin/complaints net::ERR_FAILED 429 (Too Many Requests)
CORS errors across ALL user types, all endpoints, all tabs.
My first thought? "Great. Another Next.js proxy/CORS config headache."
But then I saw it—429 (Too Many Requests) buried in the network tab.
That wasn't a CORS problem. That was our brand new rate limiter kicking in.
And the wild part? We did have CORS configured... So why were the 429 responses missing CORS headers?
The Red Herring Hunt
First thing I checked: index.js.
app.use(rateLimit({ windowMs: 15 * 60 * 1000, max: 100 }));
app.use(cors({ origin: "http://localhost:3000", credentials: true }));
// ... other middleware
There it was. Rate limiting ran before CORS.
When the limiter rejected requests with 429, the response never reached CORS middleware. No Access-Control-Allow-Origin header → browser screams "CORS BLOCKED!"
Classic middleware order trap.
- Security instinct: `rateLimit` first ✅
- Browser reality: CORS must run on every response, even errors ✅
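To see why the order matters so much, here is a stripped-down repro sketch (illustrative stand-ins, not our actual app): once a middleware ends the response, Express skips everything registered after it, headers included.

// Stand-alone repro: run it, hit the route a few times, and watch the
// Access-Control-Allow-Origin header disappear on the 429s.
const express = require("express");
const app = express();

let hits = 0;

// Stand-in for the rate limiter: rejects everything after the first 3 requests
app.use((req, res, next) => {
  hits += 1;
  if (hits > 3) return res.status(429).send("Too Many Requests"); // response ends here
  next();
});

// Stand-in for cors(): never runs for the rejected requests above,
// so the 429 goes out without Access-Control-Allow-Origin
app.use((req, res, next) => {
  res.set("Access-Control-Allow-Origin", "http://localhost:3000");
  next();
});

app.get("/api/admin/complaints", (req, res) => res.json([]));
app.listen(5000);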
After Some Time… the Real Problem Revealed Itself
Fixing middleware order solved the symptom. But why were we hitting rate limits so hard?
I opened 3 tabs:
- Admin dashboard
- Operator panel
- Complainant view
Each page loaded and immediately started polling like crazy:
- Admin complaints refresh (3s interval)
- Notification provider refresh (3s interval)
- My complaints refresh (3s interval)
- Auth session checks (background polling)
- Socket.IO handshake attempts (reconnect noise)
One page load = 5–10 API calls instantly.
3 tabs = 30–50 requests in ~30 seconds.
Rate limit: 100 requests / 15 mins / per IP bucket.
localhost → 1 bucket (127.0.0.1)
30s → ~50 requests consumed
90s → bucket almost dead
3min → rate limited everywhere
Auth refresh fails → mass logout → dashboard "dies"
This wasn't just downtime. It was a full-on "the app is broken" moment.
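In hindsight, one flag would have made the burn rate visible instead of mysterious: express-rate-limit (v6+) can advertise the remaining budget as RateLimit-* response headers. A sketch using the same numbers as our config:

// Same limiter, but now the network tab shows how much budget is left
app.use(rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
  standardHeaders: true, // adds RateLimit-Limit / RateLimit-Remaining / RateLimit-Reset
  legacyHeaders: false,  // drops the older X-RateLimit-* variants
}));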
Tier 1 Fix: The 5-Minute Containment
First priority: stop the bleeding.
// BEFORE (broken)
app.use(rateLimit({ windowMs: 15 * 60 * 1000, max: 100 }));
app.use(cors({ origin: "http://localhost:3000", credentials: true }));
// AFTER (deployed in 5 minutes)
app.use(cors({
  origin: process.env.NODE_ENV === "production"
    ? ["https://yourapp.com"]
    : "http://localhost:3000",
  credentials: true,
}));

app.use(rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
  skip: (req) => process.env.NODE_ENV !== "production", // Dev relief, prod safe
}));
Deployed. Tested. Instantly calmer.
Network tab now showed clean 429 Too Many Requests with proper CORS headers. Browser stopped shouting fake "CORS blocked" errors. Developers could refresh freely again.
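You can confirm the same thing without a browser. A quick check from a Node 18+ REPL (the port and path here are assumptions, point it at wherever the API actually listens):

// Send a request with an Origin header, then inspect the response headers.
const res = await fetch("http://localhost:5000/api/admin/complaints", {
  headers: { Origin: "http://localhost:3000" },
});
// Whatever the status is (200, 401, or a genuine 429 in production mode),
// the CORS header should now always come back:
console.log(res.status, res.headers.get("access-control-allow-origin"));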
But this was just containment. Production users behind office NAT (shared IP) would still hit the wall.
Tier 2 Fix: Production-Ready Rate Limiting (Per-User Buckets)
The real flaw: IP-based rate limiting where 50 developers = 1 public IP = 1 bucket. One F5 spammer kills everyone.
Solution: Rate-limit by req.user.id after authentication.
const ipLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100, // Strangers get strict limits
});

const userLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 1500, // Authenticated users get breathing room
  keyGenerator: (req) => req.user?.id || req.ip, // User ID first, IP as fallback
});

const authLimiter = rateLimit({
  // Brute force protection on login endpoints
  windowMs: 15 * 60 * 1000,
  max: 50,
});

// CRITICAL: authMiddleware must run BEFORE userLimiter so req.user exists
app.use("/api/v1/public", ipLimiter);
app.use("/api/v1/auth", authLimiter);
app.use("/api/v1/admin", authMiddleware, userLimiter);
app.use("/api/v1/dashboard", authMiddleware, userLimiter);
Deployed to staging. Three tabs, aggressive polling, and no more rate-limit errors. Each userId gets its own 1,500-request bucket. Office NAT solved. Production-ready.
But polling was still expensive. We made it survive. Time to make it thrive.
Tier 3 Fix: Stop Paying the Polling Tax
Tier 1/2 fixed stability. Tier 3 fixed economics.
POLLING: 50 users × 20 polls/min × 60 mins = 60,000 requests/hour (99% empty)
WEBSOCKETS: 50 connections, minimal overhead, updates only on change
Before (death by 1000 polling cuts):
useEffect(() => {
  const interval = setInterval(() => {
    getAllComplaints(); // every 3s, usually the same list
    getUnreadCount();   // every 3s, usually nothing new
    checkAuthStatus();  // every 3s, almost always "still logged in"
  }, 3000);
  return () => clearInterval(interval);
}, []);
After (push-based, zero polling):
useEffect(() => {
  // The server pushes state; the component just listens
  socket.on("complaints", setComplaints);
  socket.on("notifications", setNotifications);
  socket.on("authStatus", setAuthStatus);
  return () => {
    // Remove only this component's listeners on unmount
    socket.off("complaints", setComplaints);
    socket.off("notifications", setNotifications);
    socket.off("authStatus", setAuthStatus);
  };
}, []);
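The socket these handlers hang off is a single shared client instance, roughly this (a sketch; the URL and credentials mode are assumptions, not our literal config):

// socket.js: one connection per tab, imported by every component that listens
import { io } from "socket.io-client";

export const socket = io("http://localhost:5000", {
  withCredentials: true, // handshake carries the same session cookie as the REST calls
});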
Backend pushes only when needed:
io.to(userId).emit("notifications", { count: 3 });
io.to("adminRoom").emit("complaints", complaintData);
Socket.IO isn't free—connection lifecycle, reconnections, horizontal scaling (Redis adapter). But 95% traffic reduction made it non-negotiable.
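The Redis adapter piece, for reference: the moment this runs on more than one instance, rooms and emits have to be shared across nodes, otherwise an emit only reaches sockets connected to that one process. A sketch of the standard @socket.io/redis-adapter setup (run inside the async server bootstrap):

const { createClient } = require("redis");
const { createAdapter } = require("@socket.io/redis-adapter");

// One publisher and one subscriber connection; the adapter relays room
// broadcasts through Redis so every Socket.IO instance sees every emit.
const pubClient = createClient({ url: process.env.REDIS_URL });
const subClient = pubClient.duplicate();

await Promise.all([pubClient.connect(), subClient.connect()]);
io.adapter(createAdapter(pubClient, subClient));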
Trade-offs We Actually Weighed
| Tier | Win | Trade-off |
| --- | --- | --- |
| Tier 1 | 5-minute fix, instant relief | Doesn't survive office NAT |
| Tier 2 | Production-safe, fair per-user limits | Polling still burns traffic |
| Tier 3 | 95% traffic drop, instant UX | Socket.IO infra complexity |
Four Things I'll Never Forget From This Fire
1. Console screaming "CORS" ≠ CORS is broken
Console: "CORS blocked"
Network: 429 status
= Missing CORS headers on error responses
Fix: middleware order
2. Middleware order is browser contract
Security instinct: rateLimit → cors ❌
Browser reality: cors → rateLimit ✅
3. Rate limiting = buckets per key
express-rate-limit default: req.ip
Corporate NAT: 50 users → 1 bucket = collective punishment
Per-user: req.user.id → 50 buckets = only spammers blocked
4. Polling death spiral is silent
Most dashboards die from "small" things accumulated over time:
- 3s polling here
- Session refresh there
- Notification count loop
Then rate limiting goes live → everything explodes.
The Results
Week 1 (Tier 1): No fake CORS noise, dev work unblocked
Week 2 (Tier 2): Office NAT fixed, 10x rate limit headroom
Week 4 (Tier 3): API traffic down 95%, sub-100ms updates
Closing Thoughts
Every React + Express dashboard hits this wall. Most teams:
- ❌ Disable rate limiting (DoS risk)
- ❌ Set `Access-Control-Allow-Origin: *` (security nightmare)
- ❌ Blame React StrictMode
- ❌ Live with 3-second lag forever
We turned a crisis into a 95% traffic reduction and near-instant updates through systematic thinking instead of symptom chasing.
Next time you see:
✅ "CORS blocked" in console + 429s in network tab

Check this order:
- Middleware order (`cors` before `rateLimit`)
- Aggressive polling loops (`setInterval` hell)
- Per-IP vs per-user rate limiting
- WebSocket migration roadmap
Dashboard's flying now. Coffee tastes better.