Your Day 1 task at a fintech startup: build the /users and /transactions API. Your tech lead load-tests it on Day 5. Ship something that survives 1,000 RPS.
Build this end to end. The PgBouncer + parameterized query + Zod + cursor pagination pattern appears in your first production code review at every backend role. Having built and load-tested it before means you understand the failure modes, not just the happy path.
Extend this project: add Redis caching for GET /users/{id} (cache-aside with 5 min TTL), add rate limiting per user ID (token bucket in Redis), add OpenTelemetry tracing for every database query. This is the version you present in system design interviews.
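The rate-limiting extension can be prototyped in memory before moving it to Redis. Below is a minimal token-bucket sketch of the algorithm the Redis version would implement per user ID; time is passed in explicitly (in seconds) to keep it deterministic, and the class name and parameters are illustrative, not from any library:

```typescript
// Minimal in-memory token bucket — a sketch of what the Redis version
// would store per user ID. `capacity` caps burst size; `refillPerSec`
// is the sustained allowed request rate.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,
    private refillPerSec: number,
    now = 0
  ) {
    this.tokens = capacity;
    this.last = now;
  }

  // Returns true if the request is allowed, false if rate-limited.
  allow(now: number): boolean {
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + (now - this.last) * this.refillPerSec
    );
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

In Redis this becomes a small Lua script (or transaction) keyed by user ID, so the read-modify-write is atomic across all API instances.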
CTO assigns task: /users + /transactions API, JWT auth, PostgreSQL, deployed by Friday
Engineer ships working code. All endpoints return correct data locally. Tests pass.
k6 load test: 1,000 VUs. POST /transactions fails at 450 RPS: "remaining connection slots are reserved"
CRITICAL: GET /transactions returns 2M rows — OOMKills the API process at 200 concurrent requests
RESOLVED: PgBouncer added, pagination implemented, parameterized queries in place. Load test passes at 1,200 RPS.
The question this raises
When local development hides the three classes of bugs that kill production APIs — connection limits, missing pagination, and injection vulnerabilities — what does "production-ready" actually mean for a REST API?
A POST /orders endpoint inserts a row into the orders table and then synchronously sends a confirmation email. The email service is slow (2 seconds). What is the minimum response time for the endpoint as written, and what is the correct fix?
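As written, the endpoint cannot respond in under roughly 2 seconds, because the handler awaits the email send before returning. The fix is to acknowledge after the insert and hand the email to a background worker. A minimal in-process sketch follows — the queue, the persistence step, and `sendEmail` are hypothetical stand-ins; production would use a durable queue so a crash doesn't lose the email:

```typescript
// Sketch: acknowledge after the insert, defer the slow email.
// `emailQueue` stands in for a durable job queue (e.g. Redis-backed
// workers); persistence is faked with a local object.
type Job = () => Promise<void>;
const emailQueue: Job[] = [];

async function handlePostOrder(order: { id: number; email: string }) {
  // 1. Fast path: persist the order (stand-in for the INSERT).
  const saved = { ...order, status: "created" };

  // 2. Enqueue the 2-second email send instead of awaiting it inline.
  emailQueue.push(async () => {
    // await sendEmail(saved.email); // hypothetical, ~2 s in the real system
  });

  // 3. Respond immediately: latency is now insert time, not insert + 2 s.
  return { status: 201, body: saved };
}
```

A worker process drains `emailQueue` separately; the client's response time no longer depends on the email provider at all.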
Lesson outline
How this concept changes your thinking
Database connections under load
Before: `app.db = new pg.Pool()` with default settings — no max connections configured. At 1,000 RPS, 1,000 simultaneous connections; Postgres rejects everything beyond `max_connections`.
After: PgBouncer in transaction mode — 10 Postgres connections serve 1,000 application connections. `pg.Pool({ max: 10 })` talking to PgBouncer on localhost:5432.
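The effect of a bounded pool can be seen without Postgres at all: hand out at most `max` resources and make everyone else wait for a release. The toy class below is a sketch of the idea, not the `pg` API — `pg.Pool` and PgBouncer implement the queuing and reuse for you:

```typescript
// Toy bounded pool: never more than `max` live "connections".
// tryAcquire returns null when exhausted — the moment at which real
// Postgres instead answers "remaining connection slots are reserved".
class BoundedPool<T> {
  private idle: T[] = [];
  private created = 0;

  constructor(private factory: () => T, private max: number) {}

  tryAcquire(): T | null {
    const existing = this.idle.pop();
    if (existing !== undefined) return existing; // reuse before creating
    if (this.created < this.max) {
      this.created++;
      return this.factory(); // open a new connection, still under the cap
    }
    return null; // exhausted: caller queues or retries, never opens #max+1
  }

  release(conn: T): void {
    this.idle.push(conn); // returned for reuse, not closed
  }
}
```

PgBouncer plays the same role one level down: 1,000 app-side connections multiplex onto ~10 real Postgres connections, so `max_connections` is never the limit your API trips over.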
API input handling
Before: `db.query("SELECT * FROM users WHERE name = " + req.body.name)` — SQL injection. `req.body` fields accepted as-is with no validation; malformed JSON crashes the process.
After: parameterized queries — `db.query("SELECT * FROM users WHERE name = $1", [req.body.name])` — plus Zod schema validation on every request body before it touches the database.
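Both fixes can be sketched in a few lines. The untrusted value travels as a bind parameter rather than being spliced into SQL text — shown here in the node-postgres `{ text, values }` shape — and the body is validated before any query runs. The hand-rolled validator is a stand-in for the Zod schema, and `findUserByName` is a hypothetical helper:

```typescript
// The user-supplied name never becomes SQL text; it travels separately
// as a bind value for the $1 placeholder.
function findUserByName(name: string): { text: string; values: string[] } {
  return { text: "SELECT * FROM users WHERE name = $1", values: [name] };
}

// Minimal hand-rolled validation standing in for a Zod schema:
// a failure here becomes a 400 with a field-level message, so bad
// input never reaches the database as a 500.
function validateCreateUser(
  body: unknown
): { name: string } | { error: string } {
  if (typeof body !== "object" || body === null)
    return { error: "body must be a JSON object" };
  const name = (body as Record<string, unknown>).name;
  if (typeof name !== "string" || name.length === 0 || name.length > 100)
    return { error: "name must be a non-empty string (max 100 chars)" };
  return { name };
}
```

With Zod this collapses to `schema.safeParse(req.body)` in middleware, but the control flow is the same: reject before querying.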
List endpoints
Before: `SELECT * FROM transactions WHERE user_id = $1` — returns all rows. At 2M rows per user this OOMKills the API; the response payload is 800MB.
After: cursor-based pagination — `WHERE id > $cursor ORDER BY id LIMIT 50`. The response is 50 rows plus a `nextCursor`; the client paginates, and memory stays constant regardless of table size.
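The pagination logic above fits in two small functions: one builds the keyset query (again in the node-postgres `{ text, values }` shape — the function names are illustrative), the other derives `nextCursor` from the page just fetched. Because each page continues after the last `id` seen, cost is constant at any depth, unlike `OFFSET`:

```typescript
// First page has no cursor; later pages continue after the last id seen.
function transactionsPage(
  userId: string,
  cursor: number | null,
  limit = 50
): { text: string; values: (string | number)[] } {
  if (cursor === null) {
    return {
      text: "SELECT * FROM transactions WHERE user_id = $1 ORDER BY id LIMIT $2",
      values: [userId, limit],
    };
  }
  return {
    text: "SELECT * FROM transactions WHERE user_id = $1 AND id > $2 ORDER BY id LIMIT $3",
    values: [userId, cursor, limit],
  };
}

// A full page implies more rows may exist; a short page ends pagination.
function nextCursor(rows: { id: number }[], limit: number): number | null {
  return rows.length === limit ? rows[rows.length - 1].id : null;
}
```

The handler returns `{ rows, nextCursor }`; the client passes `nextCursor` back as a query parameter until it comes back `null`.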
k6 LOAD TEST RESULTS: 1000 VUs, 60 second test

- POST /transactions: FAILED at 452 RPS — error: `remaining connection slots are reserved for non-replication superuser connections`
- GET /transactions: OOMKilled at 200 concurrent users — error: `JavaScript heap out of memory`; response payload 847MB
- POST /users: VULNERABLE — input `name="test'; DROP TABLE users;--"` passed validation and reached the database
Three diagnostic outputs reveal the three problems: pg_stat_activity showing connection exhaustion, Node.js heap spiking to 2GB from an unbounded query, and a SQL injection payload reaching the database.
```sql
-- 1. Connection exhaustion: all 200 Postgres connections in use —
--    new requests fail immediately
SELECT count(*), state, wait_event_type, wait_event
FROM pg_stat_activity
GROUP BY state, wait_event_type, wait_event
ORDER BY count DESC;
-- Result: 200 connections, state = 'idle in transaction', all waiting on a connection slot

-- 2. Unbounded query: EXPLAIN shows a Seq Scan returning 2M rows —
--    the API process OOMKills loading this into memory
EXPLAIN SELECT * FROM transactions WHERE user_id = '123';
-- Seq Scan on transactions  (cost=0.00..218429.00 rows=2000000 width=156)
-- 2M rows * 156 bytes = 312MB just for the scan, before serialization

-- 3. SQL injection: user input treated as SQL syntax, not a value.
--    The payload that reached the database:
SELECT * FROM users WHERE name = 'test'; DROP TABLE users;--'
-- Without parameterized queries, this executes as TWO statements:
--   SELECT * FROM users WHERE name = 'test'
--   DROP TABLE users
-- The second statement drops the table.
```
| Requirement | Verification | Common failure mode |
|---|---|---|
| JWT auth middleware | curl without token returns 401. With expired token returns 401. With valid token returns 200 | Middleware skipped on some routes. Token payload not validated. |
| Parameterized queries | Inject ' OR 1=1 -- into every string field — no data returned | String concatenation in query. ORM misuse bypassing parameterization. |
| Input validation (Zod) | Missing required field returns 400 with a field-level error; wrong type returns 400 | No validation layer; relying on DB constraints alone surfaces errors as 500s, not 400s. |
| Connection pool (PgBouncer) | k6: 1000 VUs for 60s — 0 connection errors | Direct pg.Pool to Postgres without PgBouncer. Pool max too high. |
| Cursor pagination | GET /transactions returns 50 rows + nextCursor. curl with nextCursor returns the next 50 rows. | LIMIT/OFFSET — degrades at high offsets because the database still scans the skipped rows. No cursor means a full table scan. |
The 5-minute pre-deploy checklist
1. k6 at 100 VUs for 30s — watch for connection errors.
2. curl with SQL injection payloads — should return 400.
3. curl without an auth token — must return 401.
4. curl a list endpoint — the response must have a cursor field.
5. pg_stat_activity during load — must show fewer than 20 connections.
Questions you will be asked — strong vs weak answers
Connection pooling
📖 What the exam expects
A connection pool maintains a set of reusable database connections, reducing overhead from creating new connections per request.
Take-home tests and live coding interviews for backend roles almost always include a CRUD API with auth. The difference between pass and fail is not whether the endpoints work — it is whether the engineer knows about connection pools, parameterized queries, and pagination without being told.
Common questions:
Strong answers include:
Red flags: