System Design Canvas
URL Shortener (bit.ly)
10K writes/sec, 1M reads/sec
<10ms p99 for redirect
100M URLs × 500 bytes = 50 GB baseline storage
7-char alphanumeric slug ≈ 3.5 trillion possible short URLs
Read:Write ratio ~100:1
Custom slugs required for premium users
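The keyspace and exhaustion figures above can be sanity-checked with quick arithmetic (a sketch; "alphanumeric" is assumed to mean base62, i.e. a–z, A–Z, 0–9):

```python
# Capacity check for the 7-char base62 slug keyspace.
ALPHABET_SIZE = 62   # a-z, A-Z, 0-9
SLUG_LENGTH = 7

keyspace = ALPHABET_SIZE ** SLUG_LENGTH
print(f"{keyspace:,}")  # 3,521,614,606,208 ≈ 3.5 trillion

# At 10K new URLs/sec, years until the keyspace is exhausted:
writes_per_year = 10_000 * 60 * 60 * 24 * 365
print(keyspace / writes_per_year)  # ≈ 11 years
```

At the stated write rate the 7-char space lasts roughly a decade, which is why collision handling (below) matters more than keyspace size.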
✓Distributed (MD5/hash)
- +No single point of failure
- +Works across all servers
- −Possible collisions require retry logic
- −Hashing adds computation per write
✗Centralised (auto-increment)
- −Single point of failure — every write funnels through one counter
- −Does not scale across servers without coordination
- −Sequential IDs are guessable/enumerable
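The collision-retry logic for the hash-based option can be sketched as follows (assumptions: an in-memory dict stands in for the URL table, and the names `shorten` and `_store` are illustrative; on collision we salt the input and rehash):

```python
import hashlib

_store: dict[str, str] = {}  # slug -> long URL; stand-in for the real DB

BASE62 = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def _base62(n: int, length: int = 7) -> str:
    # Take exactly `length` base62 digits of n (i.e. n mod 62^length).
    out = []
    for _ in range(length):
        n, r = divmod(n, 62)
        out.append(BASE62[r])
    return "".join(out)

def shorten(url: str, max_retries: int = 5) -> str:
    for attempt in range(max_retries):
        # Salt with the attempt number so a collision produces a new slug.
        digest = hashlib.md5(f"{url}#{attempt}".encode()).digest()
        slug = _base62(int.from_bytes(digest[:8], "big"))
        existing = _store.get(slug)
        if existing is None or existing == url:
            _store[slug] = url
            return slug
    raise RuntimeError("no free slug found after retries")
```

Note the same URL always maps to the same slug on attempt 0, so repeat submissions are idempotent for free.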
✓301 Permanent
- +Browser caches — reduces server load dramatically
- +Lower latency for repeat visits
- −Cannot track click analytics accurately (browser serves the cached redirect and skips the server)
- −Cannot update the destination once browsers have cached it
✗302 Temporary
- −Every click hits the server — higher load and latency
- −Forfeits the caching benefit (the flip side: it preserves analytics and lets you change the destination)
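The 301/302 tradeoff can be made concrete as a framework-agnostic handler sketch (the names `redirect_response` and `lookup` are illustrative, not from any real library):

```python
def redirect_response(slug: str, lookup, track_analytics: bool = True):
    """Build (status, headers) for a redirect.

    302 keeps every click on our servers (analytics, mutable destinations);
    301 lets browsers cache the hop and skip us on repeat visits.
    """
    target = lookup(slug)
    if target is None:
        return 404, {}
    status = 302 if track_analytics else 301
    headers = {"Location": target}
    if status == 302:
        # Make sure intermediaries don't cache the temporary redirect either.
        headers["Cache-Control"] = "no-store"
    return status, headers
```

In other words, the status code is a product decision (analytics vs. load), not just an HTTP detail — which is why clarifying the analytics requirement first is a strong signal.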
10K writes/sec × 500 bytes = 5 MB/sec write throughput. Over ~3.15 × 10^7 seconds/year that is ≈315 billion new URLs and ≈158 TB/year of new URL data. 1M reads/sec with an 80% cache hit rate → 200K reads/sec actually reaching the DB.
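A back-of-envelope check of these capacity numbers, using only the figures stated above:

```python
writes_per_sec = 10_000
bytes_per_url = 500
reads_per_sec = 1_000_000
cache_hit_rate = 0.80
seconds_per_year = 60 * 60 * 24 * 365  # 31,536,000

write_mb_per_sec = writes_per_sec * bytes_per_url / 1e6        # 5.0 MB/s
new_data_tb_per_year = write_mb_per_sec * seconds_per_year / 1e6  # ≈ 158 TB
db_reads_per_sec = reads_per_sec * (1 - cache_hit_rate)        # ≈ 200,000
```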
Strong signals ✓
Clarifies analytics requirement before choosing 301 vs 302
Explains collision probability and retry strategy for hash-based IDs
Designs separate write path and read path (CQRS pattern)
Considers custom slugs as a separate, lower-throughput write path
Has a cache invalidation strategy for when URLs are deleted
Red flags ✗
Single SQL database with auto-increment ID — bottleneck at scale
No caching strategy for the 100:1 read-heavy workload
Ignores analytics requirements entirely
Follow-up probes
How would you implement click analytics without slowing down the redirect?
How do you handle the thundering herd problem when a viral link first gets shared?
Your Redis cache fails. What is the blast radius and how do you degrade gracefully?