The data structure that makes insertion O(1) and random access O(n). When a Redis endpoint went from 2ms to 30 seconds, the cause was treating a linked list like an array.
Implement reversal, cycle detection, and find-middle. Know the O(1) vs O(n) operations for linked lists. Can explain why Redis LRANGE is O(n).
Design LRU cache using doubly linked list + hash map. Identify when a linked list is the right choice vs array vs hash map in system design.
Know how real systems (Redis, Java LinkedList, browser history) use linked lists. Recognize O(n) list operations disguised as "simple" API calls and flag them in code review.
App launches with Redis LPUSH for writes, LRANGE 0 -1 for reads. 2ms response with <1,000 events.
List grows to 50,000 events. Feed load time climbs to 800ms. Engineers attribute it to "growing pains."
WARNING: List hits 1,000,000 events. LRANGE 0 -1 scans the entire linked list — 30-second response time.
CRITICAL: Engineer checks Redis SLOWLOG — discovers LRANGE taking 28 seconds per call.
RESOLVED: Migrated to a Redis sorted set with pagination (ZREVRANGE, max 50 items). Response time: 2ms again.
The question this raises
If the data structure you chose for its fast writes has O(n) reads that scale with data size — how do you spot this before your users do?
A data structure holds 1 million items. You need to add a new item to the front. How many steps does this take for an array vs a linked list?
Lesson outline
The treasure hunt that does not scale
You have a list of 1 million items stored as a linked list. Finding item 500,000 requires following 500,000 "next" pointers — one at a time. An array would compute the memory address in one step. This is why Redis LRANGE 0 -1 on a 1M-item list took 30 seconds: Redis lists are linked lists internally, and reading the whole thing requires walking every single node.
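A minimal sketch of why access by position is O(n) on a singly linked list (the node shape and helper names here are illustrative, not Redis internals):

```javascript
// Build a singly linked list from an array of values
function makeList(values) {
  let head = null;
  for (let i = values.length - 1; i >= 0; i--) {
    head = { value: values[i], next: head }; // O(1) prepend per node
  }
  return head;
}

// Reaching index i requires following i "next" pointers — O(n)
function getAt(head, index) {
  let curr = head;
  for (let i = 0; i < index && curr !== null; i++) {
    curr = curr.next;
  }
  return curr === null ? undefined : curr.value;
}

const list = makeList(["a", "b", "c", "d"]);
console.log(getAt(list, 2)); // "c" — found only after following 2 pointers
```

Scale `index` up to 500,000 and the same loop runs 500,000 times — that is the traversal cost LRANGE pays for every read.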
Linked List
node = { value: x, next: null }
node.next = head  // new node points at the old head
head = node       // O(1) prepend — just repoint head
Use for: Reach for this when you need O(1) insertion at the head or tail AND you never need random access by position — queues, LRU cache backing, undo history.
The key insight: a linked list is fast where arrays are slow, and slow where arrays are fast. Arrays are O(1) by index, O(n) to insert (shift everything). Linked lists are O(1) to insert at head, O(n) by index. They are complementary — the right choice depends entirely on which operation your system does more.
How this concept changes your thinking
I need to add 1 million items to the front of a list, one at a time
“I would use an array and push to the front. Each push shifts all existing items right — that is O(n) per insert, O(n²) total for 1M inserts.”
“A linked list prepends in O(1) — just update the head pointer. 1M prepends = 1M x O(1) = O(n) total. Arrays are wrong here.”
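The contrast above can be sketched directly (function names are illustrative). The array version does the same job in O(n²) because every unshift shifts all existing elements right:

```javascript
// Linked-list prepend: each insert is O(1), so n inserts are O(n) total
function prependAll(values) {
  let head = null;
  for (const v of values) {
    head = { value: v, next: head }; // just repoint head
  }
  return head; // the last value prepended is now at the front
}

// Array version for contrast: unshift shifts every existing element right,
// O(n) per call — O(n^2) for n inserts
function unshiftAll(values) {
  const arr = [];
  for (const v of values) arr.unshift(v);
  return arr;
}

console.log(prependAll([1, 2, 3]).value); // 3 — last prepended item is the head
```

At 1M items the array version performs on the order of 500 billion element shifts; the linked list performs 1M pointer updates.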
I need to get the 500,000th item by position
“I would use a linked list because I just built it. I would call .get(500000) and it would return the value.”
“This traverses 500,000 nodes one by one — O(n). If I need random access, I should have used an array. Linked lists are the wrong choice here.”
I need a data structure for an LRU cache (least recently used)
“I would use an array. To remove an item from the middle (evict it), I shift everything after it left.”
“An LRU cache needs O(1) remove from anywhere and O(1) add to front. A doubly linked list + hash map achieves this: hash map gives O(1) lookup, doubly linked list gives O(1) remove (update prev and next pointers).”
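A minimal sketch of that combination (class and method names are illustrative, not a production cache). The Map gives O(1) key lookup; sentinel head/tail nodes keep the doubly-linked-list pointer surgery to two cases:

```javascript
// LRU cache sketch: hash map for O(1) lookup, doubly linked list for
// O(1) move-to-front and O(1) eviction from the back
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map(); // key -> list node
    // Sentinels mean no null checks when splicing at either end
    this.head = {}; this.tail = {};
    this.head.next = this.tail;
    this.tail.prev = this.head;
  }
  _remove(node) {           // O(1): relink the neighbours around node
    node.prev.next = node.next;
    node.next.prev = node.prev;
  }
  _addFront(node) {         // O(1): splice node right after head
    node.next = this.head.next;
    node.prev = this.head;
    this.head.next.prev = node;
    this.head.next = node;
  }
  get(key) {
    const node = this.map.get(key);
    if (!node) return undefined;
    this._remove(node);     // mark as most recently used
    this._addFront(node);
    return node.value;
  }
  put(key, value) {
    if (this.map.has(key)) this._remove(this.map.get(key));
    const node = { key, value };
    this._addFront(node);
    this.map.set(key, node);
    if (this.map.size > this.capacity) {
      const lru = this.tail.prev; // least recently used sits at the back
      this._remove(lru);
      this.map.delete(lru.key);
    }
  }
}
```

Usage: with capacity 2, `put("a")`, `put("b")`, `get("a")`, then `put("c")` evicts `"b"` — the read of `"a"` moved it to the front, so `"b"` was the least recently used.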
Reversing a linked list: A → B → C → D → null
1. We start with: head → A → B → C → D → null. Goal: D → C → B → A → null.
2. Set up three pointers: prev = null (nothing comes before A yet), curr = A (we start here), next = null (we will set this each loop).
3. Step 1: Save next = B (save where curr.next points BEFORE overwriting it). Reverse the arrow: A.next = null (prev). Move forward: prev = A, curr = B.
4. Step 2: Save next = C. Reverse: B.next = A. Move forward: prev = B, curr = C.
5. Step 3: Save next = D. Reverse: C.next = B. Move forward: prev = C, curr = D.
6. Step 4: Save next = null. Reverse: D.next = C. Move forward: prev = D, curr = null.
7. curr is null — we are done. The new head is prev (D). Result: D → C → B → A → null. The critical move: always save next BEFORE overwriting curr.next, or you lose the rest of the list.
The reversal above in code — notice how the three-pointer dance matches the steps exactly:
function reverseList(head) {
  let prev = null;   // prev starts null: the new tail (the old head) must point to null
  let curr = head;

  while (curr !== null) {      // stop when curr is null — the entire original list is processed
    const next = curr.next;    // CRITICAL: save next BEFORE overwriting curr.next on the next
                               // line — forget this and you lose the rest of the list forever
    curr.next = prev;          // the reversal: point curr back at its previous neighbour
    prev = curr;               // advance prev
    curr = next;               // advance curr
  }

  return prev; // prev is the new head — the last node processed, i.e. the old tail
}
When asked to detect a cycle in a linked list, the obvious approach uses extra memory. Floyd's algorithm does it in O(1) space.
O(n) space visited set vs O(1) Floyd's algorithm
// Naive — stores visited nodes
// O(n) space — may OOM on large lists
function hasCycle_slow(head) {
const visited = new Set();
let curr = head;
while (curr !== null) {
if (visited.has(curr)) return true;
visited.add(curr);
curr = curr.next;
}
return false;
}

// Floyd's tortoise and hare
// O(n) time, O(1) space
function hasCycle(head) {
let slow = head;
let fast = head;
while (fast !== null && fast.next !== null) {
slow = slow.next;
fast = fast.next.next;
if (slow === fast) return true;
}
return false;
}

The left version stores every visited node in a Set — if you visit a node twice, there is a cycle. This works but uses O(n) memory. Floyd's algorithm uses two pointers: slow moves one step, fast moves two steps. If there is a cycle, fast will eventually lap slow and they will meet. If there is no cycle, fast reaches null. O(1) space. The insight: in a cycle, a faster pointer must eventually catch a slower one, just like runners on a circular track.
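To see Floyd's algorithm in action, build a small list and wire its tail back to a middle node. Variable names are illustrative; the function is restated so the snippet runs on its own:

```javascript
// Floyd's tortoise and hare, restated so this snippet is self-contained
function hasCycle(head) {
  let slow = head;
  let fast = head;
  while (fast !== null && fast.next !== null) {
    slow = slow.next;       // tortoise: 1 step
    fast = fast.next.next;  // hare: 2 steps
    if (slow === fast) return true; // hare lapped the tortoise — cycle
  }
  return false; // hare fell off the end — no cycle
}

// Build A -> B -> C -> D -> null
const a = { value: "A", next: null };
const b = { value: "B", next: null };
const c = { value: "C", next: null };
const d = { value: "D", next: null };
a.next = b; b.next = c; c.next = d;

console.log(hasCycle(a)); // false — fast reaches null
d.next = b;               // close the loop: D -> B
console.log(hasCycle(a)); // true — fast laps slow inside the cycle
```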
Linked list or array?
| Operation | Linked List | Array |
|---|---|---|
| Access element by position | Slow (O(n)) — must traverse from head | Instant (O(1)) — direct address |
| Insert at head | Instant (O(1)) — update two pointers | Slow (O(n)) — shift all elements right |
| Insert at tail (with tail pointer) | Instant (O(1)) | Instant (amortized O(1)) — occasional resize copies the array |
| Delete from middle (with node reference) | Instant (O(1)) — update prev and next pointers | Slow (O(n)) — shift elements left |
| Search for a value | Slow (O(n)) — linear scan | Slow (O(n)) — linear scan (or O(log n) if sorted) |
Linked list vs array
📖 What the exam expects
Arrays provide O(1) random access but O(n) insertion at arbitrary positions due to shifting. Linked lists provide O(1) insertion at known positions but O(n) access by index due to pointer traversal.
Linked list problems test pointer manipulation under pressure: reverse a list, detect a cycle (Floyd's algorithm), find the middle node (slow/fast pointer), merge two sorted lists, remove the nth node from the end. The hidden evaluation: can you think with pointers without losing track of the chain?
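The find-the-middle variant mentioned above uses the same slow/fast pointer idea as Floyd's algorithm, but fast simply stops at the end instead of detecting a lap (helper names are illustrative):

```javascript
// Find the middle node with slow/fast pointers: when fast hits the end,
// slow is at the middle (the second middle for even-length lists)
function findMiddle(head) {
  let slow = head;
  let fast = head;
  while (fast !== null && fast.next !== null) {
    slow = slow.next;       // 1 step
    fast = fast.next.next;  // 2 steps
  }
  return slow;
}

// Illustrative helper to build a list from an array
function fromArray(values) {
  let head = null;
  for (let i = values.length - 1; i >= 0; i--) {
    head = { value: values[i], next: head };
  }
  return head;
}

console.log(findMiddle(fromArray([1, 2, 3, 4, 5])).value); // 3
```

One pass, O(1) space — the same reason this pattern beats "count the length, then walk halfway", which needs two passes.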
Common questions:
Strong answer: names all three pointers before writing reversal / handles edge cases first: empty list, single node / knows Floyd's algorithm and can explain WHY two different speeds guarantees meeting in a cycle
Red flags: tries to reverse a linked list by collecting into an array (misses the O(1) space requirement) / cannot do cycle detection without extra memory / loses the next pointer before saving it (a very common pointer bug)
💡 Analogy
A linked list is a treasure hunt. Each clue (node) tells you only where the next clue is. To reach clue 500, you must follow clues 1 through 499 — there is no shortcut. Adding a new first clue is instant: write "next clue is the old first clue" on a new card and hand it out (O(1)). But counting all clues takes one step per clue (O(n)). Redis LRANGE 0 -1 on 1 million items is following 1 million treasure hunt clues.
⚡ Core Idea
A linked list trades random access for cheap insertion and deletion. Arrays store items at consecutive memory addresses — position 500 is address (start + 500 * item_size), instant lookup. Linked lists store a pointer at each node — to reach position 500 you follow 500 pointers. This is why LRANGE on a long Redis list is slow even though LPUSH is fast.
🎯 Why It Matters
Understanding linked lists explains why your Redis feed slowed down, why doubly linked lists power LRU caches (O(1) remove from middle), and why the interview question "reverse a linked list" is really testing pointer manipulation under pressure — the most common source of bugs in real systems code.