Interactive Explainer

Message Queues: Kafka, RabbitMQ, and Async Communication

Decouple your services, absorb traffic spikes, and never lose an event

🎯Key Takeaways
Message queues decouple services, absorb traffic spikes, and enable reliable async processing.
Kafka: distributed log with retention and replay. Fan-out via consumer groups. High throughput.
RabbitMQ: message broker with complex routing and work queues; each message is delivered to one consumer per queue (at-least-once with acknowledgements).
Consumer groups enable fan-out: multiple services independently process the same events.
Implement idempotent consumers — at-least-once delivery means duplicates happen.
Use the Outbox Pattern to guarantee message publishing when services can crash mid-operation.

~5 min read

The Problem With Synchronous Everything

In 2010, LinkedIn had a monolith that did everything synchronously: when a user updated their profile, it notified all connections, updated search indexes, and sent emails — all in the same request-response cycle. Average response time: 8 seconds. They built Kafka to fix this.

What Message Queues Solve

When you place an order on Amazon, you don't wait for inventory to update, the warehouse to be notified, fraud to be checked, and the email to send. These happen asynchronously. The order succeeds immediately; downstream systems process it at their own pace.

The Three Problems Message Queues Solve

  • ⚡Decoupling — Producer doesn't know about consumers. Add a new consumer without changing the producer.
  • 📈Load leveling — Traffic spike: queue absorbs the burst. Consumers process at their own rate. Without queues, spikes crash downstream services.
  • 🔁Reliability — Kafka retains events for days. If a consumer fails, it replays from its last checkpoint. No events lost.
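The load-leveling idea can be sketched in a few lines. This is a toy in-memory queue, not a real broker: producers enqueue at burst speed while the consumer drains at its own pace, so the spike never hits downstream services directly.

```typescript
// Toy in-memory queue illustrating load leveling (not a real broker).
class InMemoryQueue<T> {
  private items: T[] = [];
  enqueue(item: T): void { this.items.push(item); }
  dequeue(): T | undefined { return this.items.shift(); }
  get depth(): number { return this.items.length; }
}

const queue = new InMemoryQueue<string>();

// A traffic spike: 1000 events arrive "instantly"; enqueueing is cheap
for (let i = 0; i < 1000; i++) queue.enqueue(`order-${i}`);

// The consumer drains at its own pace; the queue absorbs the burst
const processed: string[] = [];
let next: string | undefined;
while ((next = queue.dequeue()) !== undefined) {
  processed.push(next); // stand-in for real work
}

console.log(processed.length); // 1000
```

In a real system the queue lives in a broker (Kafka, RabbitMQ, SQS), so the buffer survives process restarts and is shared across consumer instances.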

Kafka vs RabbitMQ: Two Different Tools for Two Different Jobs

| Feature    | Kafka                                           | RabbitMQ                                          |
| ---------- | ----------------------------------------------- | ------------------------------------------------- |
| Model      | Distributed commit log                          | Message broker                                    |
| Retention  | Messages retained for days/forever              | Messages deleted after consumption                |
| Consumers  | Consumer groups; each group reads independently | Competing consumers; each message consumed once   |
| Throughput | Millions of messages/sec                        | Thousands to hundreds of thousands/sec            |
| Ordering   | Guaranteed within a partition                   | FIFO within a queue                               |
| Replay     | Consumers can replay from any offset            | Not possible — once consumed, gone                |
| Routing    | Simple (topic-based)                            | Complex (exchanges, routing keys, binding patterns) |
| Best for   | Event streaming, audit logs, analytics pipelines | Task queues, job processing, RPC patterns        |

When to Use Each

Use Kafka for event streaming, replay capability, multiple independent consumers, and high throughput. Use RabbitMQ for a work queue (each job handed to exactly one worker), complex routing, or RPC over messaging. For simple, reliable job queuing, RabbitMQ or AWS SQS both work well.
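The difference between the two consumption models can be simulated in a few lines of plain TypeScript. This is a toy sketch, not broker code: a RabbitMQ-style work queue hands each message to exactly one of the competing workers, while Kafka-style consumer groups each independently see every message.

```typescript
// Toy simulation of the two consumption models (not real broker code).
type Handler = (msg: string) => void;

// RabbitMQ-style work queue: competing consumers, each message goes to ONE worker
// (round-robin here, roughly what equal-speed workers with prefetch=1 look like)
function workQueue(messages: string[], workers: Handler[]): void {
  messages.forEach((msg, i) => workers[i % workers.length](msg));
}

// Kafka-style fan-out: every consumer GROUP independently sees every message
function fanOut(messages: string[], groups: Handler[]): void {
  for (const group of groups) messages.forEach(group);
}

const messages = ['order-1', 'order-2', 'order-3', 'order-4'];

const workerA: string[] = [];
const workerB: string[] = [];
workQueue(messages, [m => workerA.push(m), m => workerB.push(m)]);
// The two workers together hold each message exactly once
console.log(workerA.length + workerB.length); // 4

const inventory: string[] = [];
const email: string[] = [];
fanOut(messages, [m => inventory.push(m), m => email.push(m)]);
// Each group saw all four messages
console.log(inventory.length, email.length); // 4 4
```

This is exactly why Kafka suits fan-out (inventory, email, and fraud services all need every order event) while RabbitMQ suits job processing (each job should run once).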

kafka-producer-consumer.ts

import { Kafka } from 'kafkajs';

const kafka = new Kafka({
  clientId: 'order-service',
  // Multiple brokers = high availability: if one broker is down, the others continue
  brokers: ['kafka-1:9092', 'kafka-2:9092', 'kafka-3:9092'],
});

// Producer: publish events
const producer = kafka.producer();
await producer.connect();

// (`order` comes from the surrounding request handler)
await producer.send({
  topic: 'order-events',
  messages: [{
    key: order.userId, // Partition key — same user's orders → same partition (ordered per user)
    value: JSON.stringify({
      eventType: 'ORDER_PLACED',
      orderId: order.id,
      userId: order.userId,
      totalCents: order.totalCents,
      timestamp: new Date().toISOString(),
    }),
  }],
});

// Consumer: all instances of the same service share one group ID,
// so each event is processed once per group
const consumer = kafka.consumer({ groupId: 'inventory-service' });
await consumer.connect();
await consumer.subscribe({ topic: 'order-events', fromBeginning: false });

await consumer.run({
  eachMessage: async ({ message }) => {
    const event = JSON.parse(message.value!.toString());
    if (event.eventType === 'ORDER_PLACED') {
      // Design this handler to be idempotent: offsets are committed after
      // processing, so a crash or rebalance before the commit re-delivers
      // the message (at-least-once delivery)
      await inventoryService.reserveItems(event.orderId);
    }
  },
});

// Multiple services consume the SAME topic independently:
// inventory-service (groupId: 'inventory-service') → gets all events
// email-service (groupId: 'email-service') → gets all events
// fraud-service (groupId: 'fraud-service') → gets all events

Essential Patterns: Dead Letters, Idempotency, Outbox

Production Message Queue Patterns

  • 💀Dead Letter Queue (DLQ) — Messages that fail repeatedly (3 attempts) go to a DLQ for manual inspection. Without DLQ, failed messages block the entire queue.
  • 🔄Idempotent consumers — Networks cause duplicate delivery. Design consumers so processing the same message twice produces the same result. Use a processed-IDs set in the DB.
  • Backpressure — Consumers acknowledge when ready for the next message. RabbitMQ: prefetch count. Kafka: consumer lag monitoring.
  • 📋Outbox pattern — Write to database AND produce a message in the same DB transaction. Prevents event loss when service crashes between DB write and message publish.

The Outbox Pattern: Why Naive Publishing Loses Events

Without outbox: (1) write order to DB ✅ (2) crash 💥 (3) message never published — inventory never updated. With outbox: (1) write order + outbox record in one transaction (2) crash OK — outbox survives (3) outbox worker publishes — at-least-once delivery guaranteed.
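The two steps above can be sketched with an in-memory stand-in for the database. The key property is that the order row and the outbox row are written together (in a real database, inside one transaction), and a separate relay worker publishes unpublished rows. All names here (`placeOrder`, `relayOutbox`, `brokerLog`) are illustrative.

```typescript
// Outbox pattern sketch: `db` is an in-memory stand-in for a relational DB.
interface OutboxRow { id: number; payload: string; publishedAt?: Date; }

const db = {
  orders: [] as { id: string }[],
  outbox: [] as OutboxRow[],
};
const brokerLog: string[] = []; // stand-in for the message broker

let seq = 0;
function placeOrder(orderId: string): void {
  // In a real DB these two writes happen inside ONE transaction:
  // either both rows exist or neither does
  db.orders.push({ id: orderId });
  db.outbox.push({
    id: ++seq,
    payload: JSON.stringify({ eventType: 'ORDER_PLACED', orderId }),
  });
}

// Relay worker: polls unpublished outbox rows and publishes them.
// If it crashes after publishing but before marking the row, the row is
// republished on the next run: at-least-once, hence idempotent consumers.
function relayOutbox(): void {
  for (const row of db.outbox) {
    if (!row.publishedAt) {
      brokerLog.push(row.payload);
      row.publishedAt = new Date();
    }
  }
}

placeOrder('order-7');
relayOutbox();
console.log(brokerLog.length); // 1
relayOutbox(); // already-published rows are skipped
console.log(brokerLog.length); // still 1
```

A crash between `placeOrder` and `relayOutbox` loses nothing: the outbox row survives in the database, and the next relay run publishes it.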

How this might come up in interviews

Message queue questions test your understanding of consistency in distributed systems. The key insights: at-least-once vs exactly-once delivery, and idempotency.

Common questions:

  • When would you use a message queue vs direct service calls?
  • Explain the difference between Kafka and RabbitMQ
  • How do you handle duplicate message processing?
  • What is the Outbox Pattern and when would you use it?

Strong answers include:

  • Mentions consumer groups and fan-out pattern
  • Discusses idempotent consumers unprompted
  • Knows the Outbox Pattern for transactional message publishing
  • Understands at-least-once vs exactly-once semantics

Red flags:

  • Thinks Kafka is "just a queue"
  • No understanding of consumer groups
  • Can't explain idempotency in the context of message processing



From the books

Designing Data-Intensive Applications — Martin Kleppmann (2017)

Chapter 11: Stream Processing

The log (Kafka's data model) is one of the most important abstractions in distributed systems. It enables replayability, multiple consumers, and decoupled service architecture.
