Kafka Isn't a Queue: Stop Designing It Like One
Kafka's commit log differs fundamentally from message queues. The mental model shift that prevents costly anti-patterns.

You've used RabbitMQ. You've used SQS. Messages go in, workers pull them out, messages disappear. Then you adopt Kafka and apply the same mental model.
Your architecture suffers.
I've watched teams create single-partition topics "for ordering," set aggressive retention "to clean up after themselves," and treat consumer lag as a bug to eliminate. Every time, they're fighting Kafka instead of leveraging it.
"We spent six months treating Kafka like a faster RabbitMQ. Once we understood the commit log model, we rebuilt in three weeks and it actually worked."
– Tech Lead at a logistics company
The Queue Mental Model (and Why It Breaks)
Traditional queues work like a mailbox:
- Producer drops a message
- Consumer picks it up
- Message gets deleted
- Next consumer gets the next message
Multiple workers race to grab messages from a shared pool. Once consumed, gone forever.
Kafka doesn't work this way.
The Commit Log Model
Kafka is an append-only, immutable log:
Partition 0: [rec0, rec1, rec2, rec3, rec4, rec5, ...]
                     ↑                 ↑
                Consumer A        Consumer B
                (offset 1)        (offset 4)

Records are never deleted by consumers. They stay until retention expires. Consumers track their position independently. Multiple consumers can read the same data at their own pace.
This is not a queue. This is a replayable event stream.
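To make the offset mechanics concrete, here is a minimal in-memory sketch (plain Python, no Kafka client; `CommitLog` and `Consumer` are illustrative names, not Kafka APIs):

```python
class CommitLog:
    """Append-only log: records are never removed by reads."""
    def __init__(self):
        self.records = []

    def append(self, record):
        self.records.append(record)
        return len(self.records) - 1  # offset of the new record

    def read(self, offset):
        return self.records[offset]


class Consumer:
    """Each consumer tracks its own offset; polling advances only its position."""
    def __init__(self, log):
        self.log = log
        self.offset = 0

    def poll(self):
        if self.offset >= len(self.log.records):
            return None  # caught up: nothing new to read
        record = self.log.read(self.offset)
        self.offset += 1
        return record


log = CommitLog()
for i in range(6):
    log.append(f"rec{i}")

a, b = Consumer(log), Consumer(log)
a.poll()                 # Consumer A is now at offset 1
for _ in range(4):
    b.poll()             # Consumer B is now at offset 4

print(a.offset, b.offset, len(log.records))  # 1 4 6 -- reading deleted nothing
```

Two consumers sit at different offsets over the same six records, and neither one's reads affect the other or the data.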
Why the Difference Matters
Messages Don't Disappear
In Kafka, consuming changes nothing about the data. A bug in your processor? Fix it, reset offsets, rerun:
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-processor --topic events \
  --reset-offsets --to-earliest --execute

This command makes no sense in a queue world. In Kafka, it's a standard recovery operation.
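Why replay works can be shown in a toy simulation (plain Python, no broker involved; the processor functions are hypothetical):

```python
# The log: reads never mutate it, so the data outlives a buggy deployment.
records = ["order:10", "order:20", "order:30"]

def buggy_process(rec: str) -> int:
    return 0  # bug: ignores the payload entirely

def fixed_process(rec: str) -> int:
    return int(rec.split(":")[1])

def run(processor):
    """Consume the whole log from offset 0 with the given processor."""
    offset, out = 0, []
    while offset < len(records):
        out.append(processor(records[offset]))
        offset += 1
    return out

first_run = run(buggy_process)  # wrong results, but no data was lost
replay = run(fixed_process)     # "reset offsets to earliest" and rerun
print(first_run, replay)        # [0, 0, 0] [10, 20, 30]
```

In a queue, the first run would have destroyed the messages; here, recovery is just a second pass from offset zero.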
Ordering Is Per-Partition
Kafka guarantees ordering only within a partition. You can have total ordering OR parallelism, not both.
| Approach | Ordering | Parallelism |
|---|---|---|
| Single partition | Total order | 1 consumer max |
| Key-based partitioning | Per-key order | Up to N consumers |
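A sketch of how key-based partitioning preserves per-key order (plain Python; Kafka's default partitioner actually uses murmur2 hashing, but the modulo idea is the same):

```python
import zlib

NUM_PARTITIONS = 3

def partition_for(key: str) -> int:
    # Stable hash so the mapping survives restarts (Python's hash() is salted)
    return zlib.crc32(key.encode()) % NUM_PARTITIONS

events = [("user-1", "login"), ("user-2", "login"),
          ("user-1", "purchase"), ("user-1", "logout")]

partitions = {p: [] for p in range(NUM_PARTITIONS)}
for key, event in events:
    partitions[partition_for(key)].append((key, event))

# All of user-1's events land on one partition, in produced order
p = partition_for("user-1")
print([e for k, e in partitions[p] if k == "user-1"])
# ['login', 'purchase', 'logout']
```

Each key maps deterministically to one partition, so its events stay ordered, while different keys spread across partitions and are consumed in parallel.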
Consumer Groups Cooperate, Not Compete
In RabbitMQ, consumers compete for messages. In Kafka, consumers within a group divide partitions:
Partition 0 → Consumer 1 (all messages from P0)
Partition 1 → Consumer 2 (all messages from P1)
Partition 2 → Consumer 3 (all messages from P2)

Adding more consumers than partitions means some sit idle. Partition count determines maximum parallelism.
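A round-robin-style sketch of partition assignment (simplified; Kafka's real assignors such as range and cooperative-sticky are more involved):

```python
def assign(partitions: list[int], consumers: list[str]) -> dict[str, list[int]]:
    """Deal partitions out to consumers in one group, round-robin."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

print(assign([0, 1, 2], ["c1", "c2", "c3"]))        # one partition each
print(assign([0, 1, 2], ["c1", "c2", "c3", "c4"]))  # c4 sits idle: []
```

With three partitions, a fourth consumer gets an empty assignment: exactly the "idle consumer" ceiling the partition count imposes.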
Multiple Groups Read Independently
The killer feature queues can't match:
Topic: user-events
Consumer Group: analytics      → builds dashboards
Consumer Group: fraud-detector → flags suspicious activity
Consumer Group: email-sender   → triggers notifications

Each group processes every message independently. No fan-out configuration. No duplication. New use cases don't require upstream changes.
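Simulated in a few lines (plain Python; group names taken from the example above), each group keeps its own offset over the same records:

```python
topic = ["signup", "click", "purchase"]  # one copy of the data
group_offsets = {"analytics": 0, "fraud-detector": 0, "email-sender": 0}
seen = {g: [] for g in group_offsets}

def poll(group: str):
    """Deliver the next record to this group and advance only its offset."""
    off = group_offsets[group]
    if off < len(topic):
        seen[group].append(topic[off])
        group_offsets[group] = off + 1

# Groups consume at different rates from the same records
for _ in range(3):
    poll("analytics")
poll("fraud-detector")

print(seen["analytics"])       # ['signup', 'click', 'purchase']
print(seen["fraud-detector"])  # ['signup']
print(seen["email-sender"])    # [] -- hasn't started yet, nothing is lost
```

No record is copied per group and no group's progress affects another; a brand-new group simply starts polling from whatever offset it chooses.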
The Anti-Patterns
Single partition "for ordering": Yes, you get total ordering. You also get 1 consumer max, a throughput ceiling, and a recovery nightmare. Use key-based partitioning instead.
Short retention "to clean up": Retention is a storage policy, not a consumption policy. A week is common. A year isn't unusual for audit logs.
Treating lag as a bug: Lag is information. 5000 messages behind could be a slow consumer (problem), a batch processor that runs hourly (expected), or a new consumer catching up (temporary). Alert on lag growth rate, not absolute lag.
One topic per consumer: This duplicates data and couples producers to consumers. One topic, multiple consumer groups.
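The lag-growth-rate idea can be sketched as follows (illustrative Python; the samples would come from periodically polling consumer-group offsets):

```python
def lag_growth_rate(samples: list[tuple[float, int]]) -> float:
    """Lag gained per second between the first and last (timestamp, lag) sample."""
    (t0, lag0), (t1, lag1) = samples[0], samples[-1]
    return (lag1 - lag0) / (t1 - t0)

def should_alert(samples, threshold=0.5):
    # Alert only when lag is growing faster than the threshold (msgs/sec)
    return lag_growth_rate(samples) > threshold

# A batch consumer 5000 behind but catching up: no alert
catching_up = [(0, 5000), (60, 3000), (120, 1000)]
# A consumer only 200 behind but steadily falling further behind: alert
falling_behind = [(0, 0), (60, 100), (120, 200)]

print(should_alert(catching_up))     # False
print(should_alert(falling_behind))  # True
```

The consumer with the larger absolute lag is the healthy one here, which is exactly why alerting on the trend beats alerting on the number.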
The Right Mental Model
Stop thinking "Kafka is a faster RabbitMQ."
Start thinking "Kafka is a distributed, replayable event log that multiple consumers can read independently."
This leads to multi-consumer architectures, event sourcing patterns, decoupled systems, and recovery strategies based on replay rather than message resends.
The commit log is not a limitation to work around. It's the architecture to embrace.
Book a demo to see how Conduktor Console provides visibility into consumer groups across your Kafka estate.