Custom Partitioners: When Default Hashing Isn't Enough
Build custom Kafka partitioners for geographic routing, priority lanes, and hot key distribution. Java implementation with production-tested patterns.

Kafka's default partitioner uses murmur2 hashing to distribute keyed messages across partitions. Same key, same partition, guaranteed ordering.
But sometimes you need control the default doesn't give you: routing by geographic region, separating priority lanes, or spreading hot keys.
I've implemented custom partitioners for teams that needed geographic routing for GDPR compliance, priority lanes for payment processing, and hot key distribution for viral content. The pattern is the same each time.
> "Our default partitioner created a hot partition that handled 60% of traffic. A custom partitioner spreading our top 100 keys reduced p99 latency by 40%."
>
> Platform Engineer at a media company
How Default Partitioning Works
With a key:
```java
targetPartition = Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
```

Without a key (Kafka 2.4+): the sticky partitioner batches records to the same partition until the batch is full, then picks another.
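A minimal sketch of the keyed case, assuming kafka-clients is on the classpath and a hypothetical six-partition topic; it only illustrates that murmur2 is deterministic, so the same key always resolves to the same partition:

```java
import java.nio.charset.StandardCharsets;

import org.apache.kafka.common.utils.Utils;

public class DefaultPartitionDemo {
    public static void main(String[] args) {
        int numPartitions = 6; // assumed partition count, for illustration only
        byte[] keyBytes = "user-42".getBytes(StandardCharsets.UTF_8);

        // Same formula as the default partitioner: deterministic, so "user-42"
        // lands on the same partition on every send
        int target = Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
        System.out.println("user-42 -> partition " + target);
    }
}
```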
When You Need Custom Logic
Geographic routing:

- EU users -> partitions 0-1
- US users -> partitions 2-3
- APAC users -> partitions 4-5

Priority lanes:

- CRITICAL -> partition 0 (dedicated consumer)
- NORMAL -> partitions 1-3
- BULK -> partitions 4-5 (throttled consumer)

Hot key distribution: When a small number of keys dominate traffic, spread them across several partitions instead of concentrating them in one (see the sketch below). Use topic monitoring to identify partition imbalances before they cause issues.
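A minimal sketch of the hot key pattern, assuming the set of hot keys is known ahead of time; the key values, the spread of 4, and the class name are illustrative, not from a real deployment:

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ThreadLocalRandom;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

public class HotKeyPartitioner implements Partitioner {

    // Illustrative: in practice this set might come from config or a metrics feed
    private Set<String> hotKeys;
    private int spread; // how many partitions to fan a hot key across

    @Override
    public void configure(Map<String, ?> configs) {
        hotKeys = Set.of("viral-video-123", "breaking-news-456"); // placeholder values
        spread = 4;
    }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null) {
            return ThreadLocalRandom.current().nextInt(numPartitions);
        }
        String keyStr = new String(keyBytes, StandardCharsets.UTF_8);
        int base = Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
        if (hotKeys.contains(keyStr)) {
            // Hot key: fan out over `spread` consecutive partitions instead of one.
            // Note: this sacrifices per-key ordering for the hot keys.
            int offset = ThreadLocalRandom.current().nextInt(spread);
            return (base + offset) % numPartitions;
        }
        return base; // normal keys keep default-style hashing and ordering
    }

    @Override
    public void close() {}
}
```

The trade-off is explicit: hot keys lose per-key ordering in exchange for spreading their load, while every other key keeps default-style hashing.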
Basic Implementation
```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

public class RegionPartitioner implements Partitioner {
    private Map<String, Integer> regionToPartition;

    @Override
    public void configure(Map<String, ?> configs) {
        String mapping = (String) configs.get("region.partition.map");
        regionToPartition = parseMapping(mapping);
    }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null) {
            // WARNING: Random partition breaks ordering for null keys
            // Consider sticky partitioner for better batching
            return ThreadLocalRandom.current().nextInt(numPartitions);
        }
        // Keys look like "EU:user-123"; the prefix selects the region
        String keyStr = new String(keyBytes, StandardCharsets.UTF_8);
        String region = keyStr.split(":")[0].toUpperCase();
        Integer basePartition = regionToPartition.get(region);
        if (basePartition == null) {
            // Unknown region: fall back to default-style hashing
            return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
        }
        return basePartition % numPartitions;
    }

    @Override
    public void close() {}

    // Parses "EU:0,US:2,APAC:4" into a region -> base partition map
    private Map<String, Integer> parseMapping(String mapping) {
        Map<String, Integer> map = new HashMap<>();
        for (String entry : mapping.split(",")) {
            String[] parts = entry.split(":");
            map.put(parts[0].trim().toUpperCase(), Integer.parseInt(parts[1].trim()));
        }
        return map;
    }
}
```

Configure:
```properties
partitioner.class=com.example.kafka.RegionPartitioner
region.partition.map=EU:0,US:2,APAC:4
```
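A usage sketch under assumptions: the bootstrap server, topic name, and key are placeholders. The producer only needs the two properties above, and keys carry the region prefix the partitioner parses (the producer will log a warning that region.partition.map is an unknown config, but it is still passed through to configure()):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class RegionProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, "com.example.kafka.RegionPartitioner");
        props.put("region.partition.map", "EU:0,US:2,APAC:4"); // passed through to configure()

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The "EU:" key prefix drives the partition choice
            producer.send(new ProducerRecord<>("orders", "EU:user-123", "order-payload"));
        }
    }
}
```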
Common Errors

Negative partition:

```java
// WRONG: Integer.MIN_VALUE stays negative after Math.abs()
int partition = Math.abs(key.hashCode()) % numPartitions;

// CORRECT: Use bitwise AND
int partition = (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
```

Partition out of range: Always validate against cluster metadata:

```java
return calculatedPartition % numPartitions;
```

Not thread-safe: The producer calls partition() from multiple threads. Use AtomicInteger for counters (see the sketch below).
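A minimal sketch of the thread-safety point, assuming a partitioner that round-robins null-key records; the class name and that policy are illustrative:

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

public class RoundRobinNullKeyPartitioner implements Partitioner {

    // AtomicInteger is safe when multiple application threads call send() concurrently
    private final AtomicInteger counter = new AtomicInteger(0);

    @Override
    public void configure(Map<String, ?> configs) {}

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null) {
            // getAndIncrement() is atomic; a plain int++ here could race and skew distribution
            int next = counter.getAndIncrement();
            return Utils.toPositive(next) % numPartitions; // toPositive guards against overflow
        }
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    @Override
    public void close() {}
}
```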
When to Avoid Custom Partitioners
Custom partitioners add operational burden:
- All producers must use the same version
- Partition count changes may break logic
- Bugs cause routing issues that are hard to debug
Consider alternatives first:
| Requirement | Alternative |
|---|---|
| Geographic routing | Separate topics per region |
| Priority lanes | Separate topics per priority |
| Hot key distribution | Add entropy to keys (userId-timestamp) |
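For the last row, a sketch of adding entropy on the producer side; the bounded-suffix variant is an additional assumption, not from the table:

```java
import java.util.concurrent.ThreadLocalRandom;

public class KeyEntropyExample {
    // Option from the table: append a timestamp so a single hot userId no longer
    // maps to one partition. Per-user ordering is lost; design consumers accordingly.
    static String withTimestamp(String userId) {
        return userId + "-" + System.currentTimeMillis();
    }

    // Bounded variant (illustrative): a small random suffix caps the fan-out at `buckets` keys.
    static String withBucket(String userId, int buckets) {
        return userId + "-" + ThreadLocalRandom.current().nextInt(buckets);
    }

    public static void main(String[] args) {
        System.out.println(withTimestamp("user-42")); // e.g. user-42-1718000000000
        System.out.println(withBucket("user-42", 4)); // e.g. user-42-3
    }
}
```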
Book a demo to see how Conduktor Console shows message rates per partition to help you spot hot partitions before they become incidents.