Kafka Data Quality: Enforce at Write Time
Enforce Kafka data quality at write time with schema validation and policies. Garbage in at the producer means garbage out everywhere.

Fast data pipelines that process garbage are just fast garbage pipelines.
Kafka's strength is moving data at scale—millions of messages per second across distributed systems. But high throughput amplifies quality problems. A misconfigured producer sending malformed messages doesn't create one bad record—it creates millions before anyone notices.
Most teams discover data quality issues downstream: consumers crash on null values, analytics reports show incorrect aggregations, or compliance audits reveal PII in topics that should be anonymized. By then, bad data has propagated through the system, requiring expensive cleanup and reconciliation.
The solution isn't better consumer validation. It's preventing bad data from entering Kafka in the first place. Validate at write time (producer-side), reject invalid messages before they reach topics, and catch quality issues before they affect downstream consumers.
Common Data Quality Issues
Schema violations happen when messages don't match registered schemas. Producers might send messages missing required fields, using wrong types (string instead of integer), or including unexpected fields.
Without validation, these messages reach Kafka. Consumers expecting valid schemas crash during deserialization. Error handling logic (if it exists) dead-letters the messages, but the damage is done—consumers lag, downstream systems receive incomplete data, and engineers spend hours debugging.
Null value problems occur when required fields are null. The schema might define userId as required, but some messages arrive with userId=null. Downstream consumers expecting non-null values crash or produce incorrect results.
This happens when:
- Producers don't validate before sending
- Schema evolution makes previously optional fields required
- Data source systems (databases, APIs) return unexpected nulls
Data type mismatches violate semantic expectations. A field defined as a timestamp receives string values like "2025-01-15" instead of epoch milliseconds. Schema validation might pass (the value is still a legal string or number) but semantic validation fails.
Consumers parse timestamps expecting milliseconds, get garbage values, and produce incorrect time-based aggregations.
Late or out-of-order data violates timeliness expectations. Real-time pipelines expect events within seconds of occurrence. Batch jobs replay historical data, sending events timestamped days or weeks in the past.
Consumers processing real-time data get confused by historical events. Time-based windows (5-minute aggregations) include events from last week, producing incorrect results.
PII in non-PII topics creates compliance violations. Topics marked as "PII-free" accidentally receive messages containing email addresses, names, or credit card numbers because producers don't validate before sending.
Compliance audits discover violations months later, triggering expensive remediation (delete messages, notify affected users, file breach reports).
Schema Validation as Quality Control
Schema validation is the first line of defense. Before messages reach Kafka, producers validate against registered schemas. Invalid messages are rejected, logged, and routed to dead-letter queues.
Producer-side validation checks messages before sending:
try:
    # Serialize with schema validation
    serialized = avro_serializer.serialize(message, schema)
    producer.send(topic, serialized)
except SerializationException as e:
    # Invalid message, log and handle
    logger.error(f"Schema validation failed: {e}")
    send_to_dead_letter_queue(message, error=str(e))
This catches schema violations at source, before bad data propagates.
Schema Registry enforcement provides centralized validation. Producers register schemas before using them. Schema Registry checks compatibility (does the new schema break existing consumers?). Incompatible schemas are rejected.
This prevents breaking changes from reaching production. Producers can't deploy schemas that violate compatibility mode.
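A minimal sketch of that pre-deployment check against Confluent Schema Registry's REST compatibility endpoint, assuming a local registry URL and a hypothetical orders-created-value subject:
import json
import requests

REGISTRY_URL = "http://localhost:8081"   # placeholder registry address
SUBJECT = "orders-created-value"         # hypothetical subject name

new_schema = {
    "type": "record",
    "name": "OrderCreated",
    "fields": [
        {"name": "orderId", "type": "string"},
        {"name": "amount_cents", "type": "long"},
    ],
}

# Ask the registry whether the new schema is compatible with the latest
# registered version under the subject's configured compatibility mode.
resp = requests.post(
    f"{REGISTRY_URL}/compatibility/subjects/{SUBJECT}/versions/latest",
    headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
    json={"schema": json.dumps(new_schema)},
)
resp.raise_for_status()

if not resp.json().get("is_compatible", False):
    raise RuntimeError("New schema breaks compatibility; fix it before deploying")
Wiring a check like this into CI means an incompatible schema fails the build instead of reaching production.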
Runtime schema validation happens at the Kafka layer when using proxy architectures. Conduktor Gateway, for example, validates messages against schemas at the proxy layer. Invalid messages are rejected before reaching brokers.
This provides defense in depth: even if producers bypass validation, the proxy catches violations.
Handling Invalid Data
When validation detects invalid messages, they need handling—not silent failure.
Dead letter queues (DLQ) capture messages that fail validation. Invalid messages go to a separate topic (e.g., orders-dlq) with metadata explaining why validation failed.
DLQs enable:
- Debugging (inspect failed messages to understand patterns)
- Reprocessing (fix issues, replay messages from DLQ to main topic)
- Monitoring (alert when DLQ volume exceeds threshold, indicating systemic issues)
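As a sketch, the send_to_dead_letter_queue helper from the producer example above could look like this, assuming the confluent-kafka Python client, dict-shaped messages, and a hypothetical orders-dlq topic; the failure reason travels with the message as headers so it can be inspected and replayed later:
import json
from confluent_kafka import Producer

dlq_producer = Producer({"bootstrap.servers": "localhost:9092"})  # placeholder brokers

def send_to_dead_letter_queue(message, error):
    """Route an invalid message to the DLQ with metadata explaining the failure."""
    dlq_producer.produce(
        topic="orders-dlq",                        # hypothetical DLQ topic
        value=json.dumps(message).encode("utf-8"),
        headers=[
            ("dlq.error", error.encode("utf-8")),
            ("dlq.original.topic", b"orders-created"),
        ],
    )
    dlq_producer.flush()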
Error logging records validation failures with context: which schema failed, what field was invalid, which producer sent the message. Centralized logging aggregates errors, enabling pattern analysis.
If 1000 messages fail with "field userId is null," the producer has a bug. If different messages fail with different errors, data source quality is degrading.
Alerting on validation failures catches issues early. If validation failure rate exceeds 1% of total messages, something is wrong—alert the producing team to investigate.
High failure rates indicate:
- Producer bugs (incorrect serialization logic)
- Schema evolution issues (new required fields without defaults)
- Data source problems (upstream APIs returning malformed data)
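A minimal sketch of the threshold check, assuming simple counters and a hypothetical alert callable; in practice the counters would usually come from a metrics system such as Prometheus:
FAILURE_RATE_THRESHOLD = 0.01  # alert when more than 1% of messages fail validation

def check_validation_failure_rate(total_messages, failed_messages, alert):
    """Fire an alert if the validation failure rate crosses the threshold."""
    if total_messages == 0:
        return
    failure_rate = failed_messages / total_messages
    if failure_rate > FAILURE_RATE_THRESHOLD:
        alert(
            f"Validation failure rate {failure_rate:.2%} exceeds "
            f"{FAILURE_RATE_THRESHOLD:.0%}: ask the producing team to investigate"
        )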
Monitoring Data Quality Metrics
Quality metrics extend beyond infrastructure health (broker CPU, consumer lag) to data characteristics.
Schema conformance rate measures percentage of messages passing schema validation. Target: 100%. If 98% pass validation, 2% of messages are malformed—investigate and fix.
Field completeness measures whether required fields are populated. If schema defines orderId as required, what percentage of messages include it? Gaps indicate producer or data source issues.
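A minimal sketch of both metrics over a sample of already-deserialized messages; the orderId field name is just an example:
def schema_conformance_rate(total, passed):
    """Percentage of messages that passed schema validation (target: 100%)."""
    return passed / total if total else 1.0

def field_completeness(messages, field="orderId"):
    """Percentage of messages where a required field is present and non-null."""
    populated = sum(1 for m in messages if m.get(field) is not None)
    return populated / len(messages) if messages else 1.0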
Value distribution analysis detects anomalies. If orderAmount is usually $10-$1000 but suddenly includes values like $0.01 or $1,000,000, data quality might be degrading. Outlier detection flags these for review.
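A minimal sketch of flagging outliers with an interquartile-range rule, which avoids assuming a normal distribution; the multiplier is illustrative:
import statistics

def flag_outliers(amounts, k=3.0):
    """Flag values far outside the interquartile range for manual review."""
    q1, _, q3 = statistics.quantiles(amounts, n=4)
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return [a for a in amounts if a < lower or a > upper]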
Timeliness metrics measure time from event occurrence to Kafka arrival. If events are typically seconds old but suddenly hours old, upstream systems are lagging or batch jobs are replaying historical data.
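A minimal sketch, assuming each message carries an eventTimestamp in epoch milliseconds:
import time

def arrival_lag_seconds(event_timestamp_ms):
    """Seconds between event occurrence and Kafka arrival; alert if this jumps from seconds to hours."""
    return time.time() - event_timestamp_ms / 1000.0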
Duplicate detection identifies repeated messages. Kafka's idempotent producer and exactly-once semantics prevent duplicates from retries inside Kafka, but application-level resends or data source issues can still create duplicates. High duplicate rates indicate producer configuration or upstream problems.
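A minimal sketch of measuring the duplicate rate over a window of message IDs; a production version would typically bound memory with a TTL cache or a Bloom filter rather than an unbounded set:
def duplicate_rate(message_ids):
    """Fraction of messages in this window whose ID was already seen."""
    seen, duplicates = set(), 0
    for message_id in message_ids:
        if message_id in seen:
            duplicates += 1
        else:
            seen.add(message_id)
    return duplicates / len(message_ids) if message_ids else 0.0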
Policy-Based Quality Enforcement
Beyond schema validation, policies enforce data quality rules at the platform level.
PII detection policies scan messages for sensitive patterns (email addresses, credit card numbers, SSNs). If PII is detected in topics marked as PII-free, messages are rejected or masked.
Example: a Gateway data quality policy using SQL-like syntax to reject messages with email patterns in non-PII topics:
SELECT * FROM "non-pii-events"
WHERE NOT email REGEXP '^[^@]+@[^@]+$'
AND NOT description REGEXP '[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}'
Value range validation enforces business rules. If orderAmount should be between $0.01 and $100,000, reject messages outside this range:
SELECT * FROM orders
WHERE amount_cents > 0
AND amount_cents <= 10000000
Mandatory field policies require specific fields for compliance. If GDPR mandates consentTimestamp for customer data, reject messages missing it:
SELECT * FROM "customer-events"
WHERE consentTimestamp <> ''
AND consentTimestamp REGEXP '^20[2-9][0-9]-'
Policies enforce at write time, preventing non-compliant data from entering Kafka.
Real-Time vs. Batch Quality Checks
Quality validation adds latency. Real-time pipelines need fast validation (sub-millisecond overhead). Batch pipelines tolerate slower validation (can run comprehensive checks).
Real-time validation checks:
- Schema conformance (fast: schema validation is optimized)
- Required field presence (fast: null checks)
- Basic type validation (fast: integer vs. string)
Skip expensive checks like complex regex matching, database lookups, and external API calls. These add 10-100ms of latency per message, which is unacceptable for real-time throughput.
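A minimal sketch of those fast write-path checks, assuming messages are plain dicts and a hypothetical required-field spec; every check is an in-memory comparison, with no regexes, lookups, or network calls:
REQUIRED_FIELDS = {"orderId": str, "userId": str, "amount_cents": int}  # hypothetical spec

def fast_validate(message):
    """Cheap producer-side checks: required fields present, non-null, and of the right basic type."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        value = message.get(field)
        if value is None:
            errors.append(f"{field} is missing or null")
        elif not isinstance(value, expected_type):
            errors.append(f"{field} should be {expected_type.__name__}, got {type(value).__name__}")
    return errors  # an empty list means the message may be sent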
Batch validation runs asynchronously after messages reach Kafka. Consumers read messages, run comprehensive validation, and report quality issues without blocking producers.
Batch checks include:
- Referential integrity (does userId exist in the user database?)
- Business rule validation (is shippingDate after orderDate?)
- Statistical anomaly detection (is this value an outlier?)
Batch validation finds issues after data is in Kafka but before downstream processing.
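A minimal sketch of such an asynchronous validator, assuming the confluent-kafka client, JSON-encoded order events, and a hypothetical user_exists lookup against the user database (stubbed here):
import json
import logging
from confluent_kafka import Consumer

logger = logging.getLogger("batch-quality")

def user_exists(user_id):
    """Hypothetical lookup against the user database; stubbed for this sketch."""
    return user_id is not None

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # placeholder brokers
    "group.id": "batch-quality-checks",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders-created"])        # hypothetical topic name

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    order = json.loads(msg.value())
    issues = []
    if not user_exists(order.get("userId")):
        issues.append("referential integrity: unknown userId")
    # ISO-8601 date strings compare lexicographically
    if order.get("shippingDate", "") < order.get("orderDate", ""):
        issues.append("business rule: shippingDate before orderDate")
    if issues:
        # Report without blocking producers; the data is already in Kafka,
        # but downstream teams can be warned before they process it.
        logger.warning("quality issues for order %s: %s", order.get("orderId"), issues)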
Building a Data Quality Culture
Technology enforces quality, but culture sustains it. Teams must value quality enough to invest in validation, monitoring, and remediation.
Producer accountability makes teams responsible for data quality. If orders-service produces to the orders-created topic, the orders team is accountable for message quality, schema compliance, and handling validation failures.
This shifts mindset from "throw data at Kafka and hope it works" to "ensure data is correct before sending."
Quality SLAs formalize commitments. The orders team commits: "99.9% of messages will pass schema validation. Validation failures will be investigated within 4 hours."
SLAs make quality measurable and create accountability.
Postmortems for quality incidents treat data quality issues like production outages. If bad data causes downstream failures, conduct a postmortem: what went wrong, how did it slip through, what prevents recurrence?
This elevates data quality to the same importance as availability and performance.
The Path Forward
Kafka data quality isn't a consumer problem—it's a producer responsibility. Validate at write time through schema enforcement and policies, reject invalid messages before they propagate, and monitor quality metrics to catch degradation early.
Conduktor enforces data quality through Gateway proxy validation, schema enforcement policies, and quality monitoring across all topics. Teams prevent bad data from entering Kafka instead of cleaning it up downstream.
If your data quality strategy is "validate when consuming," the problem isn't consumers—it's accepting garbage at write time.
Related: Enforcing Data Quality at Scale → · Conduktor Trust → · Kafka Data Contracts →