At Current London 2025, we learned that the biggest challenges (and opportunities) for data in motion don’t lie in the stream itself—but in everything associated with it.
Jun 6, 2025
Earlier this month, we had the opportunity to meet over 200 engineers, architects, and platform leads at Current London 2025. These conversations—on the floor, at our booth, and in sessions (one on Kafka lag by our CTO Stéphane Derosiaux, and another on Kafka observability by our Head of R&D Florent Ramière)—were incredibly revealing. While everyone’s stack looked a little different, the challenges were surprisingly consistent.
Here are the top pain points we heard—and how they’re shaping our roadmap moving forward.

Data Quality Remains the Silent Killer
One platform team told us bluntly: “99% of our data quality issues are caused by schemaless data.” They were referring to the endless workarounds required to infer schemas, manually fix sync issues with downstream analytics, and triage errors long after data had landed.
This echoed across conversations. Late validation, schema fragmentation across streams and lakes, and broken assumptions in the pipeline are slowing teams down.
What we’re doing about it:
We’re investing heavily in data quality enforcement at the streaming level with Conduktor Trust—because if quality isn’t baked in before data enters your pipelines, it’s already too late. Expect deeper integrations with schema registries and data formats, more proactive anomaly detection, and improved visibility into policy violations.
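To make that idea concrete, here’s a minimal sketch of “validate before it enters the pipeline” in client code. The topic name, JSON Schema, and broker address are placeholders, and in practice Conduktor Trust applies this kind of rule at the platform level so individual producers don’t have to:

```python
# Minimal sketch: reject malformed events before they reach the stream.
# The "orders" topic, schema, and broker below are hypothetical examples.
import json
from jsonschema import validate, ValidationError  # pip install jsonschema
from confluent_kafka import Producer              # pip install confluent-kafka

ORDER_SCHEMA = {
    "type": "object",
    "properties": {
        "order_id": {"type": "string"},
        "amount": {"type": "number", "minimum": 0},
    },
    "required": ["order_id", "amount"],
    "additionalProperties": False,
}

producer = Producer({"bootstrap.servers": "localhost:9092"})  # placeholder broker

def produce_order(event: dict) -> None:
    try:
        validate(instance=event, schema=ORDER_SCHEMA)  # fail fast, at the source
    except ValidationError as err:
        # Route to a dead-letter topic or alert instead of polluting downstream consumers.
        print(f"rejected event: {err.message}")
        return
    producer.produce("orders", value=json.dumps(event).encode("utf-8"))
    producer.flush()

produce_order({"order_id": "A-123", "amount": 42.5})    # accepted
produce_order({"order_id": "A-124", "amount": "oops"})  # rejected before it hits the stream
```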
Migrations Are Long, Messy, and Still In Progress
Several teams shared that migrating from RabbitMQ, TIBCO EMS, or legacy databases remains a multi-year slog. In many cases, they’re not just moving platforms—they’re untangling deeply embedded processes, bridging old systems, and rethinking their entire event strategy.
What we’re doing about it:
We’re focusing on making the migration path smoother—not just technically, but operationally. That means better tooling for hybrid environments, clearer guidance for staging transformations, and support for gradual, low-risk cutovers.

Developer Autonomy Is Blocked by Central Bottlenecks
One recurring dilemma was that developers want access to data, but their self-service tooling can’t keep up. Several organizations described month-long waits just to get access to the right topics or connectors. Platform teams, in turn, are drowning in Slack messages, Jira tickets, and manual provisioning.
What we’re doing about it:
Our goal is to unlock developer autonomy—without compromising security and governance. We’re investing in policy-based access controls, connector templates, and scalable patterns for “Connect-as-a-Service.” And we’re making sure that domain teams—not just central admins—can configure and deploy what they need.
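As a rough illustration of the self-service pattern (not a Conduktor API), here’s a sketch of policy-checked topic provisioning with the Kafka AdminClient, assuming a hypothetical naming convention and partition guardrail. The point is that domain teams get an API instead of a Jira ticket:

```python
# Sketch only: the naming convention, partition cap, and broker address are
# illustrative assumptions, not a recommendation or a Conduktor feature.
from confluent_kafka.admin import AdminClient, NewTopic  # pip install confluent-kafka

MAX_PARTITIONS = 12  # example guardrail set by the platform team

def request_topic(team: str, domain: str, name: str, partitions: int) -> None:
    if partitions > MAX_PARTITIONS:
        raise ValueError(f"more than {MAX_PARTITIONS} partitions: escalate to the platform team")
    topic = f"{team}.{domain}.{name}"  # naming policy keeps ownership obvious
    admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # placeholder broker
    futures = admin.create_topics([NewTopic(topic, num_partitions=partitions, replication_factor=1)])
    futures[topic].result()  # raises if the broker rejects the request
    print(f"created {topic}")

request_topic("payments", "orders", "refunds", partitions=6)
```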
Flink Is Powerful, But Still Hard to Operate
Apache Flink came up frequently—usually with excitement, quickly followed by concern. Teams are intrigued by its ability to enable richer real-time use cases, but they’re frustrated by the lack of RBAC controls, poor workload isolation, and the difficulty of debugging application errors.
What we’re doing about it:
We see Flink as a major opportunity, but it needs a scalable operating model to fit into a platform-as-a-service (PaaS) architecture. We’re exploring ways to bring RBAC, guardrails, and monitoring into the developer experience—so that Flink can scale with your team, not just your data.

To Share Data Externally, Replication Is the Default—But No One Likes It
Cluster replication is still the go-to solution for sharing streaming data with external partners. But almost every team we spoke to disliked the cost and complexity of this approach. One attendee summed it up well: “We replicate because we have to, not because we want to.”
What we’re doing about it:
With Conduktor Exchange, we’re reimagining data sharing around a zero-duplication model. That means enabling secure, governed, and performant access to data—without copying it. Our roadmap includes more granular access controls, proxy-based authorization patterns, and stronger support for cross-cluster visibility.
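To give a sense of what zero-duplication sharing means on the consuming side, here’s a sketch in which a partner reads through a governed gateway endpoint instead of from a mirrored cluster. The hostname, credentials, and topic are placeholders, not a real Conduktor Exchange configuration:

```python
# Sketch only: the gateway address, credentials, and topic name are hypothetical.
# The idea is that the data stays on the source cluster; the gateway decides what
# this partner-scoped principal is allowed to see.
from confluent_kafka import Consumer  # pip install confluent-kafka

consumer = Consumer({
    "bootstrap.servers": "gateway.partner.example.com:9092",  # proxy endpoint, not a replica
    "group.id": "partner-analytics",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "partner-a",      # partner-scoped credentials
    "sasl.password": "********",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["shared.orders.v1"])

while True:
    msg = consumer.poll(1.0)
    if msg is None:
        continue
    if msg.error():
        print(msg.error())
        continue
    print(msg.topic(), msg.value())
```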
What This Means for Our Strategy
What stood out most from the conversations at Current was just how fragmented and fragile the real-time data landscape still is inside many organizations. Despite massive investments, teams are still stitching together pipelines, managing replication by default, and struggling to get clean, timely data into the hands of the people who need it.
This only reinforces the direction we’re already moving. At Conduktor, we’re focused on helping teams:
Unify fragmented data sources — from Kafka to RabbitMQ, Amazon SQS, legacy MQs, and cloud-native streams.
Optimize operations — with governed access, schema enforcement, and in-stream validation to reduce downstream rework.
Expand the impact of real-time data — making it easier for more teams (from engineering to analytics and data science) to safely access and use trusted data, and enabling its use in GenAI and Agentic AI systems that depend on fresh, high-quality inputs to generate better outcomes.
We believe real-time operational data—transactions, events, metrics, and signals that reflect the state of the business as it happens—should be accessible, trustworthy, and actionable. And our roadmap is designed to make that a reality: less replication, tighter controls, and faster paths from source to business value.