Engineers’ POV On Streaming: QCon 2025 Takeaways

What software engineers, architects, and platform teams told us about the real challenges of scaling Kafka—straight from the floor at QCon London 2025.

Apr 11, 2025

QCon London is where senior software engineers, architects and tech leaders go to talk architecture, scale, and what’s next. And at the 2025 edition, tracks like Architectures You’ve Always Wondered About, Modern Data Architectures, and AI and ML for Software Engineers all circled the same core themes: moving data in real time, keeping it secure, and making it usable across teams. It’s clear that data streaming is no longer just a niche tool. It’s becoming the backbone of modern systems.

For us at Conduktor, the main reason to sponsor the event is talking to the people who live these streaming challenges every day. That’s what keeps Conduktor aligned with what customers need. Software architects, platform teams, and developers kept stopping by our booth with real problems: access bottlenecks, custom-built platforms that don’t hold up, governance gaps, security concerns, and quality issues killing AI initiatives. These aren’t edge cases. They’re the daily reality for teams trying to scale data streaming responsibly. Here’s what we heard—and why it matters.

One Size Doesn’t Fit Anyone

Every team we talked to is somewhere different on the Kafka maturity curve. Some are just starting their journey. Others are managing hundreds of topics and hundreds of applications. The message is clear: solutions need to adapt to the team and meet them at their maturity stage—not force them into a lengthy, complicated deployment.

Why it matters: Many teams are just starting out, but they grow fast. Platforms need to keep up—not get in the way.

Troubleshooting Kafka Still Hurts

Even seasoned Kafka teams are struggling to debug and operate cleanly. Between consumer lag, dead letters, and broken ACLs, teams are wasting time hunting across logs and half-connected tools. Kafka may be fast, but understanding what’s happening in real time? Still too slow.
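Even something as basic as “how far behind is this consumer?” often ends up scripted by hand. The arithmetic itself is simple—lag is the log-end offset minus the committed offset, per partition—as in this minimal sketch (topic names and offset values are illustrative, not from any real cluster):

```python
# Consumer lag per partition: log-end offset minus the group's committed offset.
def consumer_lag(end_offsets, committed):
    """Both args map (topic, partition) -> offset; returns lag per partition."""
    return {tp: end_offsets[tp] - committed.get(tp, 0) for tp in end_offsets}

# Hypothetical snapshot for an "orders" topic with three partitions:
end = {("orders", 0): 1500, ("orders", 1): 980, ("orders", 2): 2100}
acked = {("orders", 0): 1500, ("orders", 1): 700, ("orders", 2): 1300}
print(consumer_lag(end, acked))
# partition 0 is caught up; partition 1 is 280 behind, partition 2 is 800 behind
```

In practice the two offset maps come from the cluster (e.g. an admin client or the `kafka-consumer-groups` CLI)—which is exactly the kind of glue code teams told us they’re tired of maintaining.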

Why it matters: You can’t fix what you can’t see. Teams waste hours or even days chasing symptoms instead of fixing root causes. That kills trust, and it’s expensive.

Kafka Without Friction: Self-Service or Bust

The loudest message from platform teams? Engineers need self-service, and they need it fast. We heard some horror stories, including one organization where it takes teams three weeks just to get access to a topic.

When platform teams become the bottleneck instead of the enabler, everyone pays for it. Developers get frustrated. Platform teams burn out. Time-to-market suffers. Everyone wants autonomy. No one wants chaos.

Why it matters: The longer self-service gets postponed, the more innovation gets throttled. In fast-moving orgs, velocity without visibility is a recipe for chaos.

DIY Kafka Platforms Aren’t Holding Up

Many teams have tried building their own “platform” layer on top of Kafka. It starts simple—maybe a few scripts or access controls—but quickly turns into a Frankenstein system. It’s brittle, inconsistent, and always ends up back on the platform team’s plate. 

Why it matters: Platform teams have done incredible work building internal tooling—but maintaining that long-term takes time and resources. At scale, proven solutions free up platform teams to focus on what really moves the needle.

Data Sharing Is a Governance Nightmare

Teams want to share data across business units or with external partners. Right now, that means spinning up new clusters, replicating topics, and praying nothing breaks. It’s slow, messy, and creates new risks with every copy. Everyone agrees that Kafka should be sharable. The catch? No one wants to give up control over security or quality to make it happen.

Why it matters: External data sharing should be a growth enabler—not a compliance headache. If Kafka can’t be governed, it can’t scale outside the team that owns it.

Data Scientists Can’t Get What They Need

Security concerns are holding back internal innovation. Data scientists are stuck waiting or working with stale data because exposing live streams is seen as too risky. Teams know the value of real-time data—but can’t deliver it safely. The cost? Inaccurate AI outputs, lost trust, and missed opportunities.

Why it matters: When security blocks access, innovation stalls. Data science needs real-time, trusted data to build anything that matters.

Garbage In, AI Out

You’ve heard it before: AI is only as good as the data feeding it—and right now, the source data isn’t reliable enough. Teams are seeing quality issues upstream that ripple through downstream pipelines. Fields are missing. Formats are inconsistent. Context is gone. Fixing it downstream is far more costly. They need control at the source, not a bandage later.
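“Control at the source” can be as simple as rejecting malformed records before they ever hit a topic, instead of patching data downstream. A minimal sketch of that idea—the field names and rules here are purely illustrative:

```python
# Validate records on the producer side, before they reach Kafka.
# REQUIRED and the type check are example rules, not a real schema.
REQUIRED = {"order_id", "amount", "currency"}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record is clean."""
    problems = [f"missing field: {f}" for f in REQUIRED - record.keys()]
    if "amount" in record and not isinstance(record["amount"], (int, float)):
        problems.append("amount is not numeric")
    return problems

good = {"order_id": "A1", "amount": 19.99, "currency": "EUR"}
bad = {"order_id": "A2", "amount": "19,99"}  # wrong type, currency missing
print(validate(good))  # []
print(validate(bad))   # ['missing field: currency', 'amount is not numeric']
```

Real deployments would enforce this with a schema registry or a proxy-level rule rather than ad hoc checks in every producer—but the principle is the same: bad data gets stopped where it’s produced.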

Why it matters: Poor data quality isn’t just a tech problem. It kills AI accuracy, slows down adoption, and undermines the entire promise of machine learning.

Where Conduktor Fits In

The conversations at QCon made one thing clear: data streaming is growing fast, and so are the challenges around control, access, and scale. Teams need better ways to manage Kafka and increase adoption without slowing down development or compromising security. That’s Conduktor’s focus, and that’s what we showed at our booth in dozens of product demos—making it easier to troubleshoot, govern, and grow Kafka, no matter where you are in the journey.
