Get an inside look at the interesting conversations we had at the Millennium Alliance Enterprise AI Maturity and Transformation Assembly.
Mar 18, 2025
Last week, Conduktor attended the Enterprise AI Maturity and Transformation Assembly in Austin, Texas. Hosted by the Millennium Alliance, an exclusive forum for public- and private-sector leaders, the Assembly drew CTOs, CIOs, and CDOs from a wide range of organizations.
On the first day of the Assembly, Conduktor hosted an interactive talk featuring Conduktor’s Quentin Packard, SVP of Sales, and Stephane Derosiaux, CPTO and Co-Founder. The two facilitated a free-flowing discussion on many aspects of data, including cost, applications, innovation, and efficiency.

Unifying data in a fragmented world
Because it was a talk rather than a presentation, the session gave leaders a valuable opportunity to share their struggles, challenges, and successes with data. “The common themes we continue to hear: data tends to sit in silos, and the storage, access, and delivery of data continues to get very expensive,” Quentin explained. “We keep hearing about hydrating data warehouses—how do I avoid the data swamp, the unnecessary burden I throw onto my data scientists?”
But it wasn’t only data that was siloed; teams were too. “One of the challenges that we face is that our worldwide teams are all doing their own thing and we’re not trying to merge [this all together],” one attendee explained. “Each region—Europe, the US, South America—has their own standards. How do we make data work for all the regions? How can we keep data as clean as possible so we can convert?”
Some of this operational complexity arose from differences in data laws across jurisdictions, such as the European Union and the United States. “Balancing data sovereignty and compliance regulations results in a tradeoff between speed and security,” Quentin agreed, noting that one potential solution could be to shift left, referring to the movement to integrate security proactively, earlier in the development lifecycle.
Still, these processes require documentation, encouragement, and support to be effective, since they aren’t necessarily the first priority of many developers. “We want these DevOps frameworks to thrive,” Quentin continued, “but the reality is that humans may have biases—things they love to do versus things they need to do.”

Control, ownership, and security of data
For that reason, Quentin and Stephane argued for a more expansive definition of the shift-left doctrine. Instead of putting the entire operational burden on engineers, forcing them to anticipate business needs and security risks, organizations could directly empower data quality and security teams, giving them proactive roles earlier in the development cycle.
Teams could start by making restricted access the default setting, granting authorizations only when explicitly required. “Today, the default is that all data is accessible,” Stephane explained. “So we start to see a shift from having everything accessible—emails, financial data—to no access by default…you have to grant access instead. So we’re starting to see federation in organizations, where they see who has access to what.”
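For Kafka-based platforms like the ones Conduktor works with, that deny-by-default posture can be expressed directly as ACLs. Below is a minimal sketch using Kafka’s Java AdminClient, assuming a cluster whose brokers run an authorizer with allow.everyone.if.no.acl.found=false (so anything not explicitly allowed is denied); the principal User:analyst and the topic customer-events are purely illustrative.

```java
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class GrantReadAccess {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative address

        try (AdminClient admin = AdminClient.create(props)) {
            // With allow.everyone.if.no.acl.found=false on the brokers, no one
            // can read this topic until a binding like this explicitly allows it.
            AclBinding grant = new AclBinding(
                new ResourcePattern(ResourceType.TOPIC, "customer-events", PatternType.LITERAL),
                new AccessControlEntry("User:analyst", "*",
                    AclOperation.READ, AclPermissionType.ALLOW));
            admin.createAcls(List.of(grant)).all().get();
        }
    }
}
```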
In fact, rethinking data governance may not fit existing paradigms at all; it might require new ones. “It’s not a pure shift-left approach,” Stephane continued. “It’s shift left and right, where you want to give access to people but you also want someone in your business to know what is happening at any point in time…so you can remove access immediately if necessary.”
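The “right” half of that model implies being able to see, at any moment, who holds which grants, and to revoke them on the spot. Continuing the hedged sketch above, with the same assumed cluster and the same illustrative principal, Kafka’s AdminClient exposes describeAcls for the audit and deleteAcls for the immediate removal:

```java
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.acl.AccessControlEntryFilter;
import org.apache.kafka.common.acl.AclBindingFilter;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.ResourcePatternFilter;

public class AuditAndRevoke {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative address

        try (AdminClient admin = AdminClient.create(props)) {
            // Audit: list every ACL binding held by the illustrative principal.
            AclBindingFilter byPrincipal = new AclBindingFilter(
                ResourcePatternFilter.ANY,
                new AccessControlEntryFilter("User:analyst", null,
                    AclOperation.ANY, AclPermissionType.ANY));
            admin.describeAcls(byPrincipal).values().get()
                 .forEach(System.out::println);

            // Revoke: delete the matching bindings if access must be cut immediately.
            admin.deleteAcls(List.of(byPrincipal)).all().get();
        }
    }
}
```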
Today, rethinking strategies around data governance and security is crucial, especially given shifting debates about the control and ownership of data, and how it intersects with AI. “For example, people are sending data to LLMs [large language models],” Stephane argued, “and people are taking data from LLMs to send somewhere else. If I feed an email in my LLMs…these LLMs will be able to train on this email, and the email content will be in the response, and you will have absolutely no control over data and quality as well.”
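The speakers didn’t prescribe a specific fix here, but one common mitigation is to redact sensitive fields before a prompt ever leaves the pipeline. A deliberately simple sketch of that idea follows; the regex is nowhere near production-grade PII detection, and the class and method names are hypothetical.

```java
import java.util.regex.Pattern;

public class RedactBeforeLlm {
    // Simplistic email pattern, for illustration only; real PII detection is harder.
    private static final Pattern EMAIL =
        Pattern.compile("[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}");

    static String redact(String text) {
        return EMAIL.matcher(text).replaceAll("<redacted-email>");
    }

    public static void main(String[] args) {
        String prompt = "Summarize this thread from jane.doe@example.com about Q3 invoices.";
        // Strip the address before the text reaches a third-party model.
        System.out.println(redact(prompt));
    }
}
```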

Mainframe modernization
In their one-on-one conversations with leaders, Stephane, Quentin, and other Conduktor employees noticed another recurring theme, one unrelated to the latest trends like AI and LLMs. Many of the Assembly’s attendees were more concerned about how their companies still relied on outdated mainframes to handle data, despite these systems being long past their intended service lives and vulnerable to malicious activity.
This created a dilemma: teams were stuck with mainframes because migrations were so difficult, yet those same aging mainframes were hard to maintain and update. Many leaders recognized that continued reliance on mainframes held their organizations back from implementing new use cases and applications, such as streaming data and AI, that were necessary to keep up with the competition.
These discussions revealed an interesting truth: the pace of innovation often depends not on the newest technologies, like AI, but on the infrastructure and systems companies already have in place. Reliance on obsolete technologies can stifle progress and lower an organization’s overall competitiveness.
The Enterprise AI Maturity and Transformation Assembly was an insightful look at the challenges of running modern data environments. While some organizations are implementing AI, leaders voiced a much wider range of concerns, including siloed teams and data; the role of legacy systems in impeding productivity and innovation; and the necessity of a more proactive, restrictive approach to data security.
Conduktor provides tools for organizations to secure, scale, and control Kafka data and usage for AI use cases—eliminating risks, enforcing governance, and providing visibility into streaming data pipelines. To see what Conduktor can do for your team and environment, sign up for a demo today.