As AI adoption accelerates, enterprise leaders grapple with questions about ROI, data security, and reliability. Learn more about these challenges—in their own words.
April 17, 2025
As an emerging technology, AI is thrilling, intriguing, and confusing in equal measure. As AI adoption ramps up, leaders are asking hard questions, including: What is AI used for? Will it live up to our initial investment of time and money? How can we guarantee data safety and security for AI inputs? How can we ensure that AI creates trustworthy outputs?
In their respective roles as Senior Vice President of Sales and Chief Product and Technology Officer (CPTO) at Conduktor, Quentin Packard and Stephane Derosiaux have spoken with many leaders to better understand their opinions, questions, and issues regarding AI. Top concerns include the reliability and trustworthiness of both AI inputs and outputs, measurable business outcomes, and the importance of context for more accurate large language model results.
These insights and questions were gathered from a variety of events, discussions, and talks with leaders across a range of industries.
AI without ROI: What’s the business use case?
One concern was whether AI would deliver a return on the initial investment, and how that success could be measured. “It’s not always easy to determine what AI gives us in terms of benefits,” one CTO mentioned. “I’m struggling to define ROI—ok, I can use genAI, type faster code, or search better, but I don’t have the ROI or the time frame down.”
Another leader agreed, viewing AI as a solution in search of a problem, as well as a force that displaces human employees. “Our CEO says we need to onboard AI,” he explained. “Whenever I ask my teams what problems they want to solve with AI, that never goes well—they say ‘we don’t have any problems,’ because they don’t want to lose anyone. So trying to hunt down what we’re trying to solve with AI is a challenge that requires a complete shift.”
However, Conduktor CPTO Stephane felt that part of the issue arose from a mismatch between the commonly available types of AI and their intended purposes. “When we talk about AI, we often talk about GenAI and B2C use cases, and we try to apply these to businesses,” Stephane explained. “Often, GenAI has nothing to do with these use cases, which were better served by classical machine learning.”
Instead, Stephane believed that much of the value of AI lay in its ability to amplify human efforts and effectiveness. “It’s just so much easier to do things with GenAI,” Stephane pointed out. “Today, you can start a business with five people and GenAI to do something that, ten years ago, you would have needed 100 people for. It is not so much unlocking business outcomes—but more about cost-efficiency outcomes.”
Data governance and controls: AI’s weakest link?
Widespread AI adoption also forces teams and leaders to rethink traditional data governance. Given the large amounts of sensitive data ingested by LLMs and GenAI, as well as regulations like the EU’s GDPR or AI Act, the stakes for misused data are high.
“People are sending data to LLMs, and people are taking data from LLMs to send somewhere else,” Stephane explained. “If I feed an email to my LLMs—they will be able to train on this email, and the email content will be in the response, and you will have absolutely no control over data and quality as well.”
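One concrete guardrail against that kind of leakage is to redact sensitive fields before a prompt ever leaves your perimeter. Below is a minimal Python sketch of the idea; `llm_client` and its `complete` method are hypothetical stand-ins rather than Conduktor's or any vendor's API, and a production redactor would cover far more than email addresses.

```python
import re

# Hypothetical sketch: scrub obvious PII before a prompt leaves your perimeter.
# A real deployment would use a dedicated PII-detection service, not one regex.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact(text: str) -> str:
    """Replace email addresses with a placeholder so they never reach the LLM."""
    return EMAIL_RE.sub("[REDACTED_EMAIL]", text)

def safe_complete(llm_client, prompt: str) -> str:
    """Redact the prompt, then forward it to the (hypothetical) LLM client."""
    return llm_client.complete(redact(prompt))
```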
Risks like these mean that CxOs and their teams need to overhaul their trust frameworks, ideally before any issues arise. “How do people start to think about data controls and getting more proactive and the trust between human, AI, the code, and the collaboration between the technologies?” Quentin asked.
Why shift-left alone isn’t enough
One potential solution is to make restricted access the default, granting authorization only where it is explicitly required. “Today, the default is that all data is accessible,” Stephane explained. “But now, because of AI and LLMs, you are losing control of who can send data to LLMs and external partners. So we start to see a shift from having everything accessible—emails, financial data—to no access by default…you have to grant access instead. So we’re starting to see federation in organizations, where they see who has access to what.”
In fact, rethinking data governance doesn’t necessarily fit into existing paradigms, but instead, might require new ones. “It’s not a pure shift-left approach,” Stephane continued, referring to the movement to proactively integrate security earlier into the development lifecycle. “It’s shift left and right, where you want to give access to people but you also want someone in your business to know what is happening at any point in time, so you can remove access immediately if necessary.”
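To make that inversion concrete, here is a toy Python sketch of a default-deny grant registry with runtime auditing and immediate revocation. Every name in it is a hypothetical illustration of the pattern, not Conduktor's interface.

```python
import time

# Toy default-deny registry: no dataset is readable until explicitly granted
# ("shift left"), and every read is logged so access can be revoked the moment
# something looks wrong ("shift right"). Hypothetical names, not a real API.
grants: set[tuple[str, str]] = set()          # (principal, dataset) pairs
audit_log: list[tuple[float, str, str]] = []  # (timestamp, principal, dataset)

def grant(principal: str, dataset: str) -> None:
    grants.add((principal, dataset))

def revoke(principal: str, dataset: str) -> None:
    grants.discard((principal, dataset))      # takes effect on the next read

def read(principal: str, dataset: str) -> str:
    if (principal, dataset) not in grants:    # default deny: no grant, no access
        raise PermissionError(f"{principal} has no grant for {dataset}")
    audit_log.append((time.time(), principal, dataset))
    return f"contents of {dataset}"

grant("alice", "financial-reports")
read("alice", "financial-reports")            # allowed, and logged
revoke("alice", "financial-reports")          # immediate: the next read fails
```

The point of the design is that the audit log and the revoke path sit right next to the grant path: access is explicit going in, and observable and reversible at any moment afterward.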
Despite AI’s popularity and increasing adoption across companies and verticals, leaders still have questions about strategy, trust, and control, especially when it comes to the data that powers much of AI’s capabilities. Organizations that overcome these obstacles can adopt AI at a wider scale and finally unlock its true potential.
Conduktor provides tools for organizations to secure, scale, and control Kafka data and usage for AI use cases—eliminating risks, enforcing governance, and providing visibility into streaming data pipelines. To see what Conduktor can do for your team and environment, sign up for a demo today.