48 Hours at Dash: Security, AI, and the Observability Paradox


AI observability promises clarity—but can expose new risks. Here’s how Conduktor and Datadog are helping teams stay in control.

Jun 18, 2025

Held in New York City every June, Dash is Datadog's flagship conference and a comprehensive introduction to trends and emerging technologies in observability. This year, key themes included observability for AI, covering large language models (LLMs) and autonomous AI agents, as well as building AI into monitoring and troubleshooting tools and workflows.

After speaking with attendees, listening to the keynotes, and exploring the companies and services at Dash, it was clear that both monitoring AI and building AI-powered monitoring tools introduce new considerations and challenges. Not only must SRE teams prevent sensitive telemetry, credentials, model behaviors, and business logic from being exposed, but AI-based tools may also open new attack vectors.

As a result, teams must build and implement security and governance procedures from the very beginning. 

The security paradox

Even as monitoring makes AI systems more transparent, it can simultaneously amplify security risks. Observability data, such as logs, traces, and agent telemetry, may include sensitive prompts, PII, tokens, and other proprietary information.

In turn, this increases the attack surface for malicious parties: LLM dashboards, metrics pipelines, and integrated IDPs all become attractive targets. And because Model Context Protocol (MCP) servers and AI agents enable real-time access to observability systems, they also require secure broker layers to filter, control, and safeguard these data flows.
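As one illustration of such a broker layer, a minimal scope check can gate what each agent is allowed to read. This is a sketch only; the agent names and scope strings below are hypothetical, not part of any product's API.

```python
# Hypothetical per-agent allowlist of telemetry scopes.
# A real broker would load this from a managed policy store.
AGENT_SCOPES = {
    "triage-agent": {"logs:read", "metrics:read"},
    "billing-agent": {"metrics:read"},
}

def authorize(agent: str, scope: str) -> bool:
    """Allow an agent to access only the telemetry scopes it was granted;
    unknown agents get no access by default (deny-by-default)."""
    return scope in AGENT_SCOPES.get(agent, set())

print(authorize("triage-agent", "logs:read"))   # True
print(authorize("billing-agent", "logs:read"))  # False
```

Deny-by-default matters here: an agent that is not explicitly enrolled in the policy map gets nothing, rather than inheriting broad read access to the observability stack.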

AI systems can also ingest erroneous inputs, leading to unreliable and untrustworthy outputs. This is compounded by the use of real-time data: when an agent acts on the data it sees in the system, its actions generate new data, which the agent then consumes, creating a feedback loop and ultimately unstable behavior.

Key security challenges

One key area of discussion was telemetry governance: ensuring that data is tagged, masked, or obscured to comply with privacy legislation, all without losing the fidelity of metrics, logs, traces, and other indicators. This is a crucial balancing act between regulatory compliance and operational visibility.
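As a rough sketch of the masking side of this balance, sensitive values can be replaced before telemetry is shipped while the record's structure (and therefore its analytical value) is preserved. The regex patterns and record shape below are illustrative assumptions; production deployments would use vetted PII detectors rather than hand-rolled patterns.

```python
import re

# Illustrative detectors only; real pipelines use audited PII scanners.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_token": re.compile(r"(?:sk|tok)_[A-Za-z0-9]{16,}"),
}

def mask_telemetry(record: dict) -> dict:
    """Mask PII and credentials in a log record's message before shipping,
    keeping field structure intact so metric fidelity is retained."""
    masked = dict(record)
    msg = masked.get("message", "")
    for label, pattern in PATTERNS.items():
        msg = pattern.sub(f"<{label}:redacted>", msg)
    masked["message"] = msg
    # Tag the record so downstream consumers know masking was applied.
    masked.setdefault("tags", []).append("pii:masked")
    return masked

event = {"message": "login by alice@example.com with sk_abcdefghij0123456789"}
print(mask_telemetry(event)["message"])
# both the email and the token are replaced with redaction markers
```

Tagging the record as masked also supports the auditability requirement: downstream tools can verify that every ingested event passed through the governance step.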

Another key concern was anomalies in agentic AI performance. Because AI agents have the ability to act autonomously, their behavior must be monitored closely. Any poisoned data or misaligned context has to be detected and removed to ensure smooth operations. 

Access to AI data, including observability data and model execution histories, must be regulated just as carefully as access to any other data. Role-based access control (RBAC), already crucial for data at rest and in motion, is equally necessary here.

Often, as teams rush to build, train, and deploy AI, infrastructure multiplies rapidly, leading to sprawl. Each component, whether deployed at the edge, on-prem, in the cloud, or in hybrid form, has to be governed and secured consistently, without configurations differing by team or environment.

Lastly, when suspicious behavior is detected, organizations need a solution that can correlate suspicious behavior across application, infrastructure, and model layers, all while coordinating responses across teams and time zones.

How Conduktor and Datadog help

Given these new challenges, teams need to monitor, troubleshoot, and govern their AI data environments. By using Conduktor and Datadog in tandem, they can not only bring visibility to their AI infrastructure — but also control.

As a leader in the observability space, Datadog continues to push the boundaries. Their LLM Observability & Experiments feature simplifies agent execution procedures, displays real-time model performance, and provides visibility into model drift. Their AI Agents Console is a single interface for monitoring agentic AI security and performance, user engagement, and even business value.

Going beyond agentic performance, Datadog's Bits AI Security Analyst automates triage and response for AI-specific threats, while Audit Trails and Access Controls provide transparency and oversight, enabling teams to trace user actions and lock out suspicious accounts. Lastly, the Flex Logs Frozen Tier enables ultra-low-cost storage of voluminous (and vital) data for up to seven years, facilitating long-running historical analysis.

Conduktor complements Datadog by providing in-stream governance and quality enforcement. It starts at the topic level: Conduktor redacts sensitive payloads before data is ingested, so pipelines don't accidentally absorb and expose PII or other sensitive data. In addition, Conduktor validates schemas, which is especially important for LLM training data and inference traces. Lastly, Kafka ACLs, metadata, and schema history are made visible and exportable to SIEM systems, giving security solutions complete context rather than just logs.
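Conduktor's own interceptor configuration is product-specific, but the in-stream pattern described above (validate each record against a schema and redact sensitive payloads before anything downstream sees them) can be sketched generically. The schema, field names, and sensitive-field list here are illustrative assumptions, not Conduktor's API.

```python
import json

# Hypothetical minimal schema: required field names and their types.
SCHEMA = {"user_id": str, "action": str, "prompt": str}
SENSITIVE_FIELDS = {"prompt"}  # fields to redact before ingestion

def govern_record(raw: bytes):
    """Validate one record against the schema and redact sensitive fields.
    Returns the governed payload as bytes, or None if the record is rejected."""
    try:
        record = json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed records never reach the pipeline
    for field, expected in SCHEMA.items():
        if not isinstance(record.get(field), expected):
            return None  # schema violation: reject rather than pass through
    for field in SENSITIVE_FIELDS:
        record[field] = "<redacted>"
    return json.dumps(record).encode()

good = b'{"user_id": "u1", "action": "infer", "prompt": "secret text"}'
bad = b'{"user_id": 42}'
print(govern_record(good))  # prompt field redacted
print(govern_record(bad))   # None: rejected
```

The key design choice is that governance happens in the stream, before storage: a consumer, a training job, or an observability backend can only ever see records that already passed validation and redaction.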

What customers can do now

So what can teams do today to secure their telemetry data? It starts with implementing best practices, including zero trust, in their pipelines. By tagging, restricting, and encrypting observability data at rest and in motion, organizations can more easily guard against leaks and unauthorized access.

This also includes deploying guardrails and taking a meta-view of observability. By using Datadog and Conduktor MCP Servers together, teams can validate access, redact prompts, and limit agent scope — and align everyone on telemetry visibility policies. At the same time, they need to monitor the monitors with audit logs, access analytics, and behavioral tracing, to understand how their observability works.

As AI-native architectures emerge, security must evolve alongside observability. The integration of Datadog and Conduktor provides deep visibility for teams, combined with the control, governance, and trust that AI innovation demands.

Observability shouldn’t become your biggest insider threat — it should be your AI’s first line of defense. Visualize your environment and make your AI more trustworthy, reliable, and capable.

To learn more about what Conduktor can do for your environment, book a demo today.
