
Datadog announced the general availability of LLM Observability, which allows AI application developers and machine learning (ML) engineers to efficiently monitor, improve and secure large language model (LLM) applications.
With LLM Observability, companies can accelerate the deployment of generative AI applications to production environments and scale them reliably.
Datadog LLM Observability helps customers confidently deploy and monitor their generative AI applications. The new product provides visibility into each step of the LLM chain, making it easier to identify the root cause of errors and unexpected responses such as hallucinations. Users can also monitor operational metrics like latency and token usage to optimize performance and cost, and can use out-of-the-box quality and safety evaluations to assess application quality on dimensions such as topic relevance or toxicity and to gain insights that help mitigate security and privacy risks.
Datadog LLM Observability offers prompt and response clustering, seamless integration with Datadog Application Performance Monitoring (APM), and out-of-the-box evaluation and sensitive data scanning capabilities to enhance the performance, accuracy and security of generative AI applications while helping to keep data private.
“There’s a rush to adopt new LLM-based technologies, but organizations of all sizes and industries are finding it difficult to do so in a way that is both cost effective and doesn’t negatively impact the end user experience,” said Yrieix Garnier, VP of Product at Datadog. “Datadog LLM Observability provides the deep visibility needed to help teams manage and understand performance, detect drifts or biases, and resolve issues before they have a significant impact on the business or end-user experience.”
LLM Observability helps organizations:
- Evaluate Inference Quality: Visualize the quality and effectiveness of LLM applications' conversations, including failures to answer, to monitor hallucinations, drift and the overall end-user experience.
- Identify Root Causes: Quickly pinpoint the root cause of errors and failures in the LLM chain with full visibility into end-to-end traces for each user request.
- Improve Costs and Performance: Efficiently monitor key operational metrics for applications across all major platforms—including OpenAI, Anthropic, Azure OpenAI, Amazon Bedrock, Vertex AI and more—in a unified dashboard to uncover opportunities for performance and cost optimization.
- Protect Against Security Threats: Safeguard applications against prompt hacking and help prevent leaks of sensitive data, such as PII, emails and IP addresses, using built-in security and privacy scanners powered by Datadog Sensitive Data Scanner.
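Conceptually, the end-to-end traces described above are sequences of timed spans, each recording the latency and token usage of one step in the LLM chain. The following is a minimal, hypothetical sketch of that idea in plain Python; it is not Datadog's SDK or API, and the `fake_llm_call` stand-in is purely illustrative.

```python
import time
from dataclasses import dataclass, field

@dataclass
class LLMSpan:
    """One step in an LLM chain: records token counts and latency."""
    name: str
    prompt_tokens: int = 0
    completion_tokens: int = 0
    latency_s: float = 0.0

@dataclass
class LLMTrace:
    """End-to-end trace of a single user request through the chain."""
    spans: list = field(default_factory=list)

    def record(self, name, fn, *args, **kwargs):
        """Run one chain step, timing it and capturing its token usage."""
        start = time.perf_counter()
        # Convention for this sketch: fn returns (text, usage dict).
        result, usage = fn(*args, **kwargs)
        self.spans.append(LLMSpan(
            name=name,
            prompt_tokens=usage.get("prompt_tokens", 0),
            completion_tokens=usage.get("completion_tokens", 0),
            latency_s=time.perf_counter() - start,
        ))
        return result

    @property
    def total_tokens(self):
        """Aggregate token usage across the chain, e.g. for cost tracking."""
        return sum(s.prompt_tokens + s.completion_tokens for s in self.spans)

# Hypothetical stand-in for a real model call
def fake_llm_call(prompt):
    return "answer", {"prompt_tokens": len(prompt.split()),
                      "completion_tokens": 2}

trace = LLMTrace()
out = trace.record("generate", fake_llm_call, "What is observability?")
print(out, trace.total_tokens)
```

In a real deployment, an observability agent captures spans like these automatically and ships them to a unified dashboard, so root-cause analysis and cost optimization do not require hand-rolled instrumentation.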
Datadog LLM Observability is generally available now.