
Datadog Announces Deep Database Monitoring

Datadog announced the general availability of Database Monitoring (DBM).

With insights into query performance and explain plans, as well as automatic correlation of query metrics with application and infrastructure metrics, Database Monitoring provides engineers and database administrators with the visibility they need to quickly find and fix application performance issues that arise from slow-running database queries.

Database queries are often the root cause of incidents and application performance issues. When applications make unnecessary queries or fail to use indices, they burden the entire database, causing performance degradation for every application that depends on it. Databases generally do not retain historical query performance metrics, which makes it difficult to reconstruct the context around an issue or identify trends. This becomes even harder because engineers typically need to dig into each database individually to investigate, which prolongs downtime and exacerbates the impact on the customer experience.
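The index problem described above can be made concrete with a small, self-contained sketch. This uses SQLite's `EXPLAIN QUERY PLAN` purely for illustration (Datadog's own collection mechanism works differently): without an index, a selective lookup degrades into a full-table scan that burdens the whole database.

```python
import sqlite3

# Illustrative sketch (not Datadog code): how a missing index turns a
# selective lookup into a full-table scan, visible in the explain plan.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

query = "SELECT total FROM orders WHERE customer_id = ?"

# Without an index on customer_id, SQLite must scan every row.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print(plan_before[0][3])  # detail column, e.g. "SCAN orders"

# After adding an index, the same query becomes an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print(plan_after[0][3])  # e.g. "SEARCH orders USING INDEX idx_orders_customer ..."
```

The same before-and-after comparison is what an engineer reading an explain plan in a DBM-style tool would perform, just without shelling into the database host.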

Datadog Database Monitoring builds on the existing ability to monitor the general health and availability of the database and underlying infrastructure by allowing users to pinpoint the exact queries that impact application performance and user experience. With DBM, users can see the performance of database queries, troubleshoot slow queries with detailed execution breakdowns, and analyze historical trends in query latencies and overhead. This allows organizations to unlock improvements not only in database performance, but also in the performance of the upstream applications, APIs, and microservices that the database underpins.

DBM users are also able to automatically correlate query performance data with Datadog infrastructure metrics to easily identify resource bottlenecks. This allows engineers to quickly understand whether performance issues are at the database or infrastructure level, without needing to manually export and reconcile information from multiple, disconnected point solutions. Datadog’s unified data model makes it easy to search and filter information at scale with the same tags that are used everywhere in Datadog.
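As a rough illustration of what tag-based correlation buys, the sketch below joins query metrics to host metrics on a shared `host` tag in a single lookup, rather than a manual export-and-reconcile step. The field names, tags, and thresholds are hypothetical, not Datadog's actual data model.

```python
# Hypothetical sketch of tag-based correlation: metric names, tags, and
# thresholds below are illustrative, not Datadog's actual schema.
query_metrics = [
    {"query": "SELECT ... FROM orders", "avg_latency_ms": 340,
     "tags": {"host": "db-1", "env": "prod"}},
    {"query": "SELECT ... FROM users", "avg_latency_ms": 12,
     "tags": {"host": "db-2", "env": "prod"}},
]
host_metrics = {  # infrastructure metrics, keyed by the same host tag
    "db-1": {"cpu_pct": 96, "io_wait_pct": 31},
    "db-2": {"cpu_pct": 22, "io_wait_pct": 2},
}

def find_resource_bound_queries(queries, hosts, latency_ms=100, cpu_pct=90):
    """Flag slow queries whose host is also CPU-saturated."""
    return [
        q["query"]
        for q in queries
        if q["avg_latency_ms"] > latency_ms
        and hosts[q["tags"]["host"]]["cpu_pct"] > cpu_pct
    ]

print(find_resource_bound_queries(query_metrics, host_metrics))
# -> ['SELECT ... FROM orders']
```

Because both datasets carry the same tags, distinguishing a database-level problem from an infrastructure-level one reduces to a join, which is the point of the unified data model described above.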

“Databases underpin today’s digital experiences. Consequently, a disruption in database uptime and performance can quickly have dramatic effects on business operations,” said Renaud Boutet, Senior VP, Product Management, Datadog. “The Datadog platform now enables database administrators and application engineers to detect and act on database issues by sharing the same data. This allows organizations to discover and implement improvements while saving time communicating and reconciling information.”

Datadog DBM delivers deep visibility into databases and enables organizations to:

- Quickly detect and isolate drops in performance. Users can track the performance of normalized queries across their entire fleet of databases, see which types of queries are executed the most and where they run, and get alerts for long-running or expensive queries. For each query, they can drill down to the hosts running it and leverage log and network information to understand host performance.

- Pinpoint the root cause of performance drops. DBM provides quick access to explain plans, so users can view the sequence of steps that make up a query. This allows them to localize bottlenecks and identify opportunities to optimize performance and resource efficiency.

- Improve and maintain database health, preventing incidents and saving costs. DBM enables organizations to keep historical query performance data for up to three months, so they can understand changes over time and prevent regressions.

- Provide engineers access to database performance telemetry, without compromising data security. DBM offers a centralized view of database performance data, automatically correlated with infrastructure and application metrics, without requiring direct user access to database instances.
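The "normalized queries" tracked in the first capability above can be sketched in a few lines. This is a toy version (the real agent's obfuscation rules are more thorough): literal values are stripped so that every variant of a query rolls up under one signature.

```python
import re

# Toy query normalization (the real DBM agent's rules are more thorough):
# strip literal values so query variants aggregate under one signature.
def normalize(sql: str) -> str:
    sql = re.sub(r"'[^']*'", "?", sql)          # string literals -> ?
    sql = re.sub(r"\b\d+(\.\d+)?\b", "?", sql)  # numeric literals -> ?
    return re.sub(r"\s+", " ", sql).strip()     # collapse whitespace

a = normalize("SELECT * FROM orders WHERE customer_id = 42 AND status = 'open'")
b = normalize("SELECT * FROM orders WHERE customer_id = 7  AND status = 'paid'")
print(a)  # SELECT * FROM orders WHERE customer_id = ? AND status = ?
assert a == b  # both raw queries share one normalized signature
```

Aggregating latency and execution counts by this signature, instead of by raw SQL text, is what makes fleet-wide trends and alerts on "the same query" possible.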

