Data Downtime Nearly Doubled Year Over Year

Data downtime — periods when an organization's data is missing, wrong, or otherwise inaccurate — nearly doubled year over year (1.89x), according to the State of Data Quality report from Monte Carlo.

The Wakefield Research survey, commissioned by Monte Carlo and fielded to 200 data professionals in March 2023, found that three factors drove this increase in data downtime:

■ A rise in monthly data incidents, from 59 in 2022 to 67 in 2023.

■ Longer detection times: 68% of respondents reported an average time to detection of four hours or more for data incidents, up from 62% of respondents in 2022.

■ A 166% increase in average time to resolution, rising to an average of 15 hours per incident across respondents.
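The three factors above compound: more incidents, each detected later and resolved more slowly, multiply into more total downtime. A rough way to see this is a simple additive model — downtime per month ≈ incidents × (time to detection + time to resolution). This model and the 4-hour detection figure are assumptions for illustration, not Monte Carlo's methodology; the 2022 resolution time is backed out from the reported 166% increase.

```python
def monthly_downtime_hours(incidents, detect_h, resolve_h):
    """Estimated hours of data downtime per month under a simple
    additive model: each incident costs detection + resolution time."""
    return incidents * (detect_h + resolve_h)

resolve_2023 = 15.0                  # avg hours to resolution, 2023 (survey)
resolve_2022 = resolve_2023 / 2.66   # backed out of the reported 166% increase
detect_h = 4.0                       # assumed; the survey reports only the
                                     # share of teams at four hours or more

d2022 = monthly_downtime_hours(59, detect_h, resolve_2022)
d2023 = monthly_downtime_hours(67, detect_h, resolve_2023)
print(f"2022 ≈ {d2022:.0f} h/mo, 2023 ≈ {d2023:.0f} h/mo, "
      f"ratio ≈ {d2023 / d2022:.2f}x")
```

Even this crude sketch lands in the same ballpark as the report's roughly 2x year-over-year figure, which is consistent with the multiplicative effect of the three factors.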

More than half of respondents reported that 25% or more of revenue was affected by data quality issues. The average percentage of impacted revenue jumped to 31%, up from 26% in 2022. Additionally, an astounding 74% reported that business stakeholders identify issues first "all or most of the time," up from 47% in 2022.

These findings suggest data quality remains among the biggest problems facing data teams, with bad data having more severe repercussions on an organization's revenue and data trust than in years prior.

The survey also suggests data teams face a tradeoff between data downtime and the amount of time spent on data quality as their datasets grow.

For instance, organizations with fewer tables reported spending less time on data quality than their peers with more tables, but their average times to detection and resolution were comparatively higher. Conversely, organizations with more tables reported lower average times to detection and resolution, but spent a greater percentage of their team's time to achieve them.

■ Respondents that spent more than 50% of their time on data quality had more tables (average 2,571) compared to respondents that spent less than 50% of their time on data quality (average 208).

■ Respondents that took less than 4 hours to detect an issue had more tables (average 1,269) than those who took longer than 4 hours to detect an issue (average 346).

■ Respondents that took less than 4 hours to resolve an issue had more tables (average 1,172) than those who took longer than 4 hours to resolve an issue (average 330).

"These results show teams having to make a lose-lose choice between spending too much time solving for data quality or suffering adverse consequences to their bottom line," said Barr Moses, CEO and co-founder of Monte Carlo. "In this economic climate, it's more urgent than ever for data leaders to turn this lose-lose into a win-win by leveraging data quality solutions that will lower BOTH the amount of time teams spend tackling data downtime and mitigating its consequences. As an industry, we need to prioritize data trust to optimize the potential of our data investments."

The survey revealed additional insights on the state of data quality management, including:

■ 50% of respondents reported data engineering is primarily responsible for data quality, compared to:
- 22% for data analysts
- 9% for software engineering
- 7% for data reliability engineering
- 6% for analytics engineering
- 5% for the data governance team
- 3% for non-technical business stakeholders

■ Respondents averaged 642 tables across their data lake, lakehouse, or warehouse environments.

■ Respondents reported having an average of 24 dbt models, and 41% reported having 25 or more dbt models.

■ Respondents averaged 290 manually written tests across their data pipelines.

■ The number one reason for launching a data quality initiative was that the data organization identified data quality as a need (28%), followed by a migration or modernization of the data platform or systems (23%).

"Data testing remains data engineers' number one defense against data quality issues — and that's clearly not cutting it," said Lior Gavish, Monte Carlo CTO and Co-Founder. "Incidents fall through the cracks, stakeholders are the first to identify problems, and teams fall further behind. Leaning into more robust incident management processes and automated, ML-driven approaches like data observability is the future of data engineering at scale."

