
Data Downtime Nearly Doubled Year Over Year

Data downtime — periods of time when an organization's data is missing, wrong or otherwise inaccurate — nearly doubled year over year (1.89x), according to the State of Data Quality report from Monte Carlo.


The Wakefield Research survey, which was commissioned by Monte Carlo and polled 200 data professionals in March 2023, found that three critical factors contributed to the increase in data downtime (a rough sketch of how they compound follows the list):

■ A rise in monthly data incidents, from 59 in 2022 to 67 in 2023.

■ 68% of respondents reported an average time to detection of four hours or more for data incidents, up from 62% of respondents in 2022.

■ A 166% increase in average time to resolution, rising to an average of 15 hours per incident across respondents.
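
Taken together, these inputs compound multiplicatively. Monte Carlo has elsewhere described data downtime as roughly the number of incidents multiplied by the sum of time to detection and time to resolution. The Python sketch below shows how the survey's figures land near the reported 1.89x under that formula; the detection-time average is an assumed placeholder (the survey reports only the share of respondents above the four-hour mark), and the 2022 resolution time is back-calculated from the stated 166% increase.

    # Rough sketch: data downtime ~= incidents x (time to detection + time to resolution)
    incidents_2022, incidents_2023 = 59, 67    # monthly incidents (survey figures)
    ttr_2023 = 15.0                            # average hours to resolve (survey figure)
    ttr_2022 = ttr_2023 / 2.66                 # implied by the reported 166% increase
    ttd = 8.5                                  # assumed average hours to detect, held constant; not a survey figure

    downtime_2022 = incidents_2022 * (ttd + ttr_2022)
    downtime_2023 = incidents_2023 * (ttd + ttr_2023)
    print(f"{downtime_2023 / downtime_2022:.2f}x")  # prints ~1.89x under these assumptions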

More than half of respondents reported that 25% or more of their revenue was affected by data quality issues. The average percentage of impacted revenue jumped to 31%, up from 26% in 2022. Additionally, an astounding 74% reported that business stakeholders identify issues first "all or most of the time," up from 47% in 2022.

These findings suggest data quality remains among the biggest problems facing data teams, with bad data having more severe repercussions on an organization's revenue and data trust than in years prior.

The survey also suggests data teams are making a tradeoff between data downtime and the amount of time spent on data quality as their datasets grow.

For instance, organizations with fewer tables reported spending less time on data quality than their peers with more tables, but their average time to detection and average time to resolution were comparatively higher. Conversely, organizations with more tables reported lower average times to detection and resolution, but spent a greater percentage of their team's time to achieve them.

■ Respondents that spent more than 50% of their time on data quality had more tables (average 2,571) than those who spent less than 50% of their time on data quality (average 208).

■ Respondents that took less than 4 hours to detect an issue had more tables (average 1,269) than those who took longer than 4 hours to detect an issue (average 346).

■ Respondents that took less than 4 hours to resolve an issue had more tables (average 1,172) than those who took longer than 4 hours to resolve an issue (average 330).

"These results show teams having to make a lose-lose choice between spending too much time solving for data quality or suffering adverse consequences to their bottom line," said Barr Moses, CEO and co-founder of Monte Carlo. "In this economic climate, it's more urgent than ever for data leaders to turn this lose-lose into a win-win by leveraging data quality solutions that will lower BOTH the amount of time teams spend tackling data downtime and mitigating its consequences. As an industry, we need to prioritize data trust to optimize the potential of our data investments."

The survey revealed additional insights on the state of data quality management, including:

■ 50% of respondents reported data engineering is primarily responsible for data quality, compared to:
- 22% for data analysts
- 9% for software engineering
- 7% for data reliability engineering
- 6% for analytics engineering
- 5% for the data governance team
- 3% for non-technical business stakeholders

■ Respondents averaged 642 tables across their data lake, lakehouse, or warehouse environments.

■ Respondents reported having an average of 24 dbt models, and 41% reported having 25 or more dbt models.

■ Respondents averaged 290 manually written tests across their data pipelines (a minimal example of such a check follows this list).

■ The number one reason for launching a data quality initiative was that the data organization identified data quality as a need (28%), followed by a migration or modernization of the data platform or systems (23%).
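
For context on that figure of roughly 290 manually written tests per team: these are hand-coded checks along the lines of the minimal sketch below. The table and column names (orders, order_id, created_at) are hypothetical, invented for illustration rather than taken from the report.

    # A hand-written data quality check of the kind counted by the survey.
    # Table and column names are hypothetical.
    import sqlite3
    from datetime import datetime, timedelta

    def check_orders(conn: sqlite3.Connection) -> list[str]:
        failures = []

        # Completeness: the primary key should never be null.
        nulls = conn.execute(
            "SELECT COUNT(*) FROM orders WHERE order_id IS NULL"
        ).fetchone()[0]
        if nulls:
            failures.append(f"{nulls} rows with a null order_id")

        # Freshness: expect at least one row loaded within the last 24 hours.
        latest = conn.execute("SELECT MAX(created_at) FROM orders").fetchone()[0]
        if latest is None or datetime.fromisoformat(latest) < datetime.now() - timedelta(hours=24):
            failures.append(f"stale table, most recent created_at is {latest}")

        return failures

Checks like this catch only the failure modes someone thought to write down, which is the gap the automated, ML-driven approaches described below are meant to close.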

"Data testing remains data engineers' number one defense against data quality issues — and that's clearly not cutting it," said Lior Gavish, Monte Carlo CTO and Co-Founder. "Incidents fall through the cracks, stakeholders are the first to identify problems, and teams fall further behind. Leaning into more robust incident management processes and automated, ML-driven approaches like data observability is the future of data engineering at scale."

