
Challenges and Trends in Observability Adoption 2024

Dotan Horovits
Logz.io

Organizations recognize the value of observability, but only 10% of them are actually practicing full observability of their applications and infrastructure. This is among the key findings from the recently completed Logz.io 2024 Observability Pulse Survey and Report.

According to the survey, for the third year in a row, mean time to recovery (MTTR) has increased, taking over an hour for 82% of 2024 respondents (up from 74% in 2023, 64% in 2022, and 47% in 2021). Clearly, whatever organizations are doing is not enough to resolve their production issues or meet their service level objectives (SLOs) efficiently.


As previously mentioned, only 10% of organizations that recognize the value of observability are actually practicing full observability, which is a strikingly low number. Yet we found that 60% of teams that are increasing their focus on observability report improved and accelerated troubleshooting.

So why aren't more organizations prioritizing a strong observability strategy?

Challenges to Full Observability

One complicating factor is the increasing volume of tools and data. This adds to the complexity of a successful observability plan, but according to the survey, the biggest issue is the expertise of the people deploying it. Lack of knowledge on the team ranked as the top challenge, with the tech talent gap impacting 48% of survey respondents.

Not surprisingly, costs are a primary concern for organizations — 91% of respondents, in fact. As they move toward full observability of their systems, data volume is multiplying, especially for those running Kubernetes in production. Monitoring and troubleshooting their Kubernetes clusters was the top challenge for 40% of respondents deploying them.

Organizations are responding to this huge increase in data, and the expense that comes with it, by adapting their observability practices to keep costs down. Gaining better visibility into monitoring costs (52%) and optimizing the volume of monitoring data (37%) are tactics being used to reduce observability costs.
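As an illustration of the data-volume tactic, the sketch below shows head-based trace sampling with the OpenTelemetry Python SDK (OpenTelemetry comes up again in the trends below). It is a minimal example under stated assumptions: the 10% sampling ratio and the service name are illustrative, not figures or recommendations from the survey.

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.sdk.trace.sampling import ParentBased, TraceIdRatioBased

# Keep roughly 10% of root traces (children follow their parent's decision),
# cutting the volume of span data shipped to the backend.
sampler = ParentBased(root=TraceIdRatioBased(0.10))  # illustrative ratio

provider = TracerProvider(
    sampler=sampler,
    resource=Resource.create({"service.name": "checkout-service"}),  # hypothetical name
)
# ConsoleSpanExporter keeps the sketch self-contained; a real setup would
# export to a collector or observability backend instead.
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
```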

Trends in Observability

The survey revealed some noteworthy trends in the tools being used and the approach being taken to reduce MTTR.

Consolidating services appears to be on the rise, and simplifying environments could be a way to improve MTTR. With this strategy, 28% of organizations surveyed are embracing a shared model for observability and security monitoring, a 13% increase over last year.

The big news here, however, is that 87% of respondents said they are using some form of Platform Engineering model, with 10% saying it's in the works. With Platform Engineering, a single group enables observability for all involved teams. Platform Engineering is definitely a trend on the rise industry-wide.

Another trend revealed is the use of data pipeline analytics to address observability costs and complexity, noted by 75% of survey respondents. In terms of the tools being used, the majority of organizations currently use between one and five observability tools. OpenTelemetry adoption is increasing, with 76% of respondents using the open source project as a framework to help generate and capture telemetry data for their cloud-native software.
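To make concrete what generating and capturing telemetry data with OpenTelemetry can look like, here is a minimal manual-instrumentation sketch using the OpenTelemetry Python SDK. The tracer name, span name, and attribute are hypothetical, and a production setup would export spans to a collector or backend rather than the console.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer provider that batches spans and, for this sketch, prints them.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("orders.example")  # hypothetical instrumentation scope

def process_order(order_id: str) -> None:
    # Each unit of work becomes a span carrying searchable attributes.
    with tracer.start_as_current_span("process_order") as span:
        span.set_attribute("order.id", order_id)
        # ... business logic would go here ...

process_order("A-1001")
```

In practice, many teams lean on OpenTelemetry's auto-instrumentation libraries and add manual spans like this only around business-critical operations.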

Grafana and Prometheus were the top two open source systems chosen for observability, at 43% and 38% respectively. It's important to note, though, that in 2024, 21% of respondents said they have consolidated to a single tool, up from 16% last year. This is an interesting trend we're definitely keeping an eye on and are happy to be a part of.

As organizations continue to adopt cloud-native technologies and face growing complexity paired with skyrocketing costs, unified, business-centric observability is becoming a must-have strategy, not only for ensuring the smooth operation of their applications and infrastructure, but also for meeting the SLOs that impact the bottom line.

Methodology: This is our sixth year running this survey (previously named the DevOps Pulse Survey), in which we engaged with 500 respondents about their observability journey. Developers, DevOps engineers, IT professionals, and executives from around the globe chimed in to give us a glimpse into their organizations' observability efforts: the goals, the challenges, and the realities.

Dotan Horovits is Principal Developer Advocate at Logz.io
