Observability Maturity Brings Higher Productivity, Code Quality and End-User Satisfaction

More than half (61%) of respondents reported that their teams are practicing observability, an 8% increase from 2020, signaling that overall adoption is on the rise, according to a 2021 survey from Honeycomb with over 400 responses from across multiple industries and organization sizes.

However, the majority of respondents indicated their teams are at the earliest stages of observability maturity.

Key findings include:

Observability is gaining traction

61% of respondents reported that their teams are currently practicing observability, an increase of 8% from last year. The increase is driven largely by individual teams (up 7%) rather than entire organizations (up only 1%).

Mature teams realize more benefits

Teams on the higher end of the maturity spectrum realize more benefits than their less-mature counterparts. Teams that are mature in their observability practice realize even more impactful business outcomes, including deploying more frequently, being able to find bugs more quickly before and after pushing to production, and reduced burnout.

Mature teams deliver higher customer satisfaction

More-mature teams are also 3X more likely to deliver higher customer satisfaction. Teams that have achieved Intermediate or Advanced-level maturity reported that their end-user customers are "Always Satisfied" with their service quality and capabilities at three times the rate of teams that do not practice observability.

Lack of implementation skills is a barrier

Lack of implementation skills is a disproportionate barrier to observability adoption. While interest in observability has gained significant momentum, organizations at the earliest stages of observability maturity report lack of implementation skills as the second-largest hurdle to adoption, indicating a need for more training options. Across all respondents, the primary hurdle was competition with other initiatives.

The Honeycomb maturity model outlines a progression of five distinct stages ranging from "Planning" or "Novice" (with limited observability capability and processes) to "Advanced" (with comprehensive processes). The highlights of this year's report indicate that:

■ 10% of those surveyed reported a combination of practices and tooling that reflects a highly observable system, placing them in the "Advanced" and "Intermediate" groups. These two groups highly prioritize observability: 50% practice observability across the organization and 43% on a team-by-team basis. Respondents also reported high public cloud use, and most work at large enterprises (57%) and in the tech industry (46%).

■ 37% of survey respondents fall into the "Novice" group. This group is more likely to self-report that they are practicing observability because they are using tools like logs, metrics, and traces. However, they do not report having the key capabilities associated with practicing observability, such as a comprehensive understanding of their systems. This suggests that respondents in this group may be collecting the data needed for observability without yet fully adopting its tools and practices.

■ One in four teams is at the "Planning" stage, the very beginning of the observability journey, and is starting to practice on a team-by-team basis. In this group, approximately one in five respondents do not currently practice or use observability tooling but have plans to do so within the next year.
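The distinction the "Novice" finding draws, between collecting logs, metrics, and traces and actually practicing observability, often comes down to whether telemetry carries enough context to answer new questions. As a rough illustration only (the field names, service name, and values below are hypothetical, not from the report or any specific tool), a context-rich structured event might look like this:

```python
import json
import time

def emit_request_event(route: str, user_id: str, status: int,
                       duration_ms: float, **extra) -> str:
    """Emit one wide, context-rich event per request as a JSON line.

    All fields here are illustrative; real teams attach whatever
    high-cardinality context lets them slice by user, flag, region, etc.
    """
    event = {
        "timestamp": time.time(),
        "service": "checkout",    # assumed service name for the sketch
        "route": route,
        "user_id": user_id,
        "status": status,
        "duration_ms": duration_ms,
        **extra,                  # arbitrary extra context fields
    }
    line = json.dumps(event)
    print(line)                   # in practice: ship to an observability backend
    return line

emit_request_event("/cart/checkout", "u-4821", 500, 173.4,
                   feature_flag="new_pricing", region="eu-west-1")
```

A plain log line or a pre-aggregated metric discards most of this context; keeping it per-event is what lets a team ask, after the fact, "are the 500s confined to one feature flag in one region?"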

The research verifies that teams on the higher end of the maturity spectrum are more likely to have:

■ Code that is well understood, well maintained, and has fewer bugs than average.

■ The ability to follow predictable release cycles because they confidently address issues that arise.

■ An understanding of the end-to-end performance of their systems and of what technical debt is costing their organization.

■ The ability to visualize context-rich events that allow efficient, focused, and actionable on-call processes.

■ The ability to prioritize responsiveness to user behavior and feedback.

■ Completely automated or mostly automated releases, resulting in reduced toil.

■ The ability to set and measure service level objectives, resulting in better alignment between engineering and business goals.

"This year, we're seeing that teams focused on building up their observability capabilities are identifying problems faster and producing better business outcomes," said Christine Yen, CEO and co-founder of Honeycomb. "Our observability maturity model can be used as a roadmap for anyone to see how organizations across the industry are approaching a fundamentally new way of understanding their production services. Teams can understand what's working, what's not, and how early investments in observability adoption are creating meaningful business impacts, so that they can achieve similar results."

Methodology: The 2021 Observability Maturity Community Research Findings study was conducted by ClearPath Strategies, an independent strategic consulting and public opinion research firm.
