10 Key Takeaways from the 2022 Observability Forecast

Ishan Mukherjee
New Relic

Earlier this year, New Relic conducted an extensive survey of IT practitioners and decision-makers to understand the current state of observability: the ability to measure how a system is performing and identify issues and errors based on its external outputs. The company surveyed 1,614 IT professionals across 14 countries in North America, Europe and the Asia Pacific region. The findings of the 2022 Observability Forecast offer a detailed view of how this practice is shaping engineering and the technologies of the future.
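The definition above — inferring a system's health from its external outputs — can be made concrete with a small sketch. The service name, route, and log format below are illustrative assumptions, not anything from the report; the point is simply that the service emits telemetry (here, a latency measurement in a structured log line) that an observability tool could collect and analyze:

```python
import logging
import time

# Minimal sketch of "external outputs": the service emits a structured
# log line with a latency metric. The service name and fields are
# hypothetical examples, not part of the 2022 Observability Forecast.
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("checkout-service")

def handle_request():
    start = time.perf_counter()
    # ... real request handling would happen here ...
    elapsed_ms = (time.perf_counter() - start) * 1000
    log.info("request handled route=/checkout status=200 latency_ms=%.2f",
             elapsed_ms)
    return elapsed_ms

handle_request()
```

A log pipeline, metrics backend, or tracing system would ingest outputs like this one; observability is the practice of making such signals rich enough to diagnose issues without inspecting the system's internals directly.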


Here are 10 key takeaways from the forecast:

1. Observability improves service-level metrics. Organizations see its value and plan to invest more

Respondents to the New Relic survey plan aggressive observability deployments: 72% expect to maintain or increase their observability budgets over the next year. More than half expect their budgets to grow, while 20% expect to maintain current spending levels.

2. Most organizations will have robust observability practices in place by 2025

The 2022 Observability Forecast identified 17 observability capabilities that constitute a mature practice. Nearly all respondents expected to deploy key capabilities like network monitoring, security monitoring and log management by 2025. A majority expected to have 88–97% of the 17 capabilities deployed by then, yet just 3% of respondents maintain all 17 capabilities today.

3. Observability is now a board-level imperative

Tech executives have recognized the value and importance of observability: 73% of respondents reported that their C-suite executives support observability, and 78% saw observability as a key enabler for achieving core business goals. Furthermore, of those who had mature observability practices by the report's definition, 100% indicated that observability improves revenue retention by deepening their understanding of customer behaviors.

4. For many organizations, large sections of tech stacks are still not being fully observed or monitored

Despite the overall enthusiasm for observability and the fact that most organizations are practicing some form of observability, only 27% of respondents' organizations have achieved full-stack observability as defined in the report. The overall lack of adoption of full-stack observability signals that many organizations have an opportunity to make rapid improvements to their observability practices over the next year.

5. Organizations must tackle fragmentation of data, tools and teams

Many organizations use a patchwork of tools to monitor their technology stacks, requiring extensive manual effort only to gain a fragmented view of IT systems. More than 80% of respondents used four or more observability tools, and a third of respondents had to detect outages manually or from complaints. Just 7% of respondents said their telemetry data is unified in one place, and only 5% had a mature observability practice. Recognizing the challenges of fragmentation, respondents reported the need for simplicity, integration, seamlessness, and more efficient ways to complete high-value projects.

6. Telemetry data is often siloed

Siloed and fragmented data lead to a painful user experience, but slightly more than half (51%) of respondents still have siloed data in their tech stacks. Of those who have entirely siloed data, 77% stated they would prefer a single, consolidated platform. Those who struggle the most to juggle data across different silos long for more simplicity in their observability solutions.

7. There is a strong correlation between full-stack observability and faster mean time to detection (MTTD)

Respondents from organizations that have achieved full-stack observability, as well as those who have already prioritized full-stack observability, were more likely than others to experience the fastest mean time to detect an outage — less than five minutes. The data supports a strong correlation between achieving or prioritizing full-stack observability and a range of performance benefits, including fewer outages, improved outage detection rates, and improved resolution.
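The MTTD metric behind this finding is simple arithmetic: the average gap between when an outage begins and when it is detected. The sketch below computes it from a hypothetical list of incident timestamp pairs (the incident data is invented for illustration, not drawn from the report):

```python
from datetime import datetime, timedelta

def mean_time_to_detect(incidents):
    """Average gap between outage start and detection.

    `incidents` is a list of (started_at, detected_at) datetime pairs --
    a hypothetical record shape, not a format defined in the report.
    """
    gaps = [detected - started for started, detected in incidents]
    return sum(gaps, timedelta()) / len(gaps)

# Two invented incidents: detected after 3 and 7 minutes respectively.
incidents = [
    (datetime(2022, 5, 1, 9, 0), datetime(2022, 5, 1, 9, 3)),
    (datetime(2022, 5, 2, 14, 0), datetime(2022, 5, 2, 14, 7)),
]

print(mean_time_to_detect(incidents))  # prints "0:05:00"
```

A five-minute MTTD like the one above sits exactly at the threshold the report associates with full-stack observability; teams tracking this metric typically pull the timestamp pairs from their incident-management records rather than hand-entered data.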

8. One of the biggest roadblocks to achieving observability is a failure to understand the benefits

The 2022 Observability Forecast asked respondents to name the biggest challenges preventing full-stack observability. The top responses were a lack of understanding of the benefits of observability and the belief that current IT performance is adequate (28% for each). Other leading roadblocks were a lack of budget (27%) and too many monitoring tools (25%).

9. Despite that, IT professionals recognize the bottom-line benefits of observability

Survey respondents named a wide range of observability benefits. These include improved uptime and reliability (cited by 36% of respondents), increased operational efficiency (35%), proactive detection of issues (33%) and an improved customer experience (33%). Respondents also said that observability improves the lives of engineers and developers, with 34% saying it helped to increase productivity and 32% crediting observability for supporting cross-team collaboration.

10. Organizations expect to need observability for AI, IoT and key business applications

C-suite executives see observability playing a major role in the development of new technologies. More than half of respondents said they would need observability most for artificial intelligence (AI) applications, while 48% mentioned the Internet of Things, 38% cited edge computing, and 36% highlighted blockchain applications. Observability in AI was mentioned across industries, with a majority of respondents in services/consulting (62%), energy/utilities (60%), government (58%) and IT/telco (51%) mentioning the need for observability in their AI projects.

Ishan Mukherjee is SVP of Growth at New Relic.

