10 Key Takeaways from the 2022 Observability Forecast
September 14, 2022

Ishan Mukherjee
New Relic


Earlier this year, New Relic conducted an extensive survey of IT practitioners and decision-makers to understand the current state of observability: the ability to measure how a system is performing and identify issues and errors based on its external outputs. The company surveyed 1,614 IT professionals across 14 countries in North America, Europe and the Asia Pacific region. The findings of the 2022 Observability Forecast offer a detailed view of how this practice is shaping engineering and the technologies of the future.


Here are 10 key takeaways from the forecast:

1. Observability improves service-level metrics. Organizations see its value and plan to invest more

Respondents to the New Relic survey plan aggressive observability deployments, with 72% planning to maintain or increase their observability budgets over the next year. More than half expect their budgets to increase, while 20% expect to maintain current spending levels.

2. Most organizations will have robust observability practices in place by 2025

The 2022 Observability Forecast identified 17 observability capabilities that comprise a mature practice. Nearly all respondents expected to deploy key capabilities like network monitoring, security monitoring and log management by 2025, and the majority expected to have 88–97% of the 17 capabilities in place by then. Yet just 3% of respondents maintain all 17 capabilities today.

3. Observability is now a board-level imperative

Tech executives have recognized the value of observability: 73% of respondents reported that their C-suite executives are supporters of observability, and 78% saw observability as a key enabler for achieving core business goals. Furthermore, of those with mature observability practices by the report's definition, 100% indicated that observability improves revenue retention by deepening their understanding of customer behaviors.

4. For many organizations, large sections of tech stacks are still not being fully observed or monitored

Despite the overall enthusiasm for observability and the fact that most organizations are practicing some form of observability, only 27% of respondents' organizations have achieved full-stack observability as defined in the report. The overall lack of adoption of full-stack observability signals that many organizations have an opportunity to make rapid improvements to their observability practices over the next year.

5. Organizations must tackle fragmentation of data, tools and teams

Many organizations use a patchwork of tools to monitor their technology stacks, requiring extensive manual effort only to gain a fragmented view of IT systems. More than 80% of respondents used four or more observability tools, and a third of respondents had to detect outages manually or from complaints. Just 7% of respondents said their telemetry data is unified in one place, and only 5% had a mature observability practice. Recognizing the challenges of fragmentation, respondents reported the need for simplicity, integration, seamlessness, and more efficient ways to complete high-value projects.

6. Telemetry data is often siloed

Siloed and fragmented data lead to a painful user experience, yet slightly more than half (51%) of respondents still have siloed data in their tech stacks. Of those with entirely siloed data, 77% said they would prefer a single, consolidated platform. Those who struggle most to juggle data across silos long for more simplicity in their observability solutions.

7. There is a strong correlation between full-stack observability and faster mean time to detection (MTTD)

Respondents from organizations that have achieved full-stack observability, as well as those who have already prioritized full-stack observability, were more likely than others to experience the fastest mean time to detect an outage — less than five minutes. The data supports a strong correlation between achieving or prioritizing full-stack observability and a range of performance benefits, including fewer outages, improved outage detection rates, and improved resolution.
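For reference, mean time to detect is simply the average gap between when an outage begins and when the team first learns of it; the report's fastest cohort keeps that gap under five minutes. Here is a minimal sketch of the calculation, with hypothetical incident timestamps and field names (none of this code comes from the report):

```python
from datetime import datetime, timedelta

# Hypothetical incident log: when each outage actually began vs. when the
# team first detected it (via monitoring, alerts, or a customer complaint).
incidents = [
    {"started_at": datetime(2022, 6, 1, 9, 0),   "detected_at": datetime(2022, 6, 1, 9, 3)},
    {"started_at": datetime(2022, 6, 8, 14, 30), "detected_at": datetime(2022, 6, 8, 14, 52)},
    {"started_at": datetime(2022, 7, 2, 2, 15),  "detected_at": datetime(2022, 7, 2, 2, 19)},
]

def mean_time_to_detect(records) -> timedelta:
    """Average gap between outage start and detection across incidents."""
    gaps = [r["detected_at"] - r["started_at"] for r in records]
    return sum(gaps, timedelta()) / len(gaps)

mttd = mean_time_to_detect(incidents)
print(f"MTTD: {mttd.total_seconds() / 60:.1f} minutes")
```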

8. One of the biggest roadblocks to achieving observability is a failure to understand the benefits

The 2022 Observability Forecast asked respondents to name the biggest challenges preventing full-stack observability. The top responses were a lack of understanding of the benefits of observability and the belief that current IT performance is adequate (28% for each). Other leading roadblocks were a lack of budget (27%) and too many monitoring tools (25%).

9. Despite that, IT professionals recognize the bottom-line benefits of observability

Survey respondents named a wide range of observability benefits. These include improved uptime and reliability (cited by 36% of respondents), increased operational efficiency (35%), proactive detection of issues (33%) and an improved customer experience (33%). Respondents also said that observability improves the lives of engineers and developers, with 34% saying it helped to increase productivity and 32% crediting observability for supporting cross-team collaboration.

10. Organizations expect to need observability for AI, IoT and key business applications

C-suite executives see observability playing a major role in the development of new technologies. More than half of respondents said they would need observability most for artificial intelligence (AI) applications, while 48% mentioned the Internet of Things, 38% cited edge computing, and 36% highlighted blockchain applications. The need for observability in AI spanned industries, with a majority of respondents in services/consulting (62%), energy/utilities (60%), government (58%) and IT/telco (51%) citing it for their AI projects.

Ishan Mukherjee is SVP of Marketing at New Relic