
8 Takeaways on the State of Observability for Energy and Utilities

Peter Pezaris
New Relic

In June, New Relic published the State of Observability for Energy and Utilities Report to share insights, analysis, and data on the impact of full-stack observability software on the service capabilities of energy and utilities organizations.

Source: National Grid

Here are eight key takeaways from the report:

1. Outages Cost Energy and Utilities Companies More than Any Other Industry

The report found that high-impact outages affect energy and utilities more than any other industry, with 40% experiencing outages at least once per week compared to 32% across all other industries surveyed. Consequently, the median annual downtime for energy and utility organizations was 37 hours, and 61% of respondents reported that their mean time to resolve (MTTR) outages is at least 30 minutes. Each second during an outage comes with a price tag: more than half of energy and utilities organizations (52%) shared that critical business app outages cost at least $500,000 per hour, and 34% indicated that outages cost at least $1 million per hour.
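To make the MTTR and cost-per-hour figures concrete, here is a minimal sketch of how these metrics are typically computed from incident records. The incident timestamps and durations below are illustrative assumptions, not data from the report; only the $500,000-per-hour cost floor comes from the survey.

```python
# Sketch: computing MTTR and estimated outage cost from incident records.
# The incidents below are illustrative, not from the report.
from datetime import datetime, timedelta

incidents = [
    (datetime(2024, 6, 3, 9, 0), datetime(2024, 6, 3, 9, 45)),      # 45 min
    (datetime(2024, 6, 10, 14, 0), datetime(2024, 6, 10, 14, 20)),  # 20 min
    (datetime(2024, 6, 17, 2, 0), datetime(2024, 6, 17, 3, 10)),    # 70 min
]

durations = [end - start for start, end in incidents]
total_downtime = sum(durations, timedelta())
mttr = total_downtime / len(incidents)  # mean time to resolve

COST_PER_HOUR = 500_000  # the hourly cost floor cited by 52% of respondents
estimated_cost = total_downtime.total_seconds() / 3600 * COST_PER_HOUR

print(f"MTTR: {mttr}")                            # 0:45:00
print(f"Total downtime: {total_downtime}")        # 2:15:00
print(f"Estimated cost: ${estimated_cost:,.0f}")  # $1,125,000
```

Even three short incidents at the report's lower-bound cost figure add up quickly, which is why shaving minutes off MTTR translates directly into dollars.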

2. Observability Increases Productivity

Since adopting observability solutions, energy and utilities companies have experienced substantial productivity improvements. Of those surveyed, 78% said their MTTR has somewhat improved. Further, organizations with full-stack observability noted even more significant MTTR progress, with 87% reporting improvements.

3. Increased Focus on Security, Governance, Risk, and Compliance is Driving Observability Adoption

For energy and utility organizations, the top technology trend driving the need for observability was an increased focus on security, governance, risk, and compliance (44%), followed by the adoption of Internet of Things (IoT) technologies (36%) and customer experience management (36%).

4. Observability Tooling Deployment is on the Rise

Organizations are prioritizing investment in observability tooling, including security monitoring (68%), network monitoring (66%), and infrastructure monitoring (60%). Notably, energy and utility organizations reported high levels of deployment for AIOps (AI for IT operations) capabilities, including anomaly detection, incident intelligence, and root cause analysis (55%). In fact, by mid-2026, 89% of respondents plan to have deployed AIOps.
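As a rough illustration of the anomaly-detection capability mentioned above, the sketch below flags metric samples that deviate sharply from a rolling baseline. This is a generic z-score approach, not New Relic's implementation; the latency values and threshold are invented for the example.

```python
# Minimal sketch of the kind of anomaly detection an AIOps capability
# performs: flag samples far from a rolling baseline (z-score method).
# Metric values and threshold are illustrative assumptions.
from statistics import mean, stdev

def anomalies(samples, window=5, z_threshold=3.0):
    """Return indices whose value deviates more than z_threshold
    standard deviations from the mean of the preceding `window` samples."""
    flagged = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(samples[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

latency_ms = [100, 102, 98, 101, 99, 100, 103, 450, 101, 99]
print(anomalies(latency_ms))  # [7] -- the 450 ms spike is flagged
```

Production AIOps systems layer seasonality models, incident correlation, and root cause analysis on top of detection like this, but the core idea is the same: learn a baseline and alert on deviation.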

5. Energy and Utilities Companies are More Likely to Use Multiple Monitoring Tools

Energy and utilities organizations showed a higher tendency than average to utilize multiple monitoring tools across the 17 observability capabilities included in the study. In fact, three-fourths (75%) of respondents used four or more tools for observability, and 24% used eight or more tools. However, over the next year, 36% indicated that their organization is likely to consolidate tools.

6. Organizations are Maximizing the Value of Observability Spend

Out of all industries surveyed, energy and utilities organizations indicated the highest annual observability spend, with more than two-thirds (68%) spending at least $500,000 and 46% spending at least $1 million per year on observability tooling. In turn, organizations are planning to maximize the return on investment (ROI) on observability spending in the next year by training staff on how best to use their observability tools (48%), optimizing their engineering team size (42%), and consolidating tools (36%). Energy and utility companies stated that their organizations receive a significantly higher total annual value from observability than average, with 76% reporting more than $500,000 per year from their observability investment, 66% stating $1 million or more, and 41% attaining $5 million or more per year in total value. The reported annual spending and annual value received reflect a median ROI of 192%, or nearly a 3x return.
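The relationship between the "nearly 3x" return and the 192% median ROI follows from the standard ROI formula, ROI = (value − spend) / spend. The median figures below are illustrative assumptions chosen to match the report's 192% result, not numbers disclosed in the report.

```python
# Checking the ROI arithmetic: ROI = (value received - spend) / spend.
# The medians below are assumed for illustration, consistent with the
# ranges in the report; they are not figures from the report itself.
spend = 500_000    # illustrative median annual observability spend
value = 1_460_000  # illustrative median annual value received

roi = (value - spend) / spend
multiple = value / spend

print(f"ROI: {roi:.0%}")                  # 192%
print(f"Return multiple: {multiple:.1f}x")  # 2.9x
```

In other words, a 192% ROI means every dollar spent returns roughly $2.92 in value, which the article rounds to "nearly 3x".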

7. Observability Increases Business Value

Energy and utilities companies reported that observability benefits them in several ways. Half of IT decision-makers (ITDMs) said that observability helps establish a technology strategy, and 46% said it enables data visualization from a single dashboard. Practitioners indicated that observability increases productivity so they can detect and resolve issues faster (43%) and reduces guesswork when managing complicated and distributed tech stacks (35%). Respondents also noted benefits enabled by observability, including increased operational efficiency (39%), improved system uptime and reliability (35%), security vulnerability management (35%), and improved real-user experience (29%). Ultimately, organizations concluded that observability provides numerous positive business outcomes, including improving collaboration across teams to make decisions related to the software stack (42%), creating revenue-generating use cases (35%), and quantifying the business impact of events and incidents with telemetry data (33%).

8. The Future is Bright for Observability Tooling Deployment

Energy and utilities companies are enthusiastic about their observability deployment plans over the next one to three years. By mid-2026, 99% of respondents expect to have deployed several monitoring tools, including security monitoring, database monitoring, and network monitoring, and 96% anticipate having deployed alerting and application performance monitoring.

Methodology

New Relic's annual observability forecast offers insights into how observability influences organizations and their decision-makers. To gauge the current observability landscape, professionals from various industries and regions were surveyed. Among the 1,700 technology practitioners and decision-makers surveyed, 132 were associated with the energy and utilities sectors.

Peter Pezaris is Chief Design and Strategy Officer at New Relic
