
8 Takeaways on the State of Observability for Energy and Utilities

Peter Pezaris
New Relic

In June, New Relic published the State of Observability for Energy and Utilities Report to share insights, analysis, and data on the impact of full-stack observability software in energy and utilities organizations' service capabilities.

Image source: National Grid

Here are eight key takeaways from the report:

1. Outages Cost Energy and Utilities Companies More than Any Other Industry

The report found that high-impact outages affect energy and utilities more than any other industry, with 40% experiencing outages at least once per week compared to 32% across all other industries surveyed. Consequently, the median annual downtime for energy and utility organizations was 37 hours, and 61% of respondents reported a mean time to resolution (MTTR) of at least 30 minutes. Each second during an outage comes with a price tag. More than half of energy and utilities organizations (52%) shared that critical business app outages cost at least $500,000 per hour, and 34% indicated that outages cost at least $1 million per hour.

2. Observability Increases Productivity

Since adopting observability solutions, energy and utilities companies have experienced substantial productivity improvements. Of those surveyed, 78% said their MTTR has somewhat improved. Further, organizations with full-stack observability noted even more significant MTTR progress, with 87% reporting improvements.

3. Increased Focus on Security, Governance, Risk, and Compliance is Driving Observability Adoption

For energy and utility organizations, the top technology trend driving the need for observability was an increased focus on security, governance, risk, and compliance (44%), followed by the adoption of Internet of Things (IoT) technologies (36%) and customer experience management (36%).

4. Observability Tooling Deployment is on the Rise

Organizations are prioritizing investment in observability tooling, including security monitoring (68%), network monitoring (66%), and infrastructure monitoring (60%). Notably, energy and utility organizations reported high levels of deployment for AIOps (AI for IT operations) capabilities, including anomaly detection, incident intelligence, and root cause analysis (55%). In fact, by mid-2026, 89% of respondents plan to have deployed AIOps.

5. Energy and Utilities Companies are More Likely to Use Multiple Monitoring Tools

Energy and utilities organizations showed a higher tendency than average to utilize multiple monitoring tools across the 17 observability capabilities included in the study. In fact, three-fourths (75%) of respondents used four or more tools for observability, and 24% used eight or more tools. However, over the next year, 36% indicated that their organization is likely to consolidate tools.

6. Organizations are Maximizing the Value of Observability Spend

Out of all industries surveyed, energy and utilities organizations indicated the highest annual observability spend, with more than two-thirds (68%) spending at least $500,000 and 46% spending at least $1 million per year on observability tooling. In turn, organizations plan to maximize the return on investment (ROI) of observability spending in the next year by training staff on how best to use their observability tools (48%), optimizing their engineering team size (42%), and consolidating tools (36%). Energy and utility companies also reported receiving a significantly higher total annual value from observability than average, with 76% receiving more than $500,000 per year from their observability investment, 66% receiving $1 million or more, and 41% attaining $5 million or more per year in total value. Taken together, the reported annual spending and annual value received reflect a median ROI of 192%, or nearly 3x.
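To see how the spend and value figures relate, here is a minimal sketch of the standard ROI arithmetic. The dollar amounts below are hypothetical round numbers chosen to illustrate the math, not figures from the report:

```python
def observability_roi(annual_value: float, annual_spend: float) -> float:
    """Return ROI as a percentage: (value received - spend) / spend * 100."""
    return (annual_value - annual_spend) / annual_spend * 100

# A 192% ROI means value received is nearly 3x spend:
spend = 1_000_000   # hypothetical median annual observability spend
value = 2_920_000   # hypothetical median annual value received (2.92x spend)
print(f"{observability_roi(value, spend):.0f}%")  # prints "192%"
```

In other words, an organization recovering 2.92x what it spends has an ROI of 192%, which is why the report can describe the same ratio as "nearly 3x."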

7. Observability Increases Business Value

Energy and utilities companies reported that observability delivers value in several ways. Half of IT decision-makers (ITDMs) said that observability helps establish a technology strategy, and 46% said it enables data visualization from a single dashboard. Practitioners indicated that observability increases productivity so they can detect and resolve issues faster (43%) and reduces guesswork when managing complicated, distributed tech stacks (35%). Respondents also noted benefits enabled by observability, including increased operational efficiency (39%), improved system uptime and reliability (35%), security vulnerability management (35%), and improved real-user experience (29%). Ultimately, organizations concluded that observability provides numerous positive business outcomes, including improving collaboration across teams to make decisions related to the software stack (42%), creating revenue-generating use cases (35%), and quantifying the business impact of events and incidents with telemetry data (33%).

8. The Future is Bright for Observability Tooling Deployment

Energy and utilities companies are enthusiastic about their observability deployment plans over the next one to three years. By mid-2026, 99% of respondents expect to have deployed several monitoring tools, including security monitoring, database monitoring, and network monitoring, followed by 96% of organizations anticipating alerts and application performance monitoring.

Methodology: New Relic's annual observability forecast offers insights into how observability influences organizations and their decision-makers. To gauge the current observability landscape, professionals from various industries and regions were surveyed. Among the 1,700 technology practitioners and decision-makers surveyed, 132 were associated with the energy and utilities sectors.

Peter Pezaris is Chief Design and Strategy Officer at New Relic

