
8 Takeaways on the State of Observability for Energy and Utilities

Peter Pezaris
New Relic

In June, New Relic published the State of Observability for Energy and Utilities Report to share insights, analysis, and data on the impact of full-stack observability software on energy and utilities organizations' service capabilities.

  

[Image source: National Grid]

Here are eight key takeaways from the report:

1. Outages Cost Energy and Utilities Companies More than Any Other Industry

The report found that high-impact outages affect energy and utilities more than any other industry, with 40% experiencing outages at least once per week compared to 32% across all other industries surveyed. The median annual downtime for energy and utilities organizations was 37 hours, and 61% of respondents reported that their mean time to resolve (MTTR) outages is at least 30 minutes. Every second of an outage comes with a price tag: more than half of energy and utilities organizations (52%) said that critical business app outages cost at least $500,000 per hour, and 34% indicated that outages cost at least $1 million per hour.
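
To put those figures in perspective, the short sketch below multiplies the report's median annual downtime by the $500,000-per-hour cost floor cited by 52% of respondents. The result is a rough, back-of-the-envelope illustration, not a number from the report.

```python
# Back-of-the-envelope estimate combining two figures from the report:
# 37 hours of median annual downtime and the $500,000-per-hour cost floor.
# The product is illustrative only; actual costs vary by organization.

median_annual_downtime_hours = 37
outage_cost_per_hour_usd = 500_000

estimated_annual_outage_cost = median_annual_downtime_hours * outage_cost_per_hour_usd
print(f"Illustrative annual outage cost: ${estimated_annual_outage_cost:,}")
# Illustrative annual outage cost: $18,500,000
```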

2. Observability Increases Productivity

Since adopting observability solutions, energy and utilities companies have seen substantial productivity gains. Of those surveyed, 78% said their MTTR has improved to some degree. Organizations with full-stack observability reported even stronger MTTR progress, with 87% noting improvements.

3. Increased Focus on Security, Governance, Risk, and Compliance is Driving Observability Adoption

For energy and utility organizations, the top technology trend driving the need for observability was an increased focus on security, governance, risk, and compliance (44%), followed by the adoption of Internet of Things (IoT) technologies (36%) and customer experience management (36%).

4. Observability Tooling Deployment is on the Rise

Organizations are prioritizing investment in observability tooling, including security monitoring (68%), network monitoring (66%), and infrastructure monitoring (60%). Notably, energy and utility organizations reported high levels of deployment for AIOps (AI for IT operations) capabilities, including anomaly detection, incident intelligence, and root cause analysis (55%). By mid-2026, 89% of respondents plan to have deployed AIOps.

5. Energy and Utilities Companies are More Likely to Use Multiple Monitoring Tools

Energy and utilities organizations showed a higher tendency than average to utilize multiple monitoring tools across the 17 observability capabilities included in the study. In fact, three-fourths (75%) of respondents used four or more tools for observability, and 24% used eight or more tools. However, over the next year, 36% indicated that their organization is likely to consolidate tools.

6. Organizations are Maximizing the Value of Observability Spend

Out of all industries surveyed, energy and utilities organizations reported the highest annual observability spend, with more than two-thirds (68%) spending at least $500,000 and 46% spending at least $1 million per year on observability tooling. To maximize the return on investment (ROI) from that spending over the next year, organizations plan to train staff on how best to use their observability tools (48%), optimize their engineering team size (42%), and consolidate tools (36%). Energy and utilities companies also reported receiving significantly higher total annual value from observability than average: 76% said they receive more than $500,000 per year from their observability investment, 66% cited $1 million or more, and 41% reported $5 million or more per year in total value. Comparing reported annual spend with annual value received works out to a median ROI of roughly 192%, or nearly a 3x return.
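
To see how a 192% ROI maps to "nearly 3x," the sketch below applies the standard ROI formula to hypothetical spend and value figures chosen only so the result matches the reported median; the dollar amounts are assumptions, not data from the report.

```python
# ROI = (value received - amount spent) / amount spent
# The dollar figures below are hypothetical, picked so the result lands on
# the report's median ROI of 192% (i.e., value is roughly 2.9x spend).

annual_spend_usd = 1_000_000
annual_value_usd = 2_920_000

roi = (annual_value_usd - annual_spend_usd) / annual_spend_usd
multiple = annual_value_usd / annual_spend_usd

print(f"ROI: {roi:.0%}")                    # ROI: 192%
print(f"Return multiple: {multiple:.1f}x")  # Return multiple: 2.9x
```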

7. Observability Increases Business Value

Energy and utilities companies reported that observability improves their lives in several ways. Half of IT decision-makers (ITDMs) said observability helps establish a technology strategy, and 46% said it enables data visualization from a single dashboard. Practitioners indicated that observability increases productivity by helping them detect and resolve issues faster (43%) and reduces guesswork when managing complicated, distributed tech stacks (35%). Respondents also cited benefits enabled by observability, including increased operational efficiency (39%), improved system uptime and reliability (35%), security vulnerability management (35%), and improved real-user experience (29%). Ultimately, organizations concluded that observability delivers numerous positive business outcomes, including improved collaboration across teams on decisions related to the software stack (42%), new revenue-generating use cases (35%), and the ability to quantify the business impact of events and incidents with telemetry data (33%).

8. The Future is Bright for Observability Tooling Deployment

Energy and utilities companies are enthusiastic about their observability deployment plans over the next one to three years. By mid-2026, 99% of respondents expect to have deployed several monitoring capabilities, including security monitoring, database monitoring, and network monitoring, and 96% expect to have deployed alerts and application performance monitoring.

Methodology: New Relic's annual observability forecast offers insights into how observability influences organizations and their decision-makers. To gauge the current observability landscape, professionals from various industries and regions were surveyed. Among the 1,700 technology practitioners and decision-makers surveyed, 132 were associated with the energy and utilities sectors.

Peter Pezaris is Chief Design and Strategy Officer at New Relic.
