Building Digital Resiliency in a Hyper-Connected World

Eric Johnson
PagerDuty

Operational resilience is an organization's ability to predict, respond to, and prevent unplanned work to drive reliable customer experiences and protect revenue. This doesn't just apply to downtime; it also covers service degradation due to latency or other factors. But make no mistake — when things go sideways, the bottom line and the customer are impacted.

According to a survey conducted by PagerDuty, resilience was ranked as one of the top three operations priorities by IT and business leaders across industries. But while organizations must protect against operational failures, many nevertheless struggle to identify and respond to service disruptions before they impact customers, reputation and revenue. High-profile incidents like the July 19 global IT outage offer an opportunity for organizations to learn from mistakes — either their own or those of others.


How to Build More Resilient Systems

Organizations must reduce the frequency and severity of major incidents to improve business continuity and digital operations resiliency. This ensures always-on access to mission-critical services to meet customer expectations, compliance requirements and SLAs.

Think automation-first

The influx of data, along with the increase in noise and incidents, means that humans simply can't keep up with the sheer amount of information coming in. Not only that, but responding to each and every problem leaves room for error and takes subject matter expert (SME) time away from other, more critical work, to say nothing of the cost of the time spent resolving them. The same survey of over 500 IT leaders estimated the true cost of downtime at $4,537 per minute, so when you consider that the average resolution takes 175 minutes, each customer-impacting digital incident can cost nearly $800k. It all adds up to a giant waste of resources and exacerbates customer impact.
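The arithmetic behind that figure is straightforward; using the survey's two estimates, the per-incident cost works out as follows:

```python
# Estimated cost of a single customer-impacting incident,
# using the two figures from the survey cited above.
COST_PER_MINUTE = 4537        # USD per minute of downtime (survey estimate)
AVG_RESOLUTION_MINUTES = 175  # average time to resolve (survey estimate)

incident_cost = COST_PER_MINUTE * AVG_RESOLUTION_MINUTES
print(f"Estimated cost per incident: ${incident_cost:,}")  # $793,975 — roughly $800k
```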

With automation as the first line of defense, organizations can let machines enrich and normalize data, run diagnostics, remediate issues and coordinate response efforts before responders are even alerted to the issue. This preserves human capacity and makes systems more resilient against human error, thus minimizing customer disruption, reputational risk, and revenue loss from operational failures.
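The automation-first flow described above can be sketched in a few lines. Everything here is illustrative — the alert fields, the diagnostic check and the remediation step are hypothetical stand-ins for whatever your monitoring and runbook tooling actually provides — but the shape of the pipeline (enrich, diagnose, remediate, then escalate) is the point:

```python
# A minimal sketch of "automation as the first line of defense".
# Field names, checks and remediation actions are hypothetical examples.

def handle_alert(alert: dict) -> str:
    # 1. Enrich and normalize: every alert carries the same fields
    #    regardless of which monitoring source emitted it.
    alert.setdefault("service", "unknown")
    alert["severity"] = alert.get("severity", "warning").lower()

    # 2. Run diagnostics: cheap automated checks before paging a human.
    diagnostics = {"disk_full": alert.get("disk_pct", 0) >= 95}

    # 3. Attempt remediation for known failure modes
    #    (e.g. trigger a log-rotation runbook job).
    if diagnostics["disk_full"]:
        return "auto-remediated"

    # 4. Only now escalate to an on-call responder, with the enriched
    #    context and diagnostic results already attached.
    return "escalated"

print(handle_alert({"service": "checkout", "disk_pct": 97}))   # auto-remediated
print(handle_alert({"service": "checkout", "severity": "CRITICAL"}))  # escalated
```

Note that a human only enters the loop at step 4, which is what preserves SME capacity for the incidents that genuinely need judgment.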

Make it people-centric

Resilience also relies on the humans who power these technical systems. In cases where automation can't resolve problems without intervention, it's important to have processes in place that support teams doing their best work under challenging circumstances, with as little disruption as possible to both them and the customer. Consider all the processes that go into ensuring that systems stay up and available. From on-call rotations to how postmortems are conducted and fixes prioritized, the people involved should feel that the processes help them become more efficient and proactive.

Keeping the humans — internal stakeholders, technical teams, customer support agents, leadership and customers — in the loop and informed with timely and critical updates regarding incidents is key.

Our CEO also offers this human-centric gem: Instead of looking for who to blame, look for the learning opportunity in the form of "blameless postmortems," and you'll get better outcomes. Turning every incident into an opportunity for everyone to learn and continuously improve will build more resilient operations.

Leverage the power of AI/ML assistance

Resilience is, in part, about speed. We can't predict everything that could go wrong, but we know that somewhere, sometime, something will. And fixing a broken system to restore a reliable customer experience is time-sensitive: every minute of downtime has a cost to the business.

Organizations need to leverage AI and ML to assist technical teams in triaging, communicating and reporting problems faster. With the right information at responders' fingertips:

Teams can bring incidents to resolution faster. By being able to query the data, they get a much richer, more detailed understanding of what's happening in a fraction of the time, so they can get to the work of resolution quickly.

Teams are able to communicate with less time and toil required. These tools can act as a "first drafter" for communications, postmortems, automation runbooks and more, helping teams use their capacity for more value-add work.

Teams can create post-incident reviews more easily, ensuring that the system hardens over time. These tools help incorporate learnings into future response strategies — standardizing incident management, streamlining operations and automating key workflows to scale smoothly and reduce cognitive load.

Teams can have confidence in their compliance with emerging digital resiliency requirements. With the rise in critical digital infrastructure incidents, organizations can expect regulatory and compliance constraints to tighten. The EU is already there, with DORA set to become law in early 2025, and similar efforts are ramping up globally. AI and ML tools can help build out documentation of actions taken during an incident to create auditable records for compliance purposes later.
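The auditable record-keeping in that last point can be sketched as an append-only, timestamped log of actions taken during an incident. The class and field names below are illustrative, not a real product API; the idea is simply that every action — automated or human — lands in one timeline that can back a postmortem or a compliance audit later:

```python
# A hedged sketch of auditable incident record-keeping: an append-only,
# timestamped action log. Field names here are illustrative assumptions.

from datetime import datetime, timezone

class IncidentLog:
    def __init__(self, incident_id: str):
        self.incident_id = incident_id
        self.entries: list[dict] = []

    def record(self, actor: str, action: str) -> None:
        # Timestamps are captured in UTC so entries from different
        # responders and tools line up in a single timeline.
        self.entries.append({
            "incident": self.incident_id,
            "time": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
        })

log = IncidentLog("INC-1234")
log.record("automation", "restarted payment-service pods")
log.record("alice", "declared major incident")
for entry in log.entries:
    print(entry["time"], entry["actor"], entry["action"])
```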

Resilience Is a Modern "Must Have"

In today's hyper-connected world, and with cyber threats growing more frequent and sophisticated, building digital resilience is as critical as the electricity running our systems. Organizations that adopt a proactive approach — focusing on both technical resilience and empowering their teams — will be better equipped to navigate challenges before they impact the customer experience.

Ultimately, resilience is an ongoing journey. By learning from past incidents and continuously improving systems and processes, companies can not only prevent failures but turn challenges into opportunities for growth and innovation. With the right mix of technology and human expertise, businesses can stay ahead of disruptions and build a future where digital operations are as robust as they are adaptable.

Eric Johnson is Chief Information Officer at PagerDuty
