
What it Takes for Today's Organizations to Achieve Operational Resilience

Sean Sebring
SolarWinds

Over the past year, I have spent a good amount of time thinking about operational resilience. I asked myself: What does it mean? Why is it so important, especially now?

My colleagues and I define operational resilience as the ability to identify, anticipate, and mitigate risks to help prevent future issues while accelerating responsiveness to ongoing disruptions when they do occur. It is achieved by understanding the different parts of the business and how they interact across teams, workflows, and tools, while driving a culture of intentional learning and adaptation.

Adequately preventing and responding to disruptions has never been more important — or more possible. The growing ubiquity of AI has introduced more automated workstreams and increased productivity, while simultaneously creating a greater need for better data management. As customer expectations increasingly align with always-on services, the ability to prevent and recover from disruptions has direct ties to a business's bottom line.

Recent data from the SolarWinds IT Trends Report 2025, which surveyed more than 600 IT leaders and professionals, suggests nine in 10 IT teams believe they're resilient. However, a closer inspection of the data indicates a more complex reality. Many organizations still have room to improve their operational resilience and prepare for an AI-driven, data-intensive future.

The Complex Reality of Today's IT Teams

While these organizations consider themselves resilient, survey respondents pointed to a lack of confidence in their ability to handle certain core IT functions. For example, only 26% of IT leaders were confident they could sufficiently handle bring-your-own-device practices. Less than half of IT leaders felt confident they could manage increasing user expectations (36%), support artificial intelligence (38%), and manage remote and distributed workforces (45%). A little more than half, 52%, felt they could sufficiently deal with cyberthreats.

An operationally resilient organization must be able to handle these functions. For example, if employees or third-party contractors bring their own devices onto your network, your IT systems need proper security policies to help ensure those parties aren't introducing malicious content or data. If today's organizations can't adequately implement and support the use of AI, they run the risk of shadow AI use or competitive disadvantage in their respective markets.

Speaking of competitive disadvantages, the report also highlighted how sub-par operational resilience can lead to reputational harm. More than one quarter (28%) of IT leaders said service outages can cause brand damage. A hit to public image can have cascading effects, causing consumers to take their business elsewhere and leading to both short-term and long-term revenue loss.

Why Organizations Are Facing Gaps in Their IT Operations

When facing issues with an IT environment, the most natural — and even logical — step is to expand IT capabilities. However, IT leaders in the report said their issues weren't solely technology-based. In fact, for some teams, tools are the least important issue. More IT leaders cited workflows (51%) and the size of their teams (36%) as the biggest hindrances to exercising operational resilience during disruption. This is a great reminder that, although a system disruption may begin as a technology issue, the resilience necessary to respond is neither technology-only nor technology-first.

Organizations struggle not only to practice operational resilience but also to measure it. According to the survey, 3 in 10 IT teams spend half their time resolving critical issues. The only way to reduce that number is to know how long it takes to detect, resolve, and recover from an incident.

Many teams view incident management and response times as a strong way to measure IT performance. This often translates into the use of MTTx metrics: mean time to detect, mean time to acknowledge, and mean time to resolve.

Almost half of the respondents (45%) said they didn't use MTTx, for reasons ranging from a lack of awareness to difficulty measuring accurately to a preference for alternative metrics. Regardless, consistently tracked and steadily improving MTTx is a strong measure of operational resilience.
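MTTx values can be derived directly from incident timestamps. The sketch below is a minimal, hypothetical example; the incident records and field names are illustrative, not any particular product's schema.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: when the issue occurred, and when it was
# detected, acknowledged, and resolved. Field names are illustrative.
incidents = [
    {"occurred": datetime(2025, 1, 5, 9, 0),
     "detected": datetime(2025, 1, 5, 9, 4),
     "acknowledged": datetime(2025, 1, 5, 9, 10),
     "resolved": datetime(2025, 1, 5, 10, 0)},
    {"occurred": datetime(2025, 1, 12, 14, 0),
     "detected": datetime(2025, 1, 12, 14, 2),
     "acknowledged": datetime(2025, 1, 12, 14, 20),
     "resolved": datetime(2025, 1, 12, 16, 0)},
]

def mean_minutes(start_key: str, end_key: str) -> float:
    """Average elapsed minutes between two incident timestamps."""
    return mean(
        (i[end_key] - i[start_key]).total_seconds() / 60 for i in incidents
    )

mttd = mean_minutes("occurred", "detected")      # mean time to detect
mtta = mean_minutes("detected", "acknowledged")  # mean time to acknowledge
mttr = mean_minutes("occurred", "resolved")      # mean time to resolve
print(f"MTTD={mttd:.0f}m MTTA={mtta:.0f}m MTTR={mttr:.0f}m")
```

Tracking these three averages over time, rather than as one-off numbers, is what turns MTTx into a resilience trend rather than a snapshot.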

Improving Operational Resilience

To take operational resilience from insufficient to excellent, organizations must build their IT frameworks on solid relationships, streamlined processes, and comprehensive tooling.

A focus on relationships should extend to both technology and teams. IT leaders can look to comprehensive observability software to see how each IT asset, piece of data, and login credential relates to the others. This helps leaders create a map describing the causes and effects within a system if a disruption occurs. As with tooling, it's also important to map relationships between team members. When you understand the relationships between teams and technology, you can discern which assets and workflows are most important and which require the highest priority.
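One way to build such a cause-and-effect map is to record which assets depend on which, then invert those edges to compute the downstream impact of any single failure. The following is a minimal sketch; the asset names and dependency edges are invented for illustration.

```python
from collections import defaultdict, deque

# Hypothetical dependency edges: "X depends on Y" means a disruption
# in Y can affect X. Asset names are illustrative.
depends_on = {
    "checkout-app": ["payments-api", "auth-service"],
    "payments-api": ["postgres-primary"],
    "auth-service": ["postgres-primary", "ldap"],
    "reporting": ["postgres-replica"],
}

# Invert the edges so we can ask: if this asset fails, what is impacted?
impacts = defaultdict(set)
for asset, deps in depends_on.items():
    for dep in deps:
        impacts[dep].add(asset)

def blast_radius(failed: str) -> set:
    """All assets transitively affected by a failure in `failed`."""
    seen, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for downstream in impacts.get(node, ()):
            if downstream not in seen:
                seen.add(downstream)
                queue.append(downstream)
    return seen

print(blast_radius("postgres-primary"))
```

In this invented topology, a failure in the primary database would surface the payments API, the auth service, and the checkout app as affected, which is exactly the prioritization signal a disruption map should provide.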

Once you outline relationships, you can begin delving into processes. A good way to figure out what's working and what isn't is to survey IT team members. They can best describe areas with communication problems, incompatible working styles, or a lack of necessary expertise. You may find you need to move team members around, or you may decide teamwork is strong but could benefit from better tooling.

If tooling is part of the solution, it's important to meet with leadership to implement technology that is helpful, addresses team needs, and aligns with business goals. For example, if you have an IT team that has historically suffered from alert fatigue and disjointed incident management, the team may benefit from tooling that centralizes incident response and helps isolate and identify the most critical issues. This creates focus and streamlined processes that can enhance teamwide operational resilience.
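Centralizing incident response often starts with deduplicating repeated alerts and ranking what remains so the most critical issues surface first. The sketch below illustrates the idea; the alert fields and severity scale are hypothetical.

```python
from collections import Counter

# Hypothetical alert stream; fields and severity scale are illustrative.
alerts = [
    {"source": "db-01", "signal": "disk_full", "severity": 3},
    {"source": "db-01", "signal": "disk_full", "severity": 3},
    {"source": "web-02", "signal": "high_latency", "severity": 2},
    {"source": "db-01", "signal": "disk_full", "severity": 3},
    {"source": "cache-01", "signal": "evictions", "severity": 1},
]

# Collapse repeated alerts into one entry per (source, signal) pair,
# then rank by severity first and firing frequency second.
counts = Counter((a["source"], a["signal"]) for a in alerts)
severity = {(a["source"], a["signal"]): a["severity"] for a in alerts}

ranked = sorted(counts, key=lambda k: (severity[k], counts[k]), reverse=True)
for source, signal in ranked:
    key = (source, signal)
    print(f"{source}/{signal}: severity={severity[key]} count={counts[key]}")
```

Five raw alerts collapse into three ranked issues, with the repeated high-severity database alert at the top. That is the focusing effect centralized tooling aims for.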

When organizations improve their tools, teams, and processes, they can create a culture of operational resilience that breaks down silos and responds efficiently in the face of disruption.

Sean Sebring is Solutions Engineering Manager at SolarWinds

