
Maintaining Application Performance and Continuity in the Face of Natural Disasters

Stephen Pierzchala

Natural disasters often cause major devastation, exposing businesses to widespread power outages, transportation stoppages, and massive flooding that interrupt day-to-day physical operations and revenue streams. But recent advances in computing – specifically, the advent of Cloud computing – have made today’s data centers and the businesses they support much more resilient.

For example, if the recent Hurricane Sandy had any silver lining, it was this: even as data centers in the northeast took a beating, Cloud service providers and the overall Internet infrastructure remained solid. Compuware’s own Outage Analyzer indicated only a few scattered outages, and major service disruptions were avoided. As a result, many area businesses saw minimal disruption to critical business processes conducted online, including CRM, SCM, content management and accounting, with the worst effects limited to infrastructure and applications located in the worst-hit areas of Manhattan.

The distributed nature of the Cloud made this possible by addressing the holy grail of business continuity — eliminating single points of failure. The ability to host data center assets off-premise in remote, distributed data centers can protect data and applications from a disaster, even if it’s a storm system spanning several hundred miles. When it comes to maintaining application performance (speed) and continuity in the face of a major natural disaster — or the constant day-to-day volatility of the Internet for that matter — here are three key takeaways:

1. Use the Cloud for Business Continuity

One of the most overlooked use cases for the Cloud is business continuity. People often think of the Cloud as a way to save money and gain agility, but its geographically dispersed networks also make it well suited to back-up and recovery.

We expect many businesses to start thinking more seriously about disaster recovery in the Cloud. Few businesses can afford to build on-premise the level of redundancy a Cloud solution provides, let alone make it accessible to so many people regardless of their location. If you have two feet of water in your data center, your servers and backups are likely gone; but if you are on one or more Cloud platforms, you can just drive to your local fast-food restaurant or library and be up and running.
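To make this concrete, here is a minimal sketch in Python of copying a nightly database dump off-premise to a Cloud object store in a distant region, using the boto3 library for Amazon S3. The bucket name, region and file path are hypothetical placeholders, not details from this article.

import boto3

# Hypothetical local dump and destination; in practice these would come from
# your backup tooling and your disaster-recovery plan.
BACKUP_FILE = "/backups/orders-db-nightly.sql.gz"
BUCKET = "example-corp-dr-backups"      # hypothetical bucket in a distant region
REGION = "us-west-2"                    # deliberately far from the primary data center

# Copy the backup off-site so a flooded data center cannot take it down with it.
s3 = boto3.client("s3", region_name=REGION)
s3.upload_file(BACKUP_FILE, BUCKET, "nightly/orders-db-nightly.sql.gz")
print("Backup copied off-premise to", REGION)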

2. Make Sure Your Chosen Cloud Service Provider Can Perform at the Level You Expect

When you select a Cloud service provider, you should make sure they can support the level of application performance your business requires on a day-to-day basis. Many Cloud service providers offer availability guarantees, but all this means is that their servers are up and running — not necessarily that your application end users are having a fast, high-quality experience.
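One practical way to check this is to measure how long a representative transaction actually takes, rather than whether the server merely responds. The sketch below, in Python with the requests library, times a single page load against a hypothetical two-second target; the URL is a placeholder, not a real endpoint.

import requests

URL = "https://app.example.com/login"   # hypothetical application endpoint
TARGET_SECONDS = 2.0                    # hypothetical performance target

resp = requests.get(URL, timeout=10)
elapsed = resp.elapsed.total_seconds()

# "Available" is not the same as "fast": flag anything slower than the target.
if resp.status_code == 200 and elapsed <= TARGET_SECONDS:
    print(f"OK: responded in {elapsed:.2f}s")
else:
    print(f"DEGRADED: status {resp.status_code}, {elapsed:.2f}s")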

You should also expect your Cloud service provider to be able to seamlessly move your applications — even without your awareness — in the event of an impending localized disaster. Many Cloud service providers offer standard back-up and disaster recovery services that make continuous access to data and applications for their clients a non-issue.

The extent to which a Cloud service provider is responsible for your back-up and disaster recovery depends on how you are using the Cloud services. If you’re using Cloud services in a Software-as-a-Service (SaaS) business model — a mode of software delivery in which software and associated data are centrally hosted on the Cloud — the Cloud service provider bears responsibility for ensuring your apps are redundant.

On the other hand, if you’re using Cloud services in an Infrastructure-as-a-Service (IaaS) provision model — meaning you’re “renting” from the Cloud the equipment used to support operations, including storage, hardware, servers and networking components — responsibility for software management (including redundancy) remains with you.
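As a rough illustration of that split, the sketch below shows what providing your own redundancy under IaaS might look like: launching application servers in two separate regions yourself, using Python and boto3. The region names, image IDs and instance type are hypothetical placeholders.

import boto3

# Hypothetical images for each region; under IaaS, running in more than one
# region is your decision and your responsibility, not the provider's.
REDUNDANT_REGIONS = {
    "us-east-1": "ami-11111111",
    "us-west-2": "ami-22222222",
}

for region, image_id in REDUNDANT_REGIONS.items():
    ec2 = boto3.client("ec2", region_name=region)
    ec2.run_instances(ImageId=image_id, InstanceType="t2.micro",
                      MinCount=1, MaxCount=1)
    print("Launched a redundant application server in", region)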

3. Monitor Your Apps, 24x7

Even if you have the most reliable Cloud service provider in the world, there are still network and website components like CDNs, regional and local ISPs and third-party services that can degrade performance at the edge of the Internet. In fact, Compuware recently found that ad servers were the number one culprit when it comes to slowing or bringing down websites, choking the very sites from which they’re trying to generate revenue.

It doesn't take a natural disaster to create the first tear that rips apart other connections. Sometimes just one service getting hammered is all it takes to start a chain reaction that knocks your site off the web. Outages and slow-downs in network and website components can be completely random, and the truth is that the Internet has “little storms” like this all the time, caused by things as mundane as server failures, unplugged cables, backhoe-on-fiber collisions, and dragging fishing-boat anchors.

This means you need to take responsibility for understanding your own end-user experience. You must monitor all your applications 24x7, storm or no storm, whether you’re using the Cloud or not. You must understand where your single points of failure are and eliminate them. You never want to be in a position where your application is failing and it’s your customers who let you know.
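A minimal sketch of such a round-the-clock check might look like the following, in Python with the requests library. The URLs and the five-minute interval are hypothetical examples; a real deployment would feed results into an alerting system rather than print them.

import time
import requests

# Check the application itself plus the third-party components (CDN assets,
# ad servers, analytics tags) that can drag it down. All URLs are hypothetical.
CHECKS = {
    "application": "https://app.example.com/health",
    "cdn asset":   "https://cdn.example.net/static/app.js",
    "ad server":   "https://ads.example-partner.com/ping",
}

def run_checks():
    for name, url in CHECKS.items():
        try:
            resp = requests.get(url, timeout=5)
            print(f"{name}: {resp.status_code} in {resp.elapsed.total_seconds():.2f}s")
        except requests.RequestException as exc:
            print(f"{name}: FAILED ({exc}) -- a potential single point of failure")

while True:          # storm or no storm
    run_checks()
    time.sleep(300)  # every five minutes, 24x7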

In summary, geographic location should never determine a business’s vulnerability to lost applications and data. Today’s data centers are more virtual than ever, and that’s a major plus in the face of all types of network events — natural disasters and otherwise. To cost-effectively protect your business operations, consider using the Cloud for business continuity; make sure your Cloud service provider meets your day-to-day application performance requirements as well as your back-up and disaster recovery requirements; and realize that you are ultimately responsible for managing the performance of all your applications, around the clock.

Stephen Pierzchala, Technology Strategist, Compuware APM's Center of Excellence.

Related Links:

Compuware Technology Strategist Joins the Vendor Forum

