
Maintaining Application Performance and Continuity in the Face of Natural Disasters

Stephen Pierzchala

Sadly, natural disasters often cause major devastation. They can expose a business to widespread power outages, transportation stoppages, and massive flooding, interrupting day-to-day physical operations and revenue streams. But recent advances in computing – specifically, the advent of Cloud computing – have made today’s data centers and the businesses they support much more resilient.

For example, if the recent Hurricane Sandy had any silver lining, it was this: even as data centers in the northeast took a beating, Cloud service providers and the overall Internet infrastructure remained solid. Compuware’s own Outage Analyzer indicated only a few scattered outages, and major service disruptions were avoided. As a result, many area businesses saw minimal disruption to critical business processes conducted online, including CRM, SCM, content management and accounting, with the worst effects limited to infrastructure and applications located in the hardest-hit areas of Manhattan.

The distributed nature of the Cloud made this possible by addressing the holy grail of business continuity — eliminating single points of failure. The ability to host data center assets off-premise in remote, distributed data centers can protect data and applications from a disaster, even if it’s a storm system spanning several hundred miles. When it comes to maintaining application performance (speed) and continuity in the face of a major natural disaster — or the constant day-to-day volatility of the Internet for that matter — here are three key takeaways:

1. Use the Cloud for Business Continuity

One of the most underappreciated use cases for the Cloud is business continuity. People often think of the Cloud as a way to save money and gain agility, but with its geographically dispersed networks, the Cloud is also built for back-up and recovery.

We expect that many businesses are going to start thinking more seriously about disaster recovery in the Cloud. Few businesses can afford to build on-premise the level of redundancy a Cloud solution provides, let alone make it accessible to so many people regardless of their location. If you have two feet of water in your data center, your servers and back-ups are likely gone; but if you are on one or more Cloud platforms, you can just drive to your local fast-food restaurant or library and be up and running.
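
To make this concrete, here is a minimal sketch of the pattern, assuming AWS S3 via the boto3 Python library; the bucket names, regions and backup path are hypothetical stand-ins for your own. Each backup file is written to two buckets in widely separated regions, so that no single storm system can reach both copies.

import boto3  # AWS SDK for Python; pip install boto3

# Hypothetical buckets in two widely separated regions, so a regional
# disaster cannot take out both the primary copy and the replica.
PRIMARY = {"region": "us-east-1", "bucket": "example-backups-east"}
REPLICA = {"region": "us-west-2", "bucket": "example-backups-west"}

def upload_backup(local_path, key):
    """Upload one backup file to both regions."""
    for target in (PRIMARY, REPLICA):
        s3 = boto3.client("s3", region_name=target["region"])
        s3.upload_file(local_path, target["bucket"], key)
        print(f"Stored {key} in {target['bucket']} ({target['region']})")

if __name__ == "__main__":
    # e.g. a nightly database dump produced by your existing backup job
    upload_backup("/var/backups/crm-nightly.sql.gz", "crm/crm-nightly.sql.gz")

In practice, most Cloud providers offer managed replication features that handle this automatically; the design point is simply that the second copy lives far away from the first.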

2. Make Sure Your Chosen Cloud Service Provider Can Perform at the Level You Expect

When you select a Cloud service provider, you should make sure they can support the level of application performance your business requires on a day-to-day basis. Many Cloud service providers offer availability guarantees, but all this means is that their servers are up and running — not necessarily that your application end users are having a fast, high-quality experience.
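
The distinction is easy to demonstrate: an endpoint can return HTTP 200, and so count as "available" against the guarantee, while still being painfully slow for end users. Here is a rough sketch that checks both, using Python's requests library against a hypothetical application URL and a threshold you would tune to your own targets:

import requests  # pip install requests

URL = "https://app.example.com/login"  # hypothetical application endpoint
SLOW_THRESHOLD = 2.0                   # seconds; tune to your own targets

resp = requests.get(URL, timeout=10)

# "Available" and "fast" are separate questions. An availability SLA
# only guarantees the first; your end users care about the second.
available = resp.status_code == 200
seconds = resp.elapsed.total_seconds()  # time from request to response headers
print(f"available={available} response_time={seconds:.2f}s fast={seconds < SLOW_THRESHOLD}")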

You should also expect your Cloud service provider to be able to seamlessly move your applications — even without your awareness — in the event of an impending localized disaster. Many Cloud service providers offer standard back-up and disaster recovery services that make continuous access to data and applications a non-issue for their clients.

The extent to which a Cloud service provider is responsible for your back-up and disaster recovery depends on how you are using the Cloud services. If you’re using Cloud services in a Software-as-a-Service (SaaS) business model — a mode of software delivery in which software and associated data are centrally hosted on the Cloud — the Cloud service provider bears responsibility for ensuring your apps are redundant.

On the other hand, if you’re using Cloud services in an Infrastructure-as-a-Service (IaaS) provision model — meaning you’re “renting” from the Cloud the equipment used to support operations, including storage, hardware, servers and networking components — responsibility for software management (including redundancy) remains with you.
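
Under IaaS, in other words, verifying that the redundancy actually exists falls to you. One simple check, sketched here assuming AWS EC2 via the boto3 library, is to count your running instances per availability zone; a fleet quietly concentrated in a single zone is a single point of failure waiting for a storm.

from collections import Counter

import boto3  # AWS SDK for Python; pip install boto3

# Count running EC2 instances per availability zone. If everything
# lands in one zone, that zone is a single point of failure.
ec2 = boto3.client("ec2", region_name="us-east-1")
pages = ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

zones = Counter()
for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            zones[instance["Placement"]["AvailabilityZone"]] += 1

for zone, count in zones.items():
    print(f"{zone}: {count} running instance(s)")
if len(zones) < 2:
    print("WARNING: all capacity sits in a single availability zone")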

3. Monitor Your Apps, 24x7

Even if you have the most reliable Cloud service provider in the world, there are still network and website components like CDNs, regional and local ISPs and third-party services that can degrade performance at the edge of the Internet. In fact, Compuware recently found that ad servers were the number one culprit when it comes to slowing or bringing down websites, choking the very sites from which they’re trying to generate revenue.

It doesn't take a natural disaster to create the first tear that rips apart other connections. Sometimes just one service getting hammered is all it takes to start a chain reaction that knocks your site off the web. Outages and slowdowns for network and website components can be completely random, and the truth is that the Internet has “little storms” like this all the time, caused by things as mundane as server failures, unplugged cables, backhoe-on-fiber collisions, and dragging fishing-boat anchors.
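
One way to find the weak link before it tears is to time each third-party dependency on its own rather than only the page as a whole. A rough sketch, with hypothetical CDN, ad-server and analytics URLs standing in for the ones your pages really call:

import requests  # pip install requests

# Hypothetical third-party dependencies your pages pull in.
THIRD_PARTIES = {
    "cdn": "https://cdn.example.net/lib.js",
    "ads": "https://ads.example.org/tag.js",
    "analytics": "https://stats.example.io/beacon.gif",
}

for name, url in THIRD_PARTIES.items():
    try:
        resp = requests.get(url, timeout=5)
        print(f"{name:10s} {resp.status_code} {resp.elapsed.total_seconds():.2f}s")
    except requests.RequestException as exc:
        # A hung ad server shows up here instead of hanging your page.
        print(f"{name:10s} FAILED ({exc})")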

This means you need to take responsibility for understanding your own end-user experiences. You must monitor all your applications 24x7, storm or no storm, whether you’re using the Cloud or not. You must understand where your single points of failure are and eliminate them. You never want to get into a spot where your application is failing you, and it’s your customers letting you know.
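
Run continuously, the same kind of check becomes a bare-bones synthetic monitor: poll your application from the outside on a schedule and raise an alert before your customers do. A minimal sketch, with the endpoint, interval, threshold and alert hook all placeholders for whatever your operation actually uses:

import time

import requests  # pip install requests

URL = "https://app.example.com/health"  # hypothetical health-check endpoint
INTERVAL = 60                           # seconds between checks
SLOW_THRESHOLD = 2.0                    # seconds

def alert(message):
    # Placeholder: wire this to your pager, email or chat system.
    print(f"ALERT: {message}")

while True:
    try:
        resp = requests.get(URL, timeout=10)
        seconds = resp.elapsed.total_seconds()
        if resp.status_code != 200:
            alert(f"{URL} returned {resp.status_code}")
        elif seconds > SLOW_THRESHOLD:
            alert(f"{URL} slow: {seconds:.2f}s")
    except requests.RequestException as exc:
        alert(f"{URL} unreachable: {exc}")
    time.sleep(INTERVAL)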

In summary, geographic location should never determine a business's vulnerability to lost applications and data. Today’s data centers are more virtual than ever, and that’s a major plus in the face of all types of network events — natural disasters and otherwise. To cost-effectively protect your business operations, consider using the Cloud for business continuity; make sure your Cloud service provider meets your day-to-day application performance requirements as well as your back-up and disaster recovery requirements; and realize you are ultimately responsible for managing the performance of all your own applications, around the clock.

Stephen Pierzchala, Technology Strategist, Compuware APM's Center of Excellence.

Related Links:

Compuware Technology Strategist Joins the Vendor Forum
