Sadly, natural disasters often cause major devastation. They can expose a business to widespread power outages, transportation stoppages, and massive flooding, interrupting day-to-day physical operations and revenue streams. But recent advances in computing – specifically, the advent of Cloud computing – have made today’s data centers and the businesses they support much more resilient.
For example, if the recent Hurricane Sandy had any silver lining, it was this: even as data centers in the northeast took a beating, Cloud service providers and the overall Internet infrastructure remained solid. Compuware’s own Outage Analyzer indicated only a few scattered outages, and major service disruptions were avoided. As a result, many area businesses saw minimal disruption to critical business processes conducted online, including CRM, SCM, content management and accounting, with the worst effects limited to infrastructure and applications located in the hardest-hit areas of Manhattan.
The distributed nature of the Cloud made this possible by addressing the holy grail of business continuity — eliminating single points of failure. The ability to host data center assets off-premise in remote, distributed data centers can protect data and applications from a disaster, even if it’s a storm system spanning several hundred miles. When it comes to maintaining application performance (speed) and continuity in the face of a major natural disaster — or the constant day-to-day volatility of the Internet for that matter — here are three key takeaways:
1. Use the Cloud for Business Continuity
One of the most overlooked use cases for the Cloud is business continuity. People often think of the Cloud as a way to save money and gain agility, but the Cloud is also built for back-up and recovery, with geographically dispersed networks.
We expect that many businesses are going to start thinking more seriously about disaster recovery in the Cloud. Few businesses could afford to build on-premise the level of redundancy a Cloud solution provides, let alone make it accessible to so many people regardless of their location. If you have two feet of water in your data center, your servers and backups are likely gone; but if you are on one or more Cloud platforms, you can just drive to your local fast-food restaurant or library and be up and running.
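As a concrete illustration of the kind of geographic redundancy the Cloud makes affordable, here is a minimal sketch: a nightly backup copied to object storage in two separate regions, so a single regional disaster cannot destroy both copies. The bucket names, regions, and the choice of boto3/S3 are illustrative assumptions, not a prescription.

```python
# Minimal sketch: copy a nightly backup to object storage in two
# geographically separate regions, so a regional outage cannot take
# out both copies. Bucket names and regions are hypothetical.
import boto3

BACKUP_FILE = "nightly-backup.tar.gz"
TARGETS = [
    ("us-east-1", "example-backups-east"),   # primary region
    ("us-west-2", "example-backups-west"),   # geographically separate copy
]

def replicate_backup(path: str) -> None:
    for region, bucket in TARGETS:
        s3 = boto3.client("s3", region_name=region)
        s3.upload_file(path, bucket, path)   # same key in each bucket
        print(f"uploaded {path} to {bucket} ({region})")

if __name__ == "__main__":
    replicate_backup(BACKUP_FILE)
```

In practice a Cloud provider's built-in cross-region replication can do this for you; the point is simply that two copies in two regions is a few lines of automation rather than a second physical data center.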
2. Make Sure Your Chosen Cloud Service Provider Can Perform at the Level You Expect
When you select a Cloud service provider, you should make sure they can support the level of application performance your business requires on a day-to-day basis. Many Cloud service providers offer availability guarantees, but all this means is that their servers are up and running — not necessarily that your application end users are having a fast, high-quality experience.
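To see that difference in practice, here is a minimal sketch of measuring what the end user actually experiences rather than what an availability SLA reports: time the full request from the client side and flag responses that are technically "up" but too slow. The URL and threshold are hypothetical placeholders.

```python
# Minimal sketch: an availability SLA only says the server answered.
# This times the full request from the client side and flags responses
# that are up but too slow for end users. URL and threshold are
# hypothetical placeholders.
import time
import requests

URL = "https://app.example.com/login"
SLOW_THRESHOLD_SECONDS = 2.0

def check_response_time(url: str) -> None:
    start = time.monotonic()
    response = requests.get(url, timeout=10)
    elapsed = time.monotonic() - start

    if response.ok and elapsed <= SLOW_THRESHOLD_SECONDS:
        print(f"OK: {url} answered in {elapsed:.2f}s")
    elif response.ok:
        print(f"SLOW: {url} is up but took {elapsed:.2f}s")  # SLA met, users unhappy
    else:
        print(f"ERROR: {url} returned HTTP {response.status_code}")

if __name__ == "__main__":
    check_response_time(URL)
```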
You should also expect your Cloud service provider to be able to seamlessly move your applications — even without your awareness — in the event of an impending localized disaster. Many Cloud service providers offer standard back-up and disaster recovery services that make continuous access to data and applications a non-issue for their clients.
The extent to which a Cloud service provider is responsible for your back-up and disaster recovery depends on how you are using the Cloud services. If you’re using Cloud services in a Software-as-a-Service (SaaS) business model — a mode of software delivery in which software and associated data are centrally hosted on the Cloud — the Cloud service provider bears responsibility for ensuring your apps are redundant.
On the other hand, if you’re using Cloud services in an Infrastructure-as-a-Service (IaaS) provision model — meaning you’re “renting” from the Cloud the equipment used to support operations, including storage, hardware, servers and networking components — responsibility for software management (including redundancy) remains with you.
3. Monitor Your Apps, 24x7
Even if you have the most reliable Cloud service provider in the world, there are still network and website components like CDNs, regional and local ISPs and third-party services that can degrade performance at the edge of the Internet. In fact, Compuware recently found that ad servers were the number one culprit when it comes to slowing or bringing down websites, choking the very sites from which they’re trying to generate revenue.
It doesn't take a natural disaster to create the first tear that rips apart other connections. Sometimes just one service getting hammered is all it takes to start a chain reaction that knocks your site off the web. Outages and slowdowns for network and website components can be completely random, and the truth is that the Internet has “little storms” like this all the time, caused by things as mundane as server failures, unplugged cables, backhoe-on-fiber collisions, and dragging fishing-boat anchors.
This means you need to take responsibility for understanding your own end-user experiences. You must monitor all your applications 24x7, storm or no storm, whether you’re using the Cloud or not. You must understand where your single points of failure are and eliminate them. You never want to be in a spot where your application is failing and it’s your customers who let you know.
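As a starting point, here is a minimal sketch of a 24x7 synthetic check: poll the application on a fixed interval and raise an alert after several consecutive failures, so you hear about a problem before your customers do. The URL, polling interval, and alert hook are hypothetical placeholders; a production monitor would check from multiple locations and feed a real incident system.

```python
# Minimal sketch of a continuous synthetic check: poll the application on a
# fixed interval, 24x7, and raise an alert after repeated failures so you
# hear about problems before customers do. URL, interval, and the alert
# hook are hypothetical placeholders.
import time
import requests

URL = "https://app.example.com/health"
INTERVAL_SECONDS = 60
FAILURES_BEFORE_ALERT = 3

def send_alert(message: str) -> None:
    # Placeholder: wire this into email, SMS, or your incident tooling.
    print(f"ALERT: {message}")

def monitor(url: str) -> None:
    consecutive_failures = 0
    while True:
        try:
            healthy = requests.get(url, timeout=10).ok
        except requests.RequestException:
            healthy = False

        consecutive_failures = 0 if healthy else consecutive_failures + 1
        if consecutive_failures == FAILURES_BEFORE_ALERT:
            send_alert(f"{url} failed {FAILURES_BEFORE_ALERT} checks in a row")

        time.sleep(INTERVAL_SECONDS)

if __name__ == "__main__":
    monitor(URL)
```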
In summary, geographic location should never determine a business’s vulnerability to lost applications and data. Today’s data centers are more virtual than ever, and that’s a major plus in the face of all types of network events — natural disasters and otherwise. To cost-effectively protect your business operations, consider using the Cloud for business continuity; make sure your Cloud service provider meets your day-to-day application performance requirements as well as your back-up and disaster recovery requirements; and realize you are ultimately responsible for managing the performance of all your own applications, around the clock.
Stephen Pierzchala, Technology Strategist, Compuware APM's Center of Excellence.