The headlines are filled with news of retail website failures and crashes – most recently with the launch of Obamacare and the continuing healthcare.gov crashes under high visitor load. Some of this attention is due to the media's insatiable appetite for bad news, and some of it is fueled by massive user dissatisfaction, but for the most part, websites are simply failing more often.
Load-driven performance issues aside, the causes of most failures are unavoidable. Malicious attacks are getting more sophisticated; natural disasters are taking out datacenters, as we saw with Hurricane Sandy. Perfection is unattainable, so human error will always be a factor, and as Yahoo learned, sometimes even a single squirrel can bring business to a halt.
Quite often, however, sites go down because organizations are not sufficiently prepared to manage the risks created by the complexity surrounding their sites. Most websites are intricate ecosystems of different services, tools and platforms. More players than ever are involved in creating a rich, engaging and profitable experience.
Operations must worry not only about the health of the infrastructure and applications they own and manage, but also about those of their vendors, their vendors' vendors and so on. Just one broken component in a website's delivery chain can take down the entire service: the classic single point of failure (SPoF).
So with all of this in mind, companies need to accept that failure will happen and plan for it, to minimize its negative impact on business and brand. As Benjamin Franklin once said, "By failing to prepare, you are preparing to fail." With a plan in place, you can even get creative, as the New York Times did when it took to social media to keep pushing the news after its site went down in August.
Prevention and Readiness
So, how to plan?
1. Identify every situation that can make your business fail - Dig through every part of your infrastructure and applications, and identify who your vendors are and how they impact your service.
2. Monitor every aspect of your site's availability on a regular basis – Keep an eye on your partners’ servers to truly understand the availability of your site.
3. Do capacity testing on all of your servers - Test load balancers, front end, back end, edge servers, vendors – everything.
4. Design your strategy for each case of failure - Ensure you have a capacity plan for the worst case scenario and build it into your release cycle. A capacity plan is especially important before an event or promotion when you expect a lot of traffic to come to your site. Smart companies will stagger promotions to prevent drastic spikes in traffic.
As a backup plan, keep a lightweight site ready and on hand if your business requires 100 percent uptime. Even if it's simply a bunch of Apache servers hosted in the cloud, have one ready. No third parties, no personalization: keep it bare-bones so it can be switched on during any type of downtime.
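The monitoring step above can be sketched as a small script that probes both your own pages and your vendors' endpoints. This is a minimal illustration, not a production monitor; the URLs in the list are hypothetical placeholders for your pages and third-party dependencies.

```python
import urllib.request
import urllib.error

# Hypothetical list: your own pages plus the third-party hosts they depend on.
ENDPOINTS = [
    "https://www.example.com/",
    "https://cdn.example-vendor.com/health",  # hypothetical vendor endpoint
]

def is_healthy(status):
    """A probe counts as healthy only if it answered with a non-error status."""
    return status is not None and status < 400

def probe(url, timeout=5):
    """Fetch one URL and return its HTTP status, or None if unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except (urllib.error.URLError, OSError):
        return None

def run_checks(urls=ENDPOINTS):
    """Probe every endpoint and print an OK/DOWN line for each."""
    for url in urls:
        status = probe(url)
        print(f"{'OK  ' if is_healthy(status) else 'DOWN'} {url} (status={status})")
```

Run on a schedule (cron, or a monitoring service), the same idea gives you an ongoing picture of whether a vendor outage is quietly degrading your site.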
Creative Response to Failures
When you do fail, make it fun and give what could be a frustrated user a chuckle. A lighthearted error page leaves a happier memory of your site, even for visitors who couldn't access it, and improves the chance they'll return.
A good error page is like a good airport bar. You are still stuck at the airport, but at least you are enjoying yourself.
Recovery
If you do experience a site crash:
1. Offer some incentive for your customers to come back and revisit the site once it's back up - Offer a "failure discount" to keep a customer from immediately going to a competing site to purchase the power drill they originally intended to buy from you.
2. Collect data during the outage - Monitor and understand what is going on to determine the root cause and analyze the events leading up to the downtime.
3. Ask questions - Have we experienced this before? Was my infrastructure at fault? Could this have been avoided? Understanding the failure allows you to adjust your disaster plans accordingly.
4. Share your post-mortem analysis both internally and externally - Let everyone learn what you learned; sharing knowledge is the best way to make the web better, stronger and faster for everyone.