The recent outage of the University of Cambridge website hosting Stephen Hawking's doctoral thesis is a prime example of what happens when niche websites become exposed to mainstream levels of traffic.
The widespread fame of the author as one of the figureheads of science generated a level of interest the university's web team was not prepared to handle, resulting in a familiar story: Website goes live; minutes or hours later, it crashes due to the large influx of traffic.
While it is obvious that the University of Cambridge didn't expect the level of traffic they saw, there are steps organizations and enterprises of all sizes can take to prevent this kind of digital downtime.
On Oct. 23, Hawking's Ph.D. thesis went live, but by Oct. 24, the website had crashed. The release of the paper was timed to coincide with Open Access Week 2017, a worldwide event aimed at promoting free and open access to scholarly research. Though the research was made available through the university, within 24 hours of its release, no one could access it.
According to a Cambridge spokesperson, the website received nearly 60,000 download requests in less than 24 hours, taking the page down, slowing response times, and leaving content inaccessible to users.
While this could be the first time a doctoral thesis attracted such widespread interest, this kind of problem, caused by overloaded networks, has unfolded before. In this case, it seems that the sudden increase in the number of visitors saturated the infrastructure that hosts and delivers this research. This happens when the amount of processing power required to determine what the searcher is looking for and where to send it exceeds the ability of the machines (routers, switches and servers) on the network to respond.
Organizations like Cambridge University often have limited processing power on their networks because they build and run their own data centers, which reduces their flexibility to respond to spikes in traffic. While each individual request may take only a fraction of each machine's resources, when many come in at once, they can slow connections, cause congestion, or even trigger outright failure.
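A rough back-of-envelope calculation shows why the burst, not the average, is what overwhelms infrastructure. The 60,000-requests-in-24-hours figure comes from the Cambridge spokesperson; the burst concentration and per-download duration below are hypothetical assumptions for illustration only:

```python
# Back-of-envelope: average vs. burst request rates.
# 60,000 downloads/24h is the reported figure; the assumed burst
# window and per-download duration are illustrative, not measured.

total_requests = 60_000
window_seconds = 24 * 60 * 60

avg_rps = total_requests / window_seconds
print(f"Average rate: {avg_rps:.2f} requests/sec")  # ~0.69 rps -- trivial

# But traffic is bursty: suppose half the requests arrive in the
# first hour after the story spreads on social media.
burst_rps = (total_requests / 2) / 3600
print(f"Burst rate: {burst_rps:.2f} requests/sec")  # ~8.3 rps

# If each download ties up a connection for, say, 30 seconds
# (a large PDF on a saturated uplink), concurrency stacks up.
# By Little's law, L = lambda * W:
seconds_per_download = 30
concurrent = burst_rps * seconds_per_download
print(f"Concurrent downloads during the burst: {concurrent:.0f}")  # ~250
```

An average under one request per second looks harmless on paper; it is the concentrated burst, multiplied by how long each request holds resources, that saturates a fixed-capacity data center.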
Figure 1: Global locations unable to access the Cambridge University website, with errors in the connect and receive stages.
Figure 2: Traffic from all over the world terminates within the Cambridge infrastructure, as indicated by the spike in packet loss
For a web property like the Cambridge library, this was a temporary surge in traffic -- but not all websites are so lucky. The lesson is that if an organization isn't prepared, this is how the problem will manifest itself. Pre-planning for a spike includes increasing capacity on existing infrastructure. Leveraging a CDN can also help distribute the load across servers and geographies.
As you make important decisions about your company's website, there are many factors you'll want to consider, especially if you're expecting a surge (like on Black Friday or Cyber Monday). For sites that have spiky, but predictable traffic, here are a few options to help them stay online:
■ Use a CDN to serve up traffic round-the-clock. This costs more but delivers the best customer experience.
■ Flip on a CDN service well before known traffic peaks. If Cambridge had done this prior to releasing Hawking's thesis, they could have stayed afloat during the flood of download requests.
■ Diversify with multiple data centers and upstream ISPs. If your organization relies on a single data center and a single upstream ISP, an outage at either takes your service down with it.
■ Within the data center, load balanced network paths and web servers can also help reduce performance impacts.
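The load-balancing idea in the last point can be sketched in a few lines. This is a minimal illustration of round-robin distribution with failover, not any particular load balancer's implementation; the server names and health flags are hypothetical:

```python
# Minimal sketch: round-robin load balancing that skips unhealthy
# servers. Hostnames and health states are hypothetical examples.
from itertools import cycle

servers = {
    "web-1.example.edu": True,   # healthy
    "web-2.example.edu": True,   # healthy
    "web-3.example.edu": False,  # down -- the balancer skips it
}

def pick_server(pool):
    """Yield healthy servers in round-robin order, forever."""
    for name in cycle(pool):
        if pool[name]:
            yield name

rr = pick_server(servers)
assignments = [next(rr) for _ in range(4)]
print(assignments)
# Traffic rotates across the healthy servers only:
# ['web-1.example.edu', 'web-2.example.edu',
#  'web-1.example.edu', 'web-2.example.edu']
```

Real load balancers add health probes, connection draining, and weighted algorithms, but the core principle is the same: no single server or path carries the whole surge, and a failed node is routed around rather than taking the site down.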
The University of Cambridge may not plan to release another legendary scientist's thesis anytime soon, but when it comes to web performance, properly preparing for your network's next big event pays a reliable return.