The Perils of Downtime in the Cloud
October 23, 2014

Cliff Moon


The mantra for developers at Facebook has long been "move fast and break things." The idea behind this philosophy is that the stigma around screwing up and breaking production slows down feature development; remove the stigma from breakage, and more agility will result. The cloud readily embodies this philosophy, since it is explicitly made up of unreliable components. The challenge for the enterprise embracing the cloud is to build up the processes and resiliency necessary to build reliable systems from unreliable components. Otherwise, moving to the cloud will mean that your customers are the first people to notice when you are experiencing downtime.

So what changes are necessary to remove the costs of downtime in the cloud? Foremost, what is needed is a move to a more resilient architecture. The health of the service as a whole cannot rely on any single node. This means no special nodes: everything gets installed onto multiple instances with active-active load balancing between identical services. Not only that, but any service with a dependency must be able to survive that dependency going away. Writing code that is resilient to the myriad failures that may happen in the cloud is an art unto itself. No one will be good at it to start. This is where process and culture modifications come in.
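To make the "no special nodes" idea concrete, here is a minimal sketch of a client that spreads calls across identical replicas of a downstream service. The replica hostnames, the port, and the call_any_replica helper are hypothetical, not part of any particular platform; the point is simply that a caller should be able to lose any individual instance without noticing.

import random
import urllib.error
import urllib.request

# Hypothetical identical replicas of a downstream service; none of them is special.
REPLICA_URLS = [
    "http://inventory-1.internal:8080",
    "http://inventory-2.internal:8080",
    "http://inventory-3.internal:8080",
]

def call_any_replica(path, timeout=0.5):
    """Try the replicas in random order and return the first successful response.

    If every replica fails, raise so the caller can degrade gracefully instead
    of hanging on a single "special" node.
    """
    last_error = None
    for base in random.sample(REPLICA_URLS, len(REPLICA_URLS)):
        try:
            with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            last_error = err  # this replica is unreachable or slow; try the next one
    raise RuntimeError("all replicas unavailable") from last_error

A real deployment would put a load balancer or service discovery layer in front of the replicas, but the behavior is the same: losing any one instance is routine, not an outage.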

It turns out that if you want programmers to write code that behaves well in production, an effective way to achieve that is to make them responsible for the behavior of their code in production. The individual programmers go on pager rotation, and because they have to work side by side with the other people on rotation, they are held accountable for the code they write. It should never be an option to point to the failure of another service as the cause of your own service's failure. The writers of each discrete service should be encouraged to own their availability by measuring it separately from that of their dependencies. Techniques like serving stale data from cache, graceful degradation of ancillary features, and well-reasoned timeout settings are all useful for remaining resilient while still depending on unreliable dependencies.
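As an illustration of those techniques, here is a minimal sketch of a call that combines a tight timeout with a stale-data fallback. The in-process cache, the five-minute staleness window, and the fetch_with_fallback name are assumptions made for the example, not a prescribed implementation.

import time
import urllib.error
import urllib.request

_cache = {}        # hypothetical in-process cache: path -> (body, fetched_at)
STALE_TTL = 300    # assumption: stale data is acceptable for up to five minutes

def fetch_with_fallback(base_url, path, timeout=0.25):
    """Fetch from an unreliable dependency, falling back to stale cached data.

    The tight timeout keeps a slow dependency from dragging down our own
    latency; serving stale data keeps us available when the dependency is gone.
    """
    try:
        with urllib.request.urlopen(base_url + path, timeout=timeout) as resp:
            body = resp.read()
            _cache[path] = (body, time.time())
            return body, "fresh"
    except (urllib.error.URLError, OSError):
        cached = _cache.get(path)
        if cached and time.time() - cached[1] < STALE_TTL:
            return cached[0], "stale"    # graceful degradation: old data beats an error page
        return None, "unavailable"       # the ancillary feature goes dark instead of failing the whole page

Measured this way, the service's own availability is the fraction of requests answered fresh or stale, which can stay high even while a dependency is having a bad day.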

If your developers are on pager rotation, then there should be something to page them about. This is where monitoring comes in. Monitoring alerts come in two basic flavors: noise and signal. Monitoring setups with too many alerts configured will tend to be noisy, which leads to alert fatigue.

A good rule of thumb for any alert you set up is that it be actionable, impacting, and imminent. By actionable, I mean that there is a clear set of steps for resolving the issue. An actionable alert would tell you that a service has gone down. Less actionable would be to tell you that latencies are up, since it isn't clear what, if anything, you could do about that.

Impacting means that without human intervention the underlying condition will either cause or continue to cause customer impact.

And imminent means that the alert requires immediate intervention to alleviate service disruption. An example of a non-imminent alert would be alerting that your SSL certificates were due to expire in a month. Impactful and actionable, absolutely. But it doesn't warrant getting out of bed in the middle of the night.
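The certificate example maps neatly onto a small routing rule: the same underlying condition can warrant a ticket weeks in advance but a page only once it becomes imminent. The thresholds and the route_alert helper below are hypothetical, shown only to make the actionable/impacting/imminent test concrete.

import socket
import ssl
import time

PAGE_THRESHOLD_DAYS = 2      # assumption: close enough to expiry to wake someone up
TICKET_THRESHOLD_DAYS = 30   # assumption: impacting and actionable, but it can wait for daylight

def days_until_cert_expiry(host, port=443, timeout=5.0):
    """Return how many days remain before the host's TLS certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            not_after = tls.getpeercert()["notAfter"]
    return int((ssl.cert_time_to_seconds(not_after) - time.time()) // 86400)

def route_alert(host):
    days_left = days_until_cert_expiry(host)
    if days_left <= PAGE_THRESHOLD_DAYS:
        return "page"    # imminent: intervene now to avoid customer impact
    if days_left <= TICKET_THRESHOLD_DAYS:
        return "ticket"  # actionable and impacting, but not worth getting out of bed for
    return "ok"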

At the end of the day, adopting the cloud alone isn't going to be the silver bullet that automatically injects agility into your team. The culture and structure of the team must be adapted to fit the tools and platforms they use in order to get the most out of them. Otherwise, you're going to be having a lot of downtime in the cloud.

Cliff Moon is CTO and Founder of Boundary.

