Getting to Zero Unplanned Downtime with AIOps
September 02, 2020

Andy Thurai
The Field CTO


Most business executives worry about the competition taking them down. What they don't realize is that their own IT can do just as much damage. Imagine this: if your rideshare app is constantly down, would you wait for it to come back up, or would you switch to the "other app"? Without realizing it, most organizations are one high-profile incident away from losing a lot of their customers.

A Silent Business Killer

With "digital native" businesses' complete reliance on IT, whether it is private data centers or cloud-based solutions, any long, unplanned downtime can kill a business quickly. The unplanned IT downtime is costing their businesses more than they think and the business executives are not fully aware of this.


In a recent (2019) ITIC survey of 1,000 businesses, 86% estimated that an hour of downtime costs them close to $300,000. For nearly a third of the businesses, it costs between $1 million and $5 million. On average, an unplanned service outage lasts four hours, and an enterprise can expect two outages per year. Put together, that means an enterprise can expect to lose between $2.5 million and $40 million per year when work stops due to an unplanned IT outage. And these numbers are rising roughly 30% year over year.
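To make that arithmetic concrete, here is a minimal sketch in Python that rolls hourly cost, outage length, and outage frequency into an annual estimate. The figures are the illustrative survey numbers above, not data from any specific enterprise.

```python
# Rough annual downtime-cost estimate built from the survey figures above.
# Substitute your own hourly cost, outage length, and outage frequency
# to get an organization-specific number.

def annual_downtime_cost(cost_per_hour, hours_per_outage=4, outages_per_year=2):
    """Expected yearly cost of unplanned downtime."""
    return cost_per_hour * hours_per_outage * outages_per_year

# Low end: ~$300k/hour (roughly the $2.5 million figure cited above)
print(f"${annual_downtime_cost(300_000):,.0f} per year")    # $2,400,000 per year

# High end: ~$5M/hour (roughly the $40 million figure cited above)
print(f"${annual_downtime_cost(5_000_000):,.0f} per year")  # $40,000,000 per year
```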

These numbers, which are based purely on lost opportunity, lost employee productivity, and IT recovery and restoration costs, are mind-boggling. They don't even include litigation, fines, penalties, or non-compliance findings from regulatory bodies, let alone the brand damage, lost loyalty, and customer satisfaction and attrition issues an enterprise will take on due to unplanned downtime.

While large enterprises have specialized SWAT teams to reduce this impact, smaller enterprises feel the brunt when such an event happens. When SRE, NOC/SOC, and engineering teams are pulled into war rooms to solve the problem, development, innovation, and sometimes critical operational work come to a standstill. CIOs and other IT executives stay involved until critical P1s are resolved, taking time away from other strategic work.

Components of an IT Issue Resolution

Fixing any unplanned downtime consists of three major components:

1. Identifying the problem

2. Fixing the problem

3. Getting the systems back up and running

While #2 can be minimized by having skilled IT staff, and #3 can be solved by a combination of automation and skilled IT staff, #1 is part art and part science. For most organizations, it is the hardest part and the silent killer: they spend too much time trying to find the root cause of the problem. If you can't find the problem, you can't fix it. The very thought of long, drawn-out war-room meetings for critical P1 issues should be any CIO's nightmare.

The efficiency of any ITOps team is measured by two major KPIs: MTBF (mean time between failures) and MTTR (mean time to repair). MTBF measures how reliable your systems are; MTTR is the true measure of how long your service stays down and keeps costing your business. The faster you find the problem, the faster you can solve it.
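As a rough illustration, and assuming nothing more than a list of incident start and end timestamps, the two KPIs can be computed like this. The incident records are hypothetical.

```python
# Minimal sketch of MTTR and MTBF from a list of (start, end) incident timestamps.
# Assumes incidents are sorted by start time and do not overlap.
from datetime import datetime

incidents = [  # hypothetical outage records
    (datetime(2020, 3, 1, 2, 0), datetime(2020, 3, 1, 6, 0)),
    (datetime(2020, 7, 15, 9, 30), datetime(2020, 7, 15, 13, 0)),
]

# MTTR: average time from failure to restoration
repair_hours = [(end - start).total_seconds() / 3600 for start, end in incidents]
mttr = sum(repair_hours) / len(repair_hours)

# MTBF: average uptime between the end of one failure and the start of the next
gaps = [(incidents[i + 1][0] - incidents[i][1]).total_seconds() / 3600
        for i in range(len(incidents) - 1)]
mtbf = sum(gaps) / len(gaps) if gaps else float("inf")

print(f"MTTR: {mttr:.1f} hours, MTBF: {mtbf:.0f} hours")
```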

Is the "Zero-Downtime Unicorn" Just a Fantasy?

The zero-unplanned-downtime conversation has now made its way into the boardroom. According to the Vanson Bourne/ServiceMax survey of 450 IT decision-makers, zero unplanned downtime has become the top priority for 72% of organizations. More importantly, under pressure from business executives, boards are willing to approve additional funding, outside of the IT budget, to make this happen. But funding alone is not going to solve the problem. You need the right tools to help with that process.

To close the downtime gap, you need to know when your services are about to go down, or at least find out what is causing the problem as soon as a service goes down. Existing tools and data collection are mostly reactive, so teams scramble after the service goes down. I discussed the need for root cause analysis and identifying issues before they happen in my earlier Forbes blog.

Why is it Complicated?

There are a few critical reasons why many organizations struggle to identify the root cause of a problem:

1. IT has gotten more complex

2. IT budgets are getting crushed

3. Most IT organizations are siloed

4. There are not enough skilled IT personnel

5. ITOps teams measure only reactive indicators

While all of the above is true, it doesn't have to be complicated. Implementing a properly designed AIOps solution can solve most of this. Keep in mind that having a good monitoring solution is not the same as having an AIOps solution: one is a reactive measure, the other a proactive one. Monitoring can tell you what went wrong; AIOps can potentially indicate that something is about to go wrong.

A properly implemented AIOps platform can aggregate and analyze data collected by many tools: Application Performance Monitoring (APM), Network Performance Monitoring and Diagnostics (NPMD), Digital Experience Monitoring (DEM), IT Infrastructure Monitoring (ITIM), Security Information and Event Management (SIEM), and log tools, making consolidation of events across the enterprise possible.
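To illustrate what that consolidation might look like in practice, here is a hedged sketch, not any particular vendor's API, of normalizing events from different tool categories into one common schema before they are correlated. All field names are illustrative.

```python
# Hypothetical sketch of normalizing events from different monitoring silos
# (APM, SIEM, infrastructure, logs) into one common schema so they can be
# correlated in a single place. Field names are illustrative, not a vendor API.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class NormalizedEvent:
    timestamp: datetime
    source_tool: str   # e.g. "apm", "siem", "itim", "log"
    service: str       # logical service the event belongs to
    severity: str      # normalized to "info" / "warning" / "critical"
    message: str

def from_apm(raw: dict) -> NormalizedEvent:
    # An APM alert arrives with its own field names; map them onto the shared schema.
    return NormalizedEvent(
        timestamp=datetime.fromisoformat(raw["alert_time"]),
        source_tool="apm",
        service=raw["app_name"],
        severity={"P1": "critical", "P2": "warning"}.get(raw["priority"], "info"),
        message=raw["description"],
    )

# Once every feed maps into NormalizedEvent, the correlation logic only has to
# understand one shape, regardless of which silo produced the data.
```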

Avoiding Alert Fatigue

Unfortunately, today's complex IT systems produce a lot of events and alerts. Most organizations also have siloed implementations. For example, it is very common for a cloud implementation to use a different monitoring tool than an on-prem implementation of the same service. This leads to uncoordinated and unrelated alerts from multiple locations for the same incident.

A large enterprise we worked with was seeing more than 100,000 events (alerts, network events, and service tickets) for a single logical incident. This created "alert fatigue," with everyone chasing everything until they found the root cause. The first step was to reduce the large stream of low-level system events into a smaller number of logical incidents. Using log structure discovery and parsing, periodicity detection, frequent pattern mining, entropy-based encoding, temporal association detection, and network topology graph analysis (yes, all the geeky stuff), we were able to reduce the volume of the event stream by 98%.

Imagine chasing a P1 across 100,000 siloed pieces of information versus 2,000 consolidated, grouped, and related events.
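As a simplified illustration of one of those techniques in isolation, the sketch below groups raw alerts into logical incidents using only a time window and a shared topology node. The real pipeline described above layers several more methods (pattern mining, entropy encoding, graph analysis) on top of this idea; the alerts and window size here are hypothetical.

```python
# Simplified sketch of event correlation: collapse raw alerts that hit the same
# topology node within a short time window into one logical incident.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)

def correlate(alerts):
    """alerts: list of (timestamp, node, message), sorted by timestamp.
    Returns a list of incidents, each a list of related alerts."""
    incidents = []
    open_incidents = {}  # node -> (last_seen, incident)
    for ts, node, message in alerts:
        last_seen, incident = open_incidents.get(node, (None, None))
        if incident is not None and ts - last_seen <= WINDOW:
            incident.append((ts, node, message))   # same node, close in time: group it
        else:
            incident = [(ts, node, message)]       # otherwise open a new incident
            incidents.append(incident)
        open_incidents[node] = (ts, incident)
    return incidents

raw = [
    (datetime(2020, 9, 1, 10, 0), "db-01", "high latency"),
    (datetime(2020, 9, 1, 10, 1), "db-01", "connection pool exhausted"),
    (datetime(2020, 9, 1, 10, 2), "db-01", "replica lag"),
    (datetime(2020, 9, 1, 11, 30), "web-07", "5xx spike"),
]
print(len(raw), "alerts ->", len(correlate(raw)), "incidents")  # 4 alerts -> 2 incidents
```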


Demand More from Your CIO

The same ITIC survey suggests that 85% of corporations now demand a minimum of "four nines" of uptime (99.99%) from their service providers for critical applications. That works out to about 52 minutes of acceptable unplanned downtime per year. Yet most CIOs struggle to offer the same quality of service and SLAs to their own businesses.
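For reference, the downtime budget behind an availability target is a one-line calculation; the sketch below works it out for a few common "nines."

```python
# Allowed unplanned downtime per year for a few common availability targets.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for target in (0.999, 0.9999, 0.99999):
    budget = (1 - target) * MINUTES_PER_YEAR
    print(f"{target:.3%} uptime -> {budget:,.1f} minutes of downtime per year")
# 99.990% uptime -> 52.6 minutes of downtime per year, the "four nines" figure above
```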

It is time the business executives demand the same "uptime" for their services from their CIOs that the CIOs demand from their service providers.

Andy Thurai is Founder and Principal of The Field CTO