Lack of Automation Hinders Speed of Response to IT Outages and Incidents
January 24, 2017

Vincent Geffray
Everbridge


It's an eye opener: while more than 90 percent of companies report having an IT Service Management (ITSM) system, only 11 percent say they have automated the process of organizing their response to IT outages and incidents, according to Everbridge's 2016 State of IT Incident Management report.

This finding is significant because 47 percent of the companies reported having a major IT incident at least six times a year, the average cost of downtime is $8,662 per minute, and companies take 27 minutes on average to assemble an IT response team. Automated solutions can reduce that assembly time to 5 minutes or less. At $8,662 per minute, cutting 22 minutes of response time saves more than $190,000 per major IT incident.
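The arithmetic behind that savings figure can be checked with a quick back-of-the-envelope calculation, using the report's numbers (27 minutes manual vs. 5 minutes automated, at $8,662 per minute of downtime):

```python
# Back-of-the-envelope savings from automating response-team assembly,
# using the report's figures.
COST_PER_MINUTE = 8_662   # average cost of IT downtime ($/minute)
MANUAL_MINUTES = 27       # mean time to assemble a team manually
AUTOMATED_MINUTES = 5     # typical time with an automated solution

savings = (MANUAL_MINUTES - AUTOMATED_MINUTES) * COST_PER_MINUTE
print(f"Savings per major incident: ${savings:,}")  # → $190,564
```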

Key findings from the research include:

Most Companies Have an ITSM or Ticketing System

Over 90 percent of companies reported using an ITSM or ticketing system.

Major IT Outages or Incidents Occur Quite Frequently

47 percent of companies experience a major IT outage or incident six times or more a year.

36 percent experience them close to monthly (11 or more times per year).

More than a quarter of respondents reported that their companies experienced more than 21 incidents last year — that's close to two per month.

Only 9 percent of respondents reported that their organization did not experience a major IT outage or incident in the past year.

The most common sources of incidents are network outages (experienced by 61 percent of companies), hardware failure or capacity issues (58 percent), internal business application issues (51 percent), and unplanned maintenance (41 percent).

Responding to IT Outages and Incidents is Complicated and Too Manual

Two thirds (66 percent) of companies have distributed IT organizations with people spread among multiple locations and multiple time zones.

39 percent have more than 25 people included in their IT response teams. 29 percent have more than 50 people who need to be coordinated to respond to an incident. 16 percent have more than 100 people.

43 percent of respondents reported that at least part of their process relies on manually calling and reaching out to people to activate the incident response team. Only 11 percent reported using an IT alerting tool to automate the process. These systems can improve response by reaching people through multiple modalities; using schedules to see who is available; automatically escalating to additional people if designated primary contacts do not respond; automatically organizing conference bridges; and providing an audit trail of performance.
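The on-call escalation behavior described above can be sketched in a few lines. This is a hypothetical illustration of the logic such tools automate, not the implementation of any specific product; the contact names, acknowledgment mechanism, and timing are assumptions:

```python
# Hypothetical sketch of automated escalation: notify contacts from the
# on-call schedule in order until someone acknowledges the incident.
from dataclasses import dataclass

@dataclass
class Contact:
    name: str
    acknowledged: bool = False  # in a real tool, set by a paging callback

def escalate(on_call_schedule, notify):
    """Notify contacts in order until one acknowledges; return the responder."""
    for contact in on_call_schedule:
        notify(contact)           # e.g. SMS, voice call, push notification
        if contact.acknowledged:  # a real tool waits out an ack window first
            return contact
    return None                   # nobody responded; escalate to management

# Example: the primary does not respond, so the secondary is paged.
schedule = [Contact("primary"), Contact("secondary", acknowledged=True)]
responder = escalate(schedule, notify=lambda c: print(f"Paging {c.name}"))
print(f"Responder: {responder.name}")  # → Responder: secondary
```

A production tool layers scheduling, multi-modal delivery, and audit logging on top of this basic loop, which is what makes the 27-minute manual process collapsible to minutes.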

Response Times Could be Significantly Reduced by Automation

The mean time to activate and assemble a response team was cited as 27 minutes. Automated solutions can reduce this response time to 5 minutes or less.

IT Downtime is Expensive and Hurts Productivity

The average cost of IT downtime was reported as $8,662 per minute.

63 percent of respondents stated that IT incidents or outages hurt employee productivity, 60 percent said they caused IT team disruption or distraction, and 34 percent said they decreased customer satisfaction.

13 percent reported that their organization had experienced bad press or publicity due to an IT incident or outage.

Methodology: The sample for the research was 152 IT professionals, including 86% of respondents from companies of 1000 employees or more, and 45% from companies with more than 10,000 employees.

Vincent Geffray is Senior Director, Product Marketing, at Everbridge