The Leading Causes of IT Outages - and How to Prevent Them
November 04, 2019

Mark Banfield
LogicMonitor


IT outages happen to companies across the globe, regardless of location, annual revenue or size. Even the most mammoth companies are at risk of downtime. Increasingly over the past few years, high-profile IT outages — defined as when the services or systems a business provides suddenly become unavailable — have ended up splashed across national news headlines.

In March 2019, Facebook and Instagram each experienced 14 hours of downtime. A second IT outage struck both — along with WhatsApp — in April 2019, taking all three platforms offline. And in July 2019, all three platforms experienced availability problems that impacted users. British Airways has also faced a series of high-profile IT outages in the past, including one in April that resulted in 100 canceled flights and 200 delayed flights. An outage back in May 2017 also affected more than 1,000 flights, call centers, BA's website and BA's mobile app.

Given all of these recent disruptive and costly outages, LogicMonitor commissioned an independent study to investigate the major causes of downtime, the business impact of outages on organizations, and ways to avoid IT outages and brownouts. The resulting IT Outage Impact Study surveyed 300 IT decision-makers across the United States, Canada, the United Kingdom, Australia and New Zealand.

Outages Lead to Compliance Failures and High Costs


Among other insights, the survey revealed the top five issues keeping IT decision-makers up at night. The number one and number two issues were concerns about performance and availability, beating out security and cost-effectiveness worries.

Unfortunately, those self-reported fears about IT teams' ability to maintain availability are well-founded. In fact, 96% of global survey respondents reported that their organizations had suffered at least one IT outage over the past three years. Such outages can have serious implications, including steep costs and low customer satisfaction scores. Heavily regulated industries, such as healthcare and finance, face another dire consequence beyond service disruptions and costs as a result of outages: compliance failure.

"One of our clients is a radiology company, and they need to be up 24/7," said a service desk support engineer for a solution provider. "If they have more than an hour of downtime a year, probably less than that, that's a serious issue. These guys can never go down, for legal reasons."


Human Error is #1 Cause of IT Outages in the US and Canada

The study found that human error was the #1 cause of IT outages in the United States and Canada, and the #3 cause globally. Given this finding, it was no surprise that Network World covered the story of British Airways' May 2017 outage under the headline, "British Airways' outage, like most data center outages, was caused by humans."

The Network World article describes how an engineer working onsite at a data center near Heathrow Airport disconnected a power supply. When the power supply was reconnected, a surge of power caused the outage. The article also cites a 2016 Ponemon Institute study, which found that human error accounted for 11% of outages, more than weather (10%), generator failures (6%) or IT equipment malfunction (4%).

Faced with findings like this, it's no wonder that global IT decision-makers estimate that 51% of IT outages are avoidable. As a result, more and more teams worldwide are transitioning to monitoring tools that incorporate AIOps and automation to minimize human error and maximize early warning opportunities.

Monitoring Helps Prevent Outages Through Early Warning Systems

Comprehensive monitoring provides visibility into IT infrastructure and can help organizations get ahead of trends that indicate an outage may be rapidly approaching. The top two causes of outages, according to survey respondents, are declining hardware/software performance and IT teams' failure to notice when usage reaches a dangerous level. Artificial intelligence for IT operations (AIOps) and intelligent monitoring offer an effective solution to both of these outage factors.
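
To make that concrete, here is a minimal Python sketch of the kind of rolling-baseline check an intelligent monitoring tool might run against a performance metric, flagging a sample that breaks sharply from recent behavior rather than waiting for a fixed threshold to be crossed. The window size, deviation limit and response-time values are illustrative assumptions, not LogicMonitor's implementation.

```python
# A minimal rolling-baseline check: flag a sample that deviates sharply from
# recent behavior instead of waiting for a fixed threshold to be breached.
# The window size, deviation limit and sample values are illustrative only.
from collections import deque
from statistics import mean, stdev

WINDOW = 5         # recent samples kept as the baseline (kept small for the example)
SIGMA_LIMIT = 3.0  # flag values more than 3 standard deviations from the baseline mean

baseline = deque(maxlen=WINDOW)

def check_sample(value: float) -> bool:
    """Return True if the new sample looks anomalous against the rolling baseline."""
    anomalous = False
    if len(baseline) == WINDOW:
        mu, sigma = mean(baseline), stdev(baseline)
        anomalous = sigma > 0 and abs(value - mu) > SIGMA_LIMIT * sigma
    baseline.append(value)
    return anomalous

# Hypothetical response-time samples in milliseconds; the final spike is flagged.
for ms in [102, 98, 101, 99, 97, 100, 103, 450]:
    if check_sample(ms):
        print(f"Alert: response time {ms} ms deviates from the recent baseline")
```

A production platform would apply this kind of dynamic thresholding across thousands of metrics at once, which is exactly where automation outpaces manual watching.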

To minimize your organization's outage risk, look for monitoring solutions with the following capabilities:

■ A platform that offers a holistic view of your IT systems via a single pane of glass and integrates with all your technologies

■ A tool that builds in a high level of redundancy to eliminate single points of failure

■ A platform that provides early visibility, via an early warning system, into trends that could indicate future trouble (for example, by projecting when resource usage will cross a dangerous level, as sketched after this list)

■ A solution that scales with your business as it grows, ensuring your current and future monitoring needs are met
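
To illustrate the early warning idea from the list above, the following Python sketch fits a simple linear trend to hypothetical daily disk-usage readings and estimates when the metric would cross a danger threshold, so an alert can fire days before capacity is exhausted rather than at the moment of failure. The readings, the 90% threshold and the 30-day alert window are assumptions made for the example.

```python
# Hedged sketch: project when a steadily growing metric will cross a danger
# threshold, turning a silent trend into an actionable early warning.
# The readings, threshold and alert window below are illustrative assumptions.

def days_until_threshold(readings: list[float], threshold: float) -> float | None:
    """Fit a simple linear trend and estimate days until `threshold` is reached."""
    n = len(readings)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(readings) / n
    slope_num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, readings))
    slope_den = sum((x - x_mean) ** 2 for x in xs)
    slope = slope_num / slope_den
    if slope <= 0:
        return None  # usage is flat or shrinking; no crossing predicted
    intercept = y_mean - slope * x_mean
    crossing_day = (threshold - intercept) / slope
    return crossing_day - (n - 1)  # days remaining from the latest reading

usage = [61.0, 62.4, 63.9, 65.1, 66.8, 68.0, 69.6]  # last seven days, percent full
remaining = days_until_threshold(usage, threshold=90.0)
if remaining is not None and remaining < 30:
    print(f"Early warning: disk projected to reach 90% in about {remaining:.0f} days")
```

Real monitoring platforms use far more sophisticated forecasting, but even a linear projection like this shows how an early warning system buys teams time to act before an outage begins.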

Mark Banfield is CRO at LogicMonitor