Bringing Alert Management into the Present with Advanced Analytics
March 25, 2015

Kevin Conklin
Ipswitch


We have smart cars on the horizon that will navigate themselves. Mobile apps that make communication, navigation and entertainment an integral part of our daily lives. Your insurance pricing may soon be affected by whether or not you wear a personal health monitoring device. Everywhere you turn, the very latest IT technologies are being leveraged to provide advanced services that were unimaginable even ten years ago. So why is it that the IT environments that provide these services are managed using an analytics technology designed for the 1970s?

The IT landscape has evolved significantly over the past few decades. IT management simply has not kept pace. IT operations teams are anxious that too many problems are reported first by end users. Support teams worry that too many people spend too much time troubleshooting. Over 70 percent of troubleshooting time is actually wasted following false hunches because alerts provide no value to the diagnostic process. Enterprises that are still reliant on yesterday’s management strategies will find it increasingly difficult to solve today’s operations and performance management challenges.

This is not just an issue of falling behind a technology curve. There is real business impact: rising incident rates, potentially disastrous outages going undetected, and skilled staff wasting valuable time. An increasing number of IT shops are anxiously searching for alternatives.

This is where advanced machine learning analytics can help.

Too often, operations teams become engulfed by alerts: tens of thousands a day, with no way to know which to deal with first, so it is entirely possible that something important is ignored while time is wasted on something trivial. Through a powerful combination of machine learning and anomaly detection, advanced analytics can reduce the flood of alarms to a prioritized set with the largest impact on the environment. By learning which alerts are "normal", these systems define an operable status quo. In essence, machine learning filters out the "background noise" of alerts that, based on their persistence, have no effect on normal operations. From there, statistical algorithms identify and rank "abnormal" outliers on scales measuring severity (the magnitude of a spike or drop), rarity (the number of previous instances) and impact (the quantity of related anomalies). The result is a reduction from hundreds of thousands of noisy alerts a week to a few dozen notifications of real problems.
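As a rough illustration of the idea (not any vendor's actual implementation), a baseline-and-score approach might look like the minimal Python sketch below. The metric names and the scoring formula are hypothetical: severity is modeled as a z-score against the learned baseline, and rarity as the inverse count of past deviations at least as large.

```python
from collections import defaultdict
import math

class AnomalyScorer:
    """Toy anomaly scorer: learns a per-metric baseline from history,
    then scores new readings by severity * rarity."""

    def __init__(self):
        self.history = defaultdict(list)  # metric name -> past values

    def observe(self, metric, value):
        # Feed "normal" readings to build the baseline.
        self.history[metric].append(value)

    def score(self, metric, value):
        past = self.history[metric]
        if len(past) < 2:
            return 0.0  # not enough history to define "normal"
        mean = sum(past) / len(past)
        var = sum((v - mean) ** 2 for v in past) / (len(past) - 1)
        std = math.sqrt(var) or 1.0  # guard against zero variance
        # Severity: size of the spike or drop, in standard deviations.
        severity = abs(value - mean) / std
        # Rarity: the fewer past deviations this large, the higher the score.
        as_extreme = sum(1 for v in past if abs(v - mean) / std >= severity)
        rarity = 1.0 / (1 + as_extreme)
        return severity * rarity
```

In use, a reading far outside the learned baseline scores orders of magnitude higher than a routine one, which is the basis for collapsing thousands of alerts into a short ranked list.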

Despite producing huge volumes of alerts, rules-and-thresholds implementations often miss problems or report them long after the customer has experienced the impact. The fear of generating even more alerts pushes monitoring teams to select fewer KPIs, further decreasing the likelihood of detection. Problems that approach thresholds slowly go unnoticed until the user experience is already impacted. Adopting this advanced analytics approach empowers enterprises not only to identify problems that rules and thresholds miss or catch too late, but also to provide their troubleshooting teams with pre-correlated causal data.
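The slow-drift failure mode is easy to demonstrate. In this hypothetical snippet, a static 90% threshold never fires on a steadily climbing metric, while a simple baseline-relative check (the numbers and the "2x baseline" rule are illustrative only) flags the drift well before the limit is reached:

```python
# Simulated CPU readings drifting slowly upward over time.
readings = [20, 21, 22, 35, 50, 65, 80]

# Legacy approach: a fixed threshold fires only past an absolute limit.
THRESHOLD = 90
threshold_alerts = [r for r in readings if r > THRESHOLD]

# Baseline-relative approach: "normal" is learned from early readings,
# and anything far above it is flagged, even below the hard limit.
baseline = sum(readings[:3]) / 3  # learned normal, about 21
drift_alerts = [r for r in readings if r > baseline * 2]
```

Here `threshold_alerts` stays empty for the entire drift, while `drift_alerts` catches the anomaly as soon as readings leave the learned normal range.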

By replacing legacy rules and thresholds with machine learning anomaly detection, IT teams can monitor larger sets of performance data in real time. Monitoring more KPIs enables a higher percentage of issues to be detected before users report them. Through real-time cross correlation, related anomalies are detected together and alerts become more actionable. Early adopters report that they are able to reduce troubleshooting time by 75 percent and cut the number of people involved by as much as 85 percent.

Advanced machine learning systems will fundamentally change the way data is converted into information over the next few years. If your business is leveraging information to provide competitive services, you can’t afford to be the laggard.

Kevin Conklin is VP of Product Marketing at Ipswitch.
