Alerting Survival Strategies
July 24, 2014

Larry Haig
Intechnica


(aka – “If that monitoring system wakes me up at 3am one more time … !”)

In considering alerting, the core issue is not whether a given tool will generate alerts, as anything sensible certainly will. Rather, the central problem is what could be termed the actionability of the alerts generated. Failure to flag issues related to poor performance is a clear no-no, but unfortunately over-alerting has the same effect, as superfluous alerts will rapidly be ignored.

Effective alert definition hinges on the determination of “normal” performance. Simplistically, this can be understood by testing across a business cycle (ideally, a minimum of 3-4 weeks). That is fine providing performance is reasonably stable. However, that is often not the case, particularly for applications experiencing large fluctuations in demand at different times of the day, week or year.

In such cases (which are extremely common), the difficulty becomes "on which point of the demand cycle should I base my alert threshold?" Set it too low, and your system is simply telling you that it's lunchtime (or the weekend, or whenever greatest demand occurs). Set it too high, and you will miss issues occurring during periods of lower demand.

There are several approaches to this difficulty, of varying degrees of elegance:

■ Select tooling incorporating a sophisticated baseline algorithm - capable of applying alert thresholds dynamically based on time of day/week/month etc. Surprisingly, many major tools use extremely simplistic baseline models, but some (e.g. AppDynamics APM) certainly have an approach that assists. When selecting tooling, this is definitely an area that repays investigation (a minimal sketch of the hour-of-week approach follows this list).

■ Set up independent parallel (active monitoring) tests separated by “maintenance windows”, with different alert thresholds applied depending upon when they are run. This is a messy approach which comes with its own problems.

■ Look for proxies other than pure performance as alert metrics. Using this approach, a "catchall" performance threshold is set for performance that is manifestly poor regardless of when it is generated. This is supplemented by alerting based upon other factors flagging delivery issues – provided, of course, that your monitoring system permits such triggers. Examples include:

- Payload – error pages or partial downloads will have lower byte counts. Redirect failures (e.g. to mobile devices) will have higher than expected page weights.

- Number of objects

- Specific “flag” objects

■ Ensure confirmation before triggering an alert. Some tooling will automatically generate confirmatory repeat tests; others enable triggers to be based on a specified number or percentage of total node results.

■ Gotchas – take account of these. Good test design, for example controlling the bandwidth of end user testing to screen out results from low-connectivity tests, will improve the reliability of both alerts and results generally. As a more recent innovation, the advent of long polling / server push content can severely distort synthetic external response measurements, especially if such content is not consistently included. In this case, page load end points need to be defined and incorporated into test scripts to prevent false positive alerts.
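To make the first of these options concrete, the sketch below illustrates one way an hour-of-week baseline might work: historical response times are bucketed by day and hour, and an alert fires only when a new measurement exceeds that bucket's normal range. It is a minimal illustration only; the class name, field names and thresholds are invented, and any commercial baseline algorithm will be considerably more sophisticated.

    from collections import defaultdict
    from datetime import datetime
    from statistics import mean, stdev

    class HourOfWeekBaseline:
        """Bucket historical response times by hour-of-week so the alert
        threshold tracks the normal daily/weekly demand cycle."""

        def __init__(self, sigmas=3.0, min_samples=20):
            self.sigmas = sigmas              # how far above "normal" counts as anomalous
            self.min_samples = min_samples    # do not alert until the bucket has enough history
            self.buckets = defaultdict(list)  # (weekday, hour) -> list of response times (ms)

        def record(self, timestamp: datetime, response_ms: float):
            self.buckets[(timestamp.weekday(), timestamp.hour)].append(response_ms)

        def is_alert(self, timestamp: datetime, response_ms: float) -> bool:
            history = self.buckets[(timestamp.weekday(), timestamp.hour)]
            if len(history) < self.min_samples:
                return False  # not enough data for this time slot yet
            threshold = mean(history) + self.sigmas * stdev(history)
            return response_ms > threshold

    # Usage: train on a few weeks of measurements, then evaluate new samples.
    # baseline = HourOfWeekBaseline()
    # for ts, ms in historical_measurements: baseline.record(ts, ms)
    # if baseline.is_alert(datetime.now(), latest_ms): raise_alert(...)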

RUM-based alerting presents its own difficulties. Because it is based on visitor traffic, alert triggers defined as a certain percentage of outliers may become distorted in very low traffic conditions. For example, a single long delivery time in a 10 minute timeslot containing only 4 other "normal" visits represents 20% of total traffic, whereas the same outlier recorded during a peak business period with 200 normal results is less than 1% of the total. RUM tooling that enables alert thresholds to be modified based on traffic volume is advantageous.
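A simple hedge against this low-traffic distortion is to require both a minimum sample size and a minimum absolute count of outliers before a percentage-based RUM trigger fires. The sketch below is illustrative only; the function name and threshold values are hypothetical rather than any particular vendor's behaviour.

    def rum_alert(outliers: int, total_visits: int,
                  max_outlier_fraction: float = 0.05,
                  min_visits: int = 50,
                  min_outliers: int = 5) -> bool:
        """Fire a RUM alert only when the outlier percentage is high AND
        there is enough traffic for that percentage to mean something."""
        if total_visits < min_visits or outliers < min_outliers:
            return False  # too little traffic: a single slow visit would dominate
        return (outliers / total_visits) > max_outlier_fraction

    # 1 outlier among 5 visits (20%) is suppressed for lack of traffic,
    # while 11 outliers among 200 visits (5.5%) during peak trade fires.
    print(rum_alert(1, 5))     # False
    print(rum_alert(11, 200))  # True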

Although it does not address the "normal variation" issue, replacing binary trigger thresholds with dynamic ones (i.e. an alert state exists when the page/transaction slows by more than x% compared with its average over the recent past) can sometimes be useful.
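A minimal sketch of such a dynamic trigger, assuming a rolling window of recent measurements (the window length and percentage are arbitrary, illustrative choices):

    from collections import deque

    class RelativeSlowdownTrigger:
        """Alert when the latest timing exceeds the rolling average by more than x%."""

        def __init__(self, window: int = 30, pct_over_average: float = 50.0):
            self.history = deque(maxlen=window)
            self.pct = pct_over_average

        def check(self, response_ms: float) -> bool:
            alert = False
            if len(self.history) == self.history.maxlen:
                avg = sum(self.history) / len(self.history)
                alert = response_ms > avg * (1 + self.pct / 100)
            self.history.append(response_ms)  # the new sample joins the baseline afterwards
            return alert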

Some form of trend state messaging (that is, condition worsening/improving) subsequent to the initial alert can mitigate the physical and emotional energy expended on simple "fire alarm" alerting, particularly in the middle of the night.

An interesting (and long overdue) approach is to work directly on the source of the problem – download raw baseline data to a data warehouse, and apply sophisticated pattern recognition analysis. These algorithms can be developed in-house if time and appropriate skills are available, but unfortunately the mathematics is not necessarily trivial. Some standalone tooling exists and it is expected that more will follow as this approach proves its worth – the baseline management of most APM vendors represents an open goal at present.
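Such in-house analysis need not begin with anything exotic. The sketch below is a rough illustration, assuming the warehoused data is available as a pandas DataFrame with a datetime index and a hypothetical "response_ms" column: it flags samples that deviate sharply from the median of their hour-of-week peers, using the median absolute deviation rather than the mean so that past incidents do not inflate the baseline.

    import pandas as pd

    def flag_anomalies(df: pd.DataFrame, threshold: float = 4.0) -> pd.DataFrame:
        """df: DatetimeIndex plus a 'response_ms' column (hypothetical schema).
        Return rows lying more than `threshold` robust deviations above the
        median of their hour-of-week group."""
        slot = df.index.dayofweek * 24 + df.index.hour
        grouped = df['response_ms'].groupby(slot)
        median = grouped.transform('median')
        mad = grouped.transform(lambda s: (s - s.median()).abs().median())
        robust_z = (df['response_ms'] - median) / mad.replace(0, 1)
        return df[robust_z > threshold]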

Incidentally, such analysis is valuable not only for alerting but also for demand projection and capacity planning.

A few final thoughts on alerts post-generation. The more evolved alert management systems will permit conditional escalation of alerts – that is, alert a primary group first, then inform group B if the condition persists or worsens, and so on. Systems allowing custom coding around alerts (such as Neustar) are useful here, as are the specific third party alert handling systems available. If using tooling that only permits basic alerting, it is worth considering integration with external alerting, either of the "standalone service" type, or (in larger corporates) integral with central infrastructure management software.
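Where only basic alerting is available, conditional escalation of this kind can be layered on externally. The following is a rough sketch; the group names, time thresholds and notify() placeholder all stand in for whatever paging or email mechanism is actually used.

    import time

    # Escalation ladder: notify each group only after the condition has
    # persisted for the stated number of minutes (values are illustrative).
    ESCALATION_LADDER = [
        (0,  "primary-oncall"),
        (15, "service-owners"),
        (45, "duty-manager"),
    ]

    def notify(group: str, message: str):
        # Placeholder: wire this to email, an SMS gateway, or a paging API.
        print(f"[notify] {group}: {message}")

    def escalate(condition_started: float, already_notified: set, message: str):
        """Call repeatedly while the alert condition persists."""
        elapsed_min = (time.time() - condition_started) / 60
        for after_min, group in ESCALATION_LADDER:
            if elapsed_min >= after_min and group not in already_notified:
                notify(group, message)
                already_notified.add(group)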

Lastly, delivery mode. Email is the basis for many systems. It is tempting to regard SMS texting as beneficial, particularly in extreme cases. However, as anyone who has been sent a text on New Year's Eve only to have it show up 12 hours later knows, such store-and-forward systems can be false friends.

Larry Haig is Senior Consultant at Intechnica.

