Alerting Survival Strategies
July 24, 2014

Larry Haig
Intechnica


(aka – “If that monitoring system wakes me up at 3am one more time … !”)

In considering alerting, the core issue is not whether a given tool will generate alerts, as anything sensible certainly will. Rather, the central problem is what could be termed the actionability of the alerts generated. Failure to flag issues related to poor performance is a clear no-no, but over-alerting has the same effect, as alerts that fire too often will rapidly be ignored.

Effective alert definition hinges on the determination of “normal” performance. Simplistically, this can be understood by testing across a business cycle (ideally, a minimum of 3-4 weeks). That is fine provided performance is reasonably stable. However, that is often not the case, particularly for applications experiencing large fluctuations in demand at different times of the day, week or year.

In such cases (which are extremely common), the difficulty becomes “on which point of the demand cycle should I base my alert threshold?” Too low, and your system is simply telling you that it’s lunchtime (or the weekend, or whenever greatest demand occurs). Too high, and you will miss issues occurring during periods of lower demand.

There are several approaches to this difficulty, of varying degrees of elegance:

■ Select tooling incorporating a sophisticated baseline algorithm, capable of applying alert thresholds dynamically based on time of day/week/month etc. Surprisingly, many major tools use extremely simplistic baseline models, but some (e.g. AppDynamics APM) certainly have an approach that assists. When selecting tooling, this is definitely an area that repays investigation (a minimal sketch of this idea appears after this list).

■ Set up independent parallel (active monitoring) tests separated by “maintenance windows”, with different alert thresholds applied depending upon when they are run. This is a messy approach which comes with its own problems.

■ Look for proxies other than pure performance as alert metrics. Using this approach, a “catchall” performance threshold is set for performance that is manifestly poor regardless of when it is generated. This is supplemented by alerting based upon other factors flagging delivery issues – provided that your monitoring system permits these (see the second sketch following this list). Examples include:

- Payload – error pages or partial downloads will have lower byte counts. Redirect failures (e.g. to mobile devices) will have higher than expected page weights.

- Number of objects

- Specific “flag” objects

■ Ensure confirmation before triggering an alert. Some tooling will automatically generate confirmatory repeat tests; others enable triggers to be based on a specified number or percentage of total node results (also illustrated in the second sketch below).

■ Gotchas – take account of these. Good test design – for example, screening out end user test results generated over low-bandwidth connections – will improve the reliability of both alerts and results generally. As a more recent innovation, the advent of long polling / server push content can be extremely distortive of synthetic external response times, especially if such content is not consistently included. In this case, page load end points need to be defined and incorporated into test scripts to prevent false positive alerts.
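
To make the first approach above concrete, here is a minimal sketch of a time-of-week baseline: response times are bucketed by hour of the week, and an alert fires only when a new measurement exceeds that bucket’s mean by a chosen number of standard deviations. The bucket granularity and the 3-sigma threshold are illustrative assumptions, not any particular vendor’s algorithm.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean, stdev

class HourOfWeekBaseline:
    """Illustrative dynamic baseline: one bucket per hour of the week (168 in all)."""

    def __init__(self, sigma_threshold=3.0, min_samples=20):
        self.samples = defaultdict(list)   # bucket -> historical response times (s)
        self.sigma_threshold = sigma_threshold
        self.min_samples = min_samples     # suppress alerts until a bucket has history

    @staticmethod
    def bucket(ts: datetime) -> int:
        return ts.weekday() * 24 + ts.hour  # 0..167

    def record(self, ts: datetime, response_time: float) -> None:
        self.samples[self.bucket(ts)].append(response_time)

    def is_anomalous(self, ts: datetime, response_time: float) -> bool:
        history = self.samples[self.bucket(ts)]
        if len(history) < self.min_samples:
            return False                    # too little data for this time slot yet
        mu, sigma = mean(history), stdev(history)
        return response_time > mu + self.sigma_threshold * sigma
```

Because each hour of the week carries its own threshold, the lunchtime peak no longer looks like an incident, while the same response time at 3am, when the baseline is lower, still does.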

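The proxy-metric and node-confirmation ideas above combine naturally into a second sketch. The expected byte and object ranges and the “flag” object URL below are hypothetical placeholders for the page under test; the N-of-M rule mimics tooling that only raises an alert when a given share of monitoring nodes agree.

```python
from dataclasses import dataclass, field

@dataclass
class PageResult:
    """One synthetic test result from a single monitoring node."""
    total_bytes: int
    object_count: int
    object_urls: list = field(default_factory=list)

# Hypothetical expectations for the page under test
EXPECTED_BYTES = (400_000, 900_000)    # partial downloads fall below, redirect failures above
EXPECTED_OBJECTS = (40, 80)
FLAG_OBJECT = "/assets/checkout-button.js"   # an object that must always be present

def delivery_issue(result: PageResult) -> bool:
    """Flag results whose payload looks wrong even when timing is 'normal'."""
    if not EXPECTED_BYTES[0] <= result.total_bytes <= EXPECTED_BYTES[1]:
        return True
    if not EXPECTED_OBJECTS[0] <= result.object_count <= EXPECTED_OBJECTS[1]:
        return True
    return FLAG_OBJECT not in result.object_urls

def confirmed_alert(node_results: list, min_fraction: float = 0.5) -> bool:
    """Only alert when at least min_fraction of nodes see the issue (N-of-M confirmation)."""
    failing = sum(delivery_issue(r) for r in node_results)
    return failing / len(node_results) >= min_fraction
```
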
RUM based alerting presents its own difficulties. Because it is visitor traffic based, alert triggers based on a certain percentage of outliers may become distorted in very low traffic conditions. For example, a single long delivery time in a 10 minute timeslot with only 4 other “normal” visits would represent 20% of total traffic, whereas the same outlier recorded during a peak business period with 200 normal results is less than 1% of the total. RUM tooling that enables alert thresholds to be modified based on traffic volume is advantageous.
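
One simple mitigation is to gate the percentage trigger on a minimum sample size, so that a single outlier among a handful of night-time visits cannot fire the alert on its own. The thresholds below are illustrative assumptions.

```python
def rum_outlier_alert(timings, slow_threshold=8.0,
                      outlier_fraction=0.05, min_visits=50):
    """Alert on the share of slow visits, but only once traffic is meaningful.

    timings: page load times (s) observed in one timeslot.
    With min_visits=50, one slow visit among 5 (20%) is ignored,
    while 10 slow visits among 200 (5%) still trigger.
    """
    if len(timings) < min_visits:
        return False              # too little traffic to trust a percentage
    slow = sum(t > slow_threshold for t in timings)
    return slow / len(timings) >= outlier_fraction
```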

Although it does not address the “normal variation” issue, replacing binary trigger thresholds with dynamic ones (i.e. an alert state exists when the page/transaction slows by more than x% compared to its average over a trailing period) can sometimes be useful.
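
A minimal version of such a dynamic trigger, assuming a trailing window of recent measurements:

```python
from collections import deque
from statistics import mean

class RelativeSlowdownTrigger:
    """Alert when the latest measurement exceeds the trailing average by pct_slower%."""

    def __init__(self, window=100, pct_slower=50.0):
        self.history = deque(maxlen=window)   # trailing response times (s)
        self.pct_slower = pct_slower

    def check(self, response_time: float) -> bool:
        alert = False
        if len(self.history) == self.history.maxlen:   # wait for a full window
            baseline = mean(self.history)
            alert = response_time > baseline * (1 + self.pct_slower / 100)
        self.history.append(response_time)
        return alert
```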

Some form of trend state messaging (that is, condition worsening/improving) subsequent to initial alerting can serve to mitigate the amount of physical and emotional energy expended on simple “fire alarm” alerting, particularly in the middle of the night.
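
A trend update of this kind needs only consecutive in-alert measurements; the sketch below, assuming checks run on a fixed schedule, emits “worsening” or “improving” follow-ups rather than re-firing the original alarm.

```python
def trend_update(previous: float, current: float, tolerance_pct: float = 10.0) -> str:
    """Classify the change between two consecutive in-alert measurements."""
    change_pct = (current - previous) / previous * 100
    if change_pct > tolerance_pct:
        return "worsening"
    if change_pct < -tolerance_pct:
        return "improving"
    return "steady"
```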

An interesting (and long overdue) approach is to work directly on the source of the problem – download raw baseline data to a data warehouse, and apply sophisticated pattern recognition analysis. These algorithms can be developed in-house if time and appropriate skills are available, but unfortunately the mathematics is not necessarily trivial. Some standalone tooling exists and it is expected that more will follow as this approach proves its worth – the baseline management of most APM vendors represents an open goal at present.
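
As a flavour of what such in-house analysis can look like, the sketch below applies a robust outlier test, using the median and the median absolute deviation so that the outliers being hunted do not inflate the baseline they are tested against, to each seasonal bucket of warehoused data. Real implementations would go considerably further (trend removal, multiple seasonalities, forecasting), which is where the non-trivial mathematics comes in.

```python
from statistics import median

def robust_anomalies(values, z_threshold=3.5):
    """Return indices of values whose modified z-score exceeds the threshold."""
    med = median(values)
    mad = median(abs(v - med) for v in values)   # median absolute deviation
    if mad == 0:
        return []          # series is (near-)constant; nothing to flag
    return [i for i, v in enumerate(values)
            if abs(0.6745 * (v - med) / mad) > z_threshold]

# e.g. run once per hour-of-week bucket pulled from the warehouse:
# anomalies = robust_anomalies(monday_9am_response_times)
```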

Incidentally, such analysis is valuable not only for alerting but also for demand projection and capacity planning.

A few final thoughts on alerts post-generation. The more evolved alert management systems will permit conditional escalation of alerts – that is: alert a primary group first, then inform group B if the condition persists/worsens, etc. Systems allowing custom coding around alerts (such as Neustar) are useful here, as are the specific third party alert handling systems available. If using tooling that only permits basic alerting, it is worth considering integration with external alerting, either of the “standalone service” type or (in larger corporates) integral with central infrastructure management software.
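
A conditional escalation policy of the kind described reduces to a small amount of state, sketched below with hypothetical notification targets.

```python
import time

class EscalationPolicy:
    """Notify a primary group first; escalate only if the condition persists."""

    def __init__(self, notify, escalate_after_s=900):
        self.notify = notify                   # callable(group, message)
        self.escalate_after_s = escalate_after_s
        self.alert_started = None
        self.escalated = False

    def on_check(self, in_alert: bool, message: str) -> None:
        now = time.time()
        if not in_alert:
            self.alert_started, self.escalated = None, False   # condition cleared
            return
        if self.alert_started is None:
            self.alert_started = now
            self.notify("primary-oncall", message)             # hypothetical group name
        elif not self.escalated and now - self.alert_started > self.escalate_after_s:
            self.escalated = True
            self.notify("group-b", message)                    # escalate if it persists
```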

Lastly, delivery mode. Email is the basis for many systems. It is tempting to regard SMS texting as beneficial, particularly in extreme cases. However, as anyone who has been sent a text on New Year’s Eve only to have it show up 12 hours later knows, such store and forward systems can be false friends.

Larry Haig is Senior Consultant at Intechnica.
