Alerting Survival Strategies
July 24, 2014

Larry Haig
Intechnica


(aka – “If that monitoring system wakes me up at 3am one more time … !”)

In considering alerting, the core issue is not whether a given tool will generate alerts (anything sensible certainly will). Rather, the central problem is what could be termed the actionability of the alerts generated. Failing to flag genuine performance issues is a clear no-no, but over-alerting unfortunately has the same effect, because alerts that fire too often are rapidly ignored.

Effective alert definition hinges on the determination of “normal” performance. Simplistically, this can be understood by testing across a business cycle (ideally, a minimum of 3-4 weeks). That is fine providing performance is reasonably stable. However, that is often not the case, particularly for applications experiencing large fluctuations in demand at different times of the day, week or year.

In such cases (which are extremely common), the difficulty becomes: "on which point of the demand cycle should I base my alert threshold?" Too low, and your system is simply telling you that it's lunchtime (or the weekend, or whenever greatest demand occurs). Too high, and you will miss issues occurring during periods of lower demand.

There are several approaches to this difficulty, of varying degrees of elegance:

■ Select tooling incorporating a sophisticated baseline algorithm, capable of applying alert thresholds dynamically based on time of day/week/month etc. Surprisingly, many major tools use extremely simplistic baseline models, but some (e.g. AppDynamics APM) certainly have an approach that assists. When selecting tooling, this is definitely an area that repays investigation (a minimal sketch of the idea appears after this list).

■ Set up independent parallel (active monitoring) tests separated by “maintenance windows”, with different alert thresholds applied depending upon when they are run. This is a messy approach which comes with its own problems.

■ Look for proxies other than pure performance as alert metrics. Using this approach, a "catchall" threshold is set for performance that is manifestly poor regardless of when it occurs. This is supplemented by alerting based upon other factors that flag delivery issues – provided, of course, that your monitoring system permits these. Examples include:

- Payload – error pages or partial downloads will have lower byte counts. Redirect failures (e.g. to mobile devices) will have higher than expected page weights.

- Number of objects

- Specific “flag” objects

■ Ensure confirmation before triggering an alert. Some tooling will automatically generate confirmatory repeat testing; others enable triggers to be based on a specified number or percentage of total node results.

■ Gotchas – take account of these. Good test design, for example controlling the bandwidth of end-user tests to screen out results from low-connectivity sessions, will improve the reliability of both alerts and results generally. More recently, the advent of long polling / server push content can severely distort synthetic external response measurements, especially if it is not consistently included. In such cases, page-load end points need to be defined and incorporated into test scripts to prevent false-positive alerts.
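
To make the first approach above concrete, here is a minimal Python sketch of a time-aware baseline: it keeps a separate response-time baseline per day-of-week/hour slot and flags a measurement only when it deviates strongly from the norm for its own slot. The bucket granularity, minimum sample count and deviation multiplier are illustrative assumptions, not settings taken from any particular tool.

    from collections import defaultdict
    from datetime import datetime
    from statistics import mean, stdev

    class TimeAwareBaseline:
        """Keep one response-time baseline per (weekday, hour) slot."""

        def __init__(self, min_samples=20, sigma=3.0):
            self.history = defaultdict(list)   # (weekday, hour) -> response times
            self.min_samples = min_samples     # no alerting until a slot is populated
            self.sigma = sigma                 # deviation multiplier (assumed value)

        def _bucket(self, ts: datetime):
            return (ts.weekday(), ts.hour)

        def record(self, ts: datetime, response_time: float):
            """Feed historical measurements, ideally 3-4 weeks' worth."""
            self.history[self._bucket(ts)].append(response_time)

        def is_anomalous(self, ts: datetime, response_time: float) -> bool:
            """True if the measurement is well outside its own slot's norm."""
            samples = self.history[self._bucket(ts)]
            if len(samples) < self.min_samples:
                return False                   # too little history for this slot
            threshold = mean(samples) + self.sigma * stdev(samples)
            return response_time > threshold

Because the lunchtime peak and the overnight trough each accumulate their own baseline, the lunchtime threshold is naturally higher, which addresses the "it's simply lunchtime" problem described earlier.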

RUM-based alerting presents its own difficulties. Because it is driven by visitor traffic, alert triggers based on a certain percentage of outliers may become distorted in very low traffic conditions. For example, a single long delivery time in a 10-minute timeslot with only 4 other "normal" visits represents 20% of total traffic, whereas the same outlier recorded during a peak business period with 200 normal results is less than 1% of the total. RUM tooling that enables alert thresholds to be modified based on traffic volume is advantageous.
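
One simple safeguard against this distortion (an assumed approach, not a feature of any specific RUM product) is to require both a minimum sample count and a minimum number of outliers in the evaluation window before applying the percentage test:

    def rum_outlier_alert(samples, slow_threshold, max_outlier_pct=5.0,
                          min_samples=50, min_outliers=3):
        """Decide whether the share of slow visits in a window warrants an alert.

        samples          -- response times observed in the window (seconds)
        slow_threshold   -- what counts as a "slow" visit (seconds)
        max_outlier_pct  -- alert when slow visits exceed this share of traffic
        min_samples      -- suppress the check entirely in very low traffic
        min_outliers     -- never alert on one or two stray measurements
        """
        if len(samples) < min_samples:
            return False                  # too little traffic to judge reliably
        outliers = [s for s in samples if s > slow_threshold]
        if len(outliers) < min_outliers:
            return False
        return 100.0 * len(outliers) / len(samples) > max_outlier_pct

With these (illustrative) defaults, the single outlier among five visits in the example above is suppressed, while the same percentage test still operates normally during the 200-visit peak period.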

Although it does not address the "normal variation" issue, replacing binary trigger thresholds with dynamic ones (i.e. an alert state exists when the page/transaction slows by more than x% compared to its historical average) can sometimes be useful.
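
Such a relative trigger can be expressed very compactly; the slowdown percentage below is a placeholder rather than a recommendation:

    def relative_slowdown_alert(current, recent_history, pct=50.0):
        """True if 'current' is more than pct% slower than the recent average."""
        if not recent_history:
            return False                  # no baseline yet
        baseline = sum(recent_history) / len(recent_history)
        return current > baseline * (1 + pct / 100.0)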

Some form of trend-state messaging (that is, condition worsening/improving) subsequent to the initial alert can mitigate the physical and emotional energy expended on simple "fire alarm" alerting, particularly in the middle of the night.

An interesting (and long overdue) approach is to work directly on the source of the problem – download raw baseline data to a data warehouse, and apply sophisticated pattern recognition analysis. These algorithms can be developed in-house if time and appropriate skills are available, but unfortunately the mathematics is not necessarily trivial. Some standalone tooling exists and it is expected that more will follow as this approach proves its worth – the baseline management of most APM vendors represents an open goal at present.
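
As a deliberately simple stand-in for that sort of pattern recognition, a rolling z-score over warehoused response-time data (sketched below with pandas) flags points that deviate strongly from recent behaviour. A production implementation would also model seasonality and trend; the window length and cut-off here are assumptions.

    import pandas as pd

    def flag_anomalies(series: pd.Series, window=96, z_cutoff=3.5):
        """Flag points whose rolling z-score exceeds the cut-off.

        series   -- response times indexed by timestamp (e.g. 15-minute buckets)
        window   -- buckets in the rolling baseline (96 = one day at 15 minutes)
        z_cutoff -- standard deviations regarded as anomalous
        """
        rolling_mean = series.rolling(window, min_periods=window // 2).mean()
        rolling_std = series.rolling(window, min_periods=window // 2).std()
        z_score = (series - rolling_mean) / rolling_std
        return series[z_score > z_cutoff]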

Incidentally, such analysis is valuable not only for alerting but also for demand projection and capacity planning.

A few final thoughts on alerts post-generation. The more evolved alert management systems permit conditional escalation of alerts – that is, alert the primary group first, then inform group B if the condition persists or worsens. Systems allowing custom coding around alerts (such as Neustar) are useful here, as are dedicated third-party alert handling systems. If using tooling that only permits basic alerting, it is worth considering integration with external alerting, either of the "standalone service" type or (in larger corporates) integrated with central infrastructure management software.
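
Where only basic alerting is available, conditional escalation can also be approximated externally. In the sketch below, the group names and timings are invented for illustration; the policy simply widens the notification list the longer the condition persists.

    import time

    ESCALATION_POLICY = [
        # (minutes the condition has persisted, who to notify) -- illustrative values
        (0,  ["primary-oncall"]),
        (15, ["primary-oncall", "team-lead"]),
        (45, ["primary-oncall", "team-lead", "service-owner"]),
    ]

    def recipients_for(alert_started_at, now=None):
        """Return the notification list appropriate to the alert's age."""
        now = time.time() if now is None else now
        elapsed_min = (now - alert_started_at) / 60.0
        recipients = []
        for threshold, group in ESCALATION_POLICY:
            if elapsed_min >= threshold:
                recipients = group        # keep the widest group reached so far
        return recipients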

Lastly, delivery mode. Email is the basis for many systems. It is tempting to regard SMS texting as beneficial, particularly in extreme cases. However, as anyone who has been sent a text on New Year's Eve only to have it show up 12 hours later knows, such store-and-forward systems can be false friends.

Larry Haig is Senior Consultant at Intechnica.
