Alerting Survival Strategies

Larry Haig

(aka – “If that monitoring system wakes me up at 3am one more time … !”)

In considering alerting, the core issue is not whether a given tool will generate alerts – anything sensible certainly will. Rather, the central problem is what could be termed the actionability of the alerts generated. Failing to flag genuine performance issues is a clear no-no, but over-alerting has the same effect, as a stream of spurious alerts will rapidly be ignored.

Effective alert definition hinges on the determination of “normal” performance. Simplistically, this can be understood by testing across a business cycle (ideally, a minimum of 3-4 weeks). That is fine providing performance is reasonably stable. However, that is often not the case, particularly for applications experiencing large fluctuations in demand at different times of the day, week or year.

In such cases (which are extremely common), the difficulty becomes “on which point of the demand cycle should I base my alert threshold?” Set it too low, and your system is simply telling you that it’s lunchtime (or the weekend, or whenever greatest demand occurs). Set it too high, and you will miss issues occurring during periods of lower demand.

There are several approaches to this difficulty, of varying degrees of elegance:

■ Select tooling incorporating a sophisticated baseline algorithm, capable of applying alert thresholds dynamically based on time of day/week/month etc. Surprisingly, many major tools use extremely simplistic baseline models, but some (e.g. AppDynamics APM) certainly have an approach that assists. When selecting tooling, this is definitely an area that repays investigation (a minimal sketch of the idea follows this list).

■ Set up independent parallel (active monitoring) tests separated by “maintenance windows”, with different alert thresholds applied depending upon when they are run. This is a messy approach which comes with its own problems.

■ Look for proxies other than pure performance as alert metrics. Using this approach, a “catchall” threshold is set for performance that is manifestly poor regardless of when it occurs. This is supplemented by alerting on other factors that flag delivery issues – provided, of course, that your monitoring system supports them. Examples include:

- Payload – error pages or partial downloads will have lower byte counts. Redirect failures (e.g. to mobile devices) will have higher than expected page weights.

- Number of objects

- Specific “flag” objects

■ Ensure confirmation before triggering an alert. Some tooling will automatically generate confirmatory repeat tests; others enable triggers to be based on a specified number or percentage of total node results (the sketch after this list includes a simple quorum check of this kind).

■ Take account of the gotchas. Good test design – for example, controlling the bandwidth of end user testing to screen out results from low-connectivity tests – will improve the reliability of both alerts and results generally. More recently, the advent of long polling / server push content can severely distort synthetic external response measurements, especially if such content is not consistently included. In this case, page load end points need to be defined and incorporated into test scripts to prevent false positive alerts.
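To make the time-aware baseline and confirmation points above concrete, below is a minimal Python sketch of a time-of-week baseline with multi-node confirmation. The (weekday, hour) bucket granularity, the 3-sigma threshold and the 3-node quorum are illustrative choices, not any vendor’s algorithm; `history` is assumed to hold (timestamp, response-seconds) pairs collected across a full business cycle.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean, stdev

def build_baseline(history):
    """Bucket historical samples by (weekday, hour) so each time slot
    gets its own notion of "normal"."""
    buckets = defaultdict(list)
    for ts, value in history:
        buckets[(ts.weekday(), ts.hour)].append(value)
    # stdev() needs at least two samples per bucket
    return {k: (mean(v), stdev(v)) for k, v in buckets.items() if len(v) >= 2}

def is_anomalous(baseline, ts, value, sigmas=3.0):
    """Breach test against the threshold for THIS time slot."""
    stats = baseline.get((ts.weekday(), ts.hour))
    if stats is None:
        return False  # no history for this slot: stay quiet rather than guess
    mu, sd = stats
    return value > mu + sigmas * sd

def confirmed_breach(node_values, baseline, ts, quorum=3):
    """Per the confirmation bullet: several nodes must agree before an
    alert fires, screening out single-node blips."""
    return sum(is_anomalous(baseline, ts, v) for v in node_values) >= quorum

# Illustrative usage:
# baseline = build_baseline(history)   # history: [(datetime, seconds), ...]
# if confirmed_breach(latest_node_values, baseline, datetime.now()):
#     ...  # raise the alert
```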

RUM-based alerting presents its own difficulties. Because it is based on visitor traffic, alert triggers defined as a percentage of outliers may become distorted in very low traffic conditions. For example, a single long delivery time in a 10-minute timeslot with only 4 other “normal” visits represents 20% of total traffic, whereas the same outlier recorded during a peak business period with 200 normal results is less than 1% of the total. RUM tooling that enables alert thresholds to be modified based on traffic volume is advantageous.
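One defensive pattern, sketched below under illustrative assumptions (the 50-sample floor and 5% outlier rate are arbitrary figures, not a product default), is to require a minimum sample count in the window before a percentage-of-outliers rule is allowed to fire:

```python
def rum_outlier_alert(durations, slow_seconds, outlier_pct=5.0, min_samples=50):
    """Percentage-of-outliers trigger with a low-traffic guard.

    One slow visit among 5 is 20% of traffic; among 201 it is under 1%.
    Requiring min_samples stops sparse windows tripping the rule."""
    if len(durations) < min_samples:
        return False  # too little traffic to trust a percentage trigger
    outliers = sum(1 for d in durations if d > slow_seconds)
    return 100.0 * outliers / len(durations) >= outlier_pct
```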

Although it does not address the “normal variation” issue, replacing static trigger thresholds with dynamic ones (i.e. an alert state exists when the page/transaction slows by more than x% compared with its trailing average) can sometimes be useful.
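Expressed as code, that trigger reduces to a one-line comparison (the 30% figure below is purely illustrative):

```python
def relative_slowdown(current, trailing, max_slowdown_pct=30.0):
    """Alert when the page/transaction is more than x% slower than its
    trailing average; x (here 30) is an illustrative choice."""
    if not trailing:
        return False
    avg = sum(trailing) / len(trailing)
    return current > avg * (1.0 + max_slowdown_pct / 100.0)
```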

Some form of trend-state messaging (that is, condition worsening/improving) subsequent to the initial alert can mitigate the physical and emotional energy expended on simple “fire alarm” alerting, particularly in the middle of the night.

An interesting (and long overdue) approach is to work directly on the source of the problem – download raw baseline data to a data warehouse, and apply sophisticated pattern recognition analysis. These algorithms can be developed in-house if time and appropriate skills are available, but unfortunately the mathematics is not necessarily trivial. Some standalone tooling exists and it is expected that more will follow as this approach proves its worth – the baseline management of most APM vendors represents an open goal at present.
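As one simple example of the kind of analysis that can be run over warehoused data, the sketch below flags points whose robust (median/MAD-based) z-score departs sharply from a trailing window. This is a starting point only, not any vendor’s method; production systems would layer in seasonality and trend modelling, which is where the non-trivial mathematics comes in.

```python
import statistics

def robust_zscore(value, window):
    """Median/MAD z-score: far less distorted by the outliers we are
    hunting than a mean/stdev baseline would be."""
    med = statistics.median(window)
    mad = statistics.median(abs(x - med) for x in window)
    if mad == 0:
        return 0.0
    return (value - med) / (1.4826 * mad)  # 1.4826 rescales MAD ~ stdev

def scan(series, window_size=288, z_threshold=4.0):
    """Slide over a warehoused series (288 = one day of 5-minute samples)
    and yield the indices of anomalous points."""
    for i in range(window_size, len(series)):
        if abs(robust_zscore(series[i], series[i - window_size:i])) > z_threshold:
            yield i
```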

Incidentally, such analysis is valuable not only for alerting but also for demand projection and capacity planning.

A few final thoughts on alerts post-generation. The more evolved alert management systems permit conditional escalation of alerts – that is: alert this primary group first, then inform group B if the condition persists or worsens, and so on. Systems allowing custom coding around alerts (such as Neustar) are useful here, as are the dedicated third-party alert handling systems available. If using tooling that only permits basic alerting, it is worth considering integration with external alerting, either of the “standalone service” type or (in larger corporates) integral with central infrastructure management software.
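For tooling that only exposes basic alerting, the escalation logic itself is straightforward to sketch externally. The group names and the notify() stand-in below are placeholders, not any particular product’s API:

```python
import time

ESCALATION_CHAIN = ["primary-oncall", "team-b", "duty-manager"]  # placeholders

def notify(group, message):
    print(f"[alert] -> {group}: {message}")  # stand-in for email/SMS/pager call

def escalate_while_active(condition_active, message,
                          persist_seconds=900, poll_seconds=60):
    """Alert the primary group first, then inform the next group in the
    chain each time the condition persists for another persist_seconds."""
    notify(ESCALATION_CHAIN[0], message)
    level, waited = 0, 0
    while condition_active() and level < len(ESCALATION_CHAIN) - 1:
        time.sleep(poll_seconds)
        waited += poll_seconds
        if waited >= persist_seconds:
            level, waited = level + 1, 0
            notify(ESCALATION_CHAIN[level], f"ESCALATED: {message}")
```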

Lastly, delivery mode. Email is the basis of many systems. It is tempting to regard SMS texting as beneficial, particularly in extreme cases. However, as anyone who has been sent a text on New Year’s Eve, only to have it show up 12 hours later, will know, such store-and-forward systems can be false friends.

Larry Haig is Senior Consultant at Intechnica.
