Alert Thresholds: Aggravating Mess or Indispensable Friends?

Setting up a network or application monitoring system involves creating alerts for the critical parameters that need attention. Alerts are an integral part of monitoring: they should be easy to understand, provide actionable information, and not make excessive noise. For an alert to meet those criteria and be valuable, the right set of thresholds is essential. The real question, then, is this: how do you find the right threshold values for your alerts?

Determining and setting the right threshold value requires a deep understanding of the application, the server that hosts it and the environment where the servers reside. Also needed is an application monitoring system that simplifies the process of isolating abnormal performance patterns in your environment. In the best case, you also have tools that assist with automatic threshold determination based on your real-world environment.

The Challenge of Dynamic Environments

When an application behaves as expected, or there is no significant variation in its day-to-day behavior, setting an alert threshold is a cakewalk: you know what is normal and what is unexpected. But what if the application has no fixed baseline behavior? For applications with dynamic behavior patterns, even Subject Matter Experts (SMEs) may find it challenging to set ideal thresholds, let alone have the patience to maintain and recalibrate them over time.

Let's look at some examples where alerting is difficult because finding the right threshold is challenging. Take page files: their usage depends on workload, kernel and operating system parameters, so it differs from server to server.

Other examples are LDAP, Exchange, Lotus and the like, whose behavior depends on organization size, deployment platform and usage patterns. And then there is SQL Server, whose behavior changes based on the number of applications connected to the database.

These scenarios contribute to some major problems:

- Set the threshold too low and false alerts flood your inbox, a classic "crying wolf" situation. With your mailbox full of alarms, you have no way to identify the truly important alerts.

- Set the threshold too high and there are almost no alerts. In that case, the first "alert" you receive may be a critical-level ticket raised by a user about application performance.

- The right threshold varies from server to server and also over time. You need to constantly monitor your servers and adapt to changing usage patterns, which can mean investing time and resources in recalculating numerous threshold values far more often than you'd like. That is easier said than done.

Since thresholds change over time and differ from server to server, the time and resources spent pulling up multiple reports and recalculating thresholds, for many servers, with every change, can be huge. That is why it is imperative to use a monitoring tool that can automatically set your alert thresholds.

Your monitoring tool should be able to use the data it is already collecting for a monitored parameter and do the math to suggest the right threshold. Such a tool saves time because you don't have to revisit hundreds or thousands of metrics with every change in the network or services environment, pull reports and recalculate. Math should be reserved for more enjoyable leisure activities, like calculating subnets. With automation, you won't have to compute means and standard deviations to arrive at what you think is the right threshold. All of this reduces false alerts and lets you quickly cut through the clutter and identify critical issues before users call the helpdesk.
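To make the underlying math concrete, here is a minimal sketch in Python of the kind of calculation such a tool automates: deriving a threshold from the mean and standard deviation of the samples already collected for a metric. The function name, the three-sigma default and the sample readings are illustrative assumptions, not any particular product's algorithm.

    import statistics

    def suggest_threshold(samples, num_sigmas=3.0):
        # Derive an alert threshold from recent history: mean plus
        # num_sigmas standard deviations. The 3-sigma default is an
        # illustrative assumption, not a vendor setting.
        mean = statistics.fmean(samples)
        stdev = statistics.stdev(samples)
        return mean + num_sigmas * stdev

    # Hypothetical example: hourly page file usage readings (percent)
    # collected from one server by the monitoring tool.
    readings = [41.2, 43.5, 40.8, 46.1, 44.0, 42.7, 45.3, 43.9]
    print("Suggested threshold: %.1f%%" % suggest_threshold(readings))

Rerun per server on a rolling window of recent data, a calculation like this adapts the threshold as usage patterns drift, which is exactly the recalculation work you don't want to do by hand.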

Because application uptime is critical, automatic threshold capability leaves you with enough time to deal with the issues that really need your attention. Alert tuning shouldn't be one more item on your backlog; alerts can be dependable partners that may bring you bad news, but in the best possible way. Who ever thought you could enjoy alerts?

ABOUT Praveen Manohar

Praveen Manohar is a Head Geek at SolarWinds, a global IT management software provider based in Austin, Texas. He has 7 years of IT industry experience in roles such as Support Engineer, Product Trainer and Technical Consultant, and his expertise lies in technologies including NetFlow, Flexible NetFlow, Cisco NBAR, Cisco IPSLA, WMI and SNMP. Manohar gives strategic guidance for end users on applications, networks and performance monitoring tools.

Related Links:

www.solarwinds.com
