Alert Thresholds: Aggravating Mess or Indispensable Friends?

Setting up a network or application monitoring system involves creating alerts for the critical parameters that need attention. Alerts are an integral part of monitoring; they should be easy to understand, provide actionable knowledge and not make excessive noise. For an alert to be valuable and meet those criteria, the right set of thresholds is essential. That is really the question, then: how do you find the right threshold values for your alerts?

Determining and setting the right threshold value requires a deep understanding of the application, the server that hosts it and the environment where the servers reside. Also needed is an application monitoring system that simplifies isolating abnormal performance patterns in your environment. In the best case, you also have tools that assist with automatic threshold determination based on your real-world environment.

The Challenge of Dynamic Environments

When an application behaves as expected, or there is no significant variation in its day-to-day behavior, setting an alert threshold is a cakewalk: you know what is normal versus what is unexpected. But what if the application does not have a fixed baseline behavior? For applications with dynamic behavior patterns, even Subject Matter Experts (SMEs) may find it challenging to set ideal thresholds, or to have the patience to maintain and recalibrate them over time.

Let us look at some examples where alerting is difficult because finding the right threshold is challenging. Take page files: their usage depends on workload, kernel and operating system parameters, so it differs from server to server.

Other examples are LDAP, Exchange, Lotus and the like, whose behavior depends on organization size, deployment platform and usage patterns. And then there is SQL Server, whose behavior changes based on the number of applications connected to the database.
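To make that last point concrete, here is a rough Python sketch, with entirely made-up numbers, of how a single static threshold misbehaves when the "normal" level of a metric, such as database connections, shifts through the day:

# Illustrative sketch only, with hypothetical numbers: "normal" database
# connection counts shift with time of day, so one static threshold
# misfires in both directions.
typical_connections_by_hour = {3: 40, 9: 450, 14: 520, 22: 90}

STATIC_THRESHOLD = 300  # a single fixed alert threshold

for hour, typical in sorted(typical_connections_by_hour.items()):
    status = "ALERT" if typical > STATIC_THRESHOLD else "ok"
    print(f"{hour:02d}:00  typical connections {typical:4d}  -> {status}")

Perfectly normal business-hour load trips the alert, while an off-hours anomaly far above its own usual level passes silently.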

These scenarios contribute to some major problems:

- Set the threshold too low and false alerts flood your inbox, leaving you in a “crying wolf” situation: your mailbox fills with alarms and you have no way to identify the truly important alerts.

- Set the threshold too high and there are almost no alerts. In that case, the first alert you receive may be a critical-level ticket raised by a user about application performance.

- The right threshold varies from server to server and also over time. You need to constantly monitor your servers and adapt to changing usage patterns, which can mean recalculating numerous threshold values far more often than you would like. That is easier said than done.

Because thresholds change over time and from server to server, the time and resources that go into pulling up reports and recalculating thresholds for multiple servers with every change can be huge. This is why it is imperative to use a monitoring tool that can automatically set your alert thresholds.

Your monitoring tool should be able to use the data it is already collecting for a monitored parameter and do the math to suggest the right threshold. Such a tool saves time because you don't have to revisit hundreds or thousands of metrics with every change in the network or services environment, pull reports and recalculate. Math should be reserved for more enjoyable leisure activities, like calculating subnets. With automation you won't have to find mean values and standard deviations to determine what you think is the right threshold. All of this reduces false alerts and gives you the opportunity to quickly cut through the clutter and identify critical issues before users call the helpdesk.
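To show what that math looks like, here is a minimal sketch, not any particular vendor's algorithm, of the kind of calculation such a tool automates: suggesting warning and critical thresholds from the samples it has already collected, using the mean plus a few standard deviations.

# Minimal sketch of automatic threshold suggestion, assuming a simple
# mean-plus-N-standard-deviations rule; real tools may use other statistics.
from statistics import mean, stdev

def suggest_thresholds(samples, warn_sigmas=2, crit_sigmas=3):
    # The baseline is the average of historical samples; the spread decides how
    # far above "normal" a value must climb before it is worth an alert.
    baseline = mean(samples)
    spread = stdev(samples)
    return baseline + warn_sigmas * spread, baseline + crit_sigmas * spread

# Hypothetical page file usage readings (%) collected for one server.
history = [41, 44, 39, 47, 52, 45, 43, 48, 50, 42, 46, 44]
warn, crit = suggest_thresholds(history)
print(f"warn above {warn:.1f}%, critical above {crit:.1f}%")

Rerun on a schedule for each server, the same calculation keeps thresholds in step with each server's own changing behavior rather than relying on a single hand-picked value.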

Because application uptime is critical, automatic threshold capability leaves you with enough time to deal with the issues that really need your attention. Alert tuning shouldn't be one more item on your backlog; with the right thresholds, alerts can be dependable partners that may bring you bad news, but in the best possible way. Who ever thought you could enjoy alerts?

ABOUT Praveen Manohar

Praveen Manohar is a Head Geek at SolarWinds, a global IT management software provider based in Austin, Texas. He has 7 years of IT industry experience in roles such as Support Engineer, Product Trainer and Technical Consultant, and his expertise lies in technologies including NetFlow, Flexible NetFlow, Cisco NBAR, Cisco IPSLA, WMI and SNMP. Manohar gives strategic guidance for end users on applications, networks and performance monitoring tools.

Related Links:

www.solarwinds.com
