Setting up a network or application monitoring system involves creating alerts for critical parameters that need attention. Alerts are an integral part of monitoring: they should be easy to understand, provide actionable knowledge, and not make excessive noise. For an alert to be valuable and meet those criteria, the right set of thresholds is essential. Which raises the question: how do you find the right threshold values for your alerts?
Determining and setting the right threshold value requires a deep understanding of the application, the server that hosts it, and the environment where the servers reside. Also needed is an application monitoring system that makes it simple to isolate abnormal performance patterns in your environment. In the best case, you also have tools that assist with automatic threshold determination based on your real-world environment.
The Challenge of Dynamic Environments
When an application behaves as expected, or there is no significant variation in its day-to-day behavior, setting an alert threshold is a cakewalk: you know what is normal versus what is unexpected. But what if the application has no fixed baseline behavior? For applications with dynamic behavior patterns, even Subject Matter Experts (SMEs) may find it challenging to set ideal thresholds, let alone muster the patience to maintain and recalibrate them over time.
Let us look at some examples where alerting is difficult because finding the right threshold is challenging. Take page files: their usage depends on workload, kernel, and operating system parameters, so it differs from server to server.
Other examples are LDAP, Exchange, Lotus, and the like, whose behavior depends on organization size, deployment platform, and usage patterns. And then there is SQL Server, whose behavior changes based on the number of applications connected to the database.
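One common way monitoring tools handle such variable behavior is dynamic baselining: learning what is "normal" per server and per time window rather than applying one global threshold. Here is a minimal sketch of the idea, assuming hourly buckets; the data and bucketing scheme are illustrative, not any particular product's algorithm:

```python
from collections import defaultdict
from datetime import datetime

# Historical (timestamp, value) samples for one server's metric,
# e.g. page file usage percentages (values are illustrative)
history = [
    (datetime(2023, 10, 2, 9, 0), 55.0),
    (datetime(2023, 10, 2, 14, 0), 82.0),
    (datetime(2023, 10, 3, 9, 0), 58.0),
    (datetime(2023, 10, 3, 14, 0), 85.0),
]

# Learn a separate baseline per hour of day, so "normal" at 9am
# is allowed to differ from "normal" at 2pm
buckets = defaultdict(list)
for ts, value in history:
    buckets[ts.hour].append(value)

baselines = {hour: sum(vals) / len(vals) for hour, vals in buckets.items()}
print(baselines)  # {9: 56.5, 14: 83.5}
```

Without that kind of adaptive help, static thresholds leave you exposed.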
These scenarios contribute to some major problems:
- You set the threshold too low and false alerts flood your inbox, leaving you in a “crying wolf” situation with no way to pick the truly important alerts out of the noise.
- You set the threshold too high and almost no alerts fire. In that case, the first alert you receive may be a critical-level ticket raised by a user about application performance.
- The right threshold varies from server to server and over time. You need to constantly monitor your servers and adapt to changing usage patterns, which can mean investing time and resources in recalculating numerous threshold values far more often than you'd like. That is easier said than done.
Since thresholds change over time and differ from server to server, the time and resources that go into pulling up multiple reports and recalculating thresholds, for multiple servers, with every change, can be huge. This is why it is imperative to use a monitoring tool that can automatically set your alert thresholds.
Your monitoring tool should be able to use the data it is already collecting for a monitored parameter and do the math to suggest the right threshold. Such a tool saves time because you don't have to revisit hundreds or thousands of metrics with every change in the network or services environment, pull reports, and recalculate. Math should be reserved for more enjoyable leisure activities, like calculating subnets. With automation you won't have to find mean values and standard deviations just to determine what you think is the right threshold. All of this reduces false alerts and gives you a chance to quickly cut through the clutter and identify critical issues before users call the helpdesk.
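To make the math concrete, here is a minimal sketch of the kind of calculation such a tool automates, assuming a simple mean-plus-standard-deviations model; the function name, sample data, and sigma multipliers are illustrative, not any vendor's actual algorithm:

```python
import statistics

def suggest_threshold(samples, sigmas=3.0):
    """Suggest an alert threshold as mean + N standard deviations.

    A smaller `sigmas` alerts earlier but risks more noise; a
    larger one stays quiet until behavior is clearly abnormal.
    """
    return statistics.mean(samples) + sigmas * statistics.pstdev(samples)

# A week of daily peak page file usage readings (percent, illustrative)
readings = [42.0, 44.5, 41.2, 47.8, 43.3, 45.1, 46.0]
print(f"warning  above {suggest_threshold(readings, sigmas=2.0):.1f}%")
print(f"critical above {suggest_threshold(readings, sigmas=3.0):.1f}%")
```

Rerun per server on a rolling window of recent data, this is exactly the recalibration work described above: tedious by hand, trivial to automate.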
Because application uptime is critical, automatic threshold capability leaves you with enough time to deal with issues that really need your attention. Alert tuning shouldn't be one more item on your backlog; alerts can be dependable partners who may bring you bad news, but in the best possible way. Who ever thought you could enjoy alerts?
ABOUT Praveen Manohar
Praveen Manohar is a Head Geek at SolarWinds, a global IT management software provider based in Austin, Texas. He has 7 years of IT industry experience in roles such as Support Engineer, Product Trainer and Technical Consultant, and his expertise lies in technologies including NetFlow, Flexible NetFlow, Cisco NBAR, Cisco IPSLA, WMI and SNMP. Manohar gives strategic guidance for end users on applications, networks and performance monitoring tools.