Universal Monitoring Crimes and What to Do About Them - Part 1
May 22, 2018

Leon Adato
SolarWinds


Monitoring is a critical aspect of any data center operation, yet it often remains the black sheep of an organization's IT strategy: an afterthought rather than a core competency. Because of this, many enterprises have a monitoring solution that appears to have been built by a flock of "IT seagulls" — technicians who swoop in, drop a smelly and offensive payload, and swoop out. Over time, the result is layer upon layer of offensive payloads that are all in the same general place (your monitoring solution) but have no coherent strategy or integration.

Believe it or not, this is a salvageable scenario. By applying a few basic techniques and monitoring discipline, you can turn a disorganized pile of noise into a monitoring solution that provides actionable insight. For the purposes of this piece, let's assume you've at least implemented some type of monitoring solution within your environment.

At its core, monitoring as a foundational IT discipline is meant to help IT professionals escape the short-term, reactive mode of administration (often caused by insufficient monitoring in the first place) and become more proactive and strategic. All too often, however, organizations are instead bogged down by monitoring systems that are improperly tuned, or not tuned at all, for their environment and business needs. The result is unnecessary or incorrect alerts that introduce more chaos and noise than order and insight, which in turn causes your staff to value monitoring even less.

So, to help your organization increase data center efficiency and get the most benefit out of your monitoring solutions, here are the top five universal monitoring crimes and what you can do about them. The first three are covered in this post; the rest follow in Part 2.

1. Fixed thresholds

Monitoring systems that trigger any type of alert at a fixed value for a group of devices are the "weak tea" of solutions. While general thresholds can be established, it is virtually impossible for every single device to adhere to the same one, and extremely improbable that even a majority will.

Even a single server's utilization varies from day to day. A server that usually runs at 50 percent CPU but spikes to 95 percent at the end of the month, for example, is perfectly normal, yet a fixed threshold will flag that routine spike as a problem. The result is that many organizations create multiple versions of the same alert (CPU Alert for Windows IIS-DMZ, CPU Alert for Windows IIS-core, CPU Alert for Windows Exchange CAS, and so on). Even then, fixed thresholds usually throw more false positives than anyone wants.

What to do about it:

■ GOOD: Enable per-device (and per-service) thresholds. Whether you do this within the tool or via customizations, you should ultimately be able to set a specific threshold for each device, so that machines with a custom threshold trigger at the correct value and everything else falls back to the default.

■ BETTER: Use existing monitoring data to establish a baseline for "normal" on each device, and then trigger when usage deviates from that baseline, as sketched below. Note that edge cases (such as predictable month-end spikes) may require a second condition to help define when a threshold should actually trigger.
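
To make the baseline idea concrete, here is a minimal Python sketch, assuming you can export historical utilization samples from your monitoring tool. The function names, sample data, and the three-standard-deviations rule are illustrative choices, not any particular product's API:

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Derive a per-device baseline (mean and standard deviation)
    from historical utilization samples, e.g. the last 30 days."""
    return mean(samples), stdev(samples)

def is_anomalous(current, baseline_mean, baseline_stdev, sigmas=3):
    """Flag a reading only when it deviates from this device's own
    normal range, not from a one-size-fits-all fixed threshold."""
    return abs(current - baseline_mean) > sigmas * baseline_stdev

# Hypothetical usage: in practice, history comes from your tool's API.
history = [48.2, 51.7, 49.9, 50.4, 52.1, 47.8, 50.6]  # percent CPU
avg, dev = build_baseline(history)
print(is_anomalous(95.0, avg, dev))  # True only if this reading is
                                     # unusual for *this* machine
```

Because each device's baseline comes from its own history, a machine whose month-end spike is routine will absorb that behavior into its "normal" over time, which is exactly what a fixed threshold cannot do.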

2. Lack of monitoring system oversight

While it's certainly important to have a tool or set of tools that monitor and alert on mission-critical systems, it's also important to have some sort of system in place to identify problems within the monitoring solution itself.

What to do about it: Set up a separate instance of a monitoring solution that keeps track of the primary, or production, monitoring system. It can be another copy of the same tool or tools you use in production, or a different solution entirely, whether open source or vendor-provided.
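
As a rough illustration, the watchdog can be as simple as a small out-of-band script on a second host that checks whether the primary system is still responding. Everything below is an assumption to adapt: the health URL, check interval, and notification call are placeholders, not a real product's API:

```python
import time
import urllib.request

# Placeholder values: point these at your real monitoring server.
HEALTH_URL = "http://primary-monitor.example.com/api/health"
CHECK_INTERVAL = 60   # seconds between checks
MAX_FAILURES = 3      # consecutive failures before raising the alarm

def primary_is_healthy():
    """Return True if the primary monitoring system answers its health check."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False

failures = 0
while True:
    if primary_is_healthy():
        failures = 0
    else:
        failures += 1
        if failures >= MAX_FAILURES:
            # Notify through a channel that does NOT depend on the
            # primary system: direct SMTP, SMS gateway, paging service, etc.
            print("ALERT: primary monitoring system is unresponsive")
    time.sleep(CHECK_INTERVAL)
```

The important design point is that the watchdog's alert path must not depend on the very system it is watching.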

For another option to address this, see the discussion on lab and test environments in Part 2 of this blog.

3. Instant alerts

There are endless reasons why instant alerts (alerts triggered the moment a condition is detected) can cause chaos in your data center. For one thing, monitoring systems are not infallible and will sometimes raise false positives that don't truly require a remediation response. For another, it's not uncommon for a problem to appear for a moment and then disappear on its own. Still other problems aren't actionable until they've persisted for a certain amount of time. You get the idea.

What to do about it: Build a time delay into your monitoring system's trigger logic so that a CPU alert, for example, fires only after all of the specified conditions have persisted for something like 10 minutes. A spike lasting longer than that warrants direct intervention, while anything shorter is most likely a temporary burst of activity that doesn't indicate a true problem.
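
If your tool doesn't offer a built-in "condition must persist for N minutes" option, the underlying trigger logic looks roughly like this Python sketch; the threshold, polling interval, and simulated CPU reading are assumptions you'd replace with real values from your environment:

```python
import random
import time

THRESHOLD = 90.0            # percent CPU that counts as "high"
PERSIST_SECONDS = 10 * 60   # condition must hold this long before alerting
POLL_SECONDS = 30

def get_cpu_percent():
    # Stand-in for a real reading from your agent or monitoring API.
    return random.uniform(0, 100)

breach_started = None
while True:
    if get_cpu_percent() > THRESHOLD:
        if breach_started is None:
            breach_started = time.monotonic()   # remember when the breach began
        elif time.monotonic() - breach_started >= PERSIST_SECONDS:
            print("ALERT: CPU above threshold for 10+ minutes")
            breach_started = None               # reset so we don't re-alert every poll
    else:
        breach_started = None                   # a brief spike resets the clock
    time.sleep(POLL_SECONDS)
```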

Read Universal Monitoring Crimes and What to Do About Them - Part 2 for more monitoring tips.

Leon Adato is a Head Geek at SolarWinds