Monitoring Tool Sprawl - Part 1: Hidden Management Complexity and Cost Overruns
July 31, 2017

Vinod Mohan
eG Innovations


As businesses have become increasingly reliant on technology, monitoring applications and infrastructure is a necessity. Monitoring is a key component of IT management, helping detect anomalies, triage issues, and ensure that the entire infrastructure is healthy.

However, despite their importance, monitoring tools are often an afterthought, deployed only after an IT infrastructure is in place and functioning. Without a planned, well-defined monitoring strategy, most IT organizations, large and small, find themselves caught in the trap of "too many monitoring tools": custom in-house tools, open source tools, packaged tools, and more, accumulating over time for a variety of reasons.


A recent survey by EMA indicated that 65% of enterprise organizations have more than 10 monitoring tools. These monitoring tools are, of course, not all unnecessary, but the real question is: Does your team need to manage so many monitoring tools? Does every nail require a different hammer? What are the potential consequences?

There are many reasons why enterprises end up with too many monitoring tools. This blog will examine why this occurs, how the situation gets out of hand, and some best practices for consolidating monitoring in a way that benefits every function and improves efficiency across the IT organization.

Monitoring Sprawl: How Did We Get Here?

Specialized Requirements

A single IT service often relies on many technologies and tiers. For example, a web service requires a web server, multiple middleware tiers, message queues, and databases. It is hosted on a virtualized server and relies on data access from a storage tier. Since each of these technology tiers is very different from the others, each requires specialized management skills. IT organizations tend to be structured along the lines of these tiers, so there are many administrators, each using a different set of tools for their domain of expertise.

Even within a specific tier, multiple monitoring tools may be in use: One for monitoring performance, another for analyzing log files, yet another to report on traffic to that tier, and so on.

Further, when an organization frequently relies on short-term solutions to diagnose problems, ad hoc tool choices lead to further sprawl. Faced with a problem, an IT administrator may implement a new tool simply to solve the specific issue at hand, never to use it again, contributing to a growing collection of monitoring tool shelfware that consumes budget and personnel resources.

Another reason for monitoring tool sprawl is simply personal experience with a particular software solution. IT administrators and managers may have used a monitoring tool in past roles that they view as essential for the job. Even when one or more monitoring tools are already in place, the familiar tool gets implemented, rendering the existing solutions partially or completely redundant.

Inheritance and Bundles

Mergers and acquisitions can also add to the software sprawl: every time two organizations merge, the combined organization inherits the monitoring tools of both.

Many hardware purchases include proprietary monitoring software. With almost every storage vendor bundling its own monitoring tool, an organization leveraging storage arrays from multiple vendors can easily end up with a diverse group of storage monitoring tools.

And software vendors sometimes bundle monitoring tools into their enterprise licensing agreements as well, so organizations that sign these agreements can find themselves with yet another tool.

SaaS-Based Monitoring Options & Freeware

With the advent of quick-to-deploy SaaS-based monitoring tools, it has become very easy for organizations to keep adding them. SaaS-based helpdesks, monitoring tools, security tools, and more can be purchased directly from operating budgets, and IT staff members can deploy their own open source and free tools as needed. All of these add to the overall number of monitoring tools the organization must maintain.

The Problem of Too Many Tools

Needle in the Haystack

Although each monitoring tool offers its own focus and strengths, overlap in functionality is extremely common. And because there is no integration between these tools, problem diagnosis, perhaps the most critical factor in fast remediation, becomes tedious and time-consuming in today's environment of many tiers and many monitoring tools. Administrators must first sift through alerts from disparate sources, eliminate duplicates, and then manually correlate the reported performance issues to get actionable insights. Further complicating this process, analyzing alerts across tiers often requires deep expertise, adding more resources and more time.

For fast remediation in multi-tier service delivery, problem diagnosis must be centralized and automated, which cannot be achieved easily with multiple tools. Finding the needle in the haystack is difficult enough; with what appear to be duplicate needles across many haystacks, it is easy to be led astray and waste valuable time and resources.
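To make the deduplicate-then-correlate workflow concrete, here is a minimal sketch in Python of the kind of processing a centralized monitoring layer automates. The alert fields and values below (source, component, message, timestamp) are hypothetical, invented purely for illustration, not the schema of any particular product:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical alerts as they might arrive from separate, non-integrated tools.
alerts = [
    {"source": "app-apm",  "component": "web",      "message": "High response time", "timestamp": datetime(2017, 7, 31, 9, 0, 5)},
    {"source": "db-tool",  "component": "database", "message": "Slow queries",       "timestamp": datetime(2017, 7, 31, 9, 0, 12)},
    {"source": "net-tool", "component": "web",      "message": "High response time", "timestamp": datetime(2017, 7, 31, 9, 0, 7)},
]

def deduplicate(alerts):
    """Collapse alerts that report the same symptom on the same component."""
    seen = {}
    for alert in alerts:
        key = (alert["component"], alert["message"])
        # Keep only the earliest report of each distinct symptom.
        if key not in seen or alert["timestamp"] < seen[key]["timestamp"]:
            seen[key] = alert
    return list(seen.values())

def correlate(alerts, window=timedelta(seconds=30)):
    """Group alerts that fired within the same time window: one likely incident."""
    incidents = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["timestamp"]):
        # Bucket by window so near-simultaneous alerts land together.
        bucket = alert["timestamp"].timestamp() // window.total_seconds()
        incidents[bucket].append(alert)
    return list(incidents.values())

for incident in correlate(deduplicate(alerts)):
    components = sorted({a["component"] for a in incident})
    print(f"Probable single incident spanning: {', '.join(components)}")
```

Run on the sample data, this collapses the two duplicate "web" alerts and groups the remaining web and database alerts into one probable incident, which is exactly the manual correlation work that consumes administrators' time when every tool reports in isolation.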

Of War Rooms and Blame Games

Most monitoring tools are designed for specific subject-matter experts (application, database, network, VDI, etc.). Without unified visibility into the IT environment, war room discussions easily turn into finger-pointing: the application owner blames the network tier for slowness, the database administrator blames developers who wrote suboptimal queries, virtualization administrators point to the storage team, and so on.

Everyone believes it is "not my problem." But there is a problem somewhere, and without a single source of truth, a holistic view of service performance, no one has visibility into what went wrong and where the fix is needed. So additional time and effort are needed to manually correlate events and solve the problem, while the business and its users suffer.

Time and Money

Maintaining a sprawl of monitoring tools adds cost on many levels. There are hard costs for license renewals and maintenance, plus the time spent on support requests, working with the various vendors, deploying upgrades, and training personnel on multiple tools. All of these raise the total cost of ownership, with the cost of maintaining shelfware and redundant tools being the most wasteful of all.
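To see how these line items compound, here is a back-of-the-envelope sketch. Every figure is a hypothetical assumption chosen for illustration, not actual pricing; only the tool count echoes the ten-tool threshold from the EMA survey cited above:

```python
# Hypothetical annual TCO model; all dollar figures are assumptions.
tools           = 10     # tools in the portfolio (the EMA survey threshold)
license_renewal = 5_000  # annual license/maintenance cost per tool ($)
admin_hours     = 40     # yearly hours per tool on upgrades, support, training
hourly_rate     = 75     # loaded cost per administrator hour ($)

hard_costs = tools * license_renewal              # renewals and maintenance
soft_costs = tools * admin_hours * hourly_rate    # staff time across all tools

print(f"Hard costs:  ${hard_costs:,}")                # $50,000
print(f"Soft costs:  ${soft_costs:,}")                # $30,000
print(f"Annual TCO:  ${hard_costs + soft_costs:,}")   # $80,000
```

Even with these modest assumed numbers, the soft costs rival the license spend, and any shelfware in the portfolio incurs both with no return.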

Go to Monitoring Tool Sprawl - Part 2: How to Achieve Unified IT Monitoring

Vinod Mohan is Senior Manager, Product Marketing, at eG Innovations
