Avoiding Tool Sprawl in Your Observability Practice
January 26, 2023

Anurag Gupta
Calyptia


As enterprises work to implement or improve their observability practices, tool sprawl is a very real phenomenon. A recent Cloud Native Computing Foundation (CNCF) survey asked, "How many different tools does your organization use for monitoring, gathering logging and tracing data, and for metrics?" The results were intimidating: 72% of respondents indicated that they were using up to nine different tools, and over a fifth said they were using between 10 and 15.

Too often, these tools lack integration and interoperability. Half of the CNCF survey participants identified tool sprawl as one of the biggest challenges to their observability efforts, making it the most common challenge across all organizations.

Tool sprawl can and does happen all across the organization. In this post, though, we'll focus specifically on how and why observability efforts often result in tool sprawl, examine some of its negative consequences, and offer advice on how to reduce or even avoid it.

What is Tool Sprawl?

Let's begin by declaring what observability tool sprawl is not. It is not simply having more than one observability tool in your stack.

A carpenter needs both a saw and a hammer to build a house. While it may be possible to pound in a nail with a saw, it's inefficient and potentially dangerous. And you'd be hard-pressed to cut lumber with a hammer. The trick is to have the right tools for the right tasks. Each tool has a specific role to play in building the house.

Sprawl, then, is having more tools than required. Sean McDermott, a consultant with decades of experience helping companies manage IT software sprawl, defines it as "the redundancy, wasteful spending and system complexity associated with the unnecessary purchase of new IT tools, and the use or misuse of stagnant, legacy systems."

Observability Seems Particularly Prone to Sprawl

Observability efforts seem particularly vulnerable to tool sprawl. In the same CNCF survey, 4% of respondents indicated using more than 15 tools in their observability stack. Several factors contribute to this.

1. Observability is still early in its development and adoption. Google searches for observability have quadrupled since mid-2020. A recent survey showed that 58% of respondents were considered "beginners" in their observability journey, while another survey showed that 95% of organizations expected to have a fully implemented observability practice by 2025.

As a result, there is still a lot of uncertainty about best practices. Combine that uncertainty with the large number of new and established vendors attempting to secure their share of the rapidly expanding observability market, and you have a perfect environment for tool sprawl.

2. Observability is not easy, and the explosion of containerized microservices increases the difficulty exponentially. The amount of telemetry data generated by these systems is staggering and still growing. Organizations that adopted a single platform approach to observability (e.g., send everything to Splunk) soon found the consumption-based pricing models of some of those platforms to be prohibitive and went searching for solutions to reduce costs, which often meant adopting another tool.

3. Logs, metrics, and traces are often referred to as the three pillars of observability. But these are very different types of data, and tools often specialize in processing and analyzing just one of them. That's fine — remember our earlier analogy about trying to pound a nail with a saw — there is nothing wrong with using the best tool for a task. But observability products are often actually a suite of tools: agents deployed on servers for gathering the data, some sort of system for storing the gathered data, and an application for searching and analyzing the stored data. Often these components are vendor-specific, which sometimes results in multiple data gathering and forwarding apps running on each server, each sending data to its own vendor-specific backend (see the sketch below).
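To make that last point concrete, here is a minimal sketch, using the OpenTelemetry Python SDK, of how a single standards-based pipeline can fan the same trace data out to more than one destination instead of running a separate vendor agent per backend. The collector endpoint shown is a placeholder, not a real service.

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# One tracer provider per process, identified by a service name
provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))

# Fan out: the same spans go to an OTLP-compatible backend and to stdout.
# "otel-collector:4317" is a placeholder endpoint for illustration only.
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://otel-collector:4317", insecure=True))
)
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))

trace.set_tracer_provider(provider)

Adding or dropping a destination in this setup is a one-line change to the pipeline, not another agent installed on every server.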

The Consequences of Tool Sprawl

Tool sprawl results in inefficiencies, unnecessary expenses and increased risk. Common problems include:

■ Underutilization of tools that are perfectly capable of doing the job currently handled by another tool.

■ Siloization of teams as groups become entrenched in the idea that only their tool can meet their needs.

■ Increased and unnecessary complexity of the observability pipeline, resulting in greater effort by SREs to ensure that everything continues functioning.

■ Reduced efficiency of the systems being observed as more of their resources are consumed by the tools observing them.

■ Increased downtime due to the longer time required to diagnose and repair problems (particularly ironic, given that faster diagnosis is a main purpose of implementing an observability practice).

■ Wasted budget on license renewals, training, implementation, consulting, and integration.

■ Increased security risk as every tool represents a possible attack vector.

Tips for Reducing or Avoiding Sprawl

Thankfully, tool sprawl is neither inevitable nor incurable if it has already infected your observability practice. Here are a few tips.

Know your needs

Identify the specific needs of your team and organization: The first step is to clearly define the goals and objectives of your observability practice and to determine the specific data sources, visualization and analysis tools, and integration processes needed to meet these goals. This will help you to identify the specific tools that will be required and to avoid selecting tools that are not well-suited to your needs.

Evaluate the tools you are using

The next step is to carefully evaluate the tools you are currently using and to determine whether they are meeting the needs of your team and organization. This may involve conducting surveys or user interviews to gather feedback and analyzing data to assess the effectiveness of the tools. Look especially for opportunities for consolidation.

Adopt tools that support open standards

Perhaps the worst mistake an organization can make is adopting tools that do not support open standards. Open standards help organizations avoid vendor lock-in, enabling them to more easily swap out tools that no longer meet their needs. When an organization is locked into a particular vendor because switching would mean completely reworking its observability pipelines and platforms, it is at the mercy of that vendor when it comes to contract renewals.

OpenTelemetry has become the standard for telemetry data. The open source project provides a set of standardized, vendor-agnostic SDKs, APIs, and tools for ingesting, transforming, and sending data to an observability backend, whether open source or commercial. At a minimum, you should ensure that any observability backend you adopt supports OpenTelemetry.
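As a rough illustration of what "vendor-agnostic" means in practice, the sketch below uses the OpenTelemetry Python SDK to emit a span over OTLP. Swapping observability backends then means changing the exporter endpoint (a placeholder here), not re-instrumenting the application.

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# The only backend-specific detail is the OTLP endpoint; "backend:4317" is a placeholder.
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://backend:4317", insecure=True))
)
trace.set_tracer_provider(provider)

# Instrumentation code stays the same no matter which backend receives the data.
tracer = trace.get_tracer("orders")
with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.id", "12345")  # illustrative attribute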

Next Steps

Reducing tool sprawl can be painful, especially if you have previously invested in tools whose makers view vendor lock-in as a business strategy. However, the results are worth the effort, assuming you follow the advice above. You are likely to see substantially reduced costs, improved efficiency, faster time to insights, and better visibility into your systems.

Anurag Gupta is Co-Founder of Calyptia
