As enterprises work to implement or improve their observability practices, tool sprawl is a very real phenomenon. A recent Cloud Native Computing Foundation (CNCF) survey asked, “How many different tools does your organization use for monitoring, gathering logging and tracing data, and for metrics?” The results were intimidating: 72% of respondents indicated that they were using up to nine different tools, and over a fifth said they were using between 10 and 15.
Too often, these tools lack integration and interoperability. Half of the CNCF survey participants identified tool sprawl as one of the biggest challenges to their observability efforts, making it the most common challenge across all organizations.
Tool sprawl can and does happen all across the organization. In this post, though, we'll focus specifically on how and why observability efforts often result in tool sprawl, look at some of the possible negative consequences of that sprawl, and offer some advice on how to reduce or even avoid it.
What is Tool Sprawl?
Let's begin by declaring what observability tool sprawl is not. It is not simply having more than one observability tool in your stack.
A carpenter needs both a saw and a hammer to build a house. While it may be possible to pound in a nail with a saw, it's inefficient and potentially dangerous. And you'd be hard-pressed to cut lumber with a hammer. The trick is to have the right tools for the right tasks. Each tool has a specific role to play in building the house.
Sprawl, then, is having more tools than required. Sean McDermott, a consultant with decades of experience helping companies manage IT software sprawl, defines it as “the redundancy, wasteful spending and system complexity associated with the unnecessary purchase of new IT tools, and the use or misuse of stagnant, legacy systems.”
Observability Seems Particularly Prone to Sprawl
Observability efforts seem particularly vulnerable to tool sprawl. In the same CNCF survey, 4% of respondents indicated using more than 15 tools in their observability stack. Several factors contribute to this.
1. Observability is still early in its development and adoption. Google searches for observability have quadrupled since mid-2020. A recent survey showed that 58% of respondents were considered "beginners" in their observability journey, while another survey showed that 95% of organizations expected to have a fully implemented observability practice by 2025.
As a result, there is still a lot of uncertainty about best practices. Combine that uncertainty with the large number of new and established vendors attempting to secure their share of the rapidly expanding observability market, and you have a perfect environment for tool sprawl.
2. Observability is not easy, and the explosion of containerized microservices increases the difficulty exponentially. The amount of telemetry data generated by these systems is staggering and still growing. Organizations that adopted a single-platform approach to observability (e.g., send everything to Splunk) soon found the consumption-based pricing models of some of those platforms to be prohibitive and went searching for ways to reduce costs, which often meant adopting yet another tool (see the back-of-the-envelope calculation after this list).
3. Logs, metrics, and traces are often referred to as the three pillars of observability. But these are very different types of data, and tools often specialize in processing and analyzing just one of them. That's fine (remember our earlier analogy about trying to pound a nail with a saw); there is nothing wrong with using the best tool for a task. But observability products are often actually suites of tools: agents deployed on servers to gather the data, some sort of system for storing the gathered data, and an application for searching and analyzing the stored data. Often these components are vendor-specific, which can result in multiple data-gathering and forwarding agents running on each server, each sending data to its own vendor-specific backend.
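To get a rough sense of why consumption-based pricing can become prohibitive at microservice scale, consider the back-of-the-envelope calculation below. Every number in it is an illustrative assumption, not a vendor benchmark or a measured figure.

    # Back-of-the-envelope estimate of log ingest volume and cost.
    # All numbers are illustrative assumptions, not vendor benchmarks.
    services = 200        # containerized service instances
    lines_per_sec = 50    # average log lines per instance per second
    bytes_per_line = 400  # average size of a structured log line, in bytes

    gb_per_day = services * lines_per_sec * bytes_per_line * 86_400 / 1e9
    price_per_gb = 2.50   # hypothetical consumption-based price, USD per GB ingested

    monthly_cost = gb_per_day * 30 * price_per_gb
    print(f"~{gb_per_day:,.0f} GB/day ingested, ~${monthly_cost:,.0f}/month")
    # Roughly 346 GB/day and about $26,000/month, for logs alone.

Change any of those assumptions and the total moves quickly, which is why a “send everything” approach so often drives teams to bolt on additional cost-reduction tooling.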
The Consequences of Tool Sprawl
Tool sprawl results in inefficiencies, unnecessary expenses and increased risk. Common problems include:
■ Underutilization of tools that are perfectly capable of doing the job currently handled by another tool.
■ Siloing of teams as groups become entrenched in the idea that only their tool can meet their needs.
■ Increased and unnecessary complexity of the observability pipeline, resulting in greater effort by SREs to ensure that everything continues functioning.
■ Reduced efficiency of the systems being observed as more of their resources are consumed by the tools observing them.
■ Increased downtime due to longer times required to diagnose and repair problems (particularly ironic, given the purpose of implementing an observability practice).
■ Wasted budget on license renewals, training, implementation, consulting, and integration.
■ Increased security risk as every tool represents a possible attack vector.
Tips for Reducing or Avoiding Sprawl
Thankfully, tool sprawl is not inevitable, nor is it incurable if it has already infected your observability practice. Here are a few tips.
Know your needs
Identify the specific needs of your team and organization: The first step is to clearly define the goals and objectives of your observability practice and to determine the data sources, visualization and analysis tools, and integration processes needed to meet them. This will help you identify the tools you actually require and avoid selecting ones that are not well suited to your needs.
Evaluate the tools you are using
The next step is to carefully evaluate the tools you are currently using and to determine whether they are meeting the needs of your team and organization. This may involve conducting surveys or user interviews to gather feedback and analyzing data to assess the effectiveness of the tools. Look especially for opportunities for consolidation.
Adopt tools that support open standards
Perhaps the worst mistake an organization can make is adopting tools that do not support open standards. Open standards help organizations avoid vendor lock-in, enabling them to more easily swap out tools that no longer meet their needs. When an organization is locked into a particular vendor because of the effort required to rework its entire observability pipeline and platform, it is at the mercy of that vendor when it comes to contract renewals.
OpenTelemetry has become the de facto standard for telemetry data. The open-source project provides a set of standardized, vendor-agnostic APIs, SDKs, and tools for ingesting, transforming, and sending data to an observability backend, whether open source or commercial. At a minimum, you should ensure that any observability backend you adopt supports OpenTelemetry.
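As a minimal illustration (a sketch, not a definitive implementation: the service name, span name, and endpoint are placeholders, and it assumes the opentelemetry-sdk and opentelemetry-exporter-otlp Python packages are installed), an application instrumented with the OpenTelemetry SDK emits spans over OTLP to whatever backend you point it at, so switching vendors means changing an endpoint rather than re-instrumenting code:

    # Minimal OpenTelemetry tracing sketch (Python). The application emits
    # spans over OTLP, so the backend (open source or commercial) can be
    # swapped without touching the instrumentation. Names and the endpoint
    # are placeholders.
    from opentelemetry import trace
    from opentelemetry.sdk.resources import Resource
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

    # Describe the service emitting telemetry (placeholder service name).
    resource = Resource.create({"service.name": "checkout-service"})

    # Send spans via OTLP to an OpenTelemetry Collector or any
    # OTLP-compatible backend listening on the configured endpoint.
    provider = TracerProvider(resource=resource)
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
    )
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout")
    with tracer.start_as_current_span("process-order"):
        pass  # business logic would go here

If that endpoint points at an OpenTelemetry Collector, the same stream can be fanned out to more than one backend, which is also one way to consolidate the per-host, vendor-specific agents described earlier.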
Next Steps
Reducing tool sprawl can be painful, especially if you have previously invested in tools whose makers view vendor lock-in as a business strategy. However, the results are worth the effort, assuming you follow the advice above. You are likely to see substantially reduced costs, improved efficiency, faster time to insights, and better visibility into your systems.