As observability engineers, we navigate a sea of telemetry daily. We instrument our applications, configure collectors, and build dashboards, all in pursuit of understanding our complex distributed systems. Yet, amidst this flood of data, a critical question often remains unspoken, or at best, answered by gut feeling: "Is our telemetry actually good?" We see the symptoms of bad telemetry — slow incident response, sky-high observability bills, and misleading alerts — but pinpointing the root causes in the instrumentation itself, and driving consistent improvements, remains a significant challenge. What if we could move beyond subjective assessments and cultivate a more data-driven approach to telemetry quality?
For too long, evaluating instrumentation effectiveness has been a subjective exercise. We've lacked a common language or a standard measure to truly understand if our telemetry is enriching our insights or just overgrowing the plot. Today, we're not just talking about a concept; we're inviting you to participate in shaping a foundational element for better observability: the Instrumentation Score specification. OllyGarden has initiated this open-source effort to provide a standardized way to measure the quality and effectiveness of OpenTelemetry instrumentation.
What Is the Instrumentation Score?
At its core, the Instrumentation Score is a numerical value derived from analyzing OTLP (OpenTelemetry Protocol) data streams. It's not a black box. The score is calculated based on a set of rules, each targeting a specific aspect of instrumentation quality. Each rule has a defined impact (e.g., Critical, Important, Normal, Low) and a weight, allowing for a nuanced assessment of your telemetry.
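To make that concrete, here is a minimal sketch of how a weighted, rule-based score could be computed from rule results. The rule names, impact weights, and formula below are illustrative assumptions for this post, not the specification's actual rule set or algorithm:

```python
from dataclasses import dataclass

# Illustrative impact weights; the specification defines its own impact
# levels and weighting, so treat these values as placeholders.
IMPACT_WEIGHTS = {"Critical": 10.0, "Important": 5.0, "Normal": 2.0, "Low": 1.0}

@dataclass
class RuleResult:
    name: str      # hypothetical rule id, e.g. "resource-has-service-name"
    impact: str    # "Critical" | "Important" | "Normal" | "Low"
    passed: bool   # did the analyzed OTLP stream satisfy the rule?

def instrumentation_score(results: list[RuleResult]) -> float:
    """Return a 0-100 score: the weighted share of rules that passed."""
    total = sum(IMPACT_WEIGHTS[r.impact] for r in results)
    if total == 0:
        return 100.0
    earned = sum(IMPACT_WEIGHTS[r.impact] for r in results if r.passed)
    return 100.0 * earned / total

# One Critical failure drags the score down far more than a Low one would.
print(instrumentation_score([
    RuleResult("resource-has-service-name", "Critical", passed=False),
    RuleResult("metric-cardinality-within-limit", "Important", passed=True),
    RuleResult("span-name-not-high-cardinality", "Low", passed=True),
]))  # -> 37.5
```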
The primary focus is on OpenTelemetry, leveraging its semantic conventions and best practices as the bedrock for rule definitions. The score aims to be:
- Objective: Providing a consistent, quantifiable measure, removing subjectivity.
- Actionable: Highlighting specific areas of non-conformance so engineers know exactly what to fix.
- Standardized: Offering a common and portable benchmark for services, teams, or even across organizations over time.
The Role of Rules
The true power of the Instrumentation Score lies in its rules, which codify known best practices and anti-patterns that directly impact data usability, cost, and analytical value. For instance, rules can ensure fundamental data integrity by flagging telemetry missing critical attributes like service.name, which is essential for aggregation, filtering, and ownership in most backends. Other rules address common cost and performance anti-patterns, such as identifying metric attributes with excessively high cardinality that can explode database costs and cripple query performance, or detecting overly large traces that increase network overhead and storage with verbose, low-value data.
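As a rough illustration, checks like these could be expressed as small analysis functions over data extracted from OTLP. The data shapes (plain dicts of attribute values) and the cardinality threshold are assumptions made for the sketch, not part of the specification:

```python
def find_missing_service_name(resources: list[dict]) -> list[int]:
    """Return indices of resources whose attributes lack a non-empty service.name.

    `resources` is assumed to be a list of resource-attribute dicts extracted
    from OTLP, e.g. [{"service.name": "checkout", "host.name": "..."}, ...].
    """
    return [
        i for i, attrs in enumerate(resources)
        if not str(attrs.get("service.name") or "").strip()
    ]

def high_cardinality_attributes(datapoint_attrs: list[dict], limit: int = 100) -> dict[str, int]:
    """Flag metric attribute keys whose distinct-value count exceeds `limit`.

    `datapoint_attrs` is assumed to hold the attribute sets of data points for
    a single metric; the threshold is arbitrary and would be rule-configurable.
    """
    distinct: dict[str, set] = {}
    for attrs in datapoint_attrs:
        for key, value in attrs.items():
            distinct.setdefault(key, set()).add(value)
    return {key: len(values) for key, values in distinct.items() if len(values) > limit}
```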
Furthermore, the score highlights trace completeness by pointing out broken traces or missing root spans that hinder end-to-end visibility. It also encourages efficient signal usage, for example, by discouraging the use of expensive logs for simple event counting when metrics would be a more performant and cost-effective choice. These examples merely scratch the surface; the specification is designed to be extensible, allowing the community to define rules for a wide array of scenarios, including adherence to specific semantic conventions or the use of appropriate instrumentation SDK versions.
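A trace-completeness check could follow the same pattern: group the spans observed for each trace, then flag traces that have no root span or that reference parents that never arrived. Again, the data shape and rule boundaries here are illustrative assumptions; a real analyzer would also need to account for traces still in flight:

```python
from dataclasses import dataclass

@dataclass
class SpanRef:
    trace_id: str
    span_id: str
    parent_span_id: str | None  # None or "" for a root span

def broken_traces(spans: list[SpanRef]) -> dict[str, list[str]]:
    """Map trace_id -> problems found ("no-root-span", "missing-parent")."""
    by_trace: dict[str, list[SpanRef]] = {}
    for span in spans:
        by_trace.setdefault(span.trace_id, []).append(span)

    problems: dict[str, list[str]] = {}
    for trace_id, trace_spans in by_trace.items():
        span_ids = {s.span_id for s in trace_spans}
        issues = []
        if not any(not s.parent_span_id for s in trace_spans):
            issues.append("no-root-span")
        if any(s.parent_span_id and s.parent_span_id not in span_ids for s in trace_spans):
            issues.append("missing-parent")
        if issues:
            problems[trace_id] = issues
    return problems
```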
The need for such a standardized approach isn't just theoretical; it's a practical challenge faced by engineering teams globally. James Moessis, Senior Software Engineer in the Observability team at Atlassian, shares this perspective: "Instrumentation Score is a much-needed innovation that fills a critical gap in the observability ecosystem. It's the kind of idea that made me think, 'We should have done this ages ago.' At Atlassian, our observability team is constantly tackling telemetry quality issues across the company's many services. With so many services, it can sometimes feel like we are trying to boil the ocean. Having a standardized 'instrumentation score' would certainly help us identify and report to teams where issues are."
James' sentiment echoes what many in the observability field experience. It underscores the value of a common yardstick to help teams prioritize improvements and communicate effectively about telemetry health. This widespread need is precisely why we believe the Instrumentation Score must be an open, collaborative effort.
An Open Invitation: Why Your Contribution Is Crucial
OllyGarden has kickstarted the Instrumentation Score specification and is committed to its development as an open-source, community-driven standard. The initiative follows an open governance model and is already drawing support and contributions from across the industry, including companies like Dash0, New Relic, Splunk, Datadog, and Grafana Labs.
But the true strength and comprehensiveness of this score will come from you — the observability engineers in the trenches. The initial set of rules and conventions provides a solid foundation, but we all know that "good" and "bad" telemetry patterns often emerge from hard-won experience with specific technologies, platforms, or failure modes.
This is where you come in. We are actively seeking contributions to the Instrumentation Score specification:
- Propose New Rules: Encounter a common instrumentation pitfall that isn't covered? Define it. What about rules for serverless environments, service meshes, or emerging technologies like AI/ML observability? Your insights are invaluable.
- Refine Existing Rule Concepts: Have ideas on how to better detect a particular anti-pattern? Suggest improvements to rule definitions, impact weightings, or metadata.
- Share Edge Cases and Anti-Patterns: Help us build a comprehensive knowledge base by sharing real-world examples of telemetry that led you astray or cost you a fortune.
- Debate and Discuss: Engage in discussions around rule definitions, ensuring they are clear, actionable, and universally applicable.
By contributing, you help codify the collective wisdom of the observability community into a practical, actionable standard. This isn't just about defining rules; it's about creating a shared understanding and a common language for telemetry excellence. You'll be shaping a tool that benefits the entire ecosystem, helps tame telemetry chaos, and ultimately makes our collective lives as engineers easier.
Getting Started and Making an Impact
The Instrumentation Score is more than just a number; it's a catalyst for conversation and continuous improvement in how we instrument our systems. It's a tool to help us all move from reactive troubleshooting to proactive telemetry optimization.
We invite you to:
1. Explore the Instrumentation Score landing page for an overview.
2. Dive into the specification on GitHub. This is where the collaborative work happens. Familiarize yourself with the current structure and rule ideas.
3. Contribute: Open an issue to discuss a new rule idea or suggest an improvement. Better yet, submit a pull request with your proposed rule definition, including its rationale, suggested severity, and how it could be detected. Let's build this together.
OllyGarden is proud to have planted the first seed for the Instrumentation Score. Now, let's cultivate it together. Let's build a standard that empowers every engineer to confidently answer "Yes, our telemetry is good, and here's how we know."