
Instrumentation Score: Quantifying Telemetry Quality

Juraci Paixão Kröhling
OllyGarden

As observability engineers, we navigate a sea of telemetry daily. We instrument our applications, configure collectors, and build dashboards, all in pursuit of understanding our complex distributed systems. Yet, amidst this flood of data, a critical question often remains unspoken, or at best, answered by gut feeling: "Is our telemetry actually good?" We see the symptoms of bad telemetry — slow incident response, sky-high observability bills, and misleading alerts — but pinpointing the root causes in the instrumentation itself, and driving consistent improvements, remains a significant challenge. What if we could move beyond subjective assessments and cultivate a more data-driven approach to telemetry quality?

For too long, evaluating instrumentation effectiveness has been a subjective exercise. We've lacked a common language or a standard measure to truly understand if our telemetry is enriching our insights or just overgrowing the plot. Today, we're not just talking about a concept; we're inviting you to participate in shaping a foundational element for better observability: the Instrumentation Score specification. OllyGarden has initiated this open-source effort to provide a standardized way to measure the quality and effectiveness of OpenTelemetry instrumentation.

What Is the Instrumentation Score?

At its core, the Instrumentation Score is a numerical value derived from analyzing OTLP (OpenTelemetry Protocol) data streams. It's not a black box. The score is calculated based on a set of rules, each targeting a specific aspect of instrumentation quality. Each rule has a defined impact (e.g., Critical, Important, Normal, Low) and a weight, allowing for a nuanced assessment of your telemetry.
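
To make the mechanics concrete, here is a minimal sketch of how a rule-weighted score could be computed. The rule names, impact weights, and the 0-100 normalization are illustrative assumptions for this example; the specification itself defines the authoritative rules and formula.

```python
from dataclasses import dataclass

# Hypothetical impact weights; the specification defines the real values.
IMPACT_WEIGHTS = {"Critical": 40, "Important": 20, "Normal": 10, "Low": 5}

@dataclass
class RuleResult:
    rule_id: str   # e.g. "missing-service-name" (illustrative name)
    impact: str    # one of the impact levels above
    passed: bool   # did the analyzed OTLP stream satisfy the rule?

def instrumentation_score(results: list[RuleResult]) -> float:
    """Weighted fraction of passing rules, scaled to 0-100."""
    total = sum(IMPACT_WEIGHTS[r.impact] for r in results)
    earned = sum(IMPACT_WEIGHTS[r.impact] for r in results if r.passed)
    return 100.0 * earned / total if total else 100.0

# Example: a single Critical failure drags the score down sharply.
results = [
    RuleResult("missing-service-name", "Critical", passed=False),
    RuleResult("high-cardinality-metric-attribute", "Important", passed=True),
    RuleResult("span-name-too-generic", "Normal", passed=True),
]
print(f"{instrumentation_score(results):.1f}")  # 42.9
```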

The primary focus is on OpenTelemetry, leveraging its semantic conventions and best practices as the bedrock for rule definitions. The score aims to be:

  • Objective: Providing a consistent, quantifiable measure, removing subjectivity.
  • Actionable: Highlighting specific areas of non-conformance so engineers know exactly what to fix.
  • Standardized: Offering a common, portable benchmark across services, teams, and even entire organizations over time.

The Role of Rules

The true power of the Instrumentation Score lies in its rules, which codify known best practices and anti-patterns that directly impact data usability, cost, and analytical value. For instance, rules can ensure fundamental data integrity by flagging telemetry missing critical attributes like service.name, which is essential for aggregation, filtering, and ownership in most backends. Other rules address common cost and performance anti-patterns, such as identifying metric attributes with excessively high cardinality that can explode database costs and cripple query performance, or detecting overly large traces that increase network overhead and storage with verbose, low-value data.
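
As a sketch of how such rules could be detected, the example below checks two of the patterns above against already-parsed telemetry: a missing service.name resource attribute, and metric attributes whose distinct-value counts exceed a threshold. The data shapes and the threshold are assumptions made for the example; a real checker would operate directly on OTLP payloads.

```python
from collections import defaultdict

CARDINALITY_THRESHOLD = 1000  # illustrative limit, not defined by the spec

def check_service_name(resource_attributes: dict) -> bool:
    """Rule: every resource must carry a non-empty service.name."""
    return bool(resource_attributes.get("service.name"))

def find_high_cardinality_attributes(datapoints: list[dict]) -> list[str]:
    """Rule: flag metric attributes whose distinct-value count explodes.

    `datapoints` is assumed to be a list of attribute maps, one per
    metric data point, e.g. {"http.route": "/users/123", ...}.
    """
    seen: dict[str, set] = defaultdict(set)
    for attrs in datapoints:
        for key, value in attrs.items():
            seen[key].add(value)
    return [key for key, values in seen.items()
            if len(values) > CARDINALITY_THRESHOLD]
```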

Furthermore, the score highlights trace completeness by pointing out broken traces or missing root spans that hinder end-to-end visibility. It also encourages efficient signal usage, for example, by discouraging the use of expensive logs for simple event counting when metrics would be a more performant and cost-effective choice. These examples merely scratch the surface; the specification is designed to be extensible, allowing the community to define rules for a wide array of scenarios, including adherence to specific semantic conventions or the use of appropriate instrumentation SDK versions.
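
Trace-completeness checks lend themselves to the same treatment. The sketch below assumes each trace has been reduced to a list of spans carrying span_id and parent_span_id, and flags both a missing root span and spans whose parents never arrived; it is an illustration of the idea, not the specification's detection logic.

```python
def audit_trace(spans: list[dict]) -> dict:
    """Check one trace for completeness.

    Each span is assumed to be {"span_id": str, "parent_span_id": str | None},
    with parent_span_id None (or empty) for the root span.
    """
    span_ids = {s["span_id"] for s in spans}
    roots = [s for s in spans if not s.get("parent_span_id")]
    orphans = [s["span_id"] for s in spans
               if s.get("parent_span_id") and s["parent_span_id"] not in span_ids]
    return {
        "has_root": len(roots) > 0,   # no root span => no end-to-end view
        "orphan_spans": orphans,      # parent never arrived => broken trace
    }
```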

The need for such a standardized approach isn't just theoretical; it's a practical challenge faced by engineering teams globally. James Moessis, Senior Software Engineer in the Observability team at Atlassian, shares this perspective: "Instrumentation Score is a much-needed innovation that fills a critical gap in the observability ecosystem. It's the kind of idea that made me think, 'We should have done this ages ago.' At Atlassian, our observability team is constantly tackling telemetry quality issues across the company's many services. With so many services, it can sometimes feel like we are trying to boil the ocean. Having a standardized 'instrumentation score' would certainly help us identify and report to teams where issues are."

James' sentiment echoes what many in the observability field experience. It underscores the value of a common yardstick to help teams prioritize improvements and communicate effectively about telemetry health. This widespread need is precisely why we believe the Instrumentation Score must be an open, collaborative effort.

An Open Invitation: Why Your Contribution Is Crucial

OllyGarden has kickstarted the Instrumentation Score specification and is committed to developing it as an open-source, community-driven standard under an open governance model, drawing support and contributions from across the industry, including companies like Dash0, New Relic, Splunk, Datadog, and Grafana Labs.

But the true strength and comprehensiveness of this score will come from you — the observability engineers in the trenches. The initial set of rules and conventions provides a solid foundation, but we all know that "good" and "bad" telemetry patterns often emerge from hard-won experience with specific technologies, platforms, or failure modes.

This is where you come in. We are actively seeking contributions to the Instrumentation Score specification:

  • Propose New Rules: Encounter a common instrumentation pitfall that isn't covered? Define it. What about rules for serverless environments, service meshes, or emerging technologies like AI/ML observability? Your insights are invaluable.
  • Refine Existing Rule Concepts: Have ideas on how to better detect a particular anti-pattern? Suggest improvements to rule definition, impact weighting, or metadata.
  • Share Edge Cases and Anti-Patterns: Help us build a comprehensive knowledge base by sharing real-world examples of telemetry that led you astray or cost you a fortune.
  • Debate and Discuss: Engage in discussions around rule definitions, ensuring they are clear, actionable, and universally applicable.

By contributing, you help codify the collective wisdom of the observability community into a practical, actionable standard. This isn't just about defining rules; it's about creating a shared understanding and a common language for telemetry excellence. You'll be shaping a tool that benefits the entire ecosystem, helps tame telemetry chaos, and ultimately makes our collective lives as engineers easier.

Getting Started and Making an Impact

The Instrumentation Score is more than just a number; it's a catalyst for conversation and continuous improvement in how we instrument our systems. It's a tool to help us all move from reactive troubleshooting to proactive telemetry optimization.

We invite you to:

1. Explore the Instrumentation Score landing page for an overview.
2. Dive into the specification on GitHub. This is where the collaborative work happens. Familiarize yourself with the current structure and rule ideas.
3. Contribute: Open an issue to discuss a new rule idea or suggest an improvement. Better yet, submit a pull request with your proposed rule definition, including its rationale, suggested severity, and how it could be detected, along the lines of the sketch below. Let's build this together.
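
To give a feel for what such a proposal might contain, here is a hypothetical rule definition sketched as structured data. The field names simply mirror the elements mentioned above; the actual schema and naming are defined in the specification repository.

```python
# Hypothetical shape of a rule proposal; the spec repository defines
# the authoritative schema.
proposed_rule = {
    "id": "metric-attribute-high-cardinality",
    "severity": "Important",
    "rationale": (
        "Unbounded attribute values (user IDs, raw request paths) inflate "
        "time-series counts, raising storage cost and slowing queries."
    ),
    "detection": (
        "Count distinct values per metric attribute over a sliding window; "
        "flag attributes exceeding a configurable threshold."
    ),
}
```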

OllyGarden is proud to have planted the first seed for the Instrumentation Score. Now, let's cultivate it together. Let's build a standard that empowers every engineer to confidently answer "Yes, our telemetry is good, and here's how we know."

Juraci Paixão Kröhling is a Software Engineer at OllyGarden, an OpenTelemetry Governing Board Member, and a CNCF Ambassador.
