Instrumentation Score: Quantifying Telemetry Quality

Juraci Paixão Kröhling
OllyGarden

As observability engineers, we navigate a sea of telemetry daily. We instrument our applications, configure collectors, and build dashboards, all in pursuit of understanding our complex distributed systems. Yet, amidst this flood of data, a critical question often remains unspoken, or at best, answered by gut feeling: "Is our telemetry actually good?" We see the symptoms of bad telemetry — slow incident response, sky-high observability bills, and misleading alerts — but pinpointing the root causes in the instrumentation itself, and driving consistent improvements, remains a significant challenge. What if we could move beyond subjective assessments and cultivate a more data-driven approach to telemetry quality?

For too long, evaluating instrumentation effectiveness has been a subjective exercise. We've lacked a common language or a standard measure to truly understand if our telemetry is enriching our insights or just overgrowing the plot. Today, we're not just talking about a concept; we're inviting you to participate in shaping a foundational element for better observability: the Instrumentation Score specification. OllyGarden has initiated this open-source effort to provide a standardized way to measure the quality and effectiveness of OpenTelemetry instrumentation.

What Is the Instrumentation Score?

At its core, the Instrumentation Score is a numerical value derived from analyzing OTLP (OpenTelemetry Protocol) data streams. It's not a black box. The score is calculated based on a set of rules, each targeting a specific aspect of instrumentation quality. Each rule has a defined impact (e.g., Critical, Important, Normal, Low) and a weight, allowing for a nuanced assessment of your telemetry.
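To make the mechanics concrete, here is a minimal sketch of how such a weighted, rule-based score could be computed. The weights per impact level and the normalization to a 0-100 scale are illustrative assumptions for this example; the specification defines the actual formula.

```python
# Minimal sketch of a weighted instrumentation score.
# The weights and the 0-100 normalization are illustrative
# assumptions; the specification defines the actual formula.

# Hypothetical weights per impact level.
IMPACT_WEIGHTS = {"Critical": 10, "Important": 5, "Normal": 2, "Low": 1}

def instrumentation_score(rule_results):
    """rule_results: list of (impact, passed) tuples from rule evaluation."""
    total = sum(IMPACT_WEIGHTS[impact] for impact, _ in rule_results)
    earned = sum(IMPACT_WEIGHTS[impact] for impact, passed in rule_results if passed)
    return 100 * earned / total if total else 100

# Example: one Critical rule failed, two lighter rules passed.
results = [("Critical", False), ("Important", True), ("Normal", True)]
print(f"Score: {instrumentation_score(results):.0f}/100")  # Score: 41/100
```

Note how a single failing Critical rule drags the score down far more than a failing Low rule would, which is the point of impact-weighted assessment.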

The primary focus is on OpenTelemetry, leveraging its semantic conventions and best practices as the bedrock for rule definitions. The score aims to be:

  • Objective: Providing a consistent, quantifiable measure, removing subjectivity.
  • Actionable: Highlighting specific areas of non-conformance so engineers know exactly what to fix.
  • Standardized: Offering a common and portable benchmark for services, teams, or even across organizations over time.

The Role of Rules

The true power of the Instrumentation Score lies in its rules, which codify known best practices and anti-patterns that directly impact data usability, cost, and analytical value. For instance, rules can ensure fundamental data integrity by flagging telemetry missing critical attributes like service.name, which is essential for aggregation, filtering, and ownership in most backends. Other rules address common cost and performance anti-patterns, such as identifying metric attributes with excessively high cardinality that can explode database costs and cripple query performance, or detecting overly large traces that increase network overhead and storage with verbose, low-value data.
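As a rough illustration of what such checks might look like in practice, the sketch below inspects simplified resource and data-point records for a missing service.name and for runaway attribute cardinality. The record shape and the cardinality threshold are assumptions made for this example, not part of the specification.

```python
# Illustrative checks over simplified telemetry records; the dict
# shape and the threshold below are assumptions for this sketch.

CARDINALITY_LIMIT = 1000  # hypothetical threshold for "too many" distinct values

def missing_service_name(resources):
    """Flag resources that lack the service.name attribute."""
    return [r for r in resources if "service.name" not in r.get("attributes", {})]

def high_cardinality(datapoints, attribute):
    """True if one metric attribute has too many distinct values."""
    values = {dp["attributes"].get(attribute) for dp in datapoints}
    return len(values) > CARDINALITY_LIMIT

resources = [{"attributes": {"service.name": "checkout"}}, {"attributes": {}}]
print(missing_service_name(resources))  # -> [{'attributes': {}}]
```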

Furthermore, the score highlights trace completeness by pointing out broken traces or missing root spans that hinder end-to-end visibility. It also encourages efficient signal usage, for example, by discouraging the use of expensive logs for simple event counting when metrics would be a more performant and cost-effective choice. These examples merely scratch the surface; the specification is designed to be extensible, allowing the community to define rules for a wide array of scenarios, including adherence to specific semantic conventions or the use of appropriate instrumentation SDK versions.
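A trace-completeness rule could, for instance, group spans by trace ID and flag traces where no span qualifies as a root, or where a span references a parent that never arrived. The sketch below assumes a simplified span tuple; real detection over OTLP streams would also need windowing and late-arrival handling.

```python
from collections import defaultdict

# Simplified spans: (trace_id, span_id, parent_span_id or None).
# Real OTLP detection would need windowing for late-arriving spans.

def broken_traces(spans):
    """Return trace IDs missing a root span or referencing absent parents."""
    by_trace = defaultdict(list)
    for trace_id, span_id, parent_id in spans:
        by_trace[trace_id].append((span_id, parent_id))
    broken = set()
    for trace_id, members in by_trace.items():
        span_ids = {sid for sid, _ in members}
        has_root = any(parent is None for _, parent in members)
        orphaned = any(parent not in span_ids for _, parent in members if parent)
        if not has_root or orphaned:
            broken.add(trace_id)
    return broken

spans = [("t1", "a", None), ("t1", "b", "a"), ("t2", "c", "x")]  # t2 has no root
print(broken_traces(spans))  # -> {'t2'}
```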

The need for such a standardized approach isn't just theoretical; it's a practical challenge faced by engineering teams globally. James Moessis, Senior Software Engineer in the Observability team at Atlassian, shares this perspective: "Instrumentation Score is a much-needed innovation that fills a critical gap in the observability ecosystem. It's the kind of idea that made me think, 'We should have done this ages ago.' At Atlassian, our observability team is constantly tackling telemetry quality issues across the company's many services. With so many services, it can sometimes feel like we are trying to boil the ocean. Having a standardized 'instrumentation score' would certainly help us identify and report to teams where issues are."

James' sentiment echoes what many in the observability field experience. It underscores the value of a common yardstick to help teams prioritize improvements and communicate effectively about telemetry health. This widespread need is precisely why we believe the Instrumentation Score must be an open, collaborative effort.

An Open Invitation: Why Your Contribution Is Crucial

OllyGarden has kickstarted the Instrumentation Score specification and is committed to its development as an open-source, community-driven standard. The initiative operates under an open governance model, drawing support and contributions from across the industry, including companies like Dash0, New Relic, Splunk, Datadog, and Grafana Labs.

But the true strength and comprehensiveness of this score will come from you — the observability engineers in the trenches. The initial set of rules and conventions provides a solid foundation, but we all know that "good" and "bad" telemetry patterns often emerge from hard-won experience with specific technologies, platforms, or failure modes.

This is where you come in. We are actively seeking contributions to the Instrumentation Score specification:

  • Propose New Rules: Encounter a common instrumentation pitfall that isn't covered? Define it. What about rules for serverless environments, service meshes, or emerging technologies like AI/ML observability? Your insights are invaluable.
  • Refine Existing Rule Concepts: Have ideas on how to better detect a particular anti-pattern? Suggest improvements to rule definition, impact weighting, or metadata.
  • Share Edge Cases and Anti-Patterns: Help us build a comprehensive knowledge base by sharing real-world examples of telemetry that led you astray or cost you a fortune.
  • Debate and Discuss: Engage in discussions around rule definitions, ensuring they are clear, actionable, and universally applicable.

By contributing, you help codify the collective wisdom of the observability community into a practical, actionable standard. This isn't just about defining rules; it's about creating a shared understanding and a common language for telemetry excellence. You'll be shaping a tool that benefits the entire ecosystem, helps tame telemetry chaos, and ultimately makes our collective lives as engineers easier.

Getting Started and Making an Impact

The Instrumentation Score is more than just a number; it's a catalyst for conversation and continuous improvement in how we instrument our systems. It's a tool to help us all move from reactive troubleshooting to proactive telemetry optimization.

We invite you to:

1. Explore the Instrumentation Score landing page for an overview.
2. Dive into the specification on GitHub. This is where the collaborative work happens. Familiarize yourself with the current structure and rule ideas.
3. Contribute: Open an issue to discuss a new rule idea or suggest an improvement. Better yet, submit a pull request with your proposed rule definition, including its rationale, suggested severity, and how it could be detected; one possible shape for such a proposal is sketched below. Let's build this together.
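To make that concrete, here is one hypothetical shape a rule proposal could take, expressed as a Python data structure. The field names, identifier, and the rule itself are invented for illustration; the specification on GitHub defines the actual proposal format.

```python
# Hypothetical rule proposal; field names and content are invented
# for illustration. The specification defines the actual format.
proposed_rule = {
    "id": "RES-001",                      # made-up identifier
    "description": "Resources must carry a non-empty service.name",
    "rationale": "Without service.name, telemetry cannot be "
                 "aggregated, filtered, or routed to an owning team.",
    "impact": "Critical",                 # one of: Critical, Important, Normal, Low
    "detection": "Inspect the resource attributes of incoming OTLP "
                 "data and flag records where service.name is absent "
                 "or empty.",
}
```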

OllyGarden is proud to have planted the first seed for the Instrumentation Score. Now, let's cultivate it together. Let's build a standard that empowers every engineer to confidently answer "Yes, our telemetry is good, and here's how we know."

Juraci Paixão Kröhling is a Software Engineer at OllyGarden, an OpenTelemetry Governing Board Member, and a CNCF Ambassador.
