
Balancing OTel's Strengths and Struggles - Part 1

Juraci Paixão Kröhling
OllyGarden

OpenTelemetry (OTel) arrived with a grand promise: a unified, vendor-neutral standard for observability data (traces, metrics, logs) that would free engineers from vendor lock-in and provide deeper insights into complex systems. It's the CNCF's second-largest project after Kubernetes, signifying massive industry investment and hope. But beyond the hype and the GitHub stars, what's the ground truth for the observability engineers and SREs implementing and maintaining OTel day-to-day?

Following up on our previous exploration as part of a KubeCon London 2025 talk, OTel Sucks (But Also Rocks!), we wanted to dive deeper into the candid conversations we had with practitioners from companies like Atlassian, Delivery Hero, Liatrio, and Pismo. While our KubeCon talk shared snippets of these experiences, much more was left on the cutting room floor. This two-part piece aims to bring those richer details to light, offering fellow observability professionals an unvarnished look at the real-world challenges and triumphs of adopting OpenTelemetry.

We'll structure this exploration around the two sides of the OTel coin, echoing a format inspired by the classic "Linux Sucks" talks: first, the frustrations and hurdles — the "OTel Sucks" moments — and then, the powerful advantages and breakthroughs — the "OTel Rocks" moments.

OTel Sucks - The Real-World Hurdles and Headaches

No powerful technology comes without its challenges, and OpenTelemetry is no exception. The engineers we spoke with were frank about the friction points they've encountered.

1. The Ever-Shifting Sands: Stability and Semantic Conventions

A recurring theme was the challenge of keeping up with OTel's rapid development pace, particularly concerning the Collector and semantic conventions. Elena Kovalenko from Delivery Hero pinpointed the "absence of a stable collector version" and the "quick pace of change" as significant operational burdens. While progress is good, frequent updates demand constant vigilance, testing, and adaptation to avoid breaking production pipelines. Each Collector update, even one bringing valuable features or fixes, carries the risk of subtle incompatibilities or required configuration tweaks, adding overhead to the platform team's workload.

This instability extends crucially to semantic conventions — the standardized names and attributes for telemetry data. James Moessis from Atlassian and Alexandre Magno Prado Machado from Pismo both shared frustrations here. When conventions change, it's not a simple find-and-replace. It breaks dashboards, alerts, and any tooling reliant on the old conventions. As Alexandre highlighted, rolling out these changes across a large organization is a significant undertaking, requiring coordination across multiple teams and potentially impacting developer velocity. Imagine telling dozens or hundreds of developers they need to update their instrumentation — it's often met with resistance, especially when the perceived value isn't immediately clear to them. This friction point touches upon the challenge of maintaining good telemetry; inconsistent or outdated attributes diminish the data's value.
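
One common mitigation, rather than asking every team to re-instrument at once, is to normalize renamed attributes at the Collector. Below is a minimal sketch using the transform processor from the Collector contrib distribution to map the pre-stabilization http.method attribute to its stable replacement http.request.method. Treat it as a fragment that slots into a traces pipeline (a fuller pipeline sketch appears in section 3), not a complete migration plan.

```yaml
# Minimal sketch: smoothing over a semantic-convention rename at the
# Collector, using the transform processor from collector-contrib.
processors:
  transform:
    trace_statements:
      - context: span
        statements:
          # Copy the old attribute to its stable name, then drop the old
          # key, so dashboards and alerts only ever see one convention.
          - set(attributes["http.request.method"], attributes["http.method"]) where attributes["http.method"] != nil
          - delete_key(attributes, "http.method")
```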

2. Auto-Instrumentation: The Double-Edged Sword

Auto-instrumentation is often pitched as OTel's magic bullet — drop in an agent, and poof, instant observability. The reality, as hinted at in our conversations, is more complex. While it lowers the barrier to entry, it often generates a high volume of generic, sometimes noisy, telemetry. Adriel Perkins from Liatrio touched upon the initial ease of getting started but also the subsequent need for refinement.

The challenge lies in the signal-to-noise ratio. Auto-instrumentation might capture every single HTTP request or database call, but is all that data equally valuable? Often, it lacks the specific business context that makes telemetry truly actionable. This can lead to "bad telemetry" — data that is voluminous and costly to store and process but provides limited insight during an actual incident. Furthermore, customizing auto-instrumentation to add that crucial context or filter out noise can sometimes be as complex as manual instrumentation, negating some of the initial ease-of-use benefits. Teams often find themselves needing to layer manual instrumentation on top or invest heavily in configuring the auto-instrumentation agents, blurring the lines between the two approaches.
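
To make that layering concrete, here is a minimal sketch in Python using the opentelemetry-api package: it enriches the span the auto-instrumentation agent already opened with business context, instead of emitting a parallel, disconnected span. The service name, attribute keys, and Order type are hypothetical illustrations, not standard conventions.

```python
# Minimal sketch: layering manual business context on top of
# auto-instrumentation. Assumes the opentelemetry-api package; the names
# "checkout-service", app.order.id, and Order are hypothetical.
from dataclasses import dataclass

from opentelemetry import trace

tracer = trace.get_tracer("checkout-service")


@dataclass
class Order:
    id: str
    customer_tier: str


def process_order(order: Order) -> None:
    # get_current_span() returns whatever span the agent already started
    # for this request (e.g., the HTTP server span), so we enrich it
    # rather than duplicate it.
    span = trace.get_current_span()
    span.set_attribute("app.order.id", order.id)
    span.set_attribute("app.customer.tier", order.customer_tier)

    # A child span for a business step auto-instrumentation cannot see.
    with tracer.start_as_current_span("order.validate"):
        pass  # validation logic would live here
```

Because the OTel API is a no-op until an SDK is configured, this kind of enrichment is safe to ship even before the telemetry pipeline is fully wired up.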

3. The Complexity of Configuration and Deployment

While the OTel Collector is lauded for its flexibility (more on that later), configuring it, especially for complex scenarios involving multiple pipelines, processors, and exporters, can be daunting. Elena mentioned the learning curve associated with mastering the Collector's configuration YAML and understanding the nuances of its various components. Debugging issues within a complex Collector pipeline — Why is data being dropped? Why is latency high? — requires deep expertise.
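
For a sense of what "mastering the YAML" entails, here is a deliberately small sketch of a Collector configuration with separate traces and metrics pipelines; the backend endpoint is a placeholder, and production configurations typically grow many more receivers, processors, and exporters, which is where the debugging difficulty compounds.

```yaml
# Minimal sketch of a multi-pipeline Collector config; the exporter
# endpoint is a placeholder, not a real backend.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  memory_limiter:            # protects the Collector itself under load
    check_interval: 1s
    limit_percentage: 80
    spike_limit_percentage: 20
  batch:                     # batches data to reduce export overhead
    timeout: 5s

exporters:
  otlphttp:
    endpoint: https://otel-backend.example.com  # hypothetical endpoint

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlphttp]
```

Even in this toy example, ordering matters: memory_limiter should run before batch, a nuance that is easy to miss and hard to diagnose when it bites.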

James Moessis also alluded to the intricacies of implementing advanced features like tail sampling. While head sampling is straightforward (make a decision upfront), tail sampling (decide after seeing the whole trace) is far more complex, requiring stateful processing and careful resource management. Building or deploying robust sampling strategies often involves significant engineering effort beyond just configuring the standard OTel components, as evidenced by Atlassian's decision to build and open-source their own tail sampler.
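
To illustrate the shape of the problem, here is a hedged sketch of the tail_sampling processor from the Collector contrib distribution: it buffers traces for a decision window, keeps errors and slow traces, and samples the rest probabilistically. The thresholds are illustrative, and the buffering implied by decision_wait and num_traces is exactly the stateful resource cost described above.

```yaml
# Minimal sketch of the contrib tail_sampling processor; thresholds are
# illustrative, and every buffered trace costs Collector memory.
processors:
  tail_sampling:
    decision_wait: 10s      # how long to buffer spans before deciding
    num_traces: 50000       # cap on traces held in memory at once
    policies:
      - name: keep-errors
        type: status_code
        status_code:
          status_codes: [ERROR]
      - name: keep-slow
        type: latency
        latency:
          threshold_ms: 500
      - name: sample-the-rest
        type: probabilistic
        probabilistic:
          sampling_percentage: 10
```

Because a sampling decision needs every span of a trace to land on the same Collector instance, scaling this horizontally also demands trace-aware load balancing (for example, via the loadbalancing exporter), which adds yet another layer of operational complexity.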

4. Documentation and Guidance Gaps

While the OTel documentation is extensive, practitioners sometimes find gaps when dealing with specific edge cases or advanced configurations. Finding clear, concise guidance on best practices for structuring Collector configurations at scale, managing semantic convention updates gracefully, or optimizing performance for specific workloads can sometimes involve piecing together information from GitHub issues, Slack channels, and blog posts. The rapid evolution means documentation can occasionally lag behind the latest features or changes.

These challenges aren't reasons to dismiss OTel, but acknowledging them is crucial for any team embarking on or scaling their OTel journey. It requires commitment, expertise, and a willingness to navigate a rapidly evolving landscape.

Go to: Balancing OTel's Strengths and Struggles - Part 2

Juraci Paixão Kröhling is a Software Engineer at OllyGarden, an OpenTelemetry Governing Board Member, and a CNCF Ambassador.
