Balancing OTel's Strengths and Struggles - Part 2

Juraci Paixão Kröhling
OllyGarden

Following up on our previous exploration as part of a KubeCon London 2025 talk, OTel Sucks (But Also Rocks!), we wanted to dive deeper into the candid conversations we had with practitioners from companies like Atlassian, Delivery Hero, Liatrio, and Pismo. While our KubeCon talk shared snippets of these experiences, much more was left on the cutting room floor. This two-part piece aims to bring those richer details to light, offering fellow observability professionals an unvarnished look at the real-world challenges and triumphs of adopting OpenTelemetry.

Start with Balancing OTel's Strengths and Struggles - Part 1

Part 2 of this blog covers the powerful advantages and breakthroughs — the "OTel Rocks" moments.

OTel Rocks - The Power, Flexibility, and Future-Proofing

Despite the frustrations, every engineer we spoke with ultimately affirmed the value and power of OpenTelemetry. The "sucks" moments are often the flip side of its greatest strengths.

1. Vendor Neutrality: Freedom and Flexibility

This is arguably OTel's foundational promise and a major win cited by all interviewees. Before OTel, choosing an observability vendor often meant committing to their proprietary agents and data formats. Switching vendors was a painful, resource-intensive process involving re-instrumenting applications.

OTel breaks this lock-in. By instrumenting applications with OTel SDKs and using the OTel Collector to process and route data, organizations gain the freedom to choose best-of-breed backend platforms for different signals or to switch vendors with minimal disruption to the application teams. Alexandre Magno emphasized the strategic importance of this freedom, which allows Pismo to control its data destiny and optimize costs. Adriel Perkins also valued the ability to send telemetry to multiple destinations simultaneously, enabling gradual migrations or specialized analysis in different tools. This decoupling is a massive strategic advantage in a market with rapidly evolving vendor capabilities and pricing models.
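To make the idea concrete, here is a minimal sketch of dual-destination export using the OpenTelemetry Python SDK. It is not taken from any interviewee's setup: the endpoints and service name are placeholders, and many teams do this fan-out in the Collector instead of the SDK. The point is that the application is instrumented once, and backends can be added or swapped without touching business code.

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# One provider, two destinations: adding or removing a backend is a
# configuration change, not a re-instrumentation effort.
provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://current-vendor:4317"))    # placeholder endpoint
)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://candidate-vendor:4317"))  # placeholder endpoint
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout")
with tracer.start_as_current_span("charge-card"):
    ...  # business logic is instrumented once, regardless of backend
```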

2. The Collector: A Swiss Army Knife for Telemetry

While its configuration can be complex, the OTel Collector's power and flexibility were universally praised. Elena Kovalenko, despite noting the update challenges, called it the "best option" for Delivery Hero's complex needs. The Collector acts as a central hub for receiving, processing, and exporting telemetry data.

Its processor pipeline allows teams to enrich data (e.g., adding Kubernetes metadata), filter noise (e.g., dropping health checks), ensure compliance (e.g., masking sensitive data), and manage costs (e.g., sampling). James Moessis highlighted this modularity: "When OTel does suck, the good thing is that it's designed in a way that doesn't suck so that you can replace little modular bits here and there." Need custom processing? Write a custom processor. Need to export to a new backend? Add an exporter. This extensibility allows teams to tailor their observability pipeline precisely to their needs without being constrained by a specific vendor's agent capabilities. It's the key enabler for managing telemetry quality and cost at scale.
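The same modularity shows up in the SDKs. As an illustration only, sketched at the SDK level rather than in the Collector and with made-up attribute names, a small custom span processor in Python can enrich every span with deployment metadata, similar in spirit to an attribute-enriching processor in the Collector pipeline:

```python
import os
from opentelemetry.sdk.trace import SpanProcessor, TracerProvider

class DeploymentMetadataProcessor(SpanProcessor):
    """Adds environment metadata to every span as it starts.

    An SDK-level analogue of a Collector enrichment processor; the
    environment variables and attribute values are illustrative.
    """
    def on_start(self, span, parent_context=None):
        span.set_attribute("deployment.environment", os.getenv("DEPLOY_ENV", "unknown"))
        span.set_attribute("service.build", os.getenv("BUILD_ID", "dev"))

    def on_end(self, span):
        pass  # nothing to do once the span has finished

provider = TracerProvider()
provider.add_span_processor(DeploymentMetadataProcessor())
```

In practice, this kind of enrichment is often better placed in the Collector, where it can be applied uniformly across services without redeploying them; the sketch simply shows that the extension points exist on both sides.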

3. Unification and Standardization

Before OTel, teams often wrestled with disparate agents and libraries for traces, metrics, and logs, leading to inconsistent data and correlation challenges. OTel provides a unified approach — standardized SDKs, APIs, and data protocols (OTLP) across signals. This simplifies instrumentation efforts and, crucially, enables better correlation between different telemetry types. Seeing a spike in a latency metric? OTel makes it easier to jump to the corresponding traces to understand the cause. This unified view is essential for truly understanding the behavior of complex, distributed systems.
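As a hedged sketch of what that unification looks like in code (the endpoint and service name are placeholders, not from the interviews), the Python SDK configures traces and metrics against the same resource and the same OTLP endpoint, so both signals arrive with a consistent service identity and can be correlated in the backend:

```python
from opentelemetry import trace, metrics
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter

# One resource shared by every signal, so traces and metrics carry the
# same service identity and can be joined in the backend.
resource = Resource.create({"service.name": "orders-api"})  # placeholder name
endpoint = "http://otel-collector:4317"                      # placeholder endpoint

tracer_provider = TracerProvider(resource=resource)
tracer_provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter(endpoint=endpoint)))
trace.set_tracer_provider(tracer_provider)

meter_provider = MeterProvider(
    resource=resource,
    metric_readers=[PeriodicExportingMetricReader(OTLPMetricExporter(endpoint=endpoint))],
)
metrics.set_meter_provider(meter_provider)
```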

4. Enabling Cost Optimization and Deeper Insights

Alexandre Magno shared compelling examples of how Pismo leveraged OTel (specifically, sampling via the Collector) to achieve significant cost savings on their observability spend — potentially millions of dollars. By gaining fine-grained control over what data is sent where, teams can optimize for both cost and performance.
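The savings Alexandre described came from sampling configured in the Collector rather than in application code. As a rough sketch of the simplest form of the idea (the 10% ratio below is illustrative, not Pismo's figure), head-based probability sampling can also be set directly in the Python SDK:

```python
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import ParentBased, TraceIdRatioBased

# Keep roughly 10% of new traces; ParentBased ensures downstream services
# follow the sampling decision made by the caller, so traces stay complete.
provider = TracerProvider(sampler=ParentBased(TraceIdRatioBased(0.1)))
```

Richer, latency- or error-aware decisions are typically made later in the pipeline, in the Collector, where whole traces can be evaluated before the data is exported.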

Furthermore, the rich, standardized data OTel provides enables deeper insights that might be harder to achieve with proprietary formats. Consistent attribute propagation across services allows for more accurate distributed tracing and analysis of end-to-end user journeys.
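To illustrate what keeps those end-to-end journeys stitched together (a simplified sketch, not any interviewee's code; the `make_request` helper is hypothetical), W3C trace context propagation in the Python API looks roughly like this: the caller injects the current context into outgoing headers, and the callee extracts it so its spans join the same trace.

```python
from opentelemetry import trace
from opentelemetry.propagate import inject, extract

tracer = trace.get_tracer("frontend")

# Caller side: inject the current trace context into outgoing HTTP headers.
def call_downstream(make_request):
    headers = {}
    with tracer.start_as_current_span("call-payments"):
        inject(headers)               # adds the W3C `traceparent` header
        return make_request(headers)  # hypothetical HTTP call

# Callee side: extract the caller's context so the new span joins the same trace.
def handle_request(incoming_headers):
    ctx = extract(incoming_headers)
    with tracer.start_as_current_span("process-payment", context=ctx):
        ...  # handler logic
```

In practice, HTTP and RPC instrumentation libraries perform this injection and extraction automatically; the value is that every service speaks the same propagation format.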

5. A Vibrant, Collaborative Community

OpenTelemetry isn't just code; it's a massive community effort. Adriel Perkins spoke positively about the welcoming nature of the community and the opportunities to learn and contribute. James Moessis echoed this, noting the responsiveness of maintainers and the rigorous code review process, which ultimately improves the quality of the project.

While navigating the community and contributing might have its own learning curve, the fact that OTel is developed in the open means users aren't reliant on a single vendor's roadmap. If a feature is missing or a bug is impacting you, there's a pathway (though sometimes challenging) to influence the direction or contribute a fix. This collaborative aspect fosters innovation and ensures OTel evolves based on the real-world needs of its users. The existence of initiatives like the contributor experience survey shows a commitment to making the community accessible and effective.

The Verdict: Worth the Climb?

The experiences of Adriel, Alexandre, Elena, and James paint a clear picture: OpenTelemetry is immensely powerful, but it's not a plug-and-play panacea. It demands investment — in learning, in configuration, in keeping pace with its evolution, and in carefully managing the quality and volume of telemetry data generated, especially when relying heavily on auto-instrumentation.

The "sucks" moments — the breaking changes, the configuration complexity, the occasional documentation gaps, the challenge of taming auto-instrumentation noise — are real and require dedicated engineering effort to overcome. However, the "rocks" moments — unparalleled flexibility, vendor freedom, a unified data model, powerful processing capabilities via the Collector, and a vibrant community — represent a fundamental shift in how we approach observability.

For observability engineers navigating today's complex cloud-native environments, OTel offers a path towards a more standardized, flexible, and future-proof observability strategy. It requires embracing the complexities and contributing back to the ecosystem, but the rewards — deeper insights, greater control, and freedom from lock-in — appear to be well worth the climb. The journey might have its frustrations, but OpenTelemetry is undeniably shaping the future of the field.

A special thank you to Adriel Perkins, Alexandre Magno Prado Machado, Elena Kovalenko, and James Moessis for generously sharing their time and candid experiences for this ongoing conversation about OpenTelemetry in the real world.

Juraci Paixão Kröhling is a Software Engineer at OllyGarden, an OpenTelemetry Governing Board Member, and a CNCF Ambassador.
