
Balancing OTel's Strengths and Struggles - Part 2

Juraci Paixão Kröhling
OllyGarden

Following up on our previous exploration as part of a KubeCon London 2025 talk, OTel Sucks (But Also Rocks!), we wanted to dive deeper into the candid conversations we had with practitioners from companies like Atlassian, Delivery Hero, Liatrio, and Pismo. While our KubeCon talk shared snippets of these experiences, much more was left on the cutting room floor. This two-part piece aims to bring those richer details to light, offering fellow observability professionals an unvarnished look at the real-world challenges and triumphs of adopting OpenTelemetry.

Start with Balancing OTel's Strengths and Struggles - Part 1

Part 2 of this blog covers the powerful advantages and breakthroughs — the "OTel Rocks" moments.

OTel Rocks - The Power, Flexibility, and Future-Proofing

Despite the frustrations, every engineer we spoke with ultimately affirmed the value and power of OpenTelemetry. The "sucks" moments are often the flip side of its greatest strengths.

1. Vendor Neutrality: Freedom and Flexibility

This is arguably OTel's foundational promise and a major win cited by all interviewees. Before OTel, choosing an observability vendor often meant committing to their proprietary agents and data formats. Switching vendors was a painful, resource-intensive process involving re-instrumenting applications.

OTel breaks this lock-in. By instrumenting applications with OTel SDKs and using the OTel Collector to process and route data, organizations gain the freedom to choose best-of-breed backend platforms for different signals or to switch vendors with minimal disruption to the application teams. Alexandre Magno emphasized the strategic importance of this, allowing Pismo to control their data destiny and optimize costs. Adriel Perkins also valued the ability to send telemetry to multiple destinations simultaneously, enabling gradual migrations or specialized analysis in different tools. This decoupling is a massive strategic advantage in a market with rapidly evolving vendor capabilities and pricing models.
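To make that decoupling concrete, here is a minimal Python sketch (the service name and Collector endpoints are placeholders, not anything the interviewees described) that registers two OTLP exporters on one tracer provider, so the same spans can be fanned out to two destinations at once:

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# One provider, vendor-neutral OTLP on the wire.
provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))

# Fan the same spans out to two destinations, e.g. the current vendor
# and a candidate replacement, without touching application code paths.
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://collector-a:4317", insecure=True))
)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://collector-b:4317", insecure=True))
)

trace.set_tracer_provider(provider)
tracer = trace.get_tracer("checkout.instrumentation")

with tracer.start_as_current_span("charge-card") as span:
    span.set_attribute("payment.provider", "acme")
```

In practice most teams would rather do this fan-out inside the Collector, but either way the application only ever speaks OTLP, which is what turns a vendor switch into a configuration change instead of a re-instrumentation project.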

2. The Collector: A Swiss Army Knife for Telemetry

While its configuration can be complex, the OTel Collector's power and flexibility were universally praised. Elena Kovalenko, despite noting the update challenges, called it the "best option" for Delivery Hero's complex needs. The Collector acts as a central hub for receiving, processing, and exporting telemetry data.

Its processor pipeline allows teams to enrich data (e.g., adding Kubernetes metadata), filter noise (e.g., dropping health checks), ensure compliance (e.g., masking sensitive data), and manage costs (e.g., sampling). James Moessis highlighted this modularity: "When OTel does suck, the good thing is that it's designed in a way that doesn't suck so that you can replace little modular bits here and there." Need custom processing? Write a custom processor. Need to export to a new backend? Add an exporter. This extensibility allows teams to tailor their observability pipeline precisely to their needs without being constrained by a specific vendor's agent capabilities. It's the key enabler for managing telemetry quality and cost at scale.
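As an illustration of such a pipeline, the sketch below wires a few widely used Collector components into a single traces pipeline. The component names are real, but the endpoint, attribute keys, and sampling percentage are placeholder assumptions rather than any interviewee's actual configuration:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  k8sattributes:              # enrich spans with Kubernetes metadata
  filter/drop-healthchecks:   # drop health-check noise
    error_mode: ignore
    traces:
      span:
        - 'attributes["url.path"] == "/healthz"'
  attributes/mask-pii:        # hash sensitive attributes for compliance
    actions:
      - key: user.email
        action: hash
  probabilistic_sampler:      # keep a fraction of traces to control cost
    sampling_percentage: 10
  batch:                      # batch before export

exporters:
  otlp:
    endpoint: backend.example.com:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes, filter/drop-healthchecks, attributes/mask-pii, probabilistic_sampler, batch]
      exporters: [otlp]
```

Anything the stock components cannot do can be handled by building a custom Collector distribution with your own processor or exporter, which is exactly the modularity James describes.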

3. Unification and Standardization

Before OTel, teams often wrestled with disparate agents and libraries for traces, metrics, and logs, leading to inconsistent data and correlation challenges. OTel provides a unified approach: standardized APIs and SDKs, plus a common wire protocol (OTLP), across all signals. This simplifies instrumentation and, crucially, enables better correlation between telemetry types. Seeing a latency spike in a metric? OTel makes it easier to jump to the corresponding traces to understand the cause. This unified view is essential for truly understanding the behavior of complex, distributed systems.
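At the SDK level, that unification shows up in one shared Resource describing the service for every signal. The Python sketch below (service and attribute names are illustrative assumptions) configures traces and metrics against the same Resource and the same OTLP destination:

```python
from opentelemetry import metrics, trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter

# One Resource shared by every signal, so a backend can join traces and
# metrics on the same service.name and other resource attributes.
resource = Resource.create({"service.name": "orders", "deployment.environment": "prod"})

tracer_provider = TracerProvider(resource=resource)
tracer_provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(tracer_provider)

metrics.set_meter_provider(
    MeterProvider(
        resource=resource,
        metric_readers=[PeriodicExportingMetricReader(OTLPMetricExporter())],
    )
)

meter = metrics.get_meter("orders.instrumentation")
request_duration = meter.create_histogram("orders.request.duration", unit="ms")
request_duration.record(42.0, {"http.route": "/orders"})
```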

4. Enabling Cost Optimization and Deeper Insights

Alexandre Magno shared compelling examples of how Pismo leveraged OTel (specifically, sampling via the Collector) to achieve significant cost savings on their observability spend — potentially millions of dollars. By gaining fine-grained control over what data is sent where, teams can optimize for both cost and performance.
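The interviews only tell us that Pismo sampled in the Collector, not how. As a rough analogue, here is what head-based probabilistic sampling looks like at the SDK level in Python, with the 10% ratio purely a placeholder:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import ParentBased, TraceIdRatioBased

# Keep roughly 10% of new traces; child spans follow their parent's
# sampling decision, so the traces that are kept stay complete end to end.
provider = TracerProvider(sampler=ParentBased(TraceIdRatioBased(0.10)))
trace.set_tracer_provider(provider)
```

Collector-side approaches such as tail sampling can instead decide after a whole trace has been seen, which is often where the bigger savings and the smarter keep/drop decisions come from.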

Furthermore, the rich, standardized data OTel provides enables deeper insights that might be harder to achieve with proprietary formats. Consistent context propagation across services allows for more accurate distributed tracing and analysis of end-to-end user journeys.
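That cross-service correlation rests on context propagation. A minimal sketch with the Python API is below; the HTTP client and server around it are omitted, and the header dict stands in for a real request:

```python
from opentelemetry import trace
from opentelemetry.propagate import extract, inject

tracer = trace.get_tracer("propagation.demo")

# Client side: inject the current trace context into outgoing headers.
headers = {}
with tracer.start_as_current_span("call-downstream"):
    inject(headers)  # adds the W3C 'traceparent' (and related) headers
    # http_client.get("https://downstream.example/api", headers=headers)

# Server side: extract the incoming context and continue the same trace.
ctx = extract(headers)
with tracer.start_as_current_span("handle-request", context=ctx) as span:
    span.set_attribute("enduser.id", "example-user")
```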

5. A Vibrant, Collaborative Community

OpenTelemetry isn't just code; it's a massive community effort. Adriel Perkins spoke positively about the welcoming nature of the community and the opportunities to learn and contribute. James Moessis echoed this, noting the responsiveness of maintainers and the rigorous code review process, which ultimately improves the quality of the project.

While navigating the community and contributing might have its own learning curve, the fact that OTel is developed in the open means users aren't reliant on a single vendor's roadmap. If a feature is missing or a bug is impacting you, there's a pathway (though sometimes challenging) to influence the direction or contribute a fix. This collaborative aspect fosters innovation and ensures OTel evolves based on the real-world needs of its users. The existence of initiatives like the contributor experience survey shows a commitment to making the community accessible and effective.

The Verdict: Worth the Climb?

The experiences of Adriel, Alexandre, Elena, and James paint a clear picture: OpenTelemetry is immensely powerful, but it's not a plug-and-play panacea. It demands investment — in learning, in configuration, in keeping pace with its evolution, and in carefully managing the quality and volume of telemetry data generated, especially when relying heavily on auto-instrumentation.

The "sucks" moments — the breaking changes, the configuration complexity, the occasional documentation gaps, the challenge of taming auto-instrumentation noise — are real and require dedicated engineering effort to overcome. However, the "rocks" moments — unparalleled flexibility, vendor freedom, a unified data model, powerful processing capabilities via the Collector, and a vibrant community — represent a fundamental shift in how we approach observability.

For observability engineers navigating today's complex cloud-native environments, OTel offers a path towards a more standardized, flexible, and future-proof observability strategy. It requires embracing the complexities and contributing back to the ecosystem, but the rewards — deeper insights, greater control, and freedom from lock-in — appear to be well worth the climb. The journey might have its frustrations, but OpenTelemetry is undeniably shaping the future of the field.

A special thank you to Adriel Perkins, Alexandre Magno Prado Machado, Elena Kovalenko, and James Moessis for generously sharing their time and candid experiences for this ongoing conversation about OpenTelemetry in the real world.

Juraci Paixão Kröhling is a Software Engineer at OllyGarden, an OpenTelemetry Governing Board Member, and a CNCF Ambassador.
