
Balancing OTel's Strengths and Struggles - Part 2

Juraci Paixão Kröhling
OllyGarden

Following up on our previous exploration as part of a KubeCon London 2025 talk, OTel Sucks (But Also Rocks!), we wanted to dive deeper into the candid conversations we had with practitioners from companies like Atlassian, Delivery Hero, Liatrio, and Pismo. While our KubeCon talk shared snippets of these experiences, much more was left on the cutting room floor. This two-part piece aims to bring those richer details to light, offering fellow observability professionals an unvarnished look at the real-world challenges and triumphs of adopting OpenTelemetry.

Start with Balancing OTel's Strengths and Struggles - Part 1

Part 2 of this blog covers the powerful advantages and breakthroughs — the "OTel Rocks" moments.

OTel Rocks - The Power, Flexibility, and Future-Proofing

Despite the frustrations, every engineer we spoke with ultimately affirmed the value and power of OpenTelemetry. The "sucks" moments are often the flip side of its greatest strengths.

1. Vendor Neutrality: Freedom and Flexibility

This is arguably OTel's foundational promise and a major win cited by all interviewees. Before OTel, choosing an observability vendor often meant committing to their proprietary agents and data formats. Switching vendors was a painful, resource-intensive process involving re-instrumenting applications.

OTel breaks this lock-in. By instrumenting applications with OTel SDKs and using the OTel Collector to process and route data, organizations gain the freedom to choose best-of-breed backend platforms for different signals or to switch vendors with minimal disruption to application teams. Alexandre Magno emphasized the strategic importance of this freedom, which lets Pismo control its data destiny and optimize costs. Adriel Perkins also valued the ability to send telemetry to multiple destinations simultaneously, enabling gradual migrations or specialized analysis in different tools. This decoupling is a massive strategic advantage in a market with rapidly evolving vendor capabilities and pricing models.
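
To make this concrete, here is a minimal sketch of a Collector pipeline fanning the same traces out to two backends at once, the kind of setup that supports the gradual migrations Adriel describes. The exporter names and endpoints are placeholders, not real vendor addresses.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch:

exporters:
  # Current backend (placeholder endpoint)
  otlp/vendor-a:
    endpoint: vendor-a.example.com:4317
  # Candidate backend evaluated in parallel (placeholder endpoint)
  otlp/vendor-b:
    endpoint: vendor-b.example.com:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/vendor-a, otlp/vendor-b]
```

Because applications only speak OTLP to the Collector, retiring one of these backends later is a Collector configuration change rather than a re-instrumentation project.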

2. The Collector: A Swiss Army Knife for Telemetry

While its configuration can be complex, the OTel Collector's power and flexibility were universally praised. Elena Kovalenko, despite noting the update challenges, called it the "best option" for Delivery Hero's complex needs. The Collector acts as a central hub for receiving, processing, and exporting telemetry data.

Its processor pipeline allows teams to enrich data (e.g., adding Kubernetes metadata), filter noise (e.g., dropping health checks), ensure compliance (e.g., masking sensitive data), and manage costs (e.g., sampling). James Moessis highlighted this modularity: "When OTel does suck, the good thing is that it's designed in a way that doesn't suck so that you can replace little modular bits here and there." Need custom processing? Write a custom processor. Need to export to a new backend? Add an exporter. This extensibility allows teams to tailor their observability pipeline precisely to their needs without being constrained by a specific vendor's agent capabilities. It's the key enabler for managing telemetry quality and cost at scale.
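
As a rough illustration of such a pipeline, the sketch below enriches spans with Kubernetes metadata, drops health-check spans, and deletes a sensitive attribute before export. It assumes the Collector contrib distribution (for the k8sattributes and OTTL-based filter processors); the route and attribute key are invented for the example.

```yaml
receivers:
  otlp:
    protocols:
      grpc:

processors:
  # Enrich spans with pod and namespace metadata from the Kubernetes API
  k8sattributes:
  # Drop health-check spans (route value is an example)
  filter/healthchecks:
    error_mode: ignore
    traces:
      span:
        - 'attributes["http.route"] == "/healthz"'
  # Mask sensitive data (attribute key is an example)
  attributes/scrub:
    actions:
      - key: user.email
        action: delete
  batch:

exporters:
  otlp:
    endpoint: backend.example.com:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes, filter/healthchecks, attributes/scrub, batch]
      exporters: [otlp]
```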

3. Unification and Standardization

Before OTel, teams often wrestled with disparate agents and libraries for traces, metrics, and logs, leading to inconsistent data and correlation challenges. OTel provides a unified approach — a standardized API, SDKs, and a common wire protocol (OTLP) across signals. This simplifies instrumentation and, crucially, enables better correlation between telemetry types. Seeing a latency spike in a metric? OTel makes it easier to jump to the corresponding traces to understand the cause. This unified view is essential for truly understanding the behavior of complex, distributed systems.
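
One way to picture this unification: a single Collector instance can accept all three signals over the same OTLP receiver and route each through its own pipeline, as in the minimal sketch below (the backend endpoint is again a placeholder).

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  otlp:
    endpoint: backend.example.com:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```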

4. Enabling Cost Optimization and Deeper Insights

Alexandre Magno shared compelling examples of how Pismo leveraged OTel (specifically, sampling via the Collector) to achieve significant cost savings on their observability spend — potentially millions of dollars. By gaining fine-grained control over what data is sent where, teams can optimize for both cost and performance.
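
As an illustration of Collector-side sampling of this kind (the policies below are invented for the example, not Pismo's actual configuration, and they assume the contrib distribution's tail_sampling processor), a tail-based setup can keep every error trace while sampling only a fraction of the rest:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

processors:
  tail_sampling:
    decision_wait: 10s
    policies:
      # Always keep traces that contain errors
      - name: keep-errors
        type: status_code
        status_code:
          status_codes: [ERROR]
      # Keep a small share of everything else (percentage is an example)
      - name: baseline
        type: probabilistic
        probabilistic:
          sampling_percentage: 5

exporters:
  otlp:
    endpoint: backend.example.com:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling]
      exporters: [otlp]
```

Shifting that decision to the Collector keeps cost control in one place instead of spreading it across every application.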

Furthermore, the rich, standardized data OTel provides enables deeper insights that might be harder to achieve with proprietary formats. Consistent attribute propagation across services allows for more accurate distributed tracing and analysis of end-to-end user journeys.

5. A Vibrant, Collaborative Community

OpenTelemetry isn't just code; it's a massive community effort. Adriel Perkins spoke positively about the welcoming nature of the community and the opportunities to learn and contribute. James Moessis echoed this, noting the responsiveness of maintainers and the rigorous code review process, which ultimately improves the quality of the project.

While navigating the community and contributing might have its own learning curve, the fact that OTel is developed in the open means users aren't reliant on a single vendor's roadmap. If a feature is missing or a bug is impacting you, there's a pathway (though sometimes challenging) to influence the direction or contribute a fix. This collaborative aspect fosters innovation and ensures OTel evolves based on the real-world needs of its users. The existence of initiatives like the contributor experience survey shows a commitment to making the community accessible and effective.

The Verdict: Worth the Climb?

The experiences of Adriel, Alexandre, Elena, and James paint a clear picture: OpenTelemetry is immensely powerful, but it's not a plug-and-play panacea. It demands investment — in learning, in configuration, in keeping pace with its evolution, and in carefully managing the quality and volume of telemetry data generated, especially when relying heavily on auto-instrumentation.

The "sucks" moments — the breaking changes, the configuration complexity, the occasional documentation gaps, the challenge of taming auto-instrumentation noise — are real and require dedicated engineering effort to overcome. However, the "rocks" moments — unparalleled flexibility, vendor freedom, a unified data model, powerful processing capabilities via the Collector, and a vibrant community — represent a fundamental shift in how we approach observability.

For observability engineers navigating today's complex cloud-native environments, OTel offers a path towards a more standardized, flexible, and future-proof observability strategy. It requires embracing the complexities and contributing back to the ecosystem, but the rewards — deeper insights, greater control, and freedom from lock-in — appear to be well worth the climb. The journey might have its frustrations, but OpenTelemetry is undeniably shaping the future of the field.

A special thank you to Adriel Perkins, Alexandre Magno Prado Machado, Elena Kovalenko, and James Moessis for generously sharing their time and candid experiences for this ongoing conversation about OpenTelemetry in the real world.

Juraci Paixão Kröhling is a Software Engineer at OllyGarden, an OpenTelemetry Governing Board Member, and a CNCF Ambassador.
