
Balancing OTel's Strengths and Struggles - Part 2

Juraci Paixão Kröhling
OllyGarden

Following up on our previous exploration as part of a KubeCon London 2025 talk, OTel Sucks (But Also Rocks!), we wanted to dive deeper into the candid conversations we had with practitioners from companies like Atlassian, Delivery Hero, Liatrio, and Pismo. While our KubeCon talk shared snippets of these experiences, much more was left on the cutting room floor. This two-part piece aims to bring those richer details to light, offering fellow observability professionals an unvarnished look at the real-world challenges and triumphs of adopting OpenTelemetry.

Start with Balancing OTel's Strengths and Struggles - Part 1

Part 2 of this blog covers the powerful advantages and breakthroughs — the "OTel Rocks" moments.

OTel Rocks - The Power, Flexibility, and Future-Proofing

Despite the frustrations, every engineer we spoke with ultimately affirmed the value and power of OpenTelemetry. The "sucks" moments are often the flip side of its greatest strengths.

1. Vendor Neutrality: Freedom and Flexibility

This is arguably OTel's foundational promise and a major win cited by all interviewees. Before OTel, choosing an observability vendor often meant committing to their proprietary agents and data formats. Switching vendors was a painful, resource-intensive process involving re-instrumenting applications.

OTel breaks this lock-in. By instrumenting applications with OTel SDKs and using the OTel Collector to process and route data, organizations gain the freedom to choose best-of-breed backend platforms for different signals or to switch vendors with minimal disruption to the application teams. Alexandre Magno emphasized the strategic importance of this, allowing Pismo to control their data destiny and optimize costs. Adriel Perkins also valued the ability to send telemetry to multiple destinations simultaneously, enabling gradual migrations or specialized analysis in different tools. This decoupling is a massive strategic advantage in a market with rapidly evolving vendor capabilities and pricing models.
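To make the routing side of this concrete, here is a minimal Collector configuration sketch that fans the same trace data out to two OTLP backends at once, the pattern behind gradual migrations and side-by-side evaluations. The endpoints are placeholders rather than a recommendation of any particular vendor, and a real deployment would add processors such as batching.

```yaml
# Minimal sketch: one trace pipeline, two OTLP destinations.
# Endpoints are placeholders; point them at your vendors' OTLP ingest.
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  otlp/vendor-a:
    endpoint: vendor-a.example.com:4317
  otlp/vendor-b:
    endpoint: vendor-b.example.com:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/vendor-a, otlp/vendor-b]
```

Because applications only ever speak OTLP to the Collector, a vendor cutover becomes an edit to this pipeline rather than a re-instrumentation project for every team.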

2. The Collector: A Swiss Army Knife for Telemetry

While its configuration can be complex, the OTel Collector's power and flexibility were universally praised. Elena Kovalenko, despite noting the update challenges, called it the "best option" for Delivery Hero's complex needs. The Collector acts as a central hub for receiving, processing, and exporting telemetry data.

Its processor pipeline allows teams to enrich data (e.g., adding Kubernetes metadata), filter noise (e.g., dropping health checks), ensure compliance (e.g., masking sensitive data), and manage costs (e.g., sampling). James Moessis highlighted this modularity: "When OTel does suck, the good thing is that it's designed in a way that doesn't suck so that you can replace little modular bits here and there." Need custom processing? Write a custom processor. Need to export to a new backend? Add an exporter. This extensibility allows teams to tailor their observability pipeline precisely to their needs without being constrained by a specific vendor's agent capabilities. It's the key enabler for managing telemetry quality and cost at scale.
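As a rough illustration of that pipeline, the sketch below chains four commonly used processors from the Collector contrib distribution. The health-check route and the scrubbed attribute are made-up examples, and configuration keys can vary between Collector versions, so treat this as a starting point rather than a drop-in config.

```yaml
# Illustrative processor chain: enrich, filter, scrub, sample.
receivers:
  otlp:
    protocols:
      grpc:

processors:
  k8sattributes: {}               # enrich telemetry with pod/namespace metadata
  filter/healthchecks:            # drop noise such as health-check spans
    error_mode: ignore
    traces:
      span:
        - 'attributes["http.route"] == "/healthz"'   # example route only
  attributes/scrub:               # compliance: remove a sensitive attribute
    actions:
      - key: user.email           # hypothetical attribute name
        action: delete
  probabilistic_sampler:          # cost control: keep roughly 10% of traces
    sampling_percentage: 10

exporters:
  otlp:
    endpoint: backend.example.com:4317   # placeholder backend

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes, filter/healthchecks, attributes/scrub, probabilistic_sampler]
      exporters: [otlp]
```

Each step maps to one of the capabilities above, and swapping a processor in or out is exactly the kind of "modular bit" James described.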

3. Unification and Standardization

Before OTel, teams often wrestled with disparate agents and libraries for traces, metrics, and logs, leading to inconsistent data and correlation challenges. OTel provides a unified approach — standardized SDKs, APIs, and data protocols (OTLP) across signals. This simplifies instrumentation efforts and, crucially, enables better correlation between different telemetry types. Seeing a spike in a latency metric? OTel makes it easier to jump to the corresponding traces to understand the cause. This unified view is essential for truly understanding the behavior of complex, distributed systems.
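One way this standardization shows up in practice is that a single OTLP receiver can feed traces, metrics, and logs through the same Collector instead of a separate agent per signal. A minimal sketch, with a placeholder backend:

```yaml
# One OTLP receiver, three signal pipelines, one delivery path.
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  otlp:
    endpoint: backend.example.com:4317   # placeholder endpoint

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
    metrics:
      receivers: [otlp]
      exporters: [otlp]
    logs:
      receivers: [otlp]
      exporters: [otlp]
```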

4. Enabling Cost Optimization and Deeper Insights

Alexandre Magno shared compelling examples of how Pismo leveraged OTel (specifically, sampling via the Collector) to achieve significant cost savings on their observability spend — potentially millions of dollars. By gaining fine-grained control over what data is sent where, teams can optimize for both cost and performance.
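Pismo's exact configuration wasn't part of our conversation, but tail-based sampling in the Collector is one common way to realize this kind of saving: keep every error trace, keep a small fraction of the healthy ones, and pay only for what you retain. The sketch below uses the contrib tail_sampling processor with purely illustrative percentages and a placeholder backend.

```yaml
# Illustrative tail-based sampling; tune percentages to your traffic and budget.
receivers:
  otlp:
    protocols:
      grpc:

processors:
  tail_sampling:
    decision_wait: 10s            # buffer spans before making a per-trace decision
    policies:
      - name: keep-errors         # always keep traces that contain an error
        type: status_code
        status_code:
          status_codes: [ERROR]
      - name: sample-the-rest     # keep ~5% of everything else
        type: probabilistic
        probabilistic:
          sampling_percentage: 5

exporters:
  otlp:
    endpoint: backend.example.com:4317   # placeholder backend

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling]
      exporters: [otlp]
```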

Furthermore, the rich, standardized data OTel provides enables deeper insights that might be harder to achieve with proprietary formats. Consistent attribute propagation across services allows for more accurate distributed tracing and analysis of end-to-end user journeys.

5. A Vibrant, Collaborative Community

OpenTelemetry isn't just code; it's a massive community effort. Adriel Perkins spoke positively about the welcoming nature of the community and the opportunities to learn and contribute. James Moessis echoed this, noting the responsiveness of maintainers and the rigorous code review process, which ultimately improves the quality of the project.

While navigating the community and contributing might have its own learning curve, the fact that OTel is developed in the open means users aren't reliant on a single vendor's roadmap. If a feature is missing or a bug is impacting you, there's a pathway (though sometimes challenging) to influence the direction or contribute a fix. This collaborative aspect fosters innovation and ensures OTel evolves based on the real-world needs of its users. The existence of initiatives like the contributor experience survey shows a commitment to making the community accessible and effective.

The Verdict: Worth the Climb?

The experiences of Adriel, Alexandre, Elena, and James paint a clear picture: OpenTelemetry is immensely powerful, but it's not a plug-and-play panacea. It demands investment — in learning, in configuration, in keeping pace with its evolution, and in carefully managing the quality and volume of telemetry data generated, especially when relying heavily on auto-instrumentation.

The "sucks" moments — the breaking changes, the configuration complexity, the occasional documentation gaps, the challenge of taming auto-instrumentation noise — are real and require dedicated engineering effort to overcome. However, the "rocks" moments — unparalleled flexibility, vendor freedom, a unified data model, powerful processing capabilities via the Collector, and a vibrant community — represent a fundamental shift in how we approach observability.

For observability engineers navigating today's complex cloud-native environments, OTel offers a path towards a more standardized, flexible, and future-proof observability strategy. It requires embracing the complexities and contributing back to the ecosystem, but the rewards — deeper insights, greater control, and freedom from lock-in — appear to be well worth the climb. The journey might have its frustrations, but OpenTelemetry is undeniably shaping the future of the field.

A special thank you to Adriel Perkins, Alexandre Magno Prado Machado, Elena Kovalenko, and James Moessis for generously sharing their time and candid experiences for this ongoing conversation about OpenTelemetry in the real world.

Juraci Paixão Kröhling is a Software Engineer at OllyGarden, an OpenTelemetry Governing Board Member, and a CNCF Ambassador.
