OpenTelemetry (OTel) arrived with a grand promise: a unified, vendor-neutral standard for observability data (traces, metrics, logs) that would free engineers from vendor lock-in and provide deeper insights into complex systems. It's the CNCF's second most active project after Kubernetes, a signal of massive industry investment and hope. But beyond the hype and the GitHub stars, what's the ground truth for the observability engineers and SREs implementing and maintaining OTel day-to-day?
Following up on our previous exploration of this topic in our KubeCon London 2025 talk, "OTel Sucks (But Also Rocks!)", we wanted to dive deeper into the candid conversations we had with practitioners from companies like Atlassian, Delivery Hero, Liatrio, and Pismo. The talk shared snippets of these experiences, but much more was left on the cutting room floor. This two-part piece aims to bring those richer details to light, offering fellow observability professionals an unvarnished look at the real-world challenges and triumphs of adopting OpenTelemetry.
We'll structure this exploration around the two sides of the OTel coin, in a format inspired by the classic "Linux Sucks" talks: first, the frustrations and hurdles — the "OTel Sucks" moments — and then, the powerful advantages and breakthroughs — the "OTel Rocks" moments.
OTel Sucks - The Real-World Hurdles and Headaches
No powerful technology comes without its challenges, and OpenTelemetry is no exception. The engineers we spoke with were frank about the friction points they've encountered.
1. The Ever-Shifting Sands: Stability and Semantic Conventions
A recurring theme was the challenge of keeping up with OTel's rapid development pace, particularly concerning the Collector and semantic conventions. Elena Kovalenko from Delivery Hero pinpointed the "absence of a stable collector version" and the "quick pace of change" as significant operational burdens. While progress is good, frequent updates demand constant vigilance, testing, and adaptation to avoid breaking production pipelines. Each Collector update, though potentially bringing valuable features or fixes, also carries the risk of subtle incompatibilities or requires configuration tweaks, adding overhead to the platform team's workload.
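One mitigation teams in this situation tend to land on is pinning the Collector to an explicit, tested release and upgrading on their own schedule rather than tracking the latest tag. Below is a minimal sketch of that idea, assuming the OpenTelemetry Operator is managing the Collector; the resource name, image tag, and inline config are purely illustrative, not a recommendation of a specific release or setup.

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel-gateway                 # illustrative name
spec:
  mode: deployment
  # Pin an explicit, tested release instead of a floating tag,
  # so upgrades become a deliberate step rather than a side effect of every rollout.
  image: otel/opentelemetry-collector-contrib:0.103.0
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
    exporters:
      debug: {}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [debug]
```

Pinning doesn't remove the need to test each upgrade, but it does turn "which Collector version are we even running?" into a question with a single, deliberate answer.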
The churn isn't limited to the Collector binary, either; it extends, crucially, to semantic conventions — the standardized names and attributes for telemetry data. James Moessis from Atlassian and Alexandre Magno Prado Machado from Pismo both shared frustrations here. When conventions change, it's not a simple find-and-replace: the change breaks dashboards, alerts, and any tooling reliant on the old names. As Alexandre highlighted, rolling out these changes across a large organization is a significant undertaking, requiring coordination across multiple teams and potentially impacting developer velocity. Imagine telling dozens or hundreds of developers they need to update their instrumentation — it's often met with resistance, especially when the perceived value isn't immediately clear to them. This friction point touches on the broader challenge of maintaining good telemetry: inconsistent or outdated attributes diminish the data's value.
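One pattern for absorbing a convention change without forcing every team to re-instrument at once is to translate attributes centrally in the Collector. The sketch below uses the contrib transform processor to copy old HTTP attribute names onto their newer semantic-convention equivalents (http.method to http.request.method, http.target to url.path) so that dashboards built on the new names keep working while services migrate. Treat it as an illustration of the approach, not a complete or exact mapping.

```yaml
processors:
  transform/semconv-bridge:          # illustrative name
    error_mode: ignore
    trace_statements:
      - context: span
        statements:
          # Copy the old attribute values onto the newer semantic-convention keys
          # when only the old ones are present on a span.
          - set(attributes["http.request.method"], attributes["http.method"]) where attributes["http.request.method"] == nil and attributes["http.method"] != nil
          - set(attributes["url.path"], attributes["http.target"]) where attributes["url.path"] == nil and attributes["http.target"] != nil
```

A shim like this buys time for the migration; it doesn't remove the eventual need to update instrumentation, and it adds one more piece of Collector configuration to own.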
2. Auto-Instrumentation: The Double-Edged Sword
Auto-instrumentation is often pitched as OTel's magic bullet — drop in an agent, and poof, instant observability. The reality, as our conversations hinted, is more complex. While it lowers the barrier to entry, it often generates a high volume of generic, sometimes noisy, telemetry. Adriel Perkins from Liatrio described how easy it is to get started this way, and how much refinement tends to be needed afterwards.
The challenge lies in the signal-to-noise ratio. Auto-instrumentation might capture every single HTTP request or database call, but is all that data equally valuable? Often, it lacks the specific business context that makes telemetry truly actionable. This can lead to "bad telemetry" — data that is voluminous and costly to store and process but provides limited insight during an actual incident. Furthermore, customizing auto-instrumentation to add that crucial context or filter out noise can sometimes be as complex as manual instrumentation, negating some of the initial ease-of-use benefits. Teams often find themselves needing to layer manual instrumentation on top or invest heavily in configuring the auto-instrumentation agents, blurring the lines between the two approaches.
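A common first step is to push at least the noise reduction into the Collector rather than into every service. Here is a hedged sketch using the contrib filter processor to drop probe traffic that auto-instrumentation dutifully records; the health-check paths are placeholders for whatever your services actually expose.

```yaml
processors:
  filter/drop-noise:                 # illustrative name
    error_mode: ignore
    traces:
      span:
        # Drop spans generated for health and readiness probes;
        # the paths below are hypothetical examples.
        - 'attributes["url.path"] == "/healthz"'
        - 'attributes["url.path"] == "/readyz"'
```

Filtering like this cuts volume and cost, but it doesn't add the missing business context; that part still tends to require manual spans or attributes in the application code.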
3. The Complexity of Configuration and Deployment
While the OTel Collector is lauded for its flexibility (more on that later), configuring it, especially for complex scenarios involving multiple pipelines, processors, and exporters, can be daunting. Elena mentioned the learning curve associated with mastering the Collector's configuration YAML and understanding the nuances of its various components. Debugging issues within a complex Collector pipeline — Why is data being dropped? Why is latency high? — requires deep expertise.
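For readers who haven't stared at one yet, even a small gateway configuration shows how many moving parts are involved. The sketch below is a minimal, hedged example with a hypothetical backend endpoint, not a recommended production setup.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  memory_limiter:                    # should run first to protect the Collector itself
    check_interval: 1s
    limit_percentage: 80
    spike_limit_percentage: 25
  batch:
    send_batch_size: 8192
    timeout: 5s

exporters:
  otlphttp/backend:
    endpoint: https://otlp.example.com   # hypothetical backend endpoint
  debug:
    verbosity: basic

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlphttp/backend]
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlphttp/backend, debug]
```

Multiply this by per-signal pipelines, per-environment overrides, and a handful of contrib processors, and the "why is data being dropped?" debugging sessions Elena describes become easy to picture.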
James Moessis also alluded to the intricacies of implementing advanced features like tail sampling. While head sampling is straightforward (the sampling decision is made up front, when a trace starts), tail sampling (the decision is deferred until the whole trace has been seen) is far more complex, requiring stateful processing and careful resource management. Building or deploying robust sampling strategies often involves significant engineering effort beyond configuring the standard OTel components, as evidenced by Atlassian's decision to build and open-source their own tail sampler.
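To make the contrast concrete, here is a hedged sketch of the contrib tail_sampling processor. Even the "standard component" route means choosing a decision window, sizing in-memory trace buffers, and making sure all spans of a trace land on the same Collector instance (typically via a trace-ID-aware load-balancing tier in front). The policy values below are illustrative only.

```yaml
processors:
  tail_sampling:
    decision_wait: 10s        # how long to buffer spans before deciding on a trace
    num_traces: 50000         # traces held in memory while waiting; sizes the buffer
    policies:
      - name: keep-errors     # always keep traces that contain an error
        type: status_code
        status_code:
          status_codes: [ERROR]
      - name: keep-slow       # always keep unusually slow traces
        type: latency
        latency:
          threshold_ms: 500
      - name: sample-the-rest # probabilistically keep a slice of everything else
        type: probabilistic
        probabilistic:
          sampling_percentage: 5
```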
4. Documentation and Guidance Gaps
While the OTel documentation is extensive, practitioners sometimes find gaps when dealing with specific edge cases or advanced configurations. Finding clear, concise guidance on best practices for structuring Collector configurations at scale, managing semantic convention updates gracefully, or optimizing performance for specific workloads can sometimes involve piecing together information from GitHub issues, Slack channels, and blog posts. The rapid evolution means documentation can occasionally lag behind the latest features or changes.
These challenges aren't reasons to dismiss OTel, but acknowledging them is crucial for any team embarking on or scaling their OTel journey. It requires commitment, expertise, and a willingness to navigate a rapidly evolving landscape.