A Guide to OpenTelemetry - Part 3: The Advantages
October 19, 2022

Pete Goldin


One of the reasons OpenTelemetry is becoming so popular is its many advantages. In A Guide to OpenTelemetry, APMdigest breaks these advantages down into two groups: the beneficial capabilities of OpenTelemetry and the results users can expect from OpenTelemetry. In Part 3, we cover the capabilities.

Start with: A Guide to OpenTelemetry - Part 1

Then read: A Guide to OpenTelemetry - Part 2: When Will OTel Be Ready?

Universal Observability Tool

"One specification to rule them all — Companies will be able to rely on OTel for all languages and types of telemetry (logs, metrics, traces, etc.) rather than distribute these capabilities among several tools," says Michael Haberman, CTO and Co-Founder of Aspecto.

Standardized Instrumentation

"Working with distributed systems is confusing enough; we need to simplify it by standardizing on a consistent set of tools," explains Mike Loukides, VP of Emerging Tech Content at O'Reilly Media. "What happens if your IT group develops part of a product, but buys several important components from a vendor? You're going to have to debug and maintain the whole system. That's going to be a nightmare if the different components don't speak the same language when saving information about their activity."

"OpenTelemetry is an instrumentation standard," says Pranay Prateek, Co-Founder of SigNoz. "You can use any backend and storage layer to store telemetry data, and any front end to visualize that data. So as long as these components support the OTLP format (OpenTelemetry's format), they can process and visualize OTel data."
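The decoupling Prateek describes can be sketched in a few lines. This is an illustrative sketch only, not the real OpenTelemetry SDK or the actual OTLP wire format (which is protobuf-based and far richer); the `Span`, `SpanExporter`, and `ConsoleExporter` names here are hypothetical stand-ins that show how a shared span schema lets instrumented code stay ignorant of which backend receives its data:

```python
import json
import time
from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class Span:
    """Hypothetical stand-in for an OTLP span record."""
    name: str
    start_ns: int
    end_ns: int
    attributes: dict = field(default_factory=dict)

class SpanExporter(Protocol):
    """Any backend that accepts the shared format can be plugged in here."""
    def export(self, spans: list[Span]) -> None: ...

class ConsoleExporter:
    """One possible backend; a vendor's exporter would have the same shape."""
    def __init__(self):
        self.received = []

    def export(self, spans):
        for s in spans:
            self.received.append(s)
            print(json.dumps({"name": s.name, "attributes": s.attributes}))

def instrumented_checkout(exporter: SpanExporter):
    """Application code: it only knows the standard interface, not the vendor."""
    start = time.time_ns()
    # ... real application work would happen here ...
    exporter.export([Span("checkout", start, time.time_ns(), {"items": 3})])

backend = ConsoleExporter()
instrumented_checkout(backend)  # swapping backends never touches app code
```

Because the application depends only on the standard interface, switching observability vendors means swapping the exporter, not re-instrumenting the code.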


"OpenTelemetry will be valuable for the same reason that other standards are: interoperability," says Loukides from O'Reilly. "It will make it easier for developers to write software that is observable by using a single standard API and being able to plug in standard libraries. It will make it easier for people responsible for operations to integrate with existing observability platforms. If the protocol that applications use to talk to observability platforms is standardized, operations staff can mix and match dashboards, debugging tools, automation tools (AIOps), and much more."

Automated Instrumentation

"Companies no longer need their developers to spend a lot of time and headache on manually instrumenting their stack," explains Torsten Volk, Managing Research Director, Containers, DevOps, Machine Learning and Artificial Intelligence, at Enterprise Management Associates (EMA). "Instead, developers can augment the automatically instrumented app stack by adding telemetry variables to their own code to tie together application behavior and infrastructure performance. DevOps engineers and SREs automatically receive a more comprehensive and complete view of their app environment and its context. DevOps, Ops and dev all will benefit from the more consistent instrumentation through OpenTelemetry compared to manual instrumentation, as this consistency lowers the risk of blind spots within the observability dashboard."

"Instrumentation can now be shifted left by making auto instrumentation part of any type of artifact used throughout the DevOps process," he continues. "Container images, VMs, software libraries, machine learning models, and databases can all come pre-instrumented to simplify the DevOps toolchain and lower the risk of critical parts of the stack flying 'under the radar' in terms of observability and visibility."
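The division of labor Volk describes, where tooling instruments code automatically and developers add only the business context, can be sketched as follows. This is a conceptual sketch, not the OpenTelemetry SDK: `auto_instrument`, `set_attribute`, and `RECORDED` are hypothetical names invented here to show the pattern of an agent-applied wrapper coexisting with one-line manual enrichment:

```python
import functools
import time

RECORDED = []  # stand-in for an exported span stream

class _SpanContext:
    """Tracks the span currently in flight, so enrichment can find it."""
    current = None

def auto_instrument(fn):
    """What an auto-instrumentation agent does: wrap code it did not write."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        span = {"name": fn.__name__, "start_ns": time.time_ns(), "attributes": {}}
        _SpanContext.current = span
        try:
            return fn(*args, **kwargs)
        finally:
            span["end_ns"] = time.time_ns()
            RECORDED.append(span)
            _SpanContext.current = None
    return wrapper

def set_attribute(key, value):
    """Manual enrichment: attach business context to the auto-created span."""
    if _SpanContext.current is not None:
        _SpanContext.current["attributes"][key] = value

@auto_instrument              # applied by tooling, not by the developer
def handle_order(order_id):
    set_attribute("order.id", order_id)   # the one line a developer adds
    return "ok"

handle_order("A-1001")
```

The developer's contribution is a single `set_attribute` call; timing, naming, and export are handled by the wrapper, which is what lowers the risk of blind spots from inconsistent hand-rolled instrumentation.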

Future-Proof Instrumentation

"The main business benefit that we see from using OpenTelemetry is that it is future-proof," says Prateek from SigNoz. "OpenTelemetry is an open standard and open source implementation with contributors from companies like AWS, Microsoft, Splunk, etc. It provides instrumentation libraries in almost all major programming languages and covers most of the popular open source frameworks. If tomorrow your team decides to use a new open source library in the tech stack, you can have the peace of mind that OpenTelemetry will provide instrumentation for it."

"In a hyper-dynamic environment where services come and go, and instances can be scaled in a reactive fashion, the OpenTelemetry project aims to provide a single path for full stack visibility which is future proof and easy to apply," adds Cedric Ziel, Grafana Labs Senior Product Manager.

Cost-Effective Observability

OpenTelemetry makes observability more cost-effective in several ways.

First, because it is open source and vendor-neutral, it gives organizations control over their observability costs.

"Organizations had large opportunity costs in the past when they switched observability providers that forced them to use proprietary SDKs and APIs," says Ziel from Grafana Labs. "Customers are demanding compatibility and a path with OpenTelemetry and are less likely to accept proprietary solutions than a few years ago."

"No vendor lock-in means more control over observability costs," Prateek from SigNoz elaborates. "The freedom to choose an observability vendor of your choice while having access to world-class instrumentation is a huge advantage to the business."

"OpenTelemetry can also help reduce the cost associated with ramping up your engineering team," he continues. "Using an open source standard helps engineering teams to create a knowledge base that is consistent and improves with time."

Second, OpenTelemetry reduces cost because it is easy to use and reduces development time.

"Standardizing generation and exporting signals provides consistency across the development organization and leads to less development cost/time," says Nitin Navare, CTO of LogicMonitor.

Go to: A Guide to OpenTelemetry - Part 4: The Results

Pete Goldin is Editor and Publisher of APMdigest
