A Guide to OpenTelemetry - Part 3: The Advantages
October 19, 2022

Pete Goldin
APMdigest


One of the reasons OpenTelemetry is becoming so popular is the many advantages it offers. In A Guide to OpenTelemetry, APMdigest breaks these advantages down into two groups: the beneficial capabilities of OpenTelemetry and the results users can expect from OpenTelemetry. In Part 3, we cover the capabilities.

Start with: A Guide to OpenTelemetry - Part 1

Start with: A Guide to OpenTelemetry - Part 2: When Will OTel Be Ready?

Universal Observability Tool

"One specification to rule them all — Companies will be able to rely on OTel for all languages and types of telemetry (logs, metrics, traces, etc) rather than distribute these capabilities among several tools" says Michael Haberman, CTO and Co-Founder of Aspecto.

Standardized Instrumentation

"Working with distributed systems is confusing enough; we need to simplify it by standardizing on a consistent set of tools," explains Mike Loukides, VP of Emerging Tech Content at O'Reilly Media. "What happens if your IT group develops part of a product, but buys several important components from a vendor? You're going to have to debug and maintain the whole system. That's going to be a nightmare if the different components don't speak the same language when saving information about their activity."

"Opentelemetry is an instrumentation standard," says Pranay Prateek, Co-Founder of SigNoz. "You can use any backend and storage layer to store telemetry data, and any front end to visualize that data. So as long as these components support the OTLP format (OpenTelemetry's format), they can process and visualize OTel data."

Interoperability

"OpenTelemetry will be valuable for the same reason that other standards are: interoperability," says Loukides from O'Reilly. "It will make it easier for developers to write software that is observable by using a single standard API and being able to plug in standard libraries. It will make it easier for people responsible for operations to integrate with existing observability platforms. If the protocol that applications use to talk to observability platforms is standardized, operations staff can mix and match dashboards, debugging tools, automation tools (AIOps), and much more."

Automated Instrumentation

"Companies no longer need their developers to spend a lot of time and headache on manually instrumenting their stack," explains Torsten Volk, Managing Research Director, Containers, DevOps, Machine Learning and Artificial Intelligence, at Enterprise Management Associates (EMA). "Instead developers can augment the automatically instrumented app stack by adding telemetry variables to their own code to tie together application behavior and infrastructure performance. DevOps engineers and SREs automatically receive a more comprehensive and complete view of their app environment and its context. DevOps, Ops and dev all will benefit from the more consistent instrumentation through OpenTelemetry compared to manual instrumentation, as this consistency lowers the risk of blind spots within the observability dashboard."

"Instrumentation can now be shifted left by making auto instrumentation part of any type of artifact used throughout the DevOps process," he continues. "Container images, VMs, software libraries, machine learning models, and database can all come pre-instrumented to simplify the DevOps toolchain and lower the risk of critical parts of the stack flying 'under the radar' in terms of observability and visibility."

Future-Proof Instrumentation

"The main business benefit that we see from using OpenTelemetry is that it is future-proof," says Prateek from SigNoz. "OpenTelemetry is an open standard and open source implementation with contributors from companies like AWS, Microsoft, Splunk, etc. It provides instrumentation libraries in almost all major programming languages and covers most of the popular open source frameworks. If tomorrow your team decides to use a new open source library in the tech stack, you can have the peace of mind that OpenTelemetry will provide instrumentation for it."

"In a hyper-dynamic environment where services come and go, and instances can be scaled in a reactive fashion, the OpenTelemetry project aims to provide a single path for full stack visibility which is future proof and easy to apply," adds Cedric Ziel, Grafana Labs Senior Product Manager.

Cost-Effective Observability

OpenTelemetry makes observability more cost-effective in several ways.

First, it provides cost control because it is open source.

"Organizations had large opportunity-costs in the past when they switched observability providers that forced them to use proprietary SDKs and APIs," says Ziel from Grafana Labs. "Customers are demanding compatibility and a path with OpenTelemetry and are less likely to accept proprietary solutions than a few years ago."

"No vendor lock-in means more control over observability costs," Prateek from SigNoz elaborates. "The freedom to choose an observability vendor of your choice while having access to world-class instrumentation is a huge advantage to the business."

"OpenTelemetry can also help reduce the cost associated with ramping up your engineering team," he continues. "Using an open source standard helps engineering teams to create a knowledge base that is consistent and improves with time."

Second, OpenTelemetry reduces cost because it is easy to use and reduces development time.

"Standardizing generation and exporting signals provides consistency across the development organization and leads to less development cost/time," says Nitin Navare, CTO of LogicMonitor.

Go to: A Guide to OpenTelemetry - Part 4: The Results

Pete Goldin is Editor and Publisher of APMdigest
