A Guide to OpenTelemetry - Part 3: The Advantages
October 19, 2022

Pete Goldin
APMdigest


One of the reasons OpenTelemetry is becoming so popular is its many advantages. In A Guide to OpenTelemetry, APMdigest breaks these advantages down into two groups: the beneficial capabilities of OpenTelemetry and the results users can expect from it. In Part 3, we cover the capabilities.

Start with: A Guide to OpenTelemetry - Part 1

Then read: A Guide to OpenTelemetry - Part 2: When Will OTel Be Ready?

Universal Observability Tool

"One specification to rule them all — Companies will be able to rely on OTel for all languages and types of telemetry (logs, metrics, traces, etc) rather than distribute these capabilities among several tools" says Michael Haberman, CTO and Co-Founder of Aspecto.

Standardized Instrumentation

"Working with distributed systems is confusing enough; we need to simplify it by standardizing on a consistent set of tools," explains Mike Loukides, VP of Emerging Tech Content at O'Reilly Media. "What happens if your IT group develops part of a product, but buys several important components from a vendor? You're going to have to debug and maintain the whole system. That's going to be a nightmare if the different components don't speak the same language when saving information about their activity."

"Opentelemetry is an instrumentation standard," says Pranay Prateek, Co-Founder of SigNoz. "You can use any backend and storage layer to store telemetry data, and any front end to visualize that data. So as long as these components support the OTLP format (OpenTelemetry's format), they can process and visualize OTel data."

Interoperability

"OpenTelemetry will be valuable for the same reason that other standards are: interoperability," says Loukides from O'Reilly. "It will make it easier for developers to write software that is observable by using a single standard API and being able to plug in standard libraries. It will make it easier for people responsible for operations to integrate with existing observability platforms. If the protocol that applications use to talk to observability platforms is standardized, operations staff can mix and match dashboards, debugging tools, automation tools (AIOps), and much more."

Automated Instrumentation

"Companies no longer need their developers to spend a lot of time and headache on manually instrumenting their stack," explains Torsten Volk, Managing Research Director, Containers, DevOps, Machine Learning and Artificial Intelligence, at Enterprise Management Associates (EMA). "Instead developers can augment the automatically instrumented app stack by adding telemetry variables to their own code to tie together application behavior and infrastructure performance. DevOps engineers and SREs automatically receive a more comprehensive and complete view of their app environment and its context. DevOps, Ops and dev all will benefit from the more consistent instrumentation through OpenTelemetry compared to manual instrumentation, as this consistency lowers the risk of blind spots within the observability dashboard."

"Instrumentation can now be shifted left by making auto instrumentation part of any type of artifact used throughout the DevOps process," he continues. "Container images, VMs, software libraries, machine learning models, and database can all come pre-instrumented to simplify the DevOps toolchain and lower the risk of critical parts of the stack flying 'under the radar' in terms of observability and visibility."

Future-Proof Instrumentation

"The main business benefit that we see from using OpenTelemetry is that it is future-proof," says Prateek from SigNoz. "OpenTelemetry is an open standard and open source implementation with contributors from companies like AWS, Microsoft, Splunk, etc. It provides instrumentation libraries in almost all major programming languages and covers most of the popular open source frameworks. If tomorrow your team decides to use a new open source library in the tech stack, you can have the peace of mind that OpenTelemetry will provide instrumentation for it."

"In a hyper-dynamic environment where services come and go, and instances can be scaled in a reactive fashion, the OpenTelemetry project aims to provide a single path for full stack visibility which is future proof and easy to apply," adds Cedric Ziel, Grafana Labs Senior Product Manager.

Cost-Effective Observability

OpenTelemetry makes observability more cost-effective in several ways.

First, because it is open source and vendor-neutral, it gives organizations control over costs.

"Organizations had large opportunity-costs in the past when they switched observability providers that forced them to use proprietary SDKs and APIs," says Ziel from Grafana Labs. "Customers are demanding compatibility and a path with OpenTelemetry and are less likely to accept proprietary solutions than a few years ago."

"No vendor lock-in means more control over observability costs," Prateek from SigNoz elaborates. "The freedom to choose an observability vendor of your choice while having access to world-class instrumentation is a huge advantage to the business."

"OpenTelemetry can also help reduce the cost associated with ramping up your engineering team," he continues. "Using an open source standard helps engineering teams to create a knowledge base that is consistent and improves with time."

Second, OpenTelemetry reduces cost because it is easy to use and shortens development time.

"Standardizing generation and exporting signals provides consistency across the development organization and leads to less development cost/time," says Nitin Navare, CTO of LogicMonitor.

Go to: A Guide to OpenTelemetry - Part 4: The Results

Pete Goldin is Editor and Publisher of APMdigest
