New EMA Report: OpenTelemetry's Emerging Role in IT Performance and Availability

Pete Goldin
APMdigest

OpenTelemetry is quickly becoming a foundational element of observability, according to a new report I wrote in partnership with Dan Twing, President and COO of Enterprise Management Associates (EMA), titled Taking Observability to the Next Level: OpenTelemetry's Emerging Role in IT Performance and Reliability. The report was sponsored by Elastic, an APMdigest sponsor, as well as Apica, Beta Systems, Dynatrace, Embrace and SolarWinds.

WEBINAR APRIL 15: Unlocking the Future of Observability: OpenTelemetry’s Role in IT Performance and Innovation

OpenTelemetry (OTel) is an open source CNCF project offering a framework and suite of tools including APIs and SDKs that facilitate the generation, collection, and exporting of telemetry data for observability platforms and related tools. OTel collects logs, metrics and traces, and is expanding data types to include profiling and many other possibilities.
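To make the pipeline concrete, here is a minimal sketch in plain Python of the pattern the OTel SDKs implement: instrumented code opens a span, attaches attributes, and on completion hands the span to an exporter that ships it to an observability backend. This is a conceptual mock for illustration, not the real OpenTelemetry API; the class names and structure are simplified stand-ins.

```python
# Conceptual sketch of the OTel span pipeline (not the real SDK):
# instrumented code emits spans, and an exporter forwards finished
# spans to an observability backend.
import time


class CollectingExporter:
    """Stand-in for a real exporter; just accumulates finished spans."""
    def __init__(self):
        self.exported = []

    def export(self, span):
        self.exported.append(span)


class Span:
    """A trace span used as a context manager, in the spirit of
    tracer.start_as_current_span() in the real SDKs."""
    def __init__(self, name, exporter):
        self.name = name
        self.exporter = exporter
        self.attributes = {}

    def set_attribute(self, key, value):
        self.attributes[key] = value

    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, *exc):
        self.end = time.time()
        self.exporter.export(self)  # hand the finished span off


exporter = CollectingExporter()
with Span("handle-request", exporter) as span:
    span.set_attribute("http.method", "GET")

print(exporter.exported[0].name)  # handle-request
```

In the real SDKs the same shape appears as a `TracerProvider` wired to a span processor and exporter; the value of the standard is that this wiring looks the same regardless of which observability backend receives the data.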

This report comes at just the right time, with OpenTelemetry emerging as an essential component of modern observability. Our first objective for the research was to assess the awareness and perception of OpenTelemetry in the IT industry. We assumed the research would show that the project has good momentum, but the results exceeded even our expectations, with a majority (68.3%) of respondents saying they are moderately or very familiar with OTel.

OpenTelemetry also enjoys a positive perception, with half of respondents considering OpenTelemetry mature enough for implementation today, and another 31% considering it moderately mature and useful. In other words, more than 80% feel that OpenTelemetry can be used now. And almost everyone surveyed (98.7%) expresses support for where OpenTelemetry is heading — a very strong vote of confidence. Notably, those last two groupings include respondents who are only marginally familiar with OpenTelemetry, which suggests that OTel has a rock-solid reputation.

The majority also say OpenTelemetry's role in observability is important — 61% believe OpenTelemetry is a very important or critical enabler of observability, and 57% place a similar value on the importance of OpenTelemetry to their own observability strategy.

The usage numbers are also encouraging. The report states, "Almost half (48.5%) of respondents currently use OpenTelemetry. Another 25.3% are not using OpenTelemetry yet, but are planning to implement. This means that just under 75% are either using or planning to use OpenTelemetry, a statistic that bodes well for the future of the standard. The remaining 24.8% are still evaluating, while only 1.5% of respondents had no plans to implement."

The survey findings further reflect the momentum of OpenTelemetry by showing how observability maturity correlates directly with the awareness, perception and even adoption of OpenTelemetry. A majority (64%) of survey respondents assess their own observability practices as mature or very mature, and 45% of that group are very familiar with OpenTelemetry; 67% see OpenTelemetry as very important or critical to their own observability strategy; and 61% already use OpenTelemetry.


The EMA report contains many more interesting statistics about OpenTelemetry that can be valuable to both observability practitioners and IT product vendors, answering questions such as:

  • Where are users deploying OpenTelemetry?
  • What are the concerns and challenges?
  • What are the benefits of OpenTelemetry?
  • What level of ROI are users gaining?
  • What are the expectations for OpenTelemetry's future?

One of the final points we made in the report: OpenTelemetry will become a competitive advantage for organizations across most industries. "One of the most consequential points to consider: the survey findings suggest that your competitors have already started using OpenTelemetry to improve digital performance, availability, and the user experience. With this in mind, if you have not already adopted OpenTelemetry, the time to start is now."

Pete Goldin is Editor and Publisher of APMdigest

The Latest

Developers building AI applications are not just looking for fault patterns after deployment; they must detect issues quickly during development and have the ability to prevent issues after going live. Unfortunately, traditional observability tools can no longer meet the needs of AI-driven enterprise application development. AI-powered detection and auto-remediation tools designed to keep pace with rapid development are now emerging to proactively manage performance and prevent downtime ...

Every few years, the cybersecurity industry adopts a new buzzword. "Zero Trust" has endured longer than most — and for good reason. Its promise is simple: trust nothing by default, verify everything continuously. Yet many organizations still hesitate to implement Zero Trust Network Access (ZTNA). The problem isn't that ZTNA doesn't work. It's that it's often misunderstood ...

For many retail brands, peak season is the annual stress test of their digital infrastructure. It's also when technical dashboards often glow green, yet customer feedback, digital experience frustration, and conversion trends tell a different story entirely. Over the past several years, we've seen the same pattern across retail, financial services, travel, and media: internal application performance metrics fail to capture the true experience of users connecting over local broadband, mobile carriers, and congested networks using multiple devices across geographies ...

PostgreSQL promises greater flexibility, performance, and cost savings compared to proprietary alternatives. But successfully deploying it isn't always straightforward, and there are some hidden traps along the way that even seasoned IT leaders can stumble into. In this blog, I'll highlight five of the most common pitfalls with PostgreSQL deployment and offer guidance on how to avoid them, along with the best path forward ...

The rise of hybrid cloud environments, the explosion of IoT devices, the proliferation of remote work, and advanced cyber threats have created a monitoring challenge that traditional approaches simply cannot meet. IT teams find themselves drowning in a sea of data, struggling to identify critical threats amidst a deluge of alerts, and often reacting to incidents long after they've begun. This is where AI and ML are leveraged ...

Three practices — chaos testing, incident retrospectives, and AIOps-driven monitoring — are transforming platform teams from reactive responders into proactive builders of resilient, self-healing systems. The evolution is not just technical; it's cultural. The modern platform engineer isn't just maintaining infrastructure. They're product owners designing for reliability, observability, and continuous improvement ...

Getting applications into the hands of those who need them quickly and securely has long been the goal of a branch of IT often referred to as End User Computing (EUC). Over recent years, the way applications (and data) have been delivered to these "users" has changed noticeably. Organizations have many more choices available to them now, and there will be more to come ... But how did we get here? Where are we going? Is this all too complicated? ...

On November 18, a single database permission change inside Cloudflare set off a chain of failures that rippled across the Internet. Traffic stalled. Authentication broke. Workers KV returned waves of 5xx errors as systems fell in and out of sync. For nearly three hours, one of the most resilient networks on the planet struggled under the weight of a change no one expected to matter ... Cloudflare recovered quickly, but the deeper lesson reaches far beyond this incident ...

Chris Steffen and Ken Buckler from EMA discuss the Cloudflare outage and what availability means in the technology space ...

Every modern industry is confronting the same challenge: human reaction time is no longer fast enough for real-time decision environments. Across sectors, from financial services to manufacturing to cybersecurity and beyond, the stakes mirror those of autonomous vehicles — systems operating in complex, high-risk environments where milliseconds matter ...
