New OpenTelemetry Network Protocol OpAMP - Game Changer for DevOps and Observability

Paul Stefanski
observIQ

Observability is one of the fastest growing industries in the world today, by both market size and data volume. Since the 90s, cloud monitoring has become a must-have for businesses in nearly every sector, not just technology. The exponential growth of cloud infrastructure and data volume is creating two mounting problems for customers seeking to collect and generate value from their data: managing ever-larger fleets of data collection agents, and containing the volume of data those agents produce. Both problems come down to how collection agents are configured and managed, and new open source technologies from industry leaders are seeking to change the paradigm.

OpenTelemetry, a collaborative open source observability project, has introduced a new network protocol that addresses the infrastructure management headache, coupled with collector configuration options to filter and reduce data volume. The Open Agent Management Protocol (OpAMP) enables remote management of OpenTelemetry collectors (agents). In simple terms, it's a free and open source technology that dramatically reduces the effort and complexity of deploying and managing agents and data pipelines for DevOps teams.

Why is OpenTelemetry's OpAMP special?

It offers a simple and versatile method for remotely configuring and maintaining telemetry agents across massive environments with very little overhead. This is particularly useful for large cloud and headless environments, where agent management would otherwise mean manually managing every agent on every server.

OpAMP also enables agents to report information to multiple remote management destinations simultaneously, such as their status, properties, connections, configuration, operating system, version, agent CPU and RAM usage, data collection rate, and more. OpAMP can integrate with access credential management systems to keep environments secure. It also has a secure auto-update capability that makes maintaining large environments easy.
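To make the reporting side concrete, here is a rough sketch of the kind of status report an agent might send. This is illustration only: OpAMP actually exchanges Protobuf messages over WebSocket or plain HTTP, and the field names below merely echo concepts from the spec's AgentToServer message (instance identity, agent description, health, effective configuration); the dict form and helper function are hypothetical.

```python
import json
import uuid

def build_status_report(version: str, cpu_pct: float, ram_mb: int) -> dict:
    """Assemble a simplified, illustrative agent status report.

    Not the real wire format: OpAMP uses Protobuf, and these field
    names are a loose sketch of concepts from the spec.
    """
    return {
        # Unique identity for this agent instance
        "instance_uid": str(uuid.uuid4()),
        "agent_description": {
            "identifying_attributes": {
                "service.name": "otel-collector",
                "service.version": version,
            },
            "non_identifying_attributes": {
                "os.type": "linux",
            },
        },
        # Health and resource usage, as mentioned in the article
        "health": {"healthy": True, "cpu_pct": cpu_pct, "ram_mb": ram_mb},
        # The configuration the agent is actually running with
        "effective_config": {"receivers": ["otlp"], "exporters": ["logging"]},
    }

report = build_status_report("0.97.0", cpu_pct=2.5, ram_mb=180)
print(json.dumps(report["health"]))
```

The point of reporting an effective configuration alongside health data is that a management server can detect drift between the config it pushed and the config each agent is actually running.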

Similar capabilities exist in a handful of proprietary products, but the addition of OpAMP to OpenTelemetry is the launch point for industry-wide, vendor-agnostic adoption. In keeping with the open source mission, observability vendors collaborate on overarching technologies that benefit the whole industry while focusing independently on their specific niches.

That's what makes OpAMP so unique — as an open source technology built by experts from every major telemetry organization, it's completely vendor agnostic. It's available now as part of OpenTelemetry, but it's not dependent on OpenTelemetry as a whole.

OpAMP can be used to manage many agent types. Agents can collect data from any platform in any environment and ship it to one or several data management or analysis platforms. Say you prefer one tool for data analysis but an unrelated tool for data storage, and you also run servers across multiple cloud platforms; with OpAMP, you can manage different agent types across all of those environments from one place. Some agents, like OpenTelemetry collectors, can ship to many analysis and storage tools simultaneously and route specific data to specific destinations based on your configuration. With OpAMP, those agents and configurations are easily and remotely manageable at any scale, from source to destination.
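The configuration-driven routing described above might look something like the following OpenTelemetry Collector config sketch, with metrics going to one backend and logs to another. The exporter names and endpoints are placeholders; the value OpAMP adds is that a management server can push updates to a file like this remotely, instead of someone editing it by hand on every host.

```yaml
# Hypothetical collector configuration: one agent, two destinations.
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  otlphttp/analysis:
    endpoint: https://analysis.example.com:4318   # placeholder backend
  otlphttp/storage:
    endpoint: https://storage.example.com:4318    # placeholder backend

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlphttp/analysis]
    logs:
      receivers: [otlp]
      exporters: [otlphttp/storage]
```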

OpenTelemetry is not meant to overrule or undercut existing telemetry solutions. In fact, it's exactly the opposite: it gives end users the freedom to combine exactly the tools they want for their specific needs. As the observability industry grows and data volume swells, foundational technologies like OpAMP are critical to keeping technology infrastructures manageable for vendors and customers alike.

Paul Stefanski is Product Marketing Manager at observIQ
