
New OpenTelemetry Network Protocol OpAMP - Game Changer for DevOps and Observability

Paul Stefanski
observIQ

Observability is one of the fastest growing industries in the world today, by both market size and data volume. Since the 90s, cloud monitoring has become a must-have for businesses in nearly every sector, not just technology. The exponentially increasing size of cloud infrastructures is inflating two bubbles for customers seeking to collect and generate value from their data: the overhead of managing an ever-larger fleet of data collection agents, and the cost of the ever-growing volume of telemetry those agents produce. Both are ready to burst, and both trace back to how data collection agents are configured and managed. New open source technologies from industry leaders are seeking to change that paradigm.

OpenTelemetry, a collaborative open source observability project, has introduced a new network protocol that addresses the infrastructure management headache, paired with collector configuration options that filter and reduce data volume. The Open Agent Management Protocol (OpAMP) enables remote management of OpenTelemetry collectors (agents). In simple terms, it's a free and open source technology that dramatically reduces the effort and complexity of deploying and managing agents and data pipelines for DevOps teams.

Why is OpenTelemetry's OpAMP special?

It offers a simple and versatile way to remotely configure and maintain telemetry agents across massive environments with very little overhead. This is particularly useful for large cloud environments and headless environments, where keeping agents up to date would otherwise mean touching every agent on every server by hand.
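To make that management loop concrete, here is a minimal sketch in Go. It is deliberately not the official opamp-go client: the real protocol exchanges protobuf-encoded AgentToServer and ServerToAgent messages over WebSocket or plain HTTP, while this sketch uses JSON, hand-rolled structs, and a hypothetical endpoint URL purely to illustrate the report-status-then-receive-config cycle.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "log"
    "net/http"
)

// statusReport is a simplified stand-in for OpAMP's AgentToServer message.
type statusReport struct {
    InstanceUID string `json:"instance_uid"`
    AgentType   string `json:"agent_type"`
    Version     string `json:"version"`
}

// configResponse is a simplified stand-in for OpAMP's ServerToAgent message.
type configResponse struct {
    RemoteConfig string `json:"remote_config"` // new collector config, if any
}

func main() {
    // Hypothetical management endpoint; a real deployment would use its
    // own OpAMP server's URL and protobuf framing.
    const serverURL = "https://opamp.example.com/v1/opamp"

    body, _ := json.Marshal(statusReport{
        InstanceUID: "agent-0001",
        AgentType:   "otel-collector",
        Version:     "0.81.0",
    })

    // Report status; the server's response may carry a new configuration.
    resp, err := http.Post(serverURL, "application/json", bytes.NewReader(body))
    if err != nil {
        log.Fatalf("report failed: %v", err)
    }
    defer resp.Body.Close()

    raw, _ := io.ReadAll(resp.Body)
    var cfg configResponse
    if err := json.Unmarshal(raw, &cfg); err != nil {
        log.Fatalf("bad response: %v", err)
    }
    if cfg.RemoteConfig != "" {
        // In a real agent, this is where the new configuration would be
        // validated, applied, and its effective hash reported back.
        fmt.Println("received new config:", cfg.RemoteConfig)
    }
}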

OpAMP also enables agents to report to multiple remote management destinations simultaneously, covering their status, properties, connections, configuration, operating system, version, CPU and RAM usage, data collection rate, and more. OpAMP can integrate with access credential management systems to keep environments secure, and its secure auto-update capability makes maintaining large fleets easy.
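The information an agent reports maps naturally onto a small data structure. The sketch below is again illustrative rather than OpAMP's real protobuf schema; the field names, version string, and sampled numbers are placeholders, with only the OS, architecture, and memory figures pulled from Go's standard runtime package.

package main

import (
    "encoding/json"
    "fmt"
    "runtime"
)

// agentDescription is an illustrative, hand-rolled stand-in for the
// description and health fields OpAMP lets an agent report. The real
// schema lives in OpAMP's protobuf definitions.
type agentDescription struct {
    Status        string  `json:"status"`
    OS            string  `json:"os"`
    Arch          string  `json:"arch"`
    AgentVersion  string  `json:"agent_version"`
    CPUPercent    float64 `json:"cpu_percent"`     // real agents sample the host
    RAMBytes      uint64  `json:"ram_bytes"`
    RecordsPerSec float64 `json:"records_per_sec"` // data collection rate
}

func main() {
    var mem runtime.MemStats
    runtime.ReadMemStats(&mem)

    desc := agentDescription{
        Status:        "healthy",
        OS:            runtime.GOOS,
        Arch:          runtime.GOARCH,
        AgentVersion:  "0.81.0", // placeholder version
        CPUPercent:    3.2,      // placeholder; sampled elsewhere in practice
        RAMBytes:      mem.Alloc,
        RecordsPerSec: 1250.0,   // placeholder throughput figure
    }

    out, _ := json.MarshalIndent(desc, "", "  ")
    fmt.Println(string(out))
}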

Similar capabilities exist in a handful of proprietary products, but the addition of OpAMP to OpenTelemetry is the launch point for industry-wide, vendor-agnostic adoption of the technology. In keeping with the open source mission, observability vendors collaborate on the overarching technologies that benefit the whole industry while focusing independently on their specific niches.

That's what makes OpAMP so unique — as an open source technology built by experts from every major telemetry organization, it's completely vendor agnostic. It's available now as part of OpenTelemetry, but it's not dependent on OpenTelemetry as a whole.

OpAMP can be used to manage many agent types. Agents can collect data from any platform in any environment and ship it to one or several data management or analysis platforms. Say you prefer one tool for data analysis but an unrelated tool for data storage, and you also maintain servers across multiple cloud platforms; with OpAMP, you can manage different agent types across all of those environments from one place. Some agents, like OpenTelemetry collectors, can ship to many analysis and storage tools simultaneously and filter where specific data goes based on your configuration. With OpAMP, those agents and configurations are easily and remotely manageable at any scale, from source to destination.
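As a sketch of what one agent feeding several destinations looks like in practice, the trimmed OpenTelemetry Collector configuration below declares two exporters and routes metrics and logs to them per pipeline; delivering and updating exactly this kind of document across a fleet is what OpAMP automates. The exporter endpoints are placeholders.

package main

import "fmt"

// A trimmed OpenTelemetry Collector configuration: one OTLP receiver
// feeding two exporters, with metrics going to one backend and logs
// fanned out to both. Endpoints are hypothetical.
const collectorConfig = `
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  otlphttp/analysis:
    endpoint: https://analysis.example.com
  otlphttp/storage:
    endpoint: https://storage.example.com

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlphttp/analysis]
    logs:
      receivers: [otlp]
      exporters: [otlphttp/analysis, otlphttp/storage]
`

func main() {
    fmt.Println(collectorConfig)
}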

OpenTelemetry is not meant to overrule or undercut existing telemetry solutions. In fact, it's exactly the opposite: it gives end users the freedom to combine exactly the tools they want for their specific needs. As the observability industry continues to grow and data volume swells, foundational technologies like OpAMP are critical to keeping technology infrastructures manageable for vendors and customers alike.

Paul Stefanski is Product Marketing Manager at observIQ
