New OpenTelemetry Network Protocol OpAMP - Game Changer for DevOps and Observability

Paul Stefanski
observIQ

Observability is one of the fastest growing industries in the world today, by both market size and data volume. Since the 90s, cloud monitoring has become a must-have for businesses in nearly every sector, not just technology. The exponentially increasing size of cloud infrastructures is creating two mounting problems for customers seeking to collect and generate value from their data: the sheer number of agents that must be deployed and maintained, and the volume of data those agents produce. Both problems come down to how data collection agents are configured and managed, and new open source technologies from industry leaders are seeking to change the paradigm.

OpenTelemetry, a collaborative open source observability project, has introduced a network protocol that addresses the infrastructure management headache, coupled with collector configuration options to filter and reduce data volume. The Open Agent Management Protocol (OpAMP) enables remote management of OpenTelemetry collectors (agents). In simple terms, it's a free and open source technology that dramatically reduces the effort and complexity of deploying and managing agents and data pipelines for DevOps teams.

Why is OpenTelemetry's OpAMP special?

It offers a simple and versatile method for remotely configuring and maintaining telemetry agents across massive environments with very little overhead. This is particularly useful for large cloud environments and headless environments, where fleets would otherwise have to be configured manually, agent by agent, on every server.
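OpAMP itself is specified as a protobuf-based message exchange (typically over WebSocket or HTTP), but the core remote-configuration idea can be sketched more simply. The following Go program is a hypothetical, simplified model (the type and function names are illustrative, not the real OpAMP schema): a management server advertises a desired configuration along with its hash, and each agent applies the config only when its locally applied hash differs.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// RemoteConfig is a hypothetical stand-in for OpAMP's remote
// configuration message: an opaque config body plus a hash the
// agent uses to detect changes.
type RemoteConfig struct {
	Body []byte
	Hash string
}

// NewRemoteConfig computes the hash the server advertises.
func NewRemoteConfig(body []byte) RemoteConfig {
	sum := sha256.Sum256(body)
	return RemoteConfig{Body: body, Hash: hex.EncodeToString(sum[:])}
}

// Agent remembers the hash of the last config it applied.
type Agent struct {
	ID          string
	AppliedHash string
}

// Apply installs the config only if it differs from what is already
// running, and reports whether anything changed.
func (a *Agent) Apply(rc RemoteConfig) bool {
	if a.AppliedHash == rc.Hash {
		return false // already up to date, nothing to do
	}
	a.AppliedHash = rc.Hash
	return true
}

func main() {
	cfg := NewRemoteConfig([]byte("receivers: [hostmetrics]"))
	agents := []*Agent{{ID: "agent-1"}, {ID: "agent-2"}}

	// One management plane pushes the same desired state everywhere.
	for _, a := range agents {
		fmt.Printf("%s updated: %v\n", a.ID, a.Apply(cfg))
	}
	// Re-sending the same config is a no-op for an up-to-date agent.
	fmt.Printf("agent-1 updated again: %v\n", agents[0].Apply(cfg))
}
```

This hash-comparison step is what lets a single server manage thousands of agents cheaply: most heartbeats carry no config change, so most exchanges do no work.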

OpAMP also enables agents to report information to multiple remote management destinations simultaneously, such as their status, properties, connections, configuration, operating system, version, agent CPU and RAM usage, data collection rate, and more. OpAMP can integrate with access credential management systems to keep environments secure. It also has a secure auto-update capability that makes maintaining large environments easy.

Similar functionality is available in a handful of proprietary products, but the addition of OpAMP to OpenTelemetry is the launch point for industry-wide, vendor-agnostic adoption of the technology. In keeping with the open source mission, observability vendors collaborate on overarching technologies that benefit the whole industry, while independently focusing on the specific niches they serve.

That's what makes OpAMP so unique — as an open source technology built by experts from every major telemetry organization, it's completely vendor agnostic. It's available now as part of OpenTelemetry, but it's not dependent on OpenTelemetry as a whole.

OpAMP can be used to manage many agent types. Agents can collect data from any platform in any environment, and ship it to one or several data management or analysis platforms. Say you prefer one tool for data analysis but an unrelated tool for data storage, and you also run servers across multiple cloud platforms; with OpAMP, you can manage different agent types across all of those environments from one place. Some agents, like OpenTelemetry collectors, can ship to many analysis and storage tools simultaneously, and route specific data based on your configuration. With OpAMP, those agents and configurations are easily and remotely manageable at any scale, from source to destination.
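The per-signal routing described above can be sketched as a small Go example. This is a toy model under stated assumptions: the `Rule` type, signal names, and backend names are hypothetical, standing in for the routing an agent's remotely managed configuration would express (for instance, traces to an analysis backend while logs also go to cheaper storage).

```go
package main

import "fmt"

// Rule maps one telemetry signal to the backends that should
// receive it -- a toy version of remotely managed routing config.
type Rule struct {
	Signal       string   // "logs", "metrics", or "traces"
	Destinations []string // backends that receive this signal
}

// Route returns the destinations for a signal, or nil when no
// rule matches (meaning the signal is dropped).
func Route(signal string, rules []Rule) []string {
	for _, r := range rules {
		if r.Signal == signal {
			return r.Destinations
		}
	}
	return nil
}

func main() {
	// One config object decides the fate of every signal the
	// agent collects; updating it remotely rewires the pipeline.
	rules := []Rule{
		{Signal: "traces", Destinations: []string{"analysis-tool"}},
		{Signal: "logs", Destinations: []string{"analysis-tool", "cold-storage"}},
	}
	fmt.Println(Route("logs", rules))   // shipped to both backends
	fmt.Println(Route("traces", rules)) // analysis backend only
}
```

Because the rules live in configuration rather than code, a management server speaking OpAMP can change where data flows across an entire fleet without touching any individual server.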

OpenTelemetry is not meant to overrule or undercut any existing telemetry solutions. In fact, it's exactly the opposite: it gives end users the freedom to use exactly the tools they want for their specific needs, in conjunction with each other. As the observability industry continues to grow and data volume swells, foundational technologies like OpAMP are critical to keeping technology infrastructures manageable for vendors and customers alike.

Paul Stefanski is Product Marketing Manager at observIQ

