Observability is one of the fastest growing industries in the world today, by both market size and data volume. Since the 1990s, infrastructure monitoring has become a must-have for businesses in nearly every sector, not just technology. The exponentially growing size of cloud infrastructures, and of the data they emit, creates two mounting problems for customers trying to collect their data and generate value from it. Both problems come down to how data collection agents are configured and managed, and new open source technologies from industry leaders aim to change that paradigm.
OpenTelemetry, a collaborative open source observability project, has introduced the Open Agent Management Protocol (OpAMP), a network protocol that addresses the infrastructure management headache and pairs with collector configuration options that filter and reduce data volume. OpAMP enables remote management of OpenTelemetry collectors (agents). In simple terms, it's a free and open source technology that dramatically reduces the effort and complexity of deploying and managing agents and data pipelines for DevOps teams.
Why is OpenTelemetry's OpAMP special?
It offers a simple and versatile method for remotely configuring and maintaining telemetry agents across massive environments with very little overhead. This is particularly useful for large cloud environments and headless environments, where administrators would otherwise have to configure every agent on every server by hand.
OpAMP also enables agents to report information to multiple remote management destinations simultaneously, such as their status, properties, connections, configuration, operating system, version, agent CPU and RAM usage, data collection rate, and more. OpAMP can integrate with access credential management systems to keep environments secure. It also has a secure auto-update capability that makes maintaining large environments easy.
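As a sketch of how this looks in practice, an OpenTelemetry Collector can connect to an OpAMP server through the collector's `opamp` extension (distributed in collector-contrib builds). The endpoint, header, and environment variable names below are illustrative placeholders, not a specific vendor's setup:

```yaml
# Illustrative sketch: connect a collector to a hypothetical OpAMP
# server so it can be configured and monitored remotely.
extensions:
  opamp:
    server:
      ws:
        endpoint: wss://opamp.example.com/v1/opamp        # hypothetical server URL
        headers:
          Authorization: "Secret-Key ${env:OPAMP_API_KEY}" # assumed credential scheme

service:
  extensions: [opamp]
```

Once connected, the server can push configuration updates and receive agent status over that single channel, rather than an operator editing files on each host.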
Similar capabilities exist in a handful of proprietary products, but the addition of OpAMP to OpenTelemetry is the launch point for industry-wide, vendor-agnostic adoption of the technology. In keeping with the open source mission, observability vendors collaborate on overarching technologies that benefit the whole industry, and focus independently on serving specific niches.
That's what makes OpAMP so unique — as an open source technology built by experts from every major telemetry organization, it's completely vendor agnostic. It's available now as part of OpenTelemetry, but it's not dependent on OpenTelemetry as a whole.
OpAMP can be used to manage many agent types. Agents can collect data from any platform in any environment, and ship data to one or more data management or analysis platforms. Say you prefer a specific tool for data analysis but a different, unrelated tool for data storage, and you also maintain servers across multiple cloud platforms; with OpAMP, you can manage different agent types across all of those environments from one place. Some agents, like OpenTelemetry agents, can ship to many analysis and storage tools simultaneously, and route specific data based on your configuration. With OpAMP, those agents and configurations are easily and remotely manageable at any scale, from source to destination.
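The multi-destination routing described above can be sketched as an OpenTelemetry Collector configuration; the exporter names and endpoints here are placeholders rather than specific products:

```yaml
# Illustrative sketch: one agent ships the same telemetry to two
# destinations, with traces and metrics routed independently.
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  otlphttp/analysis:
    endpoint: https://analysis.example.com   # hypothetical analysis backend
  otlphttp/storage:
    endpoint: https://storage.example.com    # hypothetical long-term store

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp/analysis, otlphttp/storage]
    metrics:
      receivers: [otlp]
      exporters: [otlphttp/storage]
```

With OpAMP managing the fleet, a pipeline change like this can be rolled out remotely to every collector at once instead of host by host.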
OpenTelemetry is not meant to overrule or undercut existing telemetry solutions. In fact, it's exactly the opposite: it gives end users the freedom to use exactly the tools they want for their specific needs, in conjunction with each other. As the observability industry continues to grow and data volume swells, foundational technologies like OpAMP are critical to keeping technology infrastructures manageable for vendors and customers alike.