
Legacy Application Performance Management (APM) vs Modern Observability - Part 2

Colin Fallwell
Sumo Logic

In Part 1 of this series, we introduced APM and Modern Observability. If you haven't read it, you can find it here.

For the past decade, Application Performance Management has been a capability offered by a very small, exclusive set of vendors. These vendors sold bolt-on solutions that delivered monitoring capabilities without requiring developers to take ownership of instrumentation and monitoring. You may think of this as a benefit, but in reality, it was not.

Operations teams usually bought APM and almost always struggled to find and improve signal quality, to cope with too much data or the wrong data, and to interpret what they had. Developers didn't have to care about how things were observed and had no real ownership in keeping things reliable. This combination has almost always led to lower-quality software and higher MTTR (mean time to repair).

The High Cost of Exclusivity

APM vendors have struggled with Cloud-Native architectures. Their agents were never designed for the cloud and are almost always overkill for small microservices and ephemeral containers. Their agent code remains proprietary, the agents lack interoperability with one another, and they provide features (such as heap analysis and thread dumps) that are no longer relevant in the cloud.

Despite this, legacy APM vendors today tout support for Modern Observability and OpenTelemetry. The caveat is that they provide this support by requiring customers to continue running their proprietary agents (at least for the broadest support).

Keeping customers dependent on vendor-owned code just to match out-of-the-box CNCF capabilities strikes me as counter-intuitive. The primary reason for this approach stems from their legacy beginnings. Generally speaking, their backends are not compatible with modern open schemas of metadata and tags. To work around the limitations of being born in the legacy world, they must use proprietary agents as an abstraction layer that transforms and maps open standards into their closed ecosystem. This benefits the vendors but leaves customers locked into a single vendor's agent codebase (or, more likely, multiple vendors' agent codebases to cover different domains such as logging, metrics, and traces), which comes loaded with technical debt and is serviceable by only a small team of developers.

In relation to modern observability, the only argument we could try to make for proprietary agents might center around the following:

■ The agents are good at abstracting the control plane, simplifying telemetry acquisition via remote management and UI.

■ They provide features for dynamic instrumentation of the services and environments they operate in.

Fortunately for the industry at large, this benefit is rapidly eroding thanks to projects such as OpAMP (OpenTelemetry's Open Agent Management Protocol) and recent significant advances in auto-instrumentation frameworks and capabilities like span events. The future does not look good for vendors pushing organizations to remain locked into exclusive, black-box software to acquire their telemetry.

We are seeing more and more organizations realizing the enormous benefits that come with owning their telemetry from the outset. These companies are ditching proprietary agents and embracing open standards for telemetry.

Indeed, a new mantra is emerging in the industry: "Supply vendors your telemetry; don't rely on your vendors to supply your telemetry."
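
One concrete way organizations put this mantra into practice is the OpenTelemetry Collector: applications emit telemetry once in the open OTLP format, and the Collector fans it out to whichever backends the organization chooses. The sketch below is a hypothetical Collector configuration (the exporter endpoints are placeholders, not real vendor URLs):

```yaml
# Hypothetical OpenTelemetry Collector config: the application emits
# OTLP once; the Collector fans it out to any number of vendors.
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  otlphttp/vendor_a:
    endpoint: https://ingest.vendor-a.example/otlp   # placeholder
  otlphttp/vendor_b:
    endpoint: https://ingest.vendor-b.example/otlp   # placeholder

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/vendor_a, otlphttp/vendor_b]
```

Because the application only ever speaks OTLP, adding or swapping a vendor is a change to the exporters section, not a re-instrumentation project.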

Over the years, I have worked at many APM companies and have witnessed the downsides of exclusivity firsthand. Customers have had to endure an extremely high cost of ownership related to:

■ Agent deployment and version maintenance

■ Massive tech debt in agent codebases

■ Specialized and expensive training

■ Ever-changing pricing models to support cloud architectures

Exclusivity was born out of complexity. Simply put, it used to be very hard to collect telemetry in this way. APM vendors were truly successful at abstracting the complexity of acquiring telemetry.

In the early days, there were only a handful of developers in the world who understood Java deeply enough under the hood to build an agent capable of dynamically rewriting byte-code at runtime, capturing the timings of code execution without breaking the application.
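
For readers who never worked with such agents, the core trick can be sketched in a few lines. Java agents do this by rewriting byte-code at class-load time; the Python below is only a loose analogy (all names are invented for illustration), but it captures the idea of timing existing code at runtime without the application's author changing a line:

```python
import time

# The "application" code, which knows nothing about monitoring.
def handle_request(payload):
    return {"status": "ok", "echo": payload}

# The "agent": collected timings, one (function name, seconds) pair per call.
TIMINGS = []

def attach_agent(fn):
    """Replace a function with a timed wrapper at runtime -- a rough
    analogy for what a Java APM agent does via byte-code rewriting."""
    def timed(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            TIMINGS.append((fn.__name__, time.perf_counter() - start))
    return timed

# Rebind the name: callers are unaware anything changed.
handle_request = attach_agent(handle_request)

result = handle_request({"id": 1})  # result is unchanged; a timing is recorded
```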

Some vendors fared worse than others at supporting "dynamic" languages such as Python and PHP. Nearly all of them struggle to maintain support for new frameworks and stacks, and they lag the market. This is in stark contrast to how open-source contributions and innovation happen today. The net result is a yearly backlog of unhappy customers and support cases to resolve broken correlations in trace collection while waiting for vendors to support, for example, the next version of Node.js or React that has been out for months.

Legacy APM is a great choice for the legacy, monolithic, on-prem environment. It is not my preferred choice for Cloud-Native architectures, where things evolve quickly, can be as small as a single function, and are highly ephemeral.

None of the legacy APM vendors invested in logging, and some even downplayed logging as unnecessary if you could trace instead. This raised questions from them such as:

Why log if you can capture errors and stack traces in the APM world?

Who wants to clean up all their exception logging just so they can rely on log content to know whether something is healthy?

Most developers I worked with over my career did not want to take on that effort as technical debt.

In these APM solutions, the only metrics collected and presented were those included when you installed the agent. Rarely did vendors provide an easy way of capturing custom metrics, nor was there much in the way of metric correlation across the layers of the stack. These platforms lacked scalability and suffered from architectures that didn't include time-series datastores. In fact, scalability has always been the Achilles' heel of legacy APM vendors: none were born cloud-native, all must support proprietary data schemas, and progress on re-writing APM platforms to be compatible with the modern cloud has been painfully slow.
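
By contrast, in the open, tag-based model a custom metric is just a named time series with arbitrary key-value labels attached at write time. Here is a minimal stdlib-only sketch of that data model (a toy for illustration, not the OpenTelemetry API):

```python
from collections import defaultdict

class Meter:
    """Toy metric store: each (name, sorted-label-set) pair is its own
    series -- the open, tag-based schema legacy APM backends lacked."""
    def __init__(self):
        self.series = defaultdict(int)

    def add(self, name, value, **labels):
        # Sorting labels makes {"a": 1, "b": 2} and {"b": 2, "a": 1}
        # address the same series.
        key = (name, tuple(sorted(labels.items())))
        self.series[key] += value

meter = Meter()
# Any code, anywhere in the stack, can emit a custom metric with
# whatever labels make it correlatable later.
meter.add("orders_placed", 1, region="us-east-1", tier="gold")
meter.add("orders_placed", 2, region="us-east-1", tier="gold")
meter.add("orders_placed", 1, region="eu-west-1", tier="free")
```

The point of the sketch is the schema, not the storage: because labels are open-ended, correlation across stack layers falls out of the data model instead of requiring a proprietary agent to map it.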

In the final installment (Part 3) of this series, I dive into the birth and history of modern observability.

Colin Fallwell is Field CTO of Sumo Logic
