
Legacy Application Performance Management (APM) vs Modern Observability - Part 2

Colin Fallwell
Sumo Logic

In Part 1 of this series, we introduced APM and Modern Observability. If you haven't read it, you can find it here.

For the past decade, Application Performance Management has been a capability offered by a very small and exclusive set of vendors. These vendors sold bolt-on solutions that delivered monitoring without requiring developers to take ownership of instrumentation. You may think of this as a benefit, but in reality, it was not.

Operations usually bought APM and almost always struggled with finding and improving signal quality, with having too much data or the wrong data, and with interpreting it. Developers didn't have to care about how things were observed and had no real ownership in keeping things reliable. This almost always led to lower-quality software and higher MTTR.

The High Cost of Exclusivity

APM vendors have struggled with Cloud-Native architectures. Their agents were never designed for the Cloud and are almost always overkill for small microservices and ephemeral containers. Their agent code remains proprietary, their agents lack interoperability with one another, and they provide features (such as heap analysis and thread dumps) that are no longer relevant in the cloud.

Despite this, legacy APM vendors today are touting support for modern observability and OpenTelemetry. The caveat is that they provide this support (at least the broadest support) only if customers continue leveraging their proprietary agents.

Keeping customers dependent on vendor-owned code just to match out-of-the-box CNCF capabilities is, to me, counterintuitive. This mindset stems from their legacy beginnings. Generally speaking, their backends are not compatible with modern open schemas of metadata and tags. To work around the limitations of being born in the legacy world, they must use proprietary agents as an abstraction layer that transforms and maps open standards into their closed ecosystem. This benefits the vendors but leaves customers locked into a single vendor's agent codebase (or, more likely, multiple vendors' agent codebases covering different domains such as logging, metrics, and traces), which comes loaded with technical debt and is serviceable by only a small team of developers.

In relation to modern observability, the only arguments one could make for proprietary agents center on the following:

■ The agents are good at abstracting the control plane, simplifying telemetry acquisition via remote management and UI.

■ They provide features for dynamic instrumentation of the services and environments they operate in.

Fortunately for the industry at large, these advantages are rapidly eroding thanks to projects such as OpAMP (OpenTelemetry's Open Agent Management Protocol) and recent, significant advances in auto-instrumentation frameworks and capabilities like span events. The future does not look good for vendors pushing organizations to stay locked into exclusive, black-box software to acquire their telemetry.
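To make the span-event point concrete, here is a minimal sketch using the OpenTelemetry Python API; the tracer name, event name, and attributes are hypothetical:

# A minimal sketch of a span event with the OpenTelemetry Python API.
# The tracer name, event name, and attributes are hypothetical.
from opentelemetry import trace

tracer = trace.get_tracer("checkout-service")

def process_order(order_id: str) -> None:
    # Auto-instrumentation would typically create the surrounding server span;
    # here we start one explicitly for the example.
    with tracer.start_as_current_span("process_order") as span:
        span.set_attribute("order.id", order_id)
        # A span event is a timestamped annotation on the span, often used
        # where a standalone log line would once have been emitted.
        span.add_event("inventory.checked", {"items.reserved": 3})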

We are seeing more and more organizations realizing the enormous benefits that come with owning their telemetry from the outset. These companies are ditching proprietary agents and embracing open standards for telemetry.

Indeed, there is a new mantra emerging in the industry: "Supply vendors your telemetry, don't rely on your vendors to supply your telemetry."
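In practice, owning your telemetry means the application exports over the open OTLP protocol to an endpoint you control, so the backend vendor becomes a swappable destination. A minimal Python sketch, assuming a hypothetical in-house collector endpoint:

# A minimal sketch, assuming a hypothetical in-house collector endpoint.
# The application exports spans over OTLP; swapping backends means changing
# the endpoint (or the collector's configuration), not the instrumentation.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://otel-collector.internal:4317"))
)
trace.set_tracer_provider(provider)  # every tracer in the app now exports via OTLP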

Over the years, I have worked at many APM companies and have witnessed the downsides of exclusivity. Customers have had to endure an extremely high cost of ownership related to:

■ Agent deployment and version maintenance

■ Massive tech debt in agent codebases

■ Specialized and expensive training

■ Ever-changing pricing models to support cloud architectures

Exclusivity was born out of complexity. Simply put, it used to be very hard to collect telemetry in this way. APM vendors were truly successful at abstracting the complexity of acquiring telemetry.

In the early days, only a handful of developers in the world understood Java deeply enough under the hood to build an agent capable of dynamically rewriting bytecode at runtime to capture code execution timings without breaking the application.
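As a rough illustration of the idea (not the actual mechanism), the following Python sketch wraps a function to capture its execution time; a real Java agent achieves the same effect by rewriting bytecode at class-load time inside the JVM, with no source changes at all:

# A deliberately simplified analogue: wrap a function so its execution time is
# captured without touching its body. A real Java agent does this by rewriting
# bytecode at class-load time, with no source changes.
import functools
import time

def timed(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            # An agent would export this measurement instead of printing it.
            print(f"{fn.__qualname__} took {elapsed_ms:.2f} ms")
    return wrapper

@timed
def handle_request():
    time.sleep(0.05)  # stand-in for real work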

Some vendors fared worse than others at supporting "dynamic" languages such as Python and PHP. Nearly all of them struggle to maintain support for new frameworks and stacks and lag the market, in stark contrast to how open source contribution and innovation happen today. The net result is a yearly backlog of unhappy customers and support cases to resolve broken trace correlations while waiting for vendors to support, for example, the next version of Node.js or React that has been out for months.

Legacy APM is a great choice for the legacy, monolithic, on-prem environment. It is not my preferred choice for Cloud-Native architectures, where things evolve quickly, can be as small as a single function, and are highly ephemeral.

None of the legacy APM vendors invested in logging; they even downplayed logging as unnecessary if you could trace instead, raising questions such as:

Why log if you can capture errors and stack traces in the APM world?

Who wants to clean up all the exception logging so that log content can be understood and relied on to know whether something is healthy?

Most developers I worked with over my career did not want to take on that effort as technical debt.
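To be fair, the trace-first approach the vendors were arguing for is straightforward with today's open tooling. A minimal sketch using the OpenTelemetry Python API, with a hypothetical service name and a stand-in failure:

# A minimal sketch of the trace-first pattern, using the OpenTelemetry Python API.
# The service name and the failure below are hypothetical stand-ins.
from opentelemetry import trace
from opentelemetry.trace import StatusCode

tracer = trace.get_tracer("payments-service")

def charge(card_token: str) -> None:
    with tracer.start_as_current_span("charge") as span:
        try:
            raise ValueError("card declined")  # stand-in for a real failure
        except ValueError as exc:
            span.record_exception(exc)  # stack trace is attached to the span
            span.set_status(StatusCode.ERROR, str(exc))
            raise

The stack trace arrives already correlated with the failing request, which is the correlation the vendors were selling, now available through an open API.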

In these APM solutions, the only metrics collected and presented were those bundled with the agent at install time. Rarely did they provide an easy way to capture custom metrics, nor was there much in the way of metric correlation across the layers of the stack. These platforms lacked scalability and suffered from architectures built without time-series datastores. In fact, scale has always been the Achilles' heel of legacy APM vendors: none were born cloud-native, all must support proprietary data schemas, and progress on rewriting their platforms for the modern cloud has been painfully slow.
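Contrast that with the OpenTelemetry metrics API, where a custom, developer-defined measurement is a few lines of code. A minimal sketch; the meter, counter, and attribute names here are hypothetical:

# A minimal sketch of a custom, developer-defined metric with the OpenTelemetry
# metrics API. The meter, counter, and attribute names are hypothetical.
from opentelemetry import metrics

meter = metrics.get_meter("checkout-service")
orders_completed = meter.create_counter(
    "orders.completed",
    unit="1",
    description="Number of successfully completed orders",
)

def complete_order(region: str) -> None:
    orders_completed.add(1, {"region": region})  # attributes enable cross-layer correlation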

In the final installment (Part 3) of this series, I dive into the birth and history of modern observability.

Colin Fallwell is Field CTO of Sumo Logic
