Legacy Application Performance Management (APM) vs Modern Observability - Part 1

Colin Fallwell
Sumo Logic

In this 3-part series, I will explore and contrast the history of APM and the meteoric rise of Modern Observability, and discuss why the two are related yet vastly different in outcome. Modern Observability is disrupting the industry, and organizations doing it right are realizing massive gains in innovation, reaping the benefits of higher performance and optimization across numerous dimensions, including:

■ IT governance

■ Revenue growth

■ Vendor cost reduction

■ Tool Consolidation

■ DevOps toil and churn

■ Application performance and customer experiences

■ Reliability and Security

■ Employee satisfaction

■ Data Science and Business Analytics

■ AI-controlled automation (AIOps)

Modern Observability is becoming the foundation upon which organizations reduce the toil and churn associated with capital spending across initiatives such as cloud migration, application modernization, digital transformation, and AIOps, by leveraging new methodologies such as Observability-Driven Development (ODD).

Traditional APM is a mature, vendor-led industry, built at a time when the world was developing monolithic, three-tier architectures and software was typically released once or twice a year. APM is a closed ecosystem: proprietary protocols and agents are deployed on every node, injected into runtimes via startup parameters, and have little to no impact on how software is designed or developed.
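To make the "injected into runtimes with startup parameters" point concrete, here is a sketch of how a legacy APM agent is typically attached to a JVM service at launch. The paths, flags, and service name are illustrative, not any specific vendor's actual syntax; the point is that instrumentation is bolted on from outside the codebase:

```shell
# Hypothetical agent attachment: the application code is unchanged;
# an external agent jar is injected into the runtime at startup.
java -javaagent:/opt/apm/agent.jar \
     -Dapm.app_name=checkout-service \
     -jar checkout-service.jar
```

Because the agent lives entirely in deployment configuration, developers can ship the service without ever seeing, or owning, its instrumentation.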

This is a good thing, right?

In contrast to Modern Observability, and especially for organizations moving to the cloud, APM is loaded with hidden costs and unintended consequences. From a process perspective, APM does not live within the developer ecosystem; it has historically been funded by Ops teams or DevOps/SRE groups that sit largely outside the immediate workstream of software development. As a result, developers have no real ownership stake in APM and do not feel compelled to take responsibility for declaring what it means to make something "observable." Yet what enterprises want most are reliable pipelines of telemetry: accurate data for inferring the internal state of systems, including end-user usage and behavior, code execution, infrastructure health, and overall performance. It is little surprise, then, that most developers have been poor adopters of APM.

A major characteristic of Modern Observability is that it is designed into the fabric of applications, services, and infrastructure by the DevOps teams themselves, implemented through models such as GitOps. This provides benefits that legacy APM simply does not deliver, and it is the context on which I base my opinions throughout this series. Organizations that continue to rely on APM vendors will struggle to increase the intrinsic value of their data. My firm argument is that the most important attribute of Modern Observability is its "programmable" nature: the acquisition of telemetry becomes woven into the everyday work of developing software and the services offered by anyone competing in this software-driven global economy.
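The "programmable" nature described above can be sketched in a few lines. This is a toy illustration, not a real observability SDK (in practice teams would use an open-standard SDK); every name here, including `span` and the in-memory `SPANS` list standing in for an exporter pipeline, is hypothetical. The point it demonstrates is that the developer, in code, declares what "observable" means for a unit of work:

```python
# Toy sketch of developer-declared telemetry. SPANS stands in for a real
# exporter/pipeline; span() is an illustrative helper, not a library API.
import time
from contextlib import contextmanager

SPANS = []  # collected telemetry records


@contextmanager
def span(name, **attributes):
    """Record a timed unit of work with developer-declared attributes."""
    start = time.monotonic()
    try:
        yield
    finally:
        SPANS.append({
            "name": name,
            "duration_s": time.monotonic() - start,
            "attributes": attributes,
        })


def checkout(cart_size):
    # Telemetry lives alongside the business logic it describes: the
    # developer chooses the span name and the attributes that matter.
    with span("checkout", cart_size=cart_size, tier="web"):
        return cart_size * 9.99


total = checkout(3)
print(len(SPANS), SPANS[0]["name"])
```

Contrast this with the agent model: here the instrumentation is versioned, reviewed, and evolved in the same repository as the application, which is precisely what makes it amenable to GitOps workflows.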

There are many other dimensions of contrast, but I believe this one matters most for organizations embracing digital transformation, for those simply seeking to improve maturity, growth, and innovation, and for anyone wishing to own their destiny when it comes to data intelligence.

In the next installment (Part 2) of this series, we dive into the history of APM, how it became a $6 billion market, and some of the challenges that come with it.

Colin Fallwell is Field CTO of Sumo Logic
