Legacy Application Performance Management (APM) vs Modern Observability - Part 1
May 03, 2022

Colin Fallwell
Sumo Logic


In this three-part series, I will explore the history of legacy Application Performance Management (APM) and the meteoric rise of Modern Observability, and contrast why the two are related yet vastly different in outcome. Modern Observability is disrupting the industry, and organizations doing it right are realizing significant gains in innovation, performance, and optimization across numerous dimensions, including:

■ IT governance

■ Revenue growth

■ Vendor cost reduction

■ Tool consolidation

■ DevOps toil and churn

■ Application performance and customer experience

■ Reliability and security

■ Employee satisfaction

■ Data science and business analytics

■ AI-controlled automation (AIOps)

Modern Observability is becoming the foundation on which organizations reduce the toil and churn associated with capital initiatives such as cloud migration, application modernization, digital transformation, and AIOps, by adopting new methodologies such as Observability-Driven Development (ODD).

Traditional APM is a mature, vendor-led industry. It was built at a time when the world was developing monolithic, three-tier architectures and software was typically released once or twice a year. APM is a closed ecosystem: patented protocols and agents are deployed on every node and injected into runtimes via startup parameters, with little to no impact on how software is designed or developed.

This is a good thing, right?

In contrast to Modern Observability, and especially for organizations moving to the cloud, APM is loaded with hidden costs and unintended consequences. From a process perspective, APM does not live within the developer ecosystem; it has historically been funded by Ops teams or DevOps/SRE groups that sit largely outside the immediate workstream of software development. This means developers have no real ownership stake in APM and don't feel compelled to take responsibility for declaring what it means to make something "observable." Yet what enterprises desire most are reliable pipelines of telemetry: accurate data from which to infer the internal state of systems, including end-user usage and behavior, code execution, infrastructure health, and overall performance. Unsurprisingly, most developers have been poor adopters of APM.

A major characteristic of Modern Observability is that it is designed into the fabric of applications, services, and infrastructure by DevOps teams and implemented through models such as GitOps, which provides numerous benefits that legacy APM does not align to. This is the point of view on which I base my opinions throughout this series. Many organizations still relying on APM vendors will struggle to increase the intrinsic value of data within the organization. My firm argument is that the most important attribute of Modern Observability lies in its "programmable" nature: the acquisition of telemetry becomes woven into the fabric of developing software and the services offered by anyone competing in this global, software-driven economy.
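To make the "programmable" idea concrete, here is a minimal sketch in plain Python (standard library only — the names `observed` and `checkout_total` are illustrative, not any vendor's SDK; in practice this role is played by APIs such as OpenTelemetry's). The developer, not an injected agent, declares what "observable" means for a piece of code by emitting structured telemetry directly from it:

```python
import functools
import json
import time

def observed(operation):
    """Decorator: the developer declares this function observable,
    emitting a structured, span-like record for every call."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.time()
            status = "ok"
            try:
                return fn(*args, **kwargs)
            except Exception:
                status = "error"
                raise
            finally:
                record = {
                    "operation": operation,
                    "duration_ms": round((time.time() - start) * 1000, 2),
                    "status": status,
                }
                # Stand-in for an exporter/telemetry pipeline.
                print(json.dumps(record))
        return inner
    return wrap

@observed("checkout.total")  # hypothetical operation name
def checkout_total(prices):
    return sum(prices)

checkout_total([5.0, 7.5])
```

Because the instrumentation lives in the source code, it is versioned, reviewed, and deployed alongside the application via the same GitOps workflow — the opposite of an opaque agent bolted on at startup.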

There are many other dimensions of contrast, but I believe this one matters most for organizations embracing digital transformation, for those that simply want to improve maturity, growth, and innovation, and for anyone wishing to own their destiny when it comes to data intelligence.

In the next installment (Part 2) of this series, we will dive into the history of APM, how it became a $6 billion market, and some of the challenges that come with it.

Colin Fallwell is Field CTO of Sumo Logic
