The APM Word of the Decade is: EPHEMERAL! - Part 1
March 10, 2020

Chris Farrell
Instana


Once Upon A Time …

… there was a magical black box called Java. The wizards in development loved the magical black box because it made it so easy to build new applications. The magical black box made it easier to deploy applications into production. All Operations had to do was create a space (or server) big enough for the black box. Everything was great!

Then one day, things went haywire. No matter what they tried, Operations couldn't keep the application running. Worse, everything pointed to the magic box as the cause of the problem, but alas, nobody could see anything inside it. QA couldn't create a test environment to match production — and try as they might, the development wizards couldn't replicate the conditions on their singular systems.

Sometimes an application outage would last for days or weeks. Some outage conditions were tolerated for years, randomly taking down important systems and even impacting financial stability.

Enter the APM Heroes

That was the scenario that unfolded 20 years ago, as IT Operations teams around the world needed a way to know when their J2EE Applications began having problems, and how to fix them when they did. That was the impetus for my favorite enterprise IT technology: Application Performance Management (APM).

It's been two decades since APM began appearing in IT shops, and the industry has evolved quite a bit. There have been two tectonic shifts — the first to SOA about 10 years ago; the second to containers and microservices, which began about 5 years ago but has already reached a critical mass of adoption.

3 Generations of APM — One Key Concept

Whether you consider the first generation of APM or the updates that followed for SOA and microservices, the most basic premise of the tools remains the same — PROVIDE VISIBILITY.

To solve production application problems, we need to see inside them — that means inside the black boxes (yep, they still exist). Tied to visibility is the related concept of observability; the nuanced differences in definition will have to wait for another time. For now, let's focus on how APM tools built their own way to get visibility, without requiring code changes.

But even with this singular focus on providing visibility, each generation (coincidentally landing at the start of a decade) operates in its own unique way — with the key differences tied to the application platforms the tools must manage.

The Turn of the Millennium Turned on "Instrumentation"

The problems faced by operations teams in 2000 were twofold:

1. See the actual architecture and code inside the black box of a J2EE App Server

2. Find where requests were breaking down, and get an idea of how to fix them

Back then, developers only had profilers available to them, which couldn't run in production. But Java allowed an interesting trick — bytecode instrumentation — and the management vendors figured out a way to inject their monitoring code without requiring code changes.

In the beginning, bytecode instrumentation (BCI) was far from standardized. Those first solutions created their own wrappers and instrumentation engines to inject monitoring into production code. But BCI did provide a methodical, repeatable way to put monitoring hooks into individual software components (like Servlets and Beans).
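To make the idea concrete, here's a rough sketch — not any particular vendor's implementation — of what BCI-injected monitoring is equivalent to in source form. The Monitor class is a hypothetical stand-in for an agent API:

```java
// Conceptual illustration only: what BCI-injected timing is roughly
// equivalent to in source form. Real tools rewrite the bytecode directly,
// with no changes to the developer's source.
public class InstrumentedExample {

    // Hypothetical stand-in for a vendor agent API.
    static class Monitor {
        static void record(String name, long nanos) {
            System.out.printf("%s took %.3f ms%n", name, nanos / 1_000_000.0);
        }
    }

    // The business method as the developer wrote it.
    static String placeOrder(String item) {
        return "order placed for " + item;
    }

    // What the method effectively becomes after the agent injects timing.
    static String placeOrderInstrumented(String item) {
        long start = System.nanoTime();
        try {
            return placeOrder(item);
        } finally {
            Monitor.record("placeOrder", System.nanoTime() - start);
        }
    }

    public static void main(String[] args) {
        System.out.println(placeOrderInstrumented("book"));
    }
}
```

The point is that none of the timing code appears in the application source; the agent weaves the equivalent bytecode in as classes are loaded.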

The biggest issue with the original BCI solutions was the manual work (reverse engineering and instrumentation configuration) required to expose all the important metrics (like specific method timings).

It's worth noting that the early vendors and the JVM providers worked together to create automatic instrumentation hooks and standard specifications — which helped open the door for a myriad of tools to show up in generations 2 and 3.
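For the curious, here's a minimal sketch of what those standardized hooks look like today, using the java.lang.instrument API that eventually formalized agent attachment. The package filter is purely illustrative:

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Minimal agent sketch: when the JVM is started with -javaagent:agent.jar,
// it calls premain() before the application's main(), letting the agent
// register a transformer that may rewrite each class's bytecode on load.
public class MinimalApmAgent {

    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader,
                                    String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // A real APM agent would rewrite the bytecode here, e.g.
                // wrapping servlet or EJB methods with timing calls.
                // Returning null leaves the class unchanged.
                if (className != null && className.startsWith("com/example/")) {
                    System.out.println("Candidate for instrumentation: " + className);
                }
                return null;
            }
        });
    }
}
```

Packaged in a jar whose manifest declares a Premain-Class, the agent attaches with java -javaagent:agent.jar, and every class is offered to the transformer as it loads — no application code changes required.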

Go to The APM Word of the Decade is: EPHEMERAL! - Part 2

Chris Farrell is Observability and APM Strategist at Instana
