
The APM Word of the Decade is: EPHEMERAL! - Part 1

Chris Farrell

Once Upon A Time …

… there was a magical black box called Java. The wizards in development loved the magical black box because it made it so easy to build new applications. The magical black box made it easier to deploy applications into production. All Operations had to do was create a space (or server) big enough for the black box. Everything was great!

Then one day, things went haywire. No matter what they tried, Operations couldn't keep the application running. Worse, everything pointed to the magic box as the cause of the problem, but alas, nobody could see anything inside it. QA couldn't create a test environment to match production — and try as they might, the development wizards couldn't replicate the conditions on their singular systems.

Sometimes an application outage would last for days or weeks. Some outage conditions were simply tolerated for years, randomly taking down important systems and even threatening financial stability.

Enter the APM Heroes

That was the scenario that unfolded 20 years ago, as IT Operations teams around the world needed a way to know when their J2EE applications began having problems, and how to fix them when they occurred. That was the impetus for my favorite enterprise IT technology: Application Performance Management (APM).

It's been two decades since APM began appearing in IT shops, and the industry has evolved quite a bit. There have been two tectonic shifts: the first to SOA about 10 years ago; the second to containers and microservices, which began about 5 years ago but has already reached a critical mass of adoption.

3 Generations of APM — One Key Concept

Whether you consider the first generation of APM or the updates that followed for SOA and microservices, the most basic premise of the tools remains the same — PROVIDE VISIBILITY.

To solve production application problems, we need to see inside them, and that means inside the black boxes (yep, they still exist). Closely tied to visibility is the related concept of observability; the nuanced differences between the two definitions will have to wait for another time. For now, let's focus on how APM tools delivered visibility on their own, without requiring code changes.

But even with this singular focus of providing visibility, each generation (coincidentally landing on the start of a decade) includes unique aspects of operating — those key differences being tied to the application platforms that the tools must manage.

The Turn of the Millennium Turned on "Instrumentation"

The problems faced by operations teams in 2000 were twofold:

1. See the actual architecture and code inside the black box of a J2EE App Server

2. Find where requests were breaking down, and get an idea of how to fix them

Back then, the only tools developers had were profilers, which couldn't run in production. But Java allowed an interesting trick called bytecode instrumentation, and the management vendors figured out a way to inject their monitoring code without requiring code changes.

In the beginning, bytecode instrumentation (BCI) was far from a standard thing. Those first solutions created their own wrappers and instrumentation engines to inject monitoring into production code. But BCI did provide a methodical, repeatable way to put monitoring agents into individual software components (like Servlets and Beans).
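
To make that concrete, here's a hand-written Java sketch of roughly what an instrumented component looks like after a BCI agent has rewritten it. The InstrumentedServlet and MetricsCollector names are made up for illustration; real agents generated the equivalent wrapper directly in bytecode rather than in source.

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServlet;

// Hand-written equivalent of what a BCI agent produces: the original
// method body is relocated, and the public entry point is wrapped with
// timing calls that report to the agent.
public class InstrumentedServlet extends HttpServlet {

    @Override
    public void service(ServletRequest req, ServletResponse res)
            throws ServletException, IOException {
        long start = System.nanoTime();
        try {
            originalService(req, res); // the original method body, relocated
        } finally {
            MetricsCollector.recordTiming("InstrumentedServlet.service",
                    System.nanoTime() - start);
        }
    }

    // Stands in for the application's real request-handling logic.
    private void originalService(ServletRequest req, ServletResponse res)
            throws ServletException, IOException {
        super.service(req, res);
    }
}

// Made-up stand-in for a vendor agent's reporting API.
final class MetricsCollector {
    static void recordTiming(String name, long nanos) {
        System.out.printf("%s took %.2f ms%n", name, nanos / 1_000_000.0);
    }
}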

The biggest issue with the original BCI solutions was the manual work (reverse engineering and instrumentation configuration) required to expose all the important metrics (like specific method timings).

It's worth noting that the early vendors and the JVM providers worked together to create automatic instrumentation hooks and standard specifications — which helped open the door for a myriad of tools to show up in generations 2 and 3.
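
Those hooks ultimately landed in the JDK itself as the java.lang.instrument package and the -javaagent launch flag (standardized in Java 5). Here's a minimal sketch of that standard agent shape; the ApmAgent name and the com/example/ package filter are placeholders, and a real APM agent would rewrite the class bytes (typically with a bytecode library such as ASM or Javassist) rather than just logging.

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Minimal Java agent sketch. Package it with a manifest entry
// "Premain-Class: ApmAgent" and launch the application with:
//   java -javaagent:apm-agent.jar -jar myapp.jar
public class ApmAgent {

    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                    Class<?> classBeingRedefined,
                    ProtectionDomain protectionDomain,
                    byte[] classfileBuffer) {
                // className uses the internal slash form, e.g. "com/example/MyServlet".
                // A real APM agent would rewrite classfileBuffer here to inject
                // timing calls; this sketch only reports what it would touch.
                if (className != null && className.startsWith("com/example/")) {
                    System.out.println("[apm-agent] would instrument: " + className);
                }
                return null; // null tells the JVM to keep the class bytes unchanged
            }
        });
    }
}

Modern agents, including OpenTelemetry's Java auto-instrumentation, still attach through this same -javaagent entry point.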

Go to The APM Word of the Decade is: EPHEMERAL! - Part 2
