The APM Word of the Decade is: EPHEMERAL! - Part 2

Chris Farrell

Whether you consider the first generation of APM or the updates that followed for SOA and microservices, the most basic premise of the tools remains the same — PROVIDE VISIBILITY.

Start with The APM Word of the Decade is: EPHEMERAL! - Part 1

"Distributed" was The Word for 2010

Eventually, new application platforms appeared, and Service Oriented Architecture became the model of choice for building enterprise applications. This helped development and operations teams react to market needs and deliver software faster — but the centralized aspect of monolithic applications disappeared, which created performance management challenges.

Developers were able to build applications encompassing more complex processes. SOA was also a catalyst for development strategies that focused on re-using building blocks and beginning to think of organizations as software factories.

The biggest difference between standard J2EE applications and SOA applications can be summed up in one word — Distributed.

Without a singular central core of business logic that gated all requests, the first-generation tools struggled. All those visibility holes they filled in reappeared in distributed environments.

Distributed applications demanded a new set of features — component discovery and mapping being the most visible, along with sampled tracing and production profiling.

These requirements were the basis of a new — and very successful — generation of tools that were purpose-built to deal with distributed applications. These tools allowed for flexible architectural design, provided end-to-end mapping, and could be configured to understand the different relationships that could exist in a complex distributed app.

The Word for the Twenties? Ephemeral — Yes, Ephemeral

Dictionary.com defines 'ephemeral' as "lasting a very short time; short-lived; transitory."

Why is ephemeral the defining word for APM this decade? To understand, we have to back up just a bit.

A few years ago, application technology shifted again with the introduction of containers and microservices. Unlike previous introductions of new application technologies, containers became a smashing hit — FAST — even in the enterprise. What took Java almost a decade to achieve (enterprise acceptance and prevalent usage), containers achieved in 2-3 years.

One of the more appealing container concepts is the ability to spin up new containers whenever needed, delivering more scalable applications through on-demand resources.
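
To make that lifecycle concrete, here is a minimal sketch using the Docker SDK for Python (docker-py); the image and the workload are placeholder assumptions, not tied to any particular APM product or platform:

```python
import docker

# Connect to the local Docker daemon.
client = docker.from_env()

# Spin up a container on demand to absorb extra load.
worker = client.containers.run("nginx:alpine", detach=True)
print("started ephemeral worker:", worker.short_id)

# ...do the work, then tear the container down as soon as it is no longer needed.
worker.stop()
worker.remove()
```

From a monitoring perspective, that worker may exist for only seconds or minutes — which is precisely the visibility problem this article is describing.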

Yes, Containers — But Microservices, Too

If containers were the only recent application breakthrough, that would be ephemeral enough, but containers are just the beginning of the dynamic nature of modern applications.

The microservices-based architecture enables a new way of thinking about designing, building, deploying and updating application components, becoming even more distributed and changing the way technology platforms are chosen and used.

But Wait, There's More — or is it Less?

As if that weren't enough, now consider functions running on Serverless platforms. The most recognizable is AWS Lambda, but each cloud provider has a Serverless offering that provides the minimum resources needed to execute a small piece of repeatable code.
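
As a hedged illustration, a serverless function in the AWS Lambda style is little more than a handler the platform invokes on demand; the event fields below are made up for the example:

```python
import json

def handler(event, context):
    # A small piece of repeatable code: the platform provisions just enough
    # resources to run it for this one invocation, then reclaims them.
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Each invocation may land on a fresh, short-lived execution environment, so there is no long-running process for a traditional agent to attach to.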

So the answer to the question "what's ephemeral about modern applications?" is "Well — everything!"

Managing performance of such dynamic applications requires the automated visibility introduced by the first generation of tools, the dynamic mapping associated with the second generation, PLUS the understanding that change is constant.

And that means being able to do certain things in real time — detect new or updated infrastructure, detect changes to the application code, and trace every request, since each one is probably different. Last, but not least, is the ability to provide immediate feedback whenever an update occurs.
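
As a rough sketch of what "trace every request" can look like in practice, here is an OpenTelemetry-for-Python example; the service name, instance id, span name, and attributes are illustrative assumptions, not a prescribed setup:

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Tag every span with the (possibly short-lived) instance that served it,
# so traces stay meaningful after the container or function is gone.
provider = TracerProvider(
    resource=Resource.create({
        "service.name": "checkout",
        "service.instance.id": "pod-abc123",
    })
)
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def handle_request(order_id: str) -> None:
    # Trace each request individually, since each one may take a different path.
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("order.id", order_id)

handle_request("A-100")
```

In a real deployment the console exporter would be swapped for a backend exporter, but the point stands: tracing has to happen continuously and automatically, because the topology underneath it will not sit still.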

After all, the purpose of Application Performance Management tools is the same as it was twenty years ago — to help optimize the user experience by minimizing service impacts and solving problems quickly when they occur.

It's more important today to a much broader set of customers because applications ARE the business in many cases. The nice thing is that new APM vendors come along to solve new problems — so that you can keep your applications running at optimum levels.
