
Can APM Really Handle Serverless? - Part 2

Chris Farrell

The "APM" solutions we've come to love over the last 2 decades can't handle Serverless Functions or deliver the same performance and operational details that they deliver for other architectural constructs — including App Servers, Frameworks, Cloud, even Containers. And the reason is that they're methodologies for collecting performance data simply won't operate with the same characteristics as it would in persistent code.

Start with: Can APM Really Handle Serverless? - Part 1

And Then There's "Observability"

There are three ways conventional tools deliver service performance data to your monitoring tools:

1. API built into the platform — the consummate example of this is AWS Lambda and X-Ray. This at least provides some level of performance detail, but it's nowhere near the richness and depth DevOps teams are used to (or need). PLUS: X-Ray provides data about the specific instance, AND ONLY the specific instance; but applications are distributed, connected things — getting information about a single service without any knowledge of the systems it connects to doesn't help you understand what's getting in the way of distributed performance.
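
If you do go the platform-API route, the data is at least queryable. Here's a minimal sketch using boto3 to pull recent slow-invocation summaries from X-Ray; the function name, filter expression, and time window are illustrative assumptions, and it presumes X-Ray active tracing is already enabled on the function. Notice that everything returned is scoped to that one service.

```python
# Minimal sketch: querying AWS X-Ray trace summaries with boto3.
# Assumes X-Ray active tracing is enabled; names and filters are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

xray = boto3.client("xray")

end = datetime.now(timezone.utc)
start = end - timedelta(minutes=15)

# Summaries for invocations of a single function slower than one second.
resp = xray.get_trace_summaries(
    StartTime=start,
    EndTime=end,
    FilterExpression='service("my-function") AND responsetime > 1',
)

for summary in resp.get("TraceSummaries", []):
    print(summary["Id"], summary.get("ResponseTime"))
```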

2. Pre-instrument the code — Like the way some application monitoring tools tackled the container incompatibility issue, you could always run the code through an instrumentation step before deployment. While this lets the APM solution get its hooks into the code, it loses the benefit of years of technology advancement in real-time instrumentation, which allows decisions about how much (or how little) to measure to be made on the fly.
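
To make that trade-off concrete, pre-instrumentation usually amounts to wrapping or rewriting the handler before it's packaged. The sketch below is hypothetical (it stands in for a vendor's build-time agent, not any real one): a wrapper baked in at deploy time that measures every invocation, whether or not anyone ever needs that data.

```python
# Hypothetical illustration of build-time pre-instrumentation: the deploy
# pipeline wraps the real handler, so the timing code ships with every
# invocation regardless of whether the data is ever used.
import json
import time
from functools import wraps


def pre_instrument(handler):
    @wraps(handler)
    def wrapped(event, context):
        started = time.perf_counter()
        try:
            return handler(event, context)
        finally:
            elapsed_ms = (time.perf_counter() - started) * 1000.0
            # Printing to stdout (CloudWatch Logs) stands in for an agent upload.
            print(json.dumps({
                "function": getattr(context, "function_name", "unknown"),
                "duration_ms": round(elapsed_ms, 2),
            }))
    return wrapped


@pre_instrument
def handler(event, context):
    # The actual business logic the function was written for.
    return {"statusCode": 200, "body": "ok"}
```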

3. Open Source Observability — one or more of the open source observability APIs could always be put into place — of course, this requires some, if not a ton of, developer time to put the API instrumentation into the code (a sketch of what that hand-coding looks like follows this list):

■ Deciding what to instrument

■ Selecting which metrics to provide

■ Coding it in

■ Identifying those metrics for the tool

■ Selecting a visualization (if possible)

■ Analyzing logs for serverless events
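
For a sense of what that developer time actually buys, here's a minimal sketch of hand-rolled instrumentation using the OpenTelemetry Python API. The span names, attributes, and console exporter are assumptions for illustration; a real setup would still need a collector or backend wired up, and every span below represents a decision someone had to make and code by hand.

```python
# Minimal sketch of hand-coded OpenTelemetry instrumentation in a function
# handler. Span names and attributes are illustrative; exporting to a real
# backend (OTLP endpoint, vendor, etc.) still has to be configured.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("orders-function")


def handler(event, context):
    # What to wrap, what to record, and what the tool will look for later
    # are all manual choices made in the function code itself.
    with tracer.start_as_current_span("handle-order") as span:
        span.set_attribute("order.id", event.get("orderId", "unknown"))
        with tracer.start_as_current_span("validate"):
            pass  # validation logic
        with tracer.start_as_current_span("persist"):
            pass  # database write
        return {"statusCode": 200}
```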

All three of these approaches actually run counter to the value and efficiency promise of using Serverless Functions in a distributed application.

Option (1) simply doesn't have the juice to provide the detailed information needed for complex applications — it gives ZERO information about distributed functions and their dependencies (upstream and downstream) on other services, and no context or understanding of traces or end users to examine performance against.

(2) and (3) have similar visibility problems, depending on how much instrumentation is turned on and how much time you're willing to have your developers spend writing performance monitoring instead of functional code. But even though those decision points aren't trivial, the real problem comes in the form of cost and performance overhead.

After all, whether you deploy code pre-instrumented by a tool or code your developers have threaded monitoring lines through, you are essentially running 10, 20, even 50% more code, cycles, overhead and cost than your functional code alone. Replicate that overhead enough times and not only are you impacting your user service levels, you're blowing through all your serverless "savings" by paying for additional non-functional code.
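
To put rough numbers on that, here's a back-of-the-envelope sketch. The invocation count, duration, memory size, overhead percentage, and per-GB-second rate are all illustrative assumptions, not anyone's actual bill; the point is simply that overhead scales with every single invocation.

```python
# Back-of-the-envelope Lambda cost comparison with and without bundled
# monitoring overhead. All figures are illustrative assumptions.
GB_SECOND_RATE = 0.0000166667   # approximate on-demand rate per GB-second
MEMORY_GB = 0.5                 # a 512 MB function
INVOCATIONS = 50_000_000        # invocations per month
BASE_DURATION_S = 0.120         # functional code only
OVERHEAD = 0.30                 # 30% extra cycles from instrumentation


def monthly_compute_cost(duration_s: float) -> float:
    return INVOCATIONS * duration_s * MEMORY_GB * GB_SECOND_RATE


base = monthly_compute_cost(BASE_DURATION_S)
instrumented = monthly_compute_cost(BASE_DURATION_S * (1 + OVERHEAD))
print(f"base: ${base:,.2f}  instrumented: ${instrumented:,.2f}  "
      f"extra: ${instrumented - base:,.2f}/month")
```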

There Are Options

Look, all is not doom and gloom. There are methods and ways to get the performance data you need across your distributed application, without blowing your budget or your error budget. Look for non-traditional APM tools that don't rely on either legacy instrumentation methods OR open source observability (BONUS, though, if the tool can actually run its own monitoring AND support observability instrumentation).

The key to these tools is that they're more intricately connected with the serverless infrastructure than a legacy APM tool would be. Good news — this means there are solutions out there that can instrument serverless on the fly, using their connections with the infrastructure. Bad news — if the tool and infrastructure don't match up, you're back to square one. Sometimes that means changing your infrastructure choice — and sometimes it means going with the basic instance-based metrics and using your EUM (end-user monitoring) to the best of your ability.

Anyway, don't be discouraged. You can still use Serverless functions to build a more cost-effective and efficient multi-cloud application ... and you don't necessarily have to give up the application visibility you've become accustomed to. You will have to check (up front, hopefully) that you have the right tools and the right infrastructure to do both. Happy Serverlessing!
