Can APM Really Handle Serverless? - Part 2

Chris Farrell

The "APM" solutions we've come to love over the last 2 decades can't handle Serverless Functions or deliver the same performance and operational details that they deliver for other architectural constructs — including App Servers, Frameworks, Cloud, even Containers. And the reason is that they're methodologies for collecting performance data simply won't operate with the same characteristics as it would in persistent code.

Start with: Can APM Really Handle Serverless? - Part 1

And Then There's "Observability"

There are three ways conventional tooling can deliver service performance data to your monitoring tools:

1. API built into the platform — the consummate example of this is AWS Lambda and X-Ray. This at least provides some level of performance detail, but it's nowhere near the richness and depth DevOps teams are used to (or need). PLUS: X-Ray provides data about the specific instance, AND ONLY the specific instance; but applications are distributed, connected things — getting information about a single service without any knowledge of the systems connected to it doesn't help you understand what's getting in the way of distributed performance.

2. Pre-instrument the code — Much as some application monitoring tools tackled the container incompatibility issue, you could always run the code through an instrumentation step before deployment. While this allows the APM solution to get its hooks into the code, it loses the benefit of years of technology advancement in real-time instrumentation, which allows decisions to be made at runtime about how much (or how little) to measure.

3. Open Source Observability — one or more of the open source observability APIs could always be put into place — of course, this requires some, if not a ton of, developer time to put the API instrumentation into their code (see the sketch after this list):

■ Deciding what to instrument

■ Selecting which metrics to provide

■ Coding it in

■ Identifying those metrics for the tool

■ Selecting a visualization (If possible)

■ Analyzing logs for serverless events
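
To make that developer time concrete, here is a minimal sketch of what hand-rolled open source instrumentation tends to look like for a single function. It assumes the OpenTelemetry Python SDK and OTLP exporter are bundled with the function; the handler, span names, attributes, and the do_business_logic helper are hypothetical stand-ins, not code from any particular tool.

```python
# Minimal sketch of hand-rolled OpenTelemetry instrumentation in a Lambda-style handler.
# Assumes the opentelemetry-sdk and opentelemetry-exporter-otlp packages are bundled
# with the function; handler, span names, and attributes are illustrative only.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# One-time setup at cold start: decide where spans go and how they're batched.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))  # endpoint via OTEL_EXPORTER_OTLP_ENDPOINT
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

def handler(event, context):
    # Every operation you want visibility into needs its own span, written by hand.
    with tracer.start_as_current_span("process-order") as span:
        span.set_attribute("order.id", event.get("orderId", "unknown"))
        result = do_business_logic(event)  # hypothetical functional code
        span.set_attribute("order.status", result["status"])
    # Force export before the execution environment is frozen between invocations.
    provider.force_flush()
    return result

def do_business_logic(event):
    # Placeholder for the actual functional code the instrumentation wraps.
    return {"status": "ok"}
```

Every span, attribute, and export decision in that sketch is code your team has to write, review, and maintain, and that effort repeats for every function in the application.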

All three of these approaches actually run counter to the value and efficiency promise of using Serverless Functions in a distributed application.

Option (1) simply doesn't have the juice to provide the detailed information needed for complex applications — it offers ZERO information about distributed functions and their dependencies (upstream and downstream) on other services, and no trace or end-user context to examine performance against.
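
For illustration, here is roughly what that platform-API route looks like in practice, sketched with the AWS X-Ray SDK for Python. It assumes active tracing is enabled on the function and the aws-xray-sdk package is bundled; the subsegment name, annotation, and handler are hypothetical.

```python
# Sketch of option (1): leaning on the platform's built-in tracing API (AWS X-Ray).
# Assumes active tracing is enabled on the function and the aws-xray-sdk package
# is included in the deployment package; names and values are illustrative.
from aws_xray_sdk.core import xray_recorder, patch_all

patch_all()  # auto-wraps supported libraries (boto3, requests, ...) in subsegments

def handler(event, context):
    # Everything recorded here is scoped to THIS invocation of THIS function;
    # X-Ray only sees the wider distributed picture if every other hop also
    # propagates and honors the trace header.
    with xray_recorder.in_subsegment("lookup-customer") as subsegment:
        subsegment.put_annotation("customer_id", event.get("customerId", "unknown"))
        # ... functional code ...
    return {"status": "ok"}
```

Even this small amount of extra detail required code changes and an SDK in the deployment package, and what it reports back is still framed around the single function.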

(2) and (3) have similar visibility problems, depending on how much instrumentation is turned on and how much time you're willing to invest in having your developers write performance monitoring instead of their functional code. However, even though those decision points aren't trivial, the real problem comes in the form of cost and performance overhead.

After all, regardless of whether you load code pre-instrumented by a tool or code to which your developers added monitoring by hand, you are essentially running 10, 20, even 50% more code, cycles, overhead and cost than your functional code alone. Replicate that overhead enough times and not only are you impacting your user service levels, you're blowing through all your serverless "savings" by paying for additional non-functional code.
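
As a rough illustration of that claim, the sketch below prices a hypothetical function with and without a 30% instrumentation overhead (a figure picked from the 10-50% range above). The memory size, duration, invocation count, and per-GB-second price are all illustrative assumptions, not measurements.

```python
# Back-of-the-envelope: how monitoring overhead inflates serverless compute cost.
# Every number here is an illustrative assumption, not a measurement or a quote.
PRICE_PER_GB_SECOND = 0.0000166667   # approximate Lambda on-demand compute price (USD)
MEMORY_GB = 0.5                      # 512 MB function
BASE_DURATION_S = 0.200              # functional code alone, per invocation
OVERHEAD = 0.30                      # extra cycles spent in bundled monitoring code
INVOCATIONS_PER_MONTH = 100_000_000

def monthly_compute_cost(duration_s: float) -> float:
    """Compute cost = duration x memory x invocations x price per GB-second."""
    gb_seconds = duration_s * MEMORY_GB * INVOCATIONS_PER_MONTH
    return gb_seconds * PRICE_PER_GB_SECOND

plain = monthly_compute_cost(BASE_DURATION_S)
instrumented = monthly_compute_cost(BASE_DURATION_S * (1 + OVERHEAD))
print(f"functional code only:      ${plain:,.2f}/month")
print(f"with instrumentation code: ${instrumented:,.2f}/month (+${instrumented - plain:,.2f})")
```

The point isn't the specific dollar figures; it's that the bill scales linearly with every extra millisecond of non-functional code, across every invocation.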

There Are Options

Look, all is not doom and gloom. There are ways to get the performance data you need across your distributed application without blowing your budget or your error budget. Look for non-traditional APM tools that don't rely on either legacy instrumentation methods OR open source observability (BONUS, though, if the tool can actually run its own monitoring AND support observability instrumentation).

The key to these tools is that they're more intricately connected with the serverless infrastructure than a legacy APM tool might be. Good news — this means there are solutions out there that can instrument serverless on the fly, using their connections with the infrastructure. Bad news — if the tool and infrastructure don't match up, you're back to square one. Sometimes that means changing your infrastructure choice — and sometimes it means you have to settle for the basic instance-based metrics and use your EUM to the best of your ability.

Anyway, don't be discouraged by this. You can still effectively use Serverless functions to create a more cost-effective and efficient multi-cloud application ... and you don't necessarily have to give up the application visibility you've become accustomed to. You will have to check (up front, hopefully) that you have the right tools and the right infrastructure to do both. Happy Serverlessing!
