Can APM Really Handle Serverless? - Part 2
October 28, 2020

Chris Farrell
Instana


The "APM" solutions we've come to love over the last 2 decades can't handle Serverless Functions or deliver the same performance and operational details that they deliver for other architectural constructs — including App Servers, Frameworks, Cloud, even Containers. And the reason is that they're methodologies for collecting performance data simply won't operate with the same characteristics as it would in persistent code.

Start with: Can APM Really Handle Serverless? - Part 1

And Then There's "Observability"

There are three conventional ways to get serverless performance data into your monitoring tools:

1. API built into the platform: the consummate example is AWS Lambda and X-Ray. This at least provides some level of performance detail, but it's nowhere near the richness and depth DevOps teams are used to (or need). Worse, X-Ray provides data about the specific function instance, and only that instance; applications are distributed, connected things, and data about a single service with no knowledge of the systems it talks to does little to explain a distributed performance problem. (A minimal X-Ray sketch follows the list below.)

2. Pre-instrument the code: much as some application monitoring tools tackled container incompatibility, you can always run the code through a build-time instrumentation step. While this lets the APM solution get its hooks into the code, it forfeits years of advances in real-time instrumentation, which let a tool decide on the fly how much (or how little) to measure.

3. Open source observability: one or more of the open observability APIs (OpenTelemetry, for example) could always be put into place. Of course, this requires some, if not a ton of, developer time to embed the instrumentation in the code (an OpenTelemetry sketch follows the list below):

■ Deciding what to instrument

■ Selecting which metrics to provide

■ Coding it in

■ Identifying those metrics for the tool

■ Selecting a visualization (if possible)

■ Analyzing logs for serverless events
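
To make option (1) concrete, here is a minimal sketch of what the built-in route looks like, assuming an AWS Lambda function with active tracing enabled and the aws_xray_sdk package bundled into the deployment; the handler, subsegment and annotation names are illustrative:

```python
# Sketch of option (1): relying on the platform's built-in tracing API.
# Assumes an AWS Lambda function with active tracing enabled and the
# aws_xray_sdk package bundled with the deployment; names are illustrative.
from aws_xray_sdk.core import xray_recorder, patch_all

# Patch supported libraries (boto3, requests, ...) so their calls show up as subsegments.
patch_all()


def handler(event, context):
    # Lambda creates the function's segment; our code can only add subsegments to it.
    with xray_recorder.in_subsegment("process-order") as subsegment:
        subsegment.put_annotation("order_id", event.get("orderId", "unknown"))
        # ... business logic for this one function ...
    return {"statusCode": 200}
```

Even fully wired up, this only describes what happened inside this one function instance; it carries no picture of the services calling it or the ones it depends on.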
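
For option (3), a rough sketch of hand-rolled OpenTelemetry instrumentation might look like the following; every exporter, span name and attribute below is a decision your developers have to make and maintain, and the function and span names are purely hypothetical:

```python
# Sketch of option (3): manual OpenTelemetry instrumentation written by developers.
# The exporter, tracer name, span names, and attributes are illustrative choices.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# One-time setup: decide where the telemetry goes (here, simply the console).
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-function")  # hypothetical function name


def handler(event, context):
    # Deciding what to instrument and which attributes matter is all on the developer.
    with tracer.start_as_current_span("validate-payment") as span:
        span.set_attribute("payment.method", event.get("method", "card"))
        # ... business logic ...
    return {"statusCode": 200}
```

Multiply those choices by every function in the application and the chores listed above stop looking trivial.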

All three of these approaches actually run counter to the value and efficiency promise of using Serverless Functions in a distributed application.

Option (1) simply doesn't have the juice to provide the detailed information complex applications need: it offers zero insight into distributed functions and their upstream and downstream dependencies on other services, and no trace or end-user context to examine performance against.

Options (2) and (3) have similar visibility problems, depending on how much instrumentation is turned on and how much time you're willing to invest in developers writing performance monitoring instead of functional code. Those decision points aren't trivial, but the real problem is cost and performance overhead.

After all, whether you deploy code pre-instrumented by a tool or code your developers laced with monitoring calls, you are essentially running 10, 20, even 50% more code, cycles, overhead and cost than your functional code alone. Replicate that overhead across enough invocations and you're not only hurting user service levels, you're blowing through your serverless "savings" by paying for additional non-functional code.
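
To see why, here is a back-of-the-envelope sketch (all numbers are illustrative assumptions, not measurements) of what a 20% instrumentation overhead does to a Lambda bill priced per GB-second:

```python
# Illustrative cost math only: assumed memory size, duration, overhead and volume.
PRICE_PER_GB_SECOND = 0.0000166667   # AWS Lambda on-demand rate, used here for illustration
MEMORY_GB = 0.5                      # a 512 MB function
BASE_DURATION_S = 0.200              # duration of the functional code alone
OVERHEAD = 0.20                      # 20% extra cycles from bundled instrumentation
INVOCATIONS_PER_MONTH = 50_000_000


def monthly_cost(duration_s):
    # Lambda bills duration multiplied by memory, for every invocation.
    return duration_s * MEMORY_GB * PRICE_PER_GB_SECOND * INVOCATIONS_PER_MONTH


base = monthly_cost(BASE_DURATION_S)
instrumented = monthly_cost(BASE_DURATION_S * (1 + OVERHEAD))
print(f"base: ${base:,.2f}  instrumented: ${instrumented:,.2f}  extra: ${instrumented - base:,.2f}")
# 20% more billed duration means 20% more spend on every single invocation.
```

The percentage, not the absolute dollars, is the point: whatever the overhead of the non-functional code, you pay it on every invocation, at any scale.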

There Are Options

Look, all is not doom and gloom. There are ways to get the performance data you need across your distributed application without blowing your budget, or your error budget. Seek out non-traditional APM tools that don't rely on either legacy instrumentation methods or open source observability (bonus points if the tool can actually run its own monitoring and support observability instrumentation).

The key to these tools is that they're more intricately connected with the serverless infrastructure than a legacy APM tool would be. The good news: this means there are solutions that can instrument serverless functions on the fly, using their connections with the infrastructure. The bad news: if the tool and the infrastructure don't match up, you're back to square one. Sometimes that means changing your infrastructure choice, and sometimes it means settling for the basic instance-based metrics and using your EUM (end-user monitoring) to the best of your ability.

Anyway, don't be discouraged by this. You can still use serverless functions to build a more cost-effective and efficient multi-cloud application, and you don't necessarily have to give up the application visibility you've become accustomed to. You will just have to check (up front, hopefully) that you have the right tools and the right infrastructure to do both. Happy Serverlessing!

Chris Farrell is Observability and APM Strategist at Instana
