Can APM Really Handle Serverless? - Part 2
October 28, 2020

Chris Farrell
Instana


The "APM" solutions we've come to love over the last 2 decades can't handle Serverless Functions or deliver the same performance and operational details that they deliver for other architectural constructs — including App Servers, Frameworks, Cloud, even Containers. And the reason is that they're methodologies for collecting performance data simply won't operate with the same characteristics as it would in persistent code.

Start with: Can APM Really Handle Serverless? - Part 1

And Then There's "Observability"

There are three conventional ways to deliver serverless performance data to your monitoring tools:

1. API built into the platform — the consummate example of this is Lambda and X-Ray. This at least provides some level of performance detail, but it's nowhere near the richness and depth DevOps teams are used to (or need). PLUS: X-Ray provides data about the specific instance, AND ONLY the specific instance; but applications are distributed, connected things — getting information about a single service without any knowledge of connected systems doesn't help you understand what is getting in the way of distributed performance issues.

2. Pre-instrument the code — much as some application monitoring tools tackled the container incompatibility issue, you could always run the code through an instrumentation step. While this allows the APM solution to get its hooks into the code, it loses the benefit of years of technology advancement in real-time instrumentation, which allows decisions to be made at runtime about how much (or how little) to measure.

3. Open Source Observability — one or more of the observability APIs could always be put into place — of course, this requires some, if not a ton of, developer time to put the API instrumentation into the code (a minimal sketch of what that looks like follows this list):

■ Deciding what to instrument

■ Selecting which metrics to provide

■ Coding it in

■ Identifying those metrics for the tool

■ Selecting a visualization (if possible)

■ Analyzing logs for serverless events
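
To make that developer effort concrete, here is a minimal sketch of what option (3) can look like with the OpenTelemetry Python API. The component name, span name, attribute, and metric are hypothetical, and the SDK/exporter wiring that actually ships the data anywhere is omitted entirely:

```python
# Hypothetical hand-instrumented function using the OpenTelemetry API.
# Everything measured here had to be chosen and coded by a developer.
from opentelemetry import trace, metrics

tracer = trace.get_tracer("order-function")   # hypothetical component name
meter = metrics.get_meter("order-function")
orders_counter = meter.create_counter(
    "orders_processed", description="Orders handled by this function"
)

def handler(event, context):
    # Deciding what to wrap in a span, and which attributes matter, is manual work.
    with tracer.start_as_current_span("process-order") as span:
        span.set_attribute("order.id", event.get("orderId", "unknown"))
        # ... functional code goes here ...
        orders_counter.add(1, {"status": "success"})
        return {"statusCode": 200}
```

And none of that data goes anywhere until an SDK, exporter, and backend are also configured: more decisions, and more code sitting on top of the functional logic.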

All three of these approaches actually run counter to the value and efficiency promise of using Serverless Functions in a distributed application.

Option (1) simply doesn't have the juice to provide the detailed information needed for complex applications — it offers ZERO information about distributed functions and their dependencies (upstream and downstream) on other services, and no context or understanding of traces or end users to examine performance against.
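
For a sense of what option (1) actually hands you, here is a minimal sketch (assuming boto3 and AWS credentials are already configured) that pulls recent X-Ray trace summaries. What comes back is basic per-trace timing, not the distributed, dependency-aware context described above:

```python
# Minimal sketch: pull recent X-Ray trace summaries with boto3.
# Assumes AWS credentials and region are already configured.
from datetime import datetime, timedelta, timezone
import boto3

xray = boto3.client("xray")
end = datetime.now(timezone.utc)
start = end - timedelta(minutes=15)

paginator = xray.get_paginator("get_trace_summaries")
for page in paginator.paginate(StartTime=start, EndTime=end):
    for summary in page["TraceSummaries"]:
        # Basic timing detail per trace; far less than full APM-style context.
        print(summary["Id"], summary.get("ResponseTime"), summary.get("Duration"))
```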

(2) and (3) have similar visibility problems, depending on how much instrumentation is turned on and how much time you're willing to invest in your developers writing performance monitoring instead of their functional code. However, even though those decision points aren't trivial, the real problem comes in the form of cost and performance overhead.
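
As a rough illustration of why the overhead from option (2) is baked in, here is a hypothetical wrapper of the kind a pre-instrumentation step might inject around a handler. Once it is packaged into the deployment artifact, every invocation pays for it whether or not anyone looks at the data:

```python
# Hypothetical example of code injected by a pre-instrumentation step.
# Once packaged, the measurement runs on every invocation, always.
import functools
import json
import time

def instrumented(handler):
    @functools.wraps(handler)
    def wrapper(event, context):
        start = time.perf_counter()
        try:
            return handler(event, context)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000.0
            # Fixed at build time: no runtime decision about how much to measure.
            print(json.dumps({"function": handler.__name__,
                              "duration_ms": round(elapsed_ms, 2)}))
    return wrapper

@instrumented
def handler(event, context):
    # ... functional code ...
    return {"statusCode": 200, "body": "ok"}
```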

After all, whether you deploy code pre-instrumented by a tool or code your developers have laced with monitoring lines, you are essentially operating with 10, 20, even 50% more code, cycles, overhead and cost than your functional code alone. Replicate that overhead enough times and not only are you impacting your user service levels, you're blowing through all your serverless "savings" by paying for additional non-functional code.
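
A quick back-of-the-envelope calculation shows how that overhead compounds. Every number below (invocation volume, memory, duration, overhead percentage, and the per-GB-second rate) is an illustrative assumption, not a measurement or published pricing:

```python
# Back-of-the-envelope: how monitoring overhead inflates a pay-per-use bill.
# All figures are illustrative assumptions.
invocations_per_month = 50_000_000
memory_gb = 0.5
base_duration_s = 0.120            # functional code only
overhead_fraction = 0.20           # 20% extra cycles from bundled monitoring code
rate_per_gb_second = 0.0000166667  # assumed on-demand compute rate

def monthly_compute_cost(duration_s: float) -> float:
    return invocations_per_month * memory_gb * duration_s * rate_per_gb_second

base = monthly_compute_cost(base_duration_s)
instrumented = monthly_compute_cost(base_duration_s * (1 + overhead_fraction))
print(f"functional only: ${base:,.2f}/month")
print(f"with overhead:   ${instrumented:,.2f}/month (+${instrumented - base:,.2f})")
```

At 50% overhead, the extra line item is half again your compute bill, and that is before counting the user-facing latency impact.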

There Are Options

Look, all is not doom and gloom. There are ways to get the performance data you need across your distributed application without blowing your budget or your error budget. Look for non-traditional APM tools that don't rely on either legacy instrumentation methods OR open source observability (BONUS, though, if the tool can actually run its own monitoring AND support observability instrumentation).

The key to these tools is that they're more intricately connected with the serverless infrastructure than a legacy APM tool would be. Good news — this means there are solutions out there that can instrument serverless on the fly, using their connections with the infrastructure. Bad news — if the tool and infrastructure don't match up, you're back to square one. Sometimes that means changing your infrastructure choice — and sometimes it means going with basic instance-based metrics and using your end-user monitoring (EUM) to the best of your ability.

Anyway, don't be discouraged by this. You can still use serverless functions effectively to create a more cost-effective and efficient multi-cloud application ... and you don't necessarily have to give up the application visibility you've become accustomed to seeing. You will have to check (up front, hopefully) that you have the right tools and the right infrastructure to do both. Happy Serverlessing!

Chris Farrell is Observability and APM Strategist at Instana