APM and Application Stability: Where Two Monitoring Roads Merge and Diverge - Part 1
March 23, 2020

James Smith
SmartBear


In software development, Application Performance Management (APM) is one of the grown-ups in the room. Not only has APM been around for a long time, but its solutions have evolved over several generations, making it one of the more mature product categories.

APM's longevity makes perfect sense when you consider its fundamental purpose, which is to give organizations a way to understand the performance characteristics of their software. APM delivers value by alerting infrastructure teams when applications perform slowly or poorly, and it has decades of experience doing that job well.

If APM is one of the old-timers in software development, then Application Stability is the new kid in town. With the rise of mobile apps and iterative development releases, Application Stability has answered the widespread need to monitor applications in a new way, shifting the focus from servers and networks to the customer experience.

The emergence of Application Stability has caused some consternation for diehard APM fans. However, these two solutions address very different monitoring concerns, which leads me to believe there's room for both tools, serving different teams.

APM: The Engine for Infrastructure and DevOps Teams

Before the cloud, organizations supplied their own physical hardware and monitored resources such as CPU, RAM, and disk space. If you ran out of any of these resources, you were screwed.

And that was the beauty of APM: It enabled the people running the applications to anticipate when they'd need more resources.

Of course, rather than monitor physical machines, today's infrastructure, SRE, and DevOps teams monitor cloud instances. Gone are the days of running out to Best Buy and lugging back new hardware. Instead, a simple request for a new instance goes to a cloud provider, and access is granted almost instantly.

Rather than make APM obsolete, as some feared, the cloud presents two good reasons for continued reliance on this tool:

1. Apps can be resource hogs: A cloud instance is simply a slice of someone else's computer, which means you still need to know when you're close to running out of resources on that virtual machine. In fact, extra care is needed these days because a shared slice runs out of headroom more quickly.

2. Money still doesn't grow on trees: The ease of access to infinite cloud resources means that software companies may fall into the habit of throwing money at problems rather than figuring out how to streamline usage and costs through better efficiency. Companies can end up paying a ton of money to run apps if they continuously spin up more cloud instances without thorough consideration of the cost and need.

And that's where APM steps in. Its main purpose is to help infrastructure teams with capacity planning. With the cloud, the questions shift to include:

■ How can I better manage my cloud instances and optimize usage?

■ How can I make sure I don't give ludicrous amounts of money to cloud providers?

■ How can we tweak and optimize the cloud to reduce costs?

These are the types of things that APM does extremely well. Anytime you need to figure out how to optimize your current resources or when to buy a new server or cloud instance, APM tells you. This information is invaluable to the people running the applications, namely the infrastructure or DevOps team.
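To make that concrete, here's a minimal sketch of the kind of resource-headroom check an APM agent automates at scale. The psutil library and the 80% thresholds are my own illustrative choices for this sketch, not any particular vendor's implementation.

# A minimal sketch of a resource-headroom check, the kind of signal an APM
# agent collects and alerts on continuously. Thresholds are illustrative.
import psutil

CPU_LIMIT = 80.0     # percent
MEMORY_LIMIT = 80.0  # percent
DISK_LIMIT = 80.0    # percent

def capacity_warnings() -> list[str]:
    """Return human-readable warnings when an instance nears its limits."""
    warnings = []
    cpu = psutil.cpu_percent(interval=1)
    mem = psutil.virtual_memory().percent
    disk = psutil.disk_usage("/").percent
    if cpu > CPU_LIMIT:
        warnings.append(f"CPU at {cpu:.0f}% - consider scaling out")
    if mem > MEMORY_LIMIT:
        warnings.append(f"Memory at {mem:.0f}% - consider a larger instance")
    if disk > DISK_LIMIT:
        warnings.append(f"Disk at {disk:.0f}% - consider adding storage")
    return warnings

if __name__ == "__main__":
    for warning in capacity_warnings():
        print(warning)

An APM product wraps this basic idea in historical trends and forecasting, which is what makes the "when do we need more capacity?" question answerable ahead of time rather than after an outage.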

Application Stability: Where the Tires Meet the Road for Engineering Teams

Now, let's be honest: APM doesn't have its strongest showing when the person who runs the app is also the person who builds the app. And that's exactly the scenario presented by mobile development and iterative coding. 

Rather than going through long development cycles, software now gets pushed to the web on a daily basis, and mobile apps tend to have weekly or biweekly release cycles. This release speed is not only encouraged but expected in agile software development.

That means the chasm between building and running apps has shrunk, especially for mobile apps. More often than not, the people building the apps are the people releasing them. There is no gap. And, with mobile, companies no longer need to worry about expensive physical hardware but rather about the end user experience. The customer becomes the focus.

Unfortunately, APM is perhaps a little too set in its ways to help you do that. But it's not APM's fault: These requirements are not what APM is inherently built to do. It's great at detecting problems and alerting infrastructure teams so they can toss the issue over the fence to the development team. However, its core strength isn't providing information on how to fix the problems because it wasn't built with the developer audience in mind. 

Trying to refactor APM to help with stability and error issues is like tuning your engine when you have flat tires. They are completely separate components of the car, built for different purposes. You can tune the engine all you want, but it isn't going to move the car unless you focus on why there isn't air in the tires.

In today's iterative world, development teams care a lot more about how their apps are running. The demand is for actionable fixes: developers want to know exactly what's broken, what to fix right now, and what can wait. In short, developers want their errors aggregated and triaged automatically. They want to know, "Do we build or fix?"
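To picture what that aggregation looks like, here's a minimal sketch that groups raw error events by a fingerprint (error class plus top stack frame, an assumed grouping rule for illustration) and ranks the groups by how many users they affect:

# A minimal sketch of error aggregation: collapse individual error events
# into groups and rank them by affected users. The fingerprint rule and the
# sample events are illustrative assumptions.
from collections import defaultdict

def fingerprint(event: dict) -> tuple[str, str]:
    """Collapse an error event into a group key: error class + top stack frame."""
    return (event["error_class"], event["stack"][0])

def rank_errors(events: list[dict]) -> list[tuple[tuple[str, str], int]]:
    """Return error groups sorted by the number of affected users, highest first."""
    affected_users = defaultdict(set)
    for event in events:
        affected_users[fingerprint(event)].add(event["user_id"])
    return sorted(
        ((key, len(users)) for key, users in affected_users.items()),
        key=lambda item: item[1],
        reverse=True,
    )

events = [
    {"error_class": "NullPointerException", "stack": ["CheckoutView.render"], "user_id": "u1"},
    {"error_class": "NullPointerException", "stack": ["CheckoutView.render"], "user_id": "u2"},
    {"error_class": "TimeoutError", "stack": ["Api.fetchCart"], "user_id": "u1"},
]
print(rank_errors(events))  # the checkout crash surfaces first

A ranked list like this is what turns a stream of raw crashes into a prioritized to-do list for the sprint.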

This trade-off between building new features versus fixing bugs is one of the key factors behind the adoption of Application Stability management tools. Developers need answers to several questions:

1. Where are the errors in the code?

2. How can we get to those bugs and fix them as soon as possible?

3. How can we tie bug fixing into our planning for the week? 

Whether running sprints or building in an agile manner, development work is planned in advance. Developers want to figure out how much new feature work will be included in a sprint versus how much bug fixing is required. And they need a tool that helps them automate the answer to that question.
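One way such a tool can automate the build-or-fix call, sketched here with assumed numbers and an illustrative 99.5% target, is a stability score: the share of sessions that end without an error, measured against a target the team agrees on.

# A minimal sketch of automating the "build or fix" decision with a stability
# score. The session counts and the 99.5% target are illustrative assumptions.
def stability_score(total_sessions: int, sessions_with_errors: int) -> float:
    """Percentage of sessions that completed without an error."""
    if total_sessions == 0:
        return 100.0
    return 100.0 * (total_sessions - sessions_with_errors) / total_sessions

def build_or_fix(total_sessions: int, sessions_with_errors: int,
                 target: float = 99.5) -> str:
    score = stability_score(total_sessions, sessions_with_errors)
    return "build new features" if score >= target else "fix bugs first"

print(build_or_fix(total_sessions=120_000, sessions_with_errors=900))
# 99.25% of sessions were error-free, below the 99.5% target -> "fix bugs first"

When that check runs against real session data every release, the build-versus-fix conversation becomes a number the whole team can plan around.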

Check back tomorrow for Part 2 of this blog.

James Smith is SVP of the Bugsnag Product Group at SmartBear