APM and Application Stability: Where Two Monitoring Roads Merge and Diverge - Part 1
March 23, 2020

James Smith
SmartBear


In software development, Application Performance Management (APM) is one of the grown-ups in the room. Not only has APM been around for a long time, but its solutions have evolved over several generations, making it one of the more mature product categories.

APM's longevity makes perfect sense when you consider its fundamental purpose: giving organizations a way to understand the performance characteristics of their software. It delivers value by alerting infrastructure teams when applications are performing slowly or poorly, and it has the vast experience needed to do that job well.

If APM is one of the old-timers in software development, then Application Stability is the new kid in town. With the rise of mobile apps and iterative development releases, Application Stability has answered the widespread need to monitor applications in a new way, shifting the focus from servers and networks to the customer experience.

The emergence of Application Stability has caused some consternation among diehard APM fans. However, the two solutions have very distinct monitoring focuses, which leads me to believe there's room for both tools, and for the different teams that use them.

APM: The Engine for Infrastructure and DevOps Teams

Before the cloud, organizations supplied their own physical hardware and monitored components such as CPU, memory (RAM), and disk space. If you ran out of any of these resources, you were screwed.

And that was the beauty of APM: It enabled the people running the applications to anticipate when they'd need more resources.

Of course, rather than monitoring physical machines, today's infrastructure, SRE, and DevOps teams monitor cloud instances. Gone are the days of running out to Best Buy and lugging back new hardware. Instead, you submit a simple request to a cloud provider and get instant access to a new instance.

Rather than make APM obsolete, as some feared, the cloud presents two good reasons for continued reliance on this tool:

1. Apps can be resource hogs: A cloud instance is simply a slice of someone else's computer, which means you still need to know when you're close to running out of resources on that virtual machine. In fact, extra care is needed these days, because a slice of a shared machine tends to hit its limits faster than a dedicated box did.

2. Money still doesn't grow on trees: Easy access to seemingly infinite cloud resources means software companies can fall into the habit of throwing money at problems rather than improving efficiency to rein in usage and costs. Companies can end up paying a ton of money to run apps if they continuously spin up more cloud instances without weighing the cost against the need.

And that's where APM steps in. Its main purpose is to help infrastructure teams with capacity planning. With the cloud, the questions shift to include:

■ How can I better manage my cloud instances and optimize usage?

■ How can I make sure I don't give ludicrous amounts of money to cloud providers?

■ How can we tweak and optimize the cloud to reduce costs?

These are the types of things that APM does extremely well. Anytime you need to figure out how to optimize your current resources, or whether it's time to buy a new server or spin up another cloud instance, APM has the answer. That information is invaluable to the people running the applications, namely the infrastructure or DevOps team.

Application Stability: Where the Tires Meet the Road for Engineering Teams

Now, let's be honest: APM doesn't have its strongest showing when the person who runs the app is also the person who builds the app. And that's exactly the scenario presented by mobile development and iterative coding. 

Rather than going through long development cycles, software now gets pushed to the web on a daily basis, and mobile apps tend to have weekly or biweekly release cycles. This release speed is not only encouraged but expected in agile software development.

That means the chasm between building and running apps has shrunk, especially for mobile apps. More often than not, the people building the apps are the people releasing them. There is no gap. And, with mobile, companies no longer need to worry about expensive physical hardware but rather about the end user experience. The customer becomes the focus.

Unfortunately, APM is perhaps a little too set in its ways to help you do that. But it's not APM's fault: These requirements are not what APM is inherently built to do. It's great at detecting problems and alerting infrastructure teams so they can toss the issue over the fence to the development team. However, its core strength isn't providing information on how to fix the problems because it wasn't built with the developer audience in mind. 

Trying to refactor APM to help with stability and error issues is like tuning your engine when you have flat tires. They are completely separate components of the car, built for different purposes. You can tune the engine all you want, but it isn't going to move the car unless you focus on why there isn't air in the tires.

In today's iterative world, development teams care a lot more about how apps are running. They want actionable information: exactly what's broken, what to fix right now, and what can wait. In short, developers want errors aggregated and triaged automatically. They want to know, "Do we build or fix?"

This trade-off between building new features versus fixing bugs is one of the key factors behind the adoption of Application Stability management tools. Developers need answers to several questions:

1. Where are the errors in the code?

2. How can we get to those bugs and fix them as soon as possible?

3. How can we tie bug fixing into our planning for the week? 

Whether a team runs sprints or follows some other agile approach, development work is planned in advance. Developers want to figure out how much new feature work will go into a sprint versus how much bug fixing is required, and they need a tool that helps them automate the answer to that question.
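
To make that concrete, here is a minimal, hypothetical sketch of what reporting an error to a stability tool can look like, using Bugsnag's JavaScript SDK as one example. The API key, version number, and payment function below are placeholders rather than anything from a real project, and a production setup would also capture unhandled errors automatically.

    // Report a handled error with context so the stability dashboard can
    // group it with similar events and help prioritize build-versus-fix.
    import Bugsnag from '@bugsnag/js'

    Bugsnag.start({ apiKey: 'YOUR_API_KEY', appVersion: '1.4.2' })

    // Placeholder business logic that fails, standing in for real code.
    async function chargePayment(orderId: string): Promise<void> {
      throw new Error(`payment declined for order ${orderId}`)
    }

    async function submitOrder(orderId: string): Promise<void> {
      try {
        await chargePayment(orderId)
      } catch (err) {
        // Attach metadata and a severity so similar errors aggregate
        // into one prioritized issue instead of a pile of log lines.
        Bugsnag.notify(err as Error, (event) => {
          event.severity = 'warning'
          event.addMetadata('order', { orderId, step: 'payment' })
        })
      }
    }

    submitOrder('order-123')

The specific API matters less than the workflow it enables: errors arrive already grouped and enriched with context, so the build-versus-fix decision can be made from a dashboard during sprint planning rather than by digging through logs.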

Check back tomorrow for Part 2 of this blog.

James Smith is SVP of the Bugsnag Product Group at SmartBear
