Performance Testing and APM: Why You Need Both
October 23, 2015

Paola Moretto
Nouvola


Performance Testing (aka synthetic testing) and APM (Application Performance Management and monitoring solutions) are often regarded as two competing solutions, as if one were an alternative to the other. They are actually perfectly complementary tools for achieving optimal performance for your application. Whether you develop web applications, SaaS products or mobile apps, you'll find both approaches to be absolutely necessary in your software operations. Here, I explain why. (I'll use "monitoring solutions" as a synonym for APM solutions.)

Let’s talk about performance for a moment. Performance can refer to end-user performance or backend performance.

End-user performance is a measurement of the user experience as it relates to speed, responsiveness and scalability. Page load time is one of the typical metrics for the user experience.

Backend performance, as the name suggests, refers to the performance of backend resources used by the application. This is something end users may not be aware of, but which is important to the way the app performs.

Let’s look at APM or monitoring solutions first. These tools are normally used to collect data from a live environment. Monitoring solutions help gather important information on the application backend metrics, server behaviors, slow components and transactions. Several application monitoring solutions track database and browser performance as well.

Monitoring is Not Enough

There are many kinds of monitoring solutions that operate at different levels of the stack, with different timing granularity, and with the opportunity to define custom metrics. You have server monitoring, infrastructure monitoring, user monitoring, high-frequency metrics, and others. What is clear with monitoring solutions is that one is not enough. You probably need a portfolio of tools to have a clear understanding of what is happening with your application. Most tools have alert systems designed to notify you of events in real time, so you are immediately informed about deteriorating performance.
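For example, many monitoring agents let you report custom metrics over a simple wire protocol such as StatsD. Here is a minimal sketch, assuming a StatsD-compatible agent listening on localhost:8125; the address, metric name and the timed operation are illustrative, not tied to any particular APM product:

```python
import socket
import time

# Assumption: a StatsD-compatible monitoring agent is listening on localhost:8125.
STATSD_ADDR = ("127.0.0.1", 8125)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def record_timing(metric_name: str, elapsed_ms: float) -> None:
    """Send a custom timing metric in the StatsD line format (name:value|ms)."""
    payload = f"{metric_name}:{elapsed_ms:.1f}|ms"
    sock.sendto(payload.encode("ascii"), STATSD_ADDR)

# Time a transaction and report it as a custom metric.
start = time.perf_counter()
time.sleep(0.05)  # stands in for real application work (e.g., a checkout flow)
record_timing("app.checkout.duration", (time.perf_counter() - start) * 1000)
```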

No matter how comprehensive your monitoring architecture is, it doesn't provide the full picture. First of all, live traffic is noisy. You have no control over what your users are doing or how many users you have at any given point, which makes a performance issue really hard to troubleshoot. Was that expected behavior? Did users just hit a previously unanticipated corner case? Did a combination of live workloads crash the system? So even though the technology is designed to enable real-time troubleshooting, the reality is that, because it is not a controlled environment, you might not be able to identify the root cause of performance issues in a timely manner.

Second, and most important, the information produced by monitoring is delivered after the fact. Monitoring is like calling AAA after an accident. It's a great service to have, but it's much better to prevent the accident in the first place.

This explains why you need to add performance testing to the mix. While monitoring can inform you about performance after the fact, performance testing can help you prevent bad things from happening.

While monitoring is usually done on your live/production environment, performance testing usually utilizes synthetic traffic on a pre-production/staging environment. Having a pre-production environment as close as possible to your production environment will help you derive the most meaningful results.

In performance testing, users are simulated but the traffic is absolutely real. You can apply different types of load to discover the breaking point of your application before it goes live. You can also test with traffic higher than anything your application has seen yet, so that you are prepared for traffic peaks.
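As a rough illustration of what a synthetic load looks like, here is a minimal sketch that ramps up concurrent simulated users against a staging endpoint and reports latency at each step. The URL and user counts are assumptions, and a real test would typically use a dedicated load-testing tool rather than a hand-rolled script:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

STAGING_URL = "https://staging.example.com/"  # assumption: your pre-production endpoint

def one_request(_):
    """Issue a single request and return its latency in seconds."""
    start = time.perf_counter()
    with urlopen(STAGING_URL, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

# Ramp up the number of simulated users and watch where latency starts to degrade.
for users in (10, 50, 100, 200):
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(one_request, range(users * 5)))
    p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th-percentile latency
    print(f"{users:>4} simulated users -> p95 latency {p95:.2f}s")
```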

Performance testing can also help you identify performance degradations that might have resulted from code changes, infrastructure changes or third-party changes. It basically answers the question: "Can you trust this build to deliver the same user experience your users are counting on?"
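One common way to operationalize that question is a simple regression gate in the build pipeline: run the same test against every build and fail the build if a key metric drifts past an agreed tolerance. A minimal sketch, assuming the load test records its p95 latency in a small JSON file for both the baseline build and the candidate build, and assuming a 20% tolerance; the file names and threshold are illustrative:

```python
import json
import sys

TOLERANCE = 1.20  # assumption: fail the build on a >20% p95 latency regression

# Assumption: these files are produced by the same load test run against
# the previous (baseline) build and the current candidate build.
with open("baseline_metrics.json") as f:
    baseline_p95 = json.load(f)["p95_latency_s"]
with open("candidate_metrics.json") as f:
    candidate_p95 = json.load(f)["p95_latency_s"]

if candidate_p95 > baseline_p95 * TOLERANCE:
    print(f"FAIL: p95 latency {candidate_p95:.2f}s exceeds the "
          f"{baseline_p95:.2f}s baseline by more than 20%")
    sys.exit(1)  # a non-zero exit code fails the CI stage
print("PASS: performance is within tolerance of the baseline")
```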

In performance testing, you have total control over the amount of traffic and the workloads your users execute. That makes it a lot easier to troubleshoot.

And if you are using Docker or a container-based architecture, you can also easily test performance improvements under different configurations and platforms.
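For instance, the same synthetic test can be replayed against the same application image started with different resource limits, to compare configurations side by side. A minimal sketch; the image name, port, CPU limits and the load-test helper script are assumptions, and the --cpus flag requires a reasonably recent Docker CLI:

```python
import subprocess
import time

IMAGE = "myapp:latest"          # assumption: the application image under test
CPU_LIMITS = ["0.5", "1", "2"]  # compare the same build under different CPU budgets

for cpus in CPU_LIMITS:
    # Start the container with the given CPU limit and publish its port.
    cid = subprocess.check_output(
        ["docker", "run", "-d", "--cpus", cpus, "-p", "8080:8080", IMAGE],
        text=True).strip()
    time.sleep(5)  # crude wait for the application to become ready
    try:
        # Hypothetical helper: reuse a load-test script (like the sketch above)
        # against the locally published endpoint.
        subprocess.run(["python", "load_test.py", "http://localhost:8080/"], check=True)
    finally:
        subprocess.run(["docker", "rm", "-f", cid], check=True)
```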

With performance testing, you can also measure end-to-end performance (a good indication of user experience), which gives visibility into the entire application delivery chain, enabling greater transparency and targeted troubleshooting.
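A quick way to see where time is spent along that delivery chain is to break a single request into its phases: DNS lookup, TCP connect, time to first byte, and total. A minimal sketch using the pycurl library (an extra dependency); the URL is an assumption:

```python
from io import BytesIO

import pycurl

URL = "https://staging.example.com/"  # assumption: the endpoint under test

buf = BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, URL)
c.setopt(pycurl.WRITEFUNCTION, buf.write)
c.perform()

# Per-phase timings exposed by libcurl, in seconds from the start of the request.
print(f"DNS lookup:         {c.getinfo(pycurl.NAMELOOKUP_TIME):.3f}s")
print(f"TCP connect:        {c.getinfo(pycurl.CONNECT_TIME):.3f}s")
print(f"Time to first byte: {c.getinfo(pycurl.STARTTRANSFER_TIME):.3f}s")
print(f"Total:              {c.getinfo(pycurl.TOTAL_TIME):.3f}s")
c.close()
```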

You wouldn't build a bridge and then send hundreds of thousands of cars over it without first testing it for structural problems. But you also wouldn't open a bridge to traffic without continually monitoring how it is holding up under all that traffic. You need to do both, whether you're talking about bridges or apps. The difference is that while a bridge is static and needs to be tested only once or at periodic intervals, software today is highly dynamic and needs to be tested on a daily basis as part of your regular flow.

Used together, performance testing and monitoring make a great team, so to speak. Use both to make sure you deploy a reliable product.

Paola Moretto is Founder and CEO of Nouvola.
