Performance Testing and APM: Why You Need Both
October 23, 2015

Paola Moretto
Nouvola

Performance Testing (aka synthetic testing) and APM (Application Performance Management and monitoring solutions) are often regarded as two competing solutions, as if one were an alternative to the other. They are actually perfectly complementary tools for achieving optimal performance for your application. Whether you develop web applications, SaaS products or mobile apps, you'll find both approaches to be absolutely necessary in your software operations. Here, I explain why. (I'll use "monitoring solutions" as a synonym for APM solutions.)

Let’s talk about performance for a moment. Performance can refer to end-user performance or backend performance.

End-user performance is a measurement of the user experience as it relates to speed, responsiveness and scalability. Page load time is one of the typical metrics for the user experience.
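As a rough illustration, here is a minimal sketch of sampling page fetch time from the outside using Python's requests library; the URL below is a placeholder, and a true page-load measurement (for example, the browser Navigation Timing API) would also include rendering and asset downloads.

```python
import time

import requests  # third-party HTTP client: pip install requests

URL = "https://example.com/"  # placeholder: your application's landing page

start = time.perf_counter()
response = requests.get(URL, timeout=30)  # fetches only the HTML document
elapsed = time.perf_counter() - start

print(f"status={response.status_code} fetch_time={elapsed:.3f}s")
```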

Backend performance, as the name suggests, refers to the performance of backend resources used by the application. This is something end users may not be aware of, but it is important to how the app performs.

Let's look at APM or monitoring solutions first. These tools are normally used to collect data from a live environment. Monitoring solutions help gather important information on application backend metrics, server behavior, slow components and transactions. Several application monitoring solutions track database and browser performance as well.
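To make that concrete, here is a minimal sketch of the kind of backend measurement an APM agent collects automatically: timing a transaction and recording its duration. The function and transaction names are hypothetical, and a real agent would ship the data to its own collector rather than log it.

```python
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("metrics")

def timed_transaction(name):
    """Record how long a backend transaction takes (a stand-in for an APM agent)."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                duration_ms = (time.perf_counter() - start) * 1000
                # A real APM agent would send this metric to its collector.
                log.info("transaction=%s duration_ms=%.1f", name, duration_ms)
        return wrapper
    return decorator

@timed_transaction("checkout.place_order")  # hypothetical transaction name
def place_order(order):
    time.sleep(0.05)  # stand-in for database queries and downstream service calls
    return {"order_id": 123, "status": "ok"}

place_order({"items": 2})
```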

Monitoring is Not Enough

There are many kinds of monitoring solutions that operate at different levels of the stack, with different timing granularity and the opportunity to define custom metrics. You have server monitoring, infrastructure monitoring, user monitoring, high-frequency metrics, and others. What is clear with monitoring solutions is that one is not enough. You probably need a portfolio of tools to have a clear understanding of what is happening with your application. Most tools have alert systems that provide real-time information on metrics such as page load and notify you of events as they happen, so you are immediately informed about deteriorating performance.

No matter how comprehensive your monitoring architecture is, it doesn't provide the full picture. First of all, live traffic is noisy. You have no control over what your users are doing or how many users you have at any given point, and that makes a performance issue really hard to troubleshoot. Was that expected behavior? Did users just hit a previously unanticipated corner case? Was it a combination of live workloads that crashed the system? So even though the technology is designed to enable real-time troubleshooting, the reality is that, since it's not a controlled environment, you might not be able to identify the root cause of a performance issue in a timely manner.

Second, and most important, the information produced by monitoring is delivered after the fact. Monitoring is like calling AAA after an accident. It's a great service to have, but it's much better to prevent the accident in the first place.

This explains why you need to add performance testing to the mix. While monitoring can inform you about performance after the fact, performance testing can help you prevent bad things from happening.

While monitoring is usually done on your live/production environment, performance testing usually utilizes synthetic traffic on a pre-production/staging environment. Having a pre-production environment as close as possible to your production environment will help you derive the most meaningful results.

In performance testing, users are simulated but the traffic is absolutely real. You can apply different types of load to discover the breaking point of your application before it goes live. You can use performance testing to test with traffic that is higher than anything your actual application has seen – yet – so that you can prepare for traffic peaks.
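As a minimal sketch of that idea, assuming a placeholder staging URL, the snippet below steps up the number of concurrent simulated users and reports latency and error rate at each step; dedicated load-testing tools do this with far more realistic workloads and pacing.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party HTTP client: pip install requests

URL = "https://staging.example.com/"  # placeholder: your pre-production endpoint

def one_request(_):
    """Send a single request and return (success, latency_seconds)."""
    start = time.perf_counter()
    try:
        ok = requests.get(URL, timeout=10).status_code < 500
    except requests.RequestException:
        ok = False
    return ok, time.perf_counter() - start

# Step up concurrency to look for the breaking point before going live.
for users in (10, 50, 100, 200):
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(one_request, range(users * 5)))
    latencies = sorted(t for _, t in results)
    p95 = latencies[int(len(latencies) * 0.95) - 1]
    error_rate = sum(1 for ok, _ in results if not ok) / len(results)
    print(f"users={users} p95={p95:.2f}s error_rate={error_rate:.1%}")
```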

Performance testing can also help you identify performance degradations that might have resulted from code changes, infrastructure changes or third-party changes. It basically answers the question: "Can you trust this build to deliver the same user experience your users are counting on?"
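A common way to act on that question is to turn the performance test into a build gate: compare the new build's numbers against a stored baseline and fail the pipeline when they regress beyond a tolerance. The sketch below assumes hypothetical baseline.json and current_run.json files produced by your load-test runs.

```python
import json
import sys

TOLERANCE = 1.10  # fail the build if p95 latency regresses by more than 10%

# Hypothetical artifacts written by a previous (baseline) and the current test run.
with open("baseline.json") as f:
    baseline = json.load(f)     # e.g. {"p95_seconds": 0.42}
with open("current_run.json") as f:
    current = json.load(f)      # same shape, produced by this build's run

if current["p95_seconds"] > baseline["p95_seconds"] * TOLERANCE:
    print(f"FAIL: p95 {current['p95_seconds']:.2f}s is more than 10% slower "
          f"than baseline {baseline['p95_seconds']:.2f}s")
    sys.exit(1)

print("OK: performance is within tolerance of the baseline")
```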

In performance testing, you have total control over the amount of traffic and the workloads your users execute. That makes it a lot easier to troubleshoot.

And if you are using Docker or a container-based architecture, you can also easily test performance improvements under different configurations and platforms.

With performance testing, you can also measure end-to-end performance – a good indication of user experience – which gives visibility into the entire application delivery chain, enabling greater transparency and targeted troubleshooting.
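One way to see that delivery chain, sketched here with the pycurl library and a placeholder URL, is to break a single request down into its phases: DNS lookup, TCP connect, time to first byte and total time.

```python
from io import BytesIO

import pycurl  # third-party libcurl binding: pip install pycurl

URL = "https://staging.example.com/"  # placeholder: your pre-production endpoint

buffer = BytesIO()
c = pycurl.Curl()
c.setopt(c.URL, URL)
c.setopt(c.WRITEDATA, buffer)  # capture the response body so it is not printed
c.perform()

# libcurl reports a timestamp for each phase of the request, which roughly
# maps onto the delivery chain: DNS, TCP connect, server processing, transfer.
print(f"dns lookup   {c.getinfo(c.NAMELOOKUP_TIME):.3f}s")
print(f"tcp connect  {c.getinfo(c.CONNECT_TIME):.3f}s")
print(f"first byte   {c.getinfo(c.STARTTRANSFER_TIME):.3f}s")
print(f"total        {c.getinfo(c.TOTAL_TIME):.3f}s")
c.close()
```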

You wouldn't build a bridge and then send hundreds of thousands of cars over it without first testing it for structural problems. But you also wouldn't open a bridge to traffic without continually monitoring how it holds up under all that traffic. You need to do both, whether you're talking about bridges or apps. The difference is that, while a bridge is static and needs to be tested only once or at periodic intervals, software today is highly dynamic and needs to be tested on a daily basis as part of your regular flow.

Used together, performance testing and monitoring make a great team, so to speak. Use both to make sure you deploy a reliable product.

Paola Moretto is Founder and CEO of Nouvola.
