It doesn’t take a genius to know that business applications have to perform well, particularly if those applications service a company’s customers. This is why most IT staffs understand the importance of Application Performance Management (APM) for protecting and enabling their business’s reputation and success.
But selecting an APM solution can be a confusing experience, given the wide array of APM products on the market: one vendor focuses on the importance of end user experience, another highlights automated application mapping, another features packet inspection capabilities, yet another emphasizes transaction management, and the list goes on. They all identify themselves as APM solutions, but which is the best tool? And why are they all so different?
The difficulty of APM is that applications have many faces – kind of like multiple personalities – but they are all categorized with the same generic label of “applications”. The challenge is that each application has unique performance characteristics and user expectations that must be taken into account when managing performance, yet IT staffs sometimes take a more generic approach.
For example, gamers will quickly abandon an online game that does not respond almost instantaneously, but a scientist running a large model in the cloud may be content to wait several hours for the results. So APM for gamers focuses on response time, while APM for the scientist’s model run focuses on optimizing compute and database resources for better throughput. An application’s performance requirements determine which APM approaches are best applied to it.
The End User Experience
Another face of APM is “end user experience”, which many APM vendors use interchangeably with “response time”. But end user experience encompasses more than just response time.
For some applications, as in the gamer example above, response time and fast page rendering play the primary role in the end user’s experience. For other application users, response time is an important part of the end user experience, but it isn’t the only performance-related aspect.
Consider the case of a shopper using an online retail site. At a minimum, the web site’s response time must be acceptable or the shopper may abandon their shopping cart. But what if the application design is inefficient and forces the shopper through a sequence of 10 pages to order an item that a more efficient application could have accomplished in 6 pages? The response time and page rendering time may be very fast, but the total time it takes the shopper to order the item is much longer than the shopper expects, and they abandon their cart.
So in addition to response time, end user experience and performance expectations are also shaped by the quality of the application design.
And just as application design affects performance, so do programming techniques. Poor programming techniques can add extra seconds to response time, degrading the end user experience.
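To make that concrete, here is a minimal sketch in Python (using the standard library’s sqlite3 module and a hypothetical orders/items schema) of one classic culprit, the “N+1 query” pattern, where a loop issues one database round trip per row instead of a single set-based query – exactly the kind of coding choice that silently adds seconds to response time:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT);
    CREATE TABLE items  (order_id INTEGER, sku TEXT);
""")

# Inefficient: one round trip per order (the "N+1" pattern).
# Over a network, each extra query adds latency the user feels directly.
def items_per_order_slow(conn):
    result = {}
    for (order_id,) in conn.execute("SELECT id FROM orders").fetchall():
        skus = conn.execute(
            "SELECT sku FROM items WHERE order_id = ?", (order_id,)
        ).fetchall()
        result[order_id] = [sku for (sku,) in skus]
    return result

# Efficient: a single set-based query, grouped in memory afterward.
def items_per_order_fast(conn):
    result = {}
    for order_id, sku in conn.execute(
        "SELECT order_id, sku FROM items ORDER BY order_id"
    ):
        result.setdefault(order_id, []).append(sku)
    return result
```

With an in-process SQLite database the difference is negligible, but against a remote database server each of those N extra round trips pays the full network latency – which is where the “extra seconds” come from.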
This highlights the issue of nature versus nurture for performance. If performance flaws are built into the application’s design or coding techniques, throwing more hardware at the problem is unlikely to solve it. This also means that the scalability of cloud computing will not necessarily solve the problem either.
This is why performance consideration and testing, including load testing, are important for development and testing teams, so they can minimize performance issues caused by development BEFORE the application goes into production. Taking that a step further, APM and application performance awareness are the shared responsibility of development, QA and IT operations (DevOps).
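As one illustration, a very small load-test sketch in Python (standard library only; the staging URL and concurrency level here are hypothetical) shows the kind of pre-production check a development or QA team could run to see how response times hold up under concurrent users:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/checkout"  # hypothetical staging endpoint
CONCURRENT_USERS = 50                   # hypothetical load level
REQUESTS_PER_USER = 10

def one_user(_):
    """Simulate one user issuing sequential requests; return each response time."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(URL) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)
    return timings

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    all_timings = sorted(t for user in pool.map(one_user, range(CONCURRENT_USERS))
                         for t in user)

median = all_timings[len(all_timings) // 2]
p95 = all_timings[int(len(all_timings) * 0.95) - 1]
print(f"{len(all_timings)} requests, median {median*1000:.0f} ms, p95 {p95*1000:.0f} ms")
```

Real load tests add ramp-up, think time and realistic transaction mixes, but even a sketch like this catches gross regressions before they reach production.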
Transaction Management
Transaction Management is yet another face of APM. Once the various response time measurements tell you that you have a performance issue, the next step is to find its root cause.
The strength of transaction management is that it deconstructs response time into segment measurements that help identify where the performance delay is occurring. IT teams can then drill down to investigate the root cause of the issue.
For example, the delay may be occurring in the database server. IT teams must then determine whether it is due to hardware problems, resource constraints such as limited memory, inefficient programming techniques, or something else.
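As a rough illustration of the concept (not any particular vendor’s implementation), the sketch below times each tier of a hypothetical transaction so the total response time can be decomposed into segments; the slowest segment points at where to drill down:

```python
import time
from contextlib import contextmanager

segments = {}  # segment name -> elapsed seconds for one transaction

@contextmanager
def segment(name):
    """Record the wall-clock time spent in one segment of the transaction."""
    start = time.perf_counter()
    try:
        yield
    finally:
        segments[name] = time.perf_counter() - start

# Hypothetical tiers of a single user transaction, stubbed with sleeps.
def render_page():      time.sleep(0.02)
def call_app_server():  time.sleep(0.05)
def query_database():   time.sleep(0.40)   # the hidden culprit

with segment("web tier"):
    render_page()
with segment("app tier"):
    call_app_server()
with segment("database"):
    query_database()

total = sum(segments.values())
for name, elapsed in sorted(segments.items(), key=lambda kv: -kv[1]):
    print(f"{name:>9}: {elapsed*1000:6.1f} ms ({elapsed/total:5.1%} of response time)")
```

Output like “database: 400 ms (85% of response time)” immediately narrows the investigation to the database tier, which is where the hardware-versus-resource-versus-code questions above come in.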
APM and BSM
Another face of APM is actionability: APM is valuable only when you can do something about the performance issues it uncovers. This makes APM an important consideration when using cloud computing and outside service providers. Management visibility into application performance, as well as application diagnostic access, is essential for critical business apps in the cloud or hosted by outside service providers. (Sometimes this requires resident agents on servers, which some service providers may prohibit.)
Service Level Agreements matching the criticality of the applications are a must for outside cloud services or service providers on which applications depend. There is nothing worse than a service provider outage during which the only action available to you is wringing your hands.
And finally, APM is a Business Service Management (BSM) issue – from the perspective of keeping business services performing optimally. That means that APM, like BSM, is everyone’s responsibility – business, development, testing, and operations. IT staffs must understand the unique performance characteristics and user expectations of applications in order to manage them most effectively.
The multiple personalities of APM, while a challenge, also provide the flexibility to deal with a variety of unique application performance characteristics and user performance expectations. The key is developing performance expertise and awareness throughout the organization, so teams know when and how to best use APM to optimize application performance.
The secret is not in the tools themselves (although they do provide a lot of helpful functionality and visibility). It’s in how you use the information and tools to keep your applications humming.
About Audrey Rasmussen
Audrey Rasmussen, Partner and Principal Analyst at Ptak, Noel and Associates, an industry analyst firm, leverages more than 30 years of experience in the information technology industry to help her clients navigate its accelerating changes. Over the years, she has developed expertise in a variety of contexts -- systems and application management, companies ranging from very small to very large corporations, industry specializations, business focus, and technical focus, as well as vendor and consulting roles -- which combine into unique industry insights.