The quality of an end user's experience of an application is becoming an ever more important consideration in the APM world. It's not enough to draw a conclusion about the end user's experience based on an evaluation of how an individual application is performing. Increasingly, multiple applications and loosely coupled infrastructure components are coming together to contribute to the end user's experience. Understanding how all those applications and components are interacting at the point where the user is engaging them is crucial to an understanding of the user's experience.
So where do you start to gain this understanding? First, you must identify what constitutes a user's experience of an application: Response speed? Ease of information access? Depth of integration with other applications? Until you understand what constitutes a user's experience, you're not in a position to measure or quantify it.
Some of the elements that contribute to an end user's experience of an application will be inside the corporate firewall — servers, routers, database machines, and more.
Other elements contributing to the end user's experience will be outside the corporate firewall — data feeds from third parties, for example.
Organizations that want to know how well their applications are performing for users — particularly customers who are interacting from outside the firewall — need tools to monitor the user's experience that look at it from both the inside and the outside.
Monitoring Application Response Times For Each Transaction
Today's application infrastructures involve many servers, routers, switches, load balancers, and more. In any given application, information moves among these different devices. To understand fully what is happening every time the data moves among application or network elements, you need tools that can track and capture transaction information in real time and at a very granular level.
You also need to monitor for patterns in user engagement. Response times for an online booking application, for example, may be consistent all week long, then spike suddenly on a Friday night when everyone leaves work for the weekend. The user experience of your applications on a Friday night may be poor, given the traffic that your systems are experiencing.
Without insight into the response times for each movement between application and infrastructure elements, though, you won't know where to make changes to improve the end user experience.
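To make that concrete, here is a minimal sketch, in Python, of what granular per-transaction timing can look like: each hop of a hypothetical booking request is wrapped in a timer and logged against a shared transaction ID. The step names and sleeps are placeholders for illustration, not any particular vendor's instrumentation.

```python
# Minimal sketch: time each hop of a transaction with a context manager so
# per-step latencies can be logged, correlated, and aggregated later.
import time
import uuid
import logging
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("txn-timing")

@contextmanager
def timed_step(txn_id, step_name):
    """Record the wall-clock time of one hop (app server, database, third-party feed)."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        log.info("txn=%s step=%s elapsed_ms=%.1f", txn_id, step_name, elapsed_ms)

def handle_booking_request():
    txn_id = uuid.uuid4().hex[:8]          # correlates all steps of one user transaction
    with timed_step(txn_id, "validate-input"):
        time.sleep(0.01)                   # placeholder for real work
    with timed_step(txn_id, "database-query"):
        time.sleep(0.05)
    with timed_step(txn_id, "third-party-feed"):
        time.sleep(0.12)

handle_booking_request()
```

With timings like these collected per transaction, the slowest hop for a given user request is visible at a glance rather than inferred from server-level averages.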
Monitoring Business Metrics Related to Application Performance
While the ability to monitor all the different aspects of the application and infrastructure that contribute to end user experience is critical, you also need a context in which the data you capture from that monitoring effort has relevance. You need to develop business metrics that identify desired transaction performance levels.
Without both the metrics and the ability to track transaction performance against those metrics, you have information without any context, and without that context it is impossible to know where or how to refine a user's experience.
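As an illustration, the sketch below assumes a hypothetical business metric of the form "95 percent of booking searches should complete within two seconds" and checks observed response times against it. The transaction names, targets, and sample timings are invented for the example.

```python
# Minimal sketch: give raw response times business context by comparing them
# against business-defined performance targets.
from statistics import quantiles

# Hypothetical business metrics, expressed as 95th-percentile targets in milliseconds.
TARGETS_MS = {"booking-search": 2000, "checkout": 3000}

def evaluate(transaction, samples_ms):
    """Report whether the 95th-percentile response time meets the business target."""
    p95 = quantiles(samples_ms, n=20)[18]      # last of 19 cut points ~ 95th percentile
    target = TARGETS_MS[transaction]
    status = "OK" if p95 <= target else "BREACH"
    print(f"{transaction}: p95={p95:.0f}ms target={target}ms -> {status}")

evaluate("booking-search", [850, 1200, 1900, 2400, 950, 1100, 3100, 800, 1500, 1700,
                            900, 1300, 2100, 1000, 1600, 1450, 980, 1250, 1800, 2600])
```

The same measured data reads very differently once it is judged against a target the business actually cares about.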
Monitoring the Impact on End User Experience Across Infrastructure Tiers
Increasingly, today's applications are built from loosely coupled components that can exist in many different places and in many different infrastructure tiers — even within a single organization. Tracing root causes of end user experience problems is more complicated now, given the different infrastructure tiers in place.
In order to improve that end user experience, you need tools that can provide a comprehensive view of all those infrastructure elements — and show you how data and messages are moving between those elements.
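One common way to build that view is to carry a correlation identifier with each request as it crosses tiers, so every log line belonging to a single user transaction can be stitched back together. The sketch below illustrates the idea; the header name and tier functions are assumptions for illustration, not a prescribed standard.

```python
# Minimal sketch: propagate a correlation ID through each tier so an end user
# problem can be traced across loosely coupled components.
import uuid

CORRELATION_HEADER = "X-Correlation-ID"   # assumed convention, not a mandated name

def web_tier(request_headers):
    # Reuse the caller's ID if present, otherwise start a new trace at the edge.
    corr_id = request_headers.get(CORRELATION_HEADER, uuid.uuid4().hex)
    print(f"[web]     corr_id={corr_id} rendering page")
    service_tier(corr_id)

def service_tier(corr_id):
    print(f"[service] corr_id={corr_id} applying business rules")
    data_tier(corr_id)

def data_tier(corr_id):
    print(f"[data]    corr_id={corr_id} running query")

web_tier({})   # a fresh request arriving from outside the firewall
```

Searching the logs for one correlation ID then reconstructs the path a single user's transaction took, which is exactly what root cause analysis across tiers requires.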
Generating Synthetic Transactions For Measuring End User Performance
Finally, the ability to monitor the end user experience and trace root causes of problems across different transactions and infrastructure elements is crucial when an end user calls to report a problem. With these tools, you can find and fix a problem quickly.
However, it would be better to monitor the system proactively, finding end user experience problems before the end users report them. If you are able to do that, you could eliminate a large number of poor experiences before users even encounter them.
Passive monitoring tools can provide insights into the end user experience from outside the firewall. They can monitor transactions, the transitions from page to page in a web application, and how long the user must wait for a transaction to complete before moving on to the next step.
Active monitoring tools, in contrast, can create synthetic transactions that you can use to understand end user experience without the end user's involvement. They enable you to get a jump on end user experience management, because you can find and fix problems before the users do.
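The sketch below shows the essence of such a synthetic transaction: a scripted request fired on a schedule, timed, and compared against a threshold, with no real user involved. The endpoint URL and threshold are placeholders, not a real monitored service.

```python
# Minimal sketch of an active (synthetic) check: replay a scripted user step
# and measure the response time before any real user hits the problem.
import time
import urllib.request

ENDPOINT = "https://example.com/booking/search"   # hypothetical step in the user journey
THRESHOLD_MS = 2000

def run_synthetic_check():
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(ENDPOINT, timeout=10) as response:
            ok = response.status == 200
    except Exception as exc:
        ok = False
        print(f"synthetic check failed: {exc}")
    elapsed_ms = (time.perf_counter() - start) * 1000
    if not ok or elapsed_ms > THRESHOLD_MS:
        print(f"ALERT: {ENDPOINT} took {elapsed_ms:.0f}ms (ok={ok})")
    else:
        print(f"healthy: {elapsed_ms:.0f}ms")

run_synthetic_check()   # in practice, a scheduler would run this every few minutes
```

Run from outside the firewall on a regular schedule, a check like this surfaces slowdowns and outages during quiet hours, long before the first support call.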
Ultimately, when you're looking at APM, you need to pay particular attention to the tools that enable you to monitor and manage the experience of the end user. Traditional APM tools remain powerful for managing traditional applications, but as newer applications veer away from the traditional development and deployment models, you need tools that focus on the end user experience, so that you understand how best to use APM to adjust the application delivery environment.
Create the right user experience, and you will keep more customers. They will be engaged with the experience you have created — and that, ultimately, is the best measure of application performance.
About Raj Sabhlok and Suvish Viswanathan
Raj Sabhlok is the President of ManageEngine. Suvish Viswanathan is an APM Research Analyst at ManageEngine. ManageEngine is a division of Zoho Corp. and the maker of a globally renowned suite of cost-effective network, systems, security, and applications management software solutions.
Related Links:
"Another Look at Gartner's 5 Dimensions of APM" by Raj Sabhlok and Suvish Viswanathan