
Delivering Deep Insights Into End User Quality of Experience

The quality of an end user's experience of an application is becoming an ever more important consideration in the APM world. It's not enough to draw a conclusion about the end user's experience based on an evaluation of how an individual application is performing. Increasingly, multiple applications and loosely coupled infrastructure components are coming together to contribute to the end user's experience. Understanding how all those applications and components are interacting at the point where the user is engaging them is crucial to an understanding of the user's experience.

So where do you start to gain this understanding? First, you must identify what constitutes a user's experience of an application: Response speed? Ease of information access? Depth of integration with other applications? Until you understand what constitutes a user's experience, you're not in a position to measure or quantify it.

Some of the elements that contribute to an end user's experience of an application will be inside the corporate firewall — servers, routers, database machines, and more.

Other elements contributing to the end user's experience will be outside the corporate firewall — data feeds from third parties, for example.

Organizations that want to know how well their applications are performing for users — particularly customers who are interacting from outside the firewall — need tools to monitor the user's experience that look at it from both the inside and the outside.

Monitoring Application Response Times For Each Transaction

Today's application infrastructures involve many servers, routers, switches, load balancers, and more. In any given application, information moves among these different devices. To understand fully what is happening every time the data moves among application or network elements, you need tools that can track and capture transaction information in real time and at a very granular level.
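The idea of capturing per-hop timings for each transaction can be sketched minimally as follows. This is an illustration, not any particular vendor's implementation; the `TransactionTracer` class, transaction IDs, and hop names are all hypothetical:

```python
from collections import defaultdict

class TransactionTracer:
    """Minimal sketch: record the time spent at each hop of a transaction,
    so the slowest element in the path can be identified."""

    def __init__(self):
        # transaction_id -> list of (hop_name, seconds) pairs
        self.hops = defaultdict(list)

    def record(self, txn_id, hop_name, seconds):
        self.hops[txn_id].append((hop_name, seconds))

    def slowest_hop(self, txn_id):
        # The hop with the largest recorded duration for this transaction
        return max(self.hops[txn_id], key=lambda pair: pair[1])

# Hypothetical timings for one transaction crossing three elements
tracer = TransactionTracer()
tracer.record("txn-42", "web-server", 0.012)
tracer.record("txn-42", "app-server", 0.047)
tracer.record("txn-42", "database", 0.153)
print(tracer.slowest_hop("txn-42"))  # -> ('database', 0.153)
```

Even a toy trace like this shows why granularity matters: the end-to-end response time alone would not reveal that the database hop dominates.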

You also need to monitor for patterns in user engagement. Response times for an online booking application, for example, may be consistent all week long, then spike suddenly on a Friday night when everyone leaves work for the weekend. The user experience of your applications on a Friday night may be poor, given the traffic that your systems are experiencing.
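One simple way to surface a spike like that Friday-night pattern is to compare each new reading against a baseline of recent samples. The sample values and the three-standard-deviation threshold below are illustrative assumptions, not recommended settings:

```python
import statistics

def is_spike(samples, current, threshold=3.0):
    """Flag a response-time spike: the current reading is more than
    `threshold` standard deviations above the mean of recent samples."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return current > mean + threshold * stdev

# Hypothetical weekday response times (ms) for the booking application
weekday_ms = [110, 120, 115, 118, 112, 119, 116]
print(is_spike(weekday_ms, 117))  # typical reading -> False
print(is_spike(weekday_ms, 480))  # Friday-night surge -> True
```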

Without insight into the response times for each movement between application and infrastructure elements, though, you won't know where to make changes to improve the end user experience.

Monitoring Business Metrics Related to Application Performance

While the ability to monitor all the different aspects of the application and infrastructure that contribute to end user experience is critical, you also need a context in which the data you capture from that monitoring effort has relevance. You need to develop business metrics that identify desired transaction performance levels.

Without both the metrics and the ability to track transaction performance against those metrics, you have information without any context, and without that context it is impossible to know where or how to refine a user's experience.
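As a sketch of what "tracking performance against business metrics" can mean in practice, consider comparing measured transaction times to target levels. The transaction names and target values here are hypothetical:

```python
# Hypothetical business metrics: target response times (seconds)
# per transaction type, set by the business rather than by IT.
sla_targets = {
    "search": 0.5,
    "checkout": 2.0,
    "booking-confirmation": 3.0,
}

def sla_breaches(observed):
    """Return each transaction whose measured time exceeds its target,
    paired as (measured, target)."""
    return {
        txn: (seconds, sla_targets[txn])
        for txn, seconds in observed.items()
        if seconds > sla_targets[txn]
    }

measured = {"search": 0.3, "checkout": 2.9, "booking-confirmation": 1.8}
print(sla_breaches(measured))  # -> {'checkout': (2.9, 2.0)}
```

The targets supply the context the text describes: 2.9 seconds is only meaningful once the business has said checkout should finish in 2.0.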

Monitoring the Impact on End User Experience Across Infrastructure Tiers

Increasingly, today's applications are built from loosely coupled components that can exist in many different places and in many different infrastructure tiers — even within a single organization. Tracing root causes of end user experience problems is more complicated now, given the different infrastructure tiers in place.

In order to improve that end user experience, you need tools that can provide a comprehensive view of all those infrastructure elements — and show you how data and messages are moving between those elements.
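A common way tools reconstruct how a transaction moved between tiers is to attach a shared correlation ID to every tier's record and stitch the records back together. The log records and tier names below are invented for illustration:

```python
# Hypothetical records emitted by different infrastructure tiers,
# each tagged with the correlation ID of the originating request.
log_records = [
    {"correlation_id": "req-7", "tier": "load-balancer", "ts": 0.000},
    {"correlation_id": "req-9", "tier": "load-balancer", "ts": 0.004},
    {"correlation_id": "req-7", "tier": "web", "ts": 0.003},
    {"correlation_id": "req-7", "tier": "database", "ts": 0.021},
]

def trace_path(records, correlation_id):
    """Order one transaction's hops across tiers by timestamp."""
    hops = [r for r in records if r["correlation_id"] == correlation_id]
    return [r["tier"] for r in sorted(hops, key=lambda r: r["ts"])]

print(trace_path(log_records, "req-7"))  # -> ['load-balancer', 'web', 'database']
```

With that stitched-together path, a root-cause search can start at the tier where the timestamps show the largest gap.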

Generating Synthetic Transactions For Measuring End User Performance

Finally, the ability to monitor the end user experience and trace root causes of problems across different transactions and infrastructure elements is crucial when an end user calls to report a problem. With these tools, you can find and fix a problem quickly.

It would be better, however, to monitor the system proactively, finding end user experience problems before end users report them. If you can do that, you can eliminate a large number of poor experiences before users even encounter them.

Passive monitoring tools can provide insights into the end user experience from outside the firewall. They can monitor transactions, the transitions from page to page in a web application, and how long the user waits for a transaction to complete before moving on to the next step.

Active monitoring tools, in contrast, can create synthetic transactions that you can use to understand end user experience without the end user's involvement. They enable you to get a jump on end user experience management, because you can find and fix problems before the users do.
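An active probe of this kind can be sketched as a scripted sequence of user steps that is timed end to end and checked against a response-time budget. The step functions here are stand-ins for real page requests, and the budget value is an arbitrary example:

```python
import time

def run_synthetic_transaction(step_fns, budget_seconds):
    """Sketch of an active probe: execute a scripted sequence of steps
    the way a real user would, time the whole flow, and flag it if it
    exceeds the response-time budget."""
    start = time.perf_counter()
    for step in step_fns:
        step()  # in a real probe, each step would issue a page request
    elapsed = time.perf_counter() - start
    return {"elapsed": elapsed, "within_budget": elapsed <= budget_seconds}

# Stand-in steps simulating a login -> search -> checkout flow
steps = [lambda: time.sleep(0.01),
         lambda: time.sleep(0.01),
         lambda: time.sleep(0.01)]
result = run_synthetic_transaction(steps, budget_seconds=1.0)
print(result["within_budget"])
```

Run on a schedule, a probe like this exercises the application even when no real users are active, which is exactly how synthetic transactions get ahead of user-reported problems.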

Ultimately, when you're looking at APM, you need to pay particular attention to tools that enable you to monitor and manage the end user's experience. Traditional APM tools are powerful for managing traditional applications, but as newer applications veer away from traditional development and deployment models, you need tools that focus on the end user experience in order to understand how best to use APM to modify the application delivery environment.

Create the right user experience, and you will keep more customers. They will be engaged with the experience you have created — and that, ultimately, is the best measure of application performance.

About Raj Sabhlok and Suvish Viswanathan

Raj Sabhlok is the President of ManageEngine. Suvish Viswanathan is an APM Research Analyst at ManageEngine. ManageEngine is a division of Zoho Corp. and the maker of a globally renowned suite of cost-effective network, systems, security, and applications management software solutions.

Related Links:

www.manageengine.com

Read "Another Look at Gartner's 5 Dimensions of APM" by Raj Sabhlok and Suvish Viswanathan
