3 Approaches to End-User Experience Monitoring

Sridhar Iyengar

The volume of transactions running through websites and mobile apps makes customer-facing applications crucial to online businesses. When these applications perform well, they generate revenue for the business. When they don't, they erode the credibility of the business, which in turn hurts overall revenue. It is therefore imperative that businesses understand how well their revenue-critical applications behave for their end users.

From an IT team's point of view, understanding the user experience of their applications is becoming more challenging as technology evolves. Newer and more complex applications are being written in an assortment of languages and deployed on a wide variety of infrastructure components. To add to that, today's users access these applications from a variety of devices: desktop browsers, smartphones, tablets and smart watches.

Fortunately, there are several ways businesses can measure the user experience of their Web applications. Let's take a look at three common approaches:

Real User Monitoring (RUM)

Real user monitoring is a passive monitoring approach that collects metrics at the browser level to determine application performance as perceived by end users. Monitoring at the browser level is achieved by injecting JavaScript snippets into the header and footer of the Web application's HTML. This code measures the full page-load experience — downloading assets from the content delivery network (CDN), rendering the page and executing JavaScript — from the browser's perspective. Additional metrics can be collected by injecting further instrumentation code.

The data gathered through RUM provides answers to questions about user experience such as:

■ How long did it take to load the full page?

■ What is the response time from a network perspective (redirection time, DNS resolution time, connection time)?

■ What is the time interval between sending the request and receiving the first byte of response?

■ What is the time taken by the browser to receive the response and render the page?

■ Are there any problems on the page? If yes, what caused the problem?

■ How does the application perform when accessed from different countries?

■ What is the response time across different browsers? Do new application updates affect the performance in a specific version of the browser?

■ How does the application perform on different platforms such as desktop, Web and mobile?
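Several of the timing questions above map directly onto the browser's Navigation Timing API. The sketch below derives them from a `PerformanceNavigationTiming`-shaped entry; the helper name `rumBreakdown` and the `/rum-collector` endpoint are illustrative, not a standard API.

```javascript
// Derive the page-load breakdown from a Navigation Timing entry.
// All values are milliseconds relative to navigation start.
function rumBreakdown(entry) {
  return {
    redirectMs: entry.redirectEnd - entry.redirectStart,
    dnsMs: entry.domainLookupEnd - entry.domainLookupStart,
    connectMs: entry.connectEnd - entry.connectStart,
    // Time to first byte: request sent until the first response byte arrives.
    ttfbMs: entry.responseStart - entry.requestStart,
    // Response download plus DOM processing until the page has loaded.
    renderMs: entry.loadEventStart - entry.responseStart,
    fullPageLoadMs: entry.loadEventEnd - entry.startTime,
  };
}

// In an injected RUM snippet this would run on the load event, e.g.:
//   const [nav] = performance.getEntriesByType("navigation");
//   navigator.sendBeacon("/rum-collector", JSON.stringify(rumBreakdown(nav)));
```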

The biggest advantage of monitoring real user data is that it relies on actual traffic to take measurements. There is no need to script the important use cases, which can save a lot of time and resources.

Real user monitoring captures everything as a user goes through the application, so performance data will be available irrespective of what pages the user sees. This is particularly useful for complex apps in which the functionality or content is dynamic.

Server-Side Monitoring

Although user experience is best tracked at the browser level, application performance monitoring on the server side also provides insight into end-user performance. Server-side monitoring is typically used in conjunction with real user monitoring, because problems originating on the server side can only be detected efficiently there.

Monitoring performance on the server side involves agent-based instrumentation technology for acquiring and transmitting data. This monitoring approach is used to watch user transactions in real time and troubleshoot in case of issues such as slowness or application bugs.

Developers have to install agents on the application server to help capture and visualize transactions end-to-end, with performance statistics across all components, from the URL down to the SQL level. This visual breakdown reveals the flow of all the user transactions being executed in each layer of the application infrastructure.

Server-side monitoring helps track the response time and throughput of each application component, with the option to trace transactions end-to-end via code analysis. This helps IT Operations/DevOps teams identify slow Web transactions and then isolate performance issues down to the specific application code that caused them. The underlying database is usually monitored as well, to surface slow database calls, database usage and overall database performance. With server-side monitoring, users can see the SQL queries executed during a transaction and pinpoint the worst-performing ones.
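At its core, agent-based instrumentation wraps each component call, records its duration, and attaches it to the current transaction trace. The sketch below shows the idea with explicit wrapping; the names (`instrument`, `handleRequest`) are made up for illustration, and real APM agents achieve the same effect by patching frameworks and drivers automatically.

```javascript
// Wrap a component call so its duration is recorded against the
// transaction trace, whether the call succeeds or throws.
function instrument(trace, component, fn) {
  return async (...args) => {
    const start = Date.now();
    try {
      return await fn(...args);
    } finally {
      trace.push({ component, ms: Date.now() - start });
    }
  };
}

// Usage: wrap each layer (handler, SQL call, ...) with the same trace so
// the transaction can be broken down end-to-end afterwards.
async function handleRequest() {
  const trace = [];
  const query = instrument(trace, "sql", async (q) => `rows for ${q}`);
  const rows = await query("SELECT * FROM products");
  trace.sort((a, b) => b.ms - a.ms); // slowest component first
  return { rows, trace };
}
```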

Synthetic Transaction Monitoring

Synthetic transaction monitoring is an active monitoring technique based on simulating the actions of an end user on a Web application. It uses external monitoring agents that execute pre-recorded scripts mimicking end-user behavior at regular intervals. The monitoring agents are usually lightweight and add negligible load to the network.

Most application performance monitoring solutions provide recorder tools to capture the actions or paths a typical end user might take in an application, such as log in, view product, search and check out. These recordings are saved as scripts, which are then executed by the monitoring agents from different geographical locations.

Technically, there are two different approaches to generating requests. Some solutions replay recorded HTTP traffic patterns, while others drive real browser instances. The second approach is more useful for modern applications that make a lot of JavaScript, CSS and Ajax calls.

Since synthetic transaction monitoring involves sending requests across the network, it can measure the response time of application servers and network infrastructure. This type of monitoring does not require actual Web traffic, so you can use this approach to test your Web applications prior to launch — or anytime you like. Many companies use synthetic monitoring before entering production in the form of automated integration tests with Selenium.
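A single synthetic probe of the first kind (HTTP-level, no real browser) can be sketched as below. The fetch function is injected so the check can be pointed at a stub during testing; the endpoint and the slowness threshold are example values.

```javascript
// Run one synthetic check: time the round trip to a URL and flag
// HTTP errors or responses slower than a threshold.
async function syntheticCheck(fetchFn, url, slowMs) {
  const start = Date.now();
  try {
    const res = await fetchFn(url);
    const ms = Date.now() - start;
    return { url, ms, ok: res.status < 400, slow: ms > slowMs };
  } catch (err) {
    // Network failure: report it rather than throwing, so the
    // scheduler keeps running subsequent checks.
    return { url, ms: Date.now() - start, ok: false, slow: false, error: String(err) };
  }
}

// A monitoring agent would run this on a schedule from several
// geographical locations, e.g.:
//   setInterval(() => syntheticCheck(fetch, "https://example.com/login", 2000)
//     .then(report), 60_000);
```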

Synthetic monitoring does have its limitations, though. Since the monitoring is based on pre-defined transactions, it does not monitor the perception of real end users. Transactions have to be “read-only” because they would otherwise set off real purchase processes. This limits the usage to a certain subset of your business-critical transactions.

The best approach is to use synthetic transaction monitoring as a reference measurement that helps identify performance degradation, detect network problems and send notifications when errors occur.

Every business is different and has its own requirements that determine which types of monitoring to implement. An ideal strategy is to use active and passive monitoring techniques side by side, so that no stone is left unturned in monitoring end-user experience.
