
What Is Real User Monitoring in an Observability World? It Is Not APM "Agents" - Part 1

Eric Futoran
Embrace

Agent-based approaches to real user monitoring (RUM) simply do not work. If a vendor pitches you on installing an "agent" in your mobile or web environments, you should run for the hills.

The world is now all about end-users. That focus simply did not exist a few years ago, when backend metrics generally revolved around uptime, SLAs, latency, and the like. DevOps teams pitched and presented the metrics they believed correlated most closely with the end-user experience.

But let's be blunt: Unless there was an egregious fire, those correlations were loose at best or entirely false.

Instead, your teams should prioritize alerts, monitoring, and work based on impact to the end-user, because that impact directly affects your business. And your developers and DevOps teams should collect data, monitor, prioritize, and resolve issues accordingly.

The agent-based RUM problem

"Agents" are a mechanism that does not work in the current end-user centric world. They were born out of shimmying the principles of the backend to mobile, web, and the myriad of other ways users interact with the world.

Let's compare the difference between user environments and backend environments:

User environments are open, unstructured, and uncontrollable: they consist of devices and browsers you do not own, with an unpredictable user as the central figure.

Backend environments are closed, structured, and controlled: they are composed of relatively homogeneous physical and cloud applications.

With closed systems that have fewer external variables, agents can focus on a known set of errors to monitor and use those errors to trigger data collection for resolution. Monitoring systems outside the backend is far more complex, however, because the types of errors go well beyond crashes, error logs, network traces, and API errors.

In an observability world, real user monitoring is about collecting "all" the data for every session — good or bad — and not just a sampled set based on predefined error types. Only by collecting the entirety of every session can the best vendors have the opportunity to analyze and provide the utmost value to your teams.
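
To make that contrast concrete, here is a minimal sketch of session-first collection. Everything in it is an illustrative assumption (the SessionRecorder class, the event shape, the upload endpoint), not any vendor's actual SDK; the point is simply that recording is unconditional and the whole timeline ships at session end.

```typescript
// Illustrative sketch only, not a real SDK: every event is buffered for
// the entire session, good or bad, rather than captured only when a
// predefined error type fires.

type SessionEvent = {
  timestamp: number;
  kind: "tap" | "network" | "log" | "crash" | "custom";
  payload: Record<string, unknown>;
};

class SessionRecorder {
  private readonly sessionId = crypto.randomUUID(); // browser/modern-runtime API
  private events: SessionEvent[] = [];

  // Record unconditionally: no predefined error type gates collection.
  record(kind: SessionEvent["kind"], payload: Record<string, unknown>): void {
    this.events.push({ timestamp: Date.now(), kind, payload });
  }

  // At session end, ship the entire timeline, not an error-triggered sample.
  async flush(endpoint: string): Promise<void> {
    await fetch(endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ sessionId: this.sessionId, events: this.events }),
    });
  }
}
```

An agent-style integration would call record() only from inside a catch block; here, every tap, request, and log lands in the same timeline, which is what makes good sessions available for comparison later.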

These vendors have evolved beyond agents to surface every type of user-impacting issue, help resolve them by comparing against good sessions, and prioritize overall impact across the complete set of issue types. For example, the same crash for two different users could have different root causes because of the environments, third-party SDKs, and API timeout parameters.
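
As a hypothetical illustration of that "compare against good sessions" idea, the sketch below diffs the environment attributes of crashed sessions against sessions that completed cleanly. The fields (OS version, third-party SDKs, API timeout) are assumptions lifted from the example above, not a real schema.

```typescript
// Hypothetical sketch: surface environment attributes that appear in
// crashed sessions but never in clean ones. All field names are illustrative.

interface SessionContext {
  osVersion: string;
  thirdPartySdks: string[];
  apiTimeoutMs: number;
  crashed: boolean;
}

// Flatten a session's environment into comparable attribute strings.
function describe(s: SessionContext): string[] {
  return [
    `osVersion=${s.osVersion}`,
    ...s.thirdPartySdks.map((sdk) => `sdk=${sdk}`),
    `apiTimeoutMs=${s.apiTimeoutMs}`,
  ];
}

// Attribute values seen only in crashing sessions are root-cause suspects.
function suspectAttributes(sessions: SessionContext[]): string[] {
  const goodValues = new Set(sessions.filter((s) => !s.crashed).flatMap(describe));
  const badValues = sessions.filter((s) => s.crashed).flatMap(describe);
  return [...new Set(badValues)].filter((v) => !goodValues.has(v));
}
```

Run against two users with the same crash signature, this kind of diff is what distinguishes "crashes on the new OS beta" from "crashes behind an aggressively short API timeout."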

To drive the difference home, watch a developer outside of DevOps open a RUM dashboard from a vendor that uses the agent-based approach. The core dashboard will have the following:

■ A geographical map laying out the incidents

■ A generic list of error logs and crashes

■ Some sort of mapping of network errors

■ A single health score

The developer reviewing this dashboard will not come back to it regularly or at all. And it's not hard to see why.

The dashboard does not tell them which users are affected, where to prioritize their efforts, or which bugs and optimizations they should care most about. It's not built for them, from the data collected through to how that data is organized and displayed. There is a reason these developers always implement and use other vendors — even for simple concepts like error logging and crashes — alongside those application performance monitoring vendors.

Let's take a deep dive into the core differences between these approaches and explore what a true real user monitoring methodology looks like. That way, you will know it when you see it and can create the best experience for your end-users as well as your developers and DevOps team.

The spider web problem

To illustrate the core implication of an agent mentality, let's focus on the "spider webs." You know the ones I'm talking about. You've seen the cool demos with a picture connecting nodes across your systems to demonstrate "visibility" across all the apps running on your servers and machines.

Everything is connected by an ever-expanding spider web of nodes and lines — every app, compute instance, API call, etc. Oh, it's very pretty to see all the apps and API calls going to and from each other. It's also a nice source of confidence that the agents are collecting the data required to monitor, identify, and resolve potential issues.

However, the very nature of this spider web mental model is that it assumes all issues occur on the lines between the nodes or on the nodes themselves:

■ An increase in network latency means you should look at the connected database, server, or service calls.

■ An increase in downtime means you should look at the connected servers to see if they're under heavy load.

■ An increase in transaction failures means you should look at the connected service calls for a point of failure.

The paradigm of agents is one of looking for a closed set of known symptoms for broken apps, failing processes, and poorly designed code. To help resolve these symptoms, the agents collect samples of app and process information, so that when an API throws an error or a process has downtime, the agent collects the corresponding data in reaction to the error.
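
In code, that reactive model reduces to something like the sketch below (names are illustrative, not any real agent's API): collection is gated on a closed set of symptoms, so a failure mode outside the set leaves no data behind.

```typescript
// Hedged sketch of agent-style, error-triggered sampling.
// Illustrative names only, not a real agent's API.

const KNOWN_ERROR_TYPES = new Set(["http_5xx", "timeout", "crash"]);

function onIncident(type: string, context: Record<string, unknown>): void {
  // Collection is gated on a known, closed set of symptoms.
  if (!KNOWN_ERROR_TYPES.has(type)) {
    return; // an unanticipated failure mode is never captured
  }
  captureSample(type, context);
}

function captureSample(type: string, context: Record<string, unknown>): void {
  // A real agent would snapshot app and process state here; a log line
  // keeps the sketch self-contained.
  console.log(`reactive sample for ${type}`, context);
}
```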

And this approach works … on the backend, for a known set of errors, in a controlled environment, with little external pressure from the outside world.

But when applied to the client side of web and mobile, what happens when the complexity explodes? 

What happens when there are an infinite number of unknown pressures from the users, the devices, the operating systems, the app versions, the network conditions, and the other apps running?

How do you truly understand your team's effectiveness when the biggest issues are not related to downtime or following individual service calls throughout a distributed system?

The problem with uncontrolled environments

Uncontrolled environments are any digital experience external to data centers. Beyond just smartphones and web browsers, they include point-of-sale systems, VR and AR devices, tablets in the field, and smart cars. And the world is increasingly one of uncontrolled environments for business-critical touchpoints.

The most effective developer and DevOps teams monitor these client-side environments with early warning systems to determine when users are impacted so they can triage and resolve issues. They flip the traditional application monitoring paradigm.

Traditional application monitoring: Sample data by looking for a known set of errors, then gather context around them.

Modern application monitoring: Gather data without knowing its full value, correlate those data points to user impact from the end-user vantage point, then determine the error, measure the impact in order to prioritize it, and route it accordingly.
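
Sketched in code, the flipped paradigm looks roughly like this: sessions are collected without pre-judging their value, then grouped into issues and ranked by how many distinct users each issue touches. The types and the issue-key scheme are assumptions for illustration, not a prescribed implementation.

```typescript
// Illustrative sketch of impact-first triage over fully collected sessions.
// All names are assumptions, not a real API.

interface Session {
  userId: string;
  events: { kind: string; detail: string }[];
}

// Group error-like events into issues and rank by distinct users affected.
function prioritizeByUserImpact(sessions: Session[]): [string, number][] {
  const usersPerIssue = new Map<string, Set<string>>();
  for (const session of sessions) {
    for (const event of session.events) {
      if (!["crash", "anr", "error"].includes(event.kind)) continue;
      const issue = `${event.kind}:${event.detail}`;
      if (!usersPerIssue.has(issue)) usersPerIssue.set(issue, new Set());
      usersPerIssue.get(issue)!.add(session.userId);
    }
  }
  return [...usersPerIssue.entries()]
    .map(([issue, users]): [string, number] => [issue, users.size])
    .sort((a, b) => b[1] - a[1]); // most-impactful issue first
}
```

The key design choice is that impact is measured in distinct affected users rather than raw event counts, which is what lets a team route the top issue to its owner with a defensible priority.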

In order to collect, identify, and resolve errors correctly, DevOps teams must understand the challenges that come along with running apps in these types of uncontrolled environments. After all, the assumptions about where failure points can happen are vastly different.

Continue with: What Is Real User Monitoring in an Observability World? It Is Not APM "Agents" - Part 2

Eric Futoran is CEO of Embrace
