What Is Real User Monitoring in an Observability World? It Is Not APM "Agents" - Part 1
March 26, 2024

Eric Futoran
Embrace


Agent-based approaches to real user monitoring (RUM) simply do not work. If you are pitched to install an "agent" in your mobile or web environments, you should run for the hills.

The world is now all about end-users. That was simply not the case a few years ago, when backend metrics generally revolved around uptime, SLAs, latency, and the like. DevOps teams pitched and presented the metrics they believed correlated most closely with the end-user experience.

But let's be blunt: Unless there was an egregious fire, those correlations were loose at best or entirely false.

Instead, your teams should prioritize alerts, monitoring, and work based on impact to the end-user, because that impact directly affects your business. Your developers and DevOps teams should collect data, monitor, prioritize, and resolve issues accordingly.

The agent-based RUM problem

"Agents" are a mechanism that does not work in the current end-user centric world. They were born out of shimmying the principles of the backend to mobile, web, and the myriad of other ways users interact with the world.

Let's compare the difference between user environments and backend environments:

User environments are open, unstructured, and uncontrollable: they consist of unowned devices and browsers, with an unpredictable user as the central figure.

Backend environments are closed, structured, and controlled: they are composed of relatively homogeneous physical and cloud applications.

In closed systems with fewer external variables, agents focus on a known set of errors to monitor and to trigger data collection for resolution. Monitoring systems outside the backend, however, is far more complex, because there is a multitude of error types well beyond crashes, error logs, network traces, and API errors.

In an observability world, real user monitoring is about collecting "all" the data for every session — good or bad — and not just a sampled set based on predefined error types. Only by collecting the entirety of every session can the best vendors analyze it and deliver the most value to your teams.
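
To make that difference concrete, here is a minimal TypeScript sketch of session-wide capture; the SessionRecorder class and event shapes are hypothetical illustrations, not any particular vendor's SDK.

```typescript
// Hypothetical sketch: record every event in every session, good or bad,
// and decide what is valuable later, during analysis.

type SessionEvent = {
  timestamp: number;
  type: "tap" | "view" | "network" | "log" | "crash";
  payload: Record<string, unknown>;
};

class SessionRecorder {
  private events: SessionEvent[] = [];

  // Everything is recorded; nothing is gated on a predefined error list.
  record(event: SessionEvent): void {
    this.events.push(event);
  }

  // The full session timeline ships at session end, even if nothing failed.
  flush(): SessionEvent[] {
    const snapshot = [...this.events];
    this.events = [];
    return snapshot;
  }
}

const recorder = new SessionRecorder();
recorder.record({ timestamp: Date.now(), type: "view", payload: { screen: "checkout" } });
recorder.record({ timestamp: Date.now(), type: "network", payload: { url: "/api/cart", status: 200 } });
console.log(recorder.flush().length); // 2 events captured, even though no error occurred
```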

These vendors have evolved beyond agents to surface every type of user-impacting issue, help resolve them by comparing against good sessions, and prioritize overall impact across the complete set of issue types. For example, the same crash for two different users could have different root causes because of the environments, third-party SDKs, and API timeout parameters.

To drive the difference home, watch a developer outside of DevOps open a RUM dashboard from a vendor that uses the agent-based approach. The core dashboard will have the following:

■ A geographical map laying out the incidents

■ A generic list of error logs and crashes

■ Some sort of mapping of network errors

■ A single health score

The developer reviewing this dashboard will not come back to it regularly or at all. And it's not hard to see why.

The dashboard does not tell them which users are affected, where to prioritize their efforts, or the types of bugs and optimizations they should care most about. It's not built for them, from the data that is collected to how that data is organized and displayed. There is a reason why these developers always implement and use other vendors — even for simple concepts like error logging and crashes — alongside those application performance monitoring vendors.

Let's take a deep dive into the core differences between these approaches and explore what a true real user monitoring methodology looks like. That way, you will know it when you see it and can create the best experience for your end-users as well as your developers and DevOps team.

The spider web problem

To illustrate the core implication of an agent mentality, let's focus on the "spider webs." You know the ones I'm talking about. You've seen the cool demos with a picture connecting nodes across your systems to demonstrate "visibility" across all the apps running on your servers and machines.

Everything is connected by an ever-expanding spider web of nodes and lines — every app, compute instance, API call, etc. Oh, it's very pretty to see all the apps and API calls going to and from each other. It's also a nice source of confidence that the agents are collecting the data required to monitor, identify, and resolve potential issues.

However, the very nature of this spider web mental model is that it assumes all issues occur either on the lines between the nodes or on the nodes themselves:

■ An increase in network latency means you should look at the connected database, server, or service calls.

■ An increase in downtime means you should look at the connected servers to see if they're under heavy load.

■ An increase in transaction failures means you should look at the connected service calls for a point of failure.

The paradigm of agents is one of looking for a closed set of known symptoms for broken apps, failing processes, and poorly designed code. To help resolve these symptoms, the agents collect samples of app and process information, so that when an API throws an error or a process has downtime, the agent collects the corresponding data in reaction to the error.

And this approach works … on the backend, for a known set of errors, in a controlled environment, with little external pressure from the outside world.
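
For contrast, the reactive, error-triggered collection described above could be sketched roughly as follows; the event types and the KNOWN_ERRORS list are assumptions for illustration, not a real agent's API.

```typescript
// Hypothetical sketch of agent-style, error-triggered collection:
// data is only kept when an event matches a known error list.

const KNOWN_ERRORS = new Set(["crash", "http_5xx", "timeout"]);

type BackendEvent = { type: string; service: string; detail: string };

function onEvent(event: BackendEvent): void {
  // Anything outside the predefined error set is ignored entirely.
  if (!KNOWN_ERRORS.has(event.type)) {
    return;
  }
  // Only now is surrounding context snapshotted, in reaction to the error.
  const snapshot = {
    error: event,
    capturedAt: new Date().toISOString(),
  };
  console.log("collected in reaction to error:", snapshot);
}

onEvent({ type: "http_200", service: "cart", detail: "ok" });          // dropped
onEvent({ type: "http_5xx", service: "cart", detail: "bad gateway" }); // collected
```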

But when applied to the client side of web and mobile, what happens when the complexity explodes? 

What happens when there are an infinite number of unknown pressures, from the users, the devices, the operating systems, the app versions, the network connectivities, and the other apps running?

How do you truly understand your team's effectiveness when the biggest issues are not related to downtime or following individual service calls throughout a distributed system?

The problem with uncontrolled environments

Uncontrolled environments are any digital experiences external to data centers. Beyond smartphones and web browsers, they include point-of-sale systems, VR and AR devices, tablets in the field, and smart cars. And the world is increasingly one of uncontrolled environments for business-critical touchpoints.

The most effective developer and DevOps teams monitor these client-side environments with early warning systems to determine when users are impacted so they can triage and resolve issues. They flip the traditional application monitoring paradigm.

Traditional application monitoring: Sample data by looking for a known set of errors, then gather context around them.

Modern application monitoring: Gather data without knowing its full value, correlate those data points to user impact from the end-user vantage point, then determine the error, measure the impact in order to prioritize it, and route it accordingly.
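
As a rough sketch of that flipped workflow, the fragment below groups already-collected session data into issues after the fact and ranks them by how many distinct users they affect; the types and function name are assumptions for illustration, not a specific product's API.

```typescript
// Hypothetical sketch: start from fully collected sessions, derive issues
// after the fact, and prioritize them by end-user impact.

type Session = {
  userId: string;
  events: { type: string; detail: string }[];
};

type Issue = { signature: string; affectedUsers: Set<string> };

function prioritizeByUserImpact(sessions: Session[]): Issue[] {
  const issues = new Map<string, Issue>();

  for (const session of sessions) {
    for (const event of session.events) {
      // Classification happens after collection, not before it.
      if (["crash", "frozen_frame", "failed_request"].includes(event.type)) {
        const signature = `${event.type}:${event.detail}`;
        const issue = issues.get(signature) ?? { signature, affectedUsers: new Set<string>() };
        issue.affectedUsers.add(session.userId);
        issues.set(signature, issue);
      }
    }
  }

  // Rank issues by how many distinct users each one touches.
  return Array.from(issues.values()).sort(
    (a, b) => b.affectedUsers.size - a.affectedUsers.size
  );
}
```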

In order to collect, identify, and resolve errors correctly, DevOps teams must understand the challenges that come along with running apps in these types of uncontrolled environments. After all, the assumptions about where failure points can happen are vastly different.

Continue with: What Is Real User Monitoring in an Observability World? It Is Not APM "Agents" - Part 2

Eric Futoran is CEO of Embrace