End User Monitoring - Reports of EUM's Death Have Been Greatly Exaggerated
February 22, 2016

Larry Haig
Intechnica


Once upon a time (as they say) client side performance was a relatively straightforward matter. The principles were known (or at least available – thank you, Steve Souders et al), and the parameters surrounding delivery, whilst generally limited in modern terms (IE5/Netscape, dial-up connectivity, anyone?), were at least reasonably predictable.

This didn't mean that enough people addressed client side performance (then or now, for that matter), despite the oft-quoted 80% of delivery time spent on the user's machine, and the undoubted association between application performance and business outcomes.

From a monitoring and analysis point of view, synthetic external testing (or end user monitoring) did the job. Much has been written (not least by myself) on the need to apply best practice, and to select your tooling appropriately. “Real user monitoring” (RUM) arrived some 10 years ago – a move at first decried, then rapidly embraced, by most of the “standalone” external test vendors. The undoubted advantages of real user monitoring in terms of breadth of coverage and granular visibility into multiple user endpoints – geography, O/S, device, browser – tended for a time to mask the different, though complementary, strengths of consistent, repeated synthetic monitoring at page or individual (e.g. 3rd party) object level.
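To make the RUM idea concrete: in essence, a small script in the page reads the browser's own timing data and beacons it home. The sketch below assumes the W3C Navigation Timing (Level 1) API; the `/beacon` endpoint is purely illustrative, not a real service.

```javascript
// Minimal sketch of client-side RUM instrumentation, assuming
// Navigation Timing Level 1. The /beacon endpoint is hypothetical.

// Pure helper: derive headline metrics from a PerformanceTiming-like
// object. All inputs are epoch milliseconds, as the API reports them.
function deriveMetrics(t) {
  return {
    ttfb: t.responseStart - t.navigationStart,        // time to first byte
    domInteractive: t.domInteractive - t.navigationStart,
    onload: t.loadEventEnd - t.navigationStart,       // the classic "page load" figure
  };
}

// In a browser, this would run after onload and beacon the result home.
if (typeof window !== 'undefined' && window.performance && window.performance.timing) {
  window.addEventListener('load', () => {
    // loadEventEnd is only populated once the load event has completed,
    // hence the deferral to the next tick.
    setTimeout(() => {
      const metrics = deriveMetrics(window.performance.timing);
      navigator.sendBeacon('/beacon', JSON.stringify(metrics));
    }, 0);
  });
}
```

Because every real visitor reports in, aggregating these beacons is what gives RUM its breadth across geography, device, and browser – at the cost of the run-to-run consistency that synthetic testing provides.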

Fast forward to today, though, and the situation demands a variety of approaches to cope with the extreme diversity of delivery conditions. The rise and rise of mobile (as just one example, major UK retailer JohnLewis.com quoted over 60% of digital orders derived from mobile devices during 2015/16 peak trading) brings many challenges to Front-End Optimization (FEO) practice. These include: diversity of device types and versions; browsers; and limiting connectivity conditions.

This situation is compounded by the development of the applications themselves. As far as the web is concerned, monitoring challenges are introduced by, amongst other things: Single Page Applications (either full or partial); “server push” content; and mobile “WebApps” driven by service worker interactions. Mobile applications, whether native or hybrid, present their own analysis challenges, which I will also address subsequently.
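The SPA problem in particular is that a route change fires no onload event, so nothing standard marks when the "page" is done; you have to bracket the transition yourself. The sketch below is one minimal way to do that, under stated assumptions: the `report` sink and route names are illustrative, and in a real SPA the calls would hang off your router's navigation hooks (browser `performance.mark`/`performance.measure` would capture the same data with DevTools visibility).

```javascript
// A minimal sketch of "soft navigation" timing for a Single Page
// Application. Since an SPA route change fires no onload event, we
// bracket the transition ourselves. The report() sink is illustrative.

function createRouteTimer(report) {
  let pending = null;
  return {
    // Call when a route change begins (e.g. on pushState / link click).
    // `now` defaults to the wall clock but is injectable for testing.
    start(route, now = Date.now()) {
      pending = { route, startedAt: now };
    },
    // Call once the new view has rendered and its data has arrived.
    end(now = Date.now()) {
      if (!pending) return null;          // no navigation in flight
      const sample = { route: pending.route, duration: now - pending.startedAt };
      pending = null;
      report(sample);                     // e.g. queue for a RUM beacon
      return sample;
    },
  };
}
```

The design choice here – an explicit start/end pair rather than listening for a single browser event – is exactly what distinguishes SPA monitoring from classic page-load measurement: the application, not the browser, decides when the user-perceived navigation is complete.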

This already rich mix is further complicated by business demands for more on-site content – multimedia and other rich content, exotic fonts, and more. Increasingly large amounts of client side logic, whether as part of SPAs or otherwise, demand focused attention to avoid unacceptable performance in edge case conditions.

As if this wasn't enough, the (final!) emergence of HTTP/2 introduces both advantages and anti-patterns relative to former best practice.

The primitive simplicity of page onload navigation timing endpoints has moved beyond irrelevance to become positively misleading, regardless of the type of tool used.

So, these changes require an increased subtlety of approach, combined with a range of tools to ensure that FEO recommendations are both relevant and effective.

I will provide some thoughts in subsequent blogs as to effective FEO approaches to derive maximum business benefit in each of these cases.

The bottom line is, however, that FEO is more important than ever in ensuring optimal business outcomes from digital channels.

Larry Haig is Senior Consultant at Intechnica.
