The Importance of Real and Synthetic End User Monitoring
September 22, 2015

Dennis Rietvink
Savision


Organizations have many ways of ensuring that their systems are functioning properly. One of the most important things to measure, when assessing the performance of a system, is the end user experience.

Can users access the system quickly? Do they experience errors while accessing the system? Can they easily interact with the system across all the available channels? For the IT department, the answers to these questions determine whether or not the system is functioning properly. For the organization, they reveal the most important thing – whether or not their customers are happy, and are likely to continue using their services.

There are two ways to monitor user transactions and interactions with your website:

Real User Monitoring

This method uses passive monitoring to record users' actions as they interact with your website. The data, collected in real time, is automatically assessed against established benchmarks to measure the quality of the delivered services.
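
As a concrete illustration, the browser-side sketch below collects standard Navigation Timing metrics and posts them to a collector. The /rum-collect endpoint and the reported fields are illustrative assumptions, not tied to any particular product.

```typescript
// Minimal browser-side RUM beacon sketch. The /rum-collect endpoint and the
// reported fields are illustrative assumptions.

interface RumSample {
  page: string;
  ttfbMs: number;      // time to first byte
  loadTimeMs: number;  // navigation start to load event end
  errorCount: number;  // script errors seen on the page
}

let errorCount = 0;
window.addEventListener("error", () => { errorCount += 1; });

window.addEventListener("load", () => {
  // Wait one tick so loadEventEnd is populated.
  setTimeout(() => {
    const [nav] = performance.getEntriesByType(
      "navigation"
    ) as PerformanceNavigationTiming[];
    if (!nav) return;

    const sample: RumSample = {
      page: location.pathname,
      ttfbMs: Math.round(nav.responseStart - nav.startTime),
      loadTimeMs: Math.round(nav.loadEventEnd - nav.startTime),
      errorCount,
    };

    // sendBeacon posts the data without blocking the user.
    navigator.sendBeacon("/rum-collect", JSON.stringify(sample));
  }, 0);
});
```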

Real user monitoring has clear advantages: you learn exactly how visitors experience your website's features and applications, and how the site performs for end users in different geographic locations. The biggest drawback is that you won't learn about an issue until at least one user has already run into it.

Synthetic User Monitoring

This method simulates the user experience on your website. It works by scripting typical user actions and then replaying those simulated interactions at regular intervals to verify that your website is responsive.
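
As a rough sketch of such a scripted check, the Node.js/TypeScript snippet below requests a page at a fixed interval, times the response, and flags slow or failed runs. The URL, interval, and threshold are illustrative assumptions.

```typescript
// Minimal synthetic check sketch: fetch a page on a schedule, measure the
// response time, and flag failures. Requires Node 18+ for the global fetch.
// The target URL, interval, and threshold below are illustrative assumptions.

const TARGET_URL = "https://example.com/login";
const INTERVAL_MS = 60_000;       // run once per minute
const SLOW_THRESHOLD_MS = 2_000;  // anything slower counts as degraded

async function runCheck(): Promise<void> {
  const started = Date.now();
  try {
    const response = await fetch(TARGET_URL);
    const elapsed = Date.now() - started;

    if (!response.ok) {
      console.error(`FAIL: HTTP ${response.status} after ${elapsed} ms`);
    } else if (elapsed > SLOW_THRESHOLD_MS) {
      console.warn(`SLOW: responded in ${elapsed} ms`);
    } else {
      console.log(`OK: responded in ${elapsed} ms`);
    }
  } catch (err) {
    console.error(`FAIL: request error after ${Date.now() - started} ms`, err);
  }
}

// Repeat the scripted transaction around the clock.
setInterval(runCheck, INTERVAL_MS);
runCheck();
```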

This method enables you to proactively catch problems before your end users encounter slow or unresponsive applications or other errors.

The obvious downside is that this method requires you to spend time scripting typical user actions. In addition, if your website changes frequently, you’ll need to periodically update your scripted scenarios.

In addition to websites, synthetic transactions can be used to monitor databases and TCP ports.
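
For example, a basic TCP port probe can be scripted with Node's built-in net module; the host, port, and timeout below are illustrative assumptions.

```typescript
// Minimal TCP port probe sketch using Node's built-in "net" module.
// The host, port, and timeout are illustrative assumptions.
import * as net from "net";

function checkTcpPort(host: string, port: number, timeoutMs = 5_000): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = net.connect({ host, port });
    const finish = (up: boolean) => { socket.destroy(); resolve(up); };

    socket.setTimeout(timeoutMs);
    socket.on("connect", () => finish(true));   // port accepted the connection
    socket.on("timeout", () => finish(false));  // no answer within the timeout
    socket.on("error", () => finish(false));    // refused or unreachable
  });
}

// Example: probe a database port (e.g. SQL Server's default 1433).
checkTcpPort("db.example.internal", 1433).then((up) =>
  console.log(up ? "port is reachable" : "port is down or unreachable")
);
```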

Organizations need a solution that can help recognize potential system problems by categorizing and visually presenting information about end user behavior and website performance in real time. In addition, such a solution should offer a way to script common user transactions and monitor the system's performance 24x7.

End user monitoring reflects end user health, but doesn’t tell you the root cause of a problem. Linking end user monitoring data with application and infrastructure monitoring data enables organizations to determine the impact of a problem, rank its priority and quickly navigate to the root cause.

Dennis Rietvink is Co-Founder and VP of Product Management at Savision
