Why isn’t it enough to simply measure network performance? In this post, I will try to explain how UI response time differs from network performance.
Let's say you own a mobile application for hotel reservations. This is a highly competitive market, and you want to deliver the best user experience by making your application as fast as possible.
Processing Time
When a user enters their selection for a hotel search, the application contacts the server and asks for the relevant search results. To save bandwidth, the server sends the results in a compact, compressed format.
When the application receives the data, it has to process it.
First, it might need to decompress the data. Second, the hotel list has to be sorted according to the user's preferences. If hotel images are also displayed, the images must be decoded and rendered. This processing takes time, and sometimes it takes a lot of time.
The processing time also varies depending on the device. When the display is larger (as it is on tablets), your app needs to process more items because more items are visible. It also takes longer on devices with weaker CPUs. Each user experiences the performance differently, even if the network time is the same.
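The phases described above can be sketched as a simple timing breakdown. This is an illustrative sketch, not a real monitoring SDK: the payload format, the sort key, and the "render" stand-in are all assumptions made for the example.

```python
import gzip
import json
import time

def handle_search_response(payload: bytes) -> dict:
    """Process a compressed search response, timing each phase.

    The response time the user feels includes ALL of these phases,
    not just the network transfer that produced `payload`.
    """
    phases = {}

    t0 = time.perf_counter()
    hotels = json.loads(gzip.decompress(payload))   # decompress + parse
    phases["decompress"] = time.perf_counter() - t0

    t0 = time.perf_counter()
    hotels.sort(key=lambda h: h["price"])           # sort per user preference
    phases["sort"] = time.perf_counter() - t0

    t0 = time.perf_counter()
    # Stand-in for layout and rendering work on a real device
    _ = [f"{h['name']}: ${h['price']}" for h in hotels]
    phases["render"] = time.perf_counter() - t0

    phases["total_processing"] = sum(phases.values())
    return phases

# Example: a tiny compressed result set
payload = gzip.compress(json.dumps(
    [{"name": "Hotel B", "price": 120}, {"name": "Hotel A", "price": 90}]
).encode())
print(handle_search_response(payload))
```

On a phone, the same breakdown would show larger "sort" and "render" numbers on big-screen or low-end devices, even when the network time is identical.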
Simply put, UI response time is the time elapsed from when the user taps the hotel search button until the screen is loaded with the search results. No more, no less. This is the time you want to see in your monitoring tool.
Background Operations
Suppose your hotel search application has a great feature – once every ten minutes, it contacts a few external servers for price alerts. This is a background process – it will happen regardless of what the user is doing with the application.
Most importantly, these network operations don’t affect the user experience at all! When you prioritize your work according to monitoring results, it is very important to distinguish these operations from the ones that actually impact the user experience, and to give them lower priority.
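One way to keep the two kinds of traffic apart in your own telemetry is to tag every network operation with whether a visible screen actually waited on it. A minimal sketch, assuming a hypothetical span record (this is not any particular monitoring tool's API):

```python
from dataclasses import dataclass

@dataclass
class NetworkSpan:
    name: str
    duration_ms: float
    blocks_ui: bool   # did a user-visible screen wait on this request?

def user_facing_time(spans: list[NetworkSpan]) -> float:
    """Sum only the spans that actually delayed something on screen."""
    return sum(s.duration_ms for s in spans if s.blocks_ui)

spans = [
    NetworkSpan("hotel_search", 420.0, blocks_ui=True),
    NetworkSpan("price_alert_poll", 900.0, blocks_ui=False),  # ten-minute background job
]
print(user_facing_time(spans))  # the slow background poll is excluded
```

With a flag like this, a dashboard can rank the 420 ms search above the 900 ms background poll, because only the former is felt by the user.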
Out-Of-Screen Content
Here’s another trick your developers use to improve the user experience: when a search request is sent, only part of the data is retrieved from the server. Usually it is just enough data to fill the screen, plus a little more for scrolling.
But what about the rest of the data? In the old days of Web 1.0, a long list of results was divided into pages, with “next” and “previous” buttons to navigate between them. More modern applications take a different approach: they continue to fetch the rest of the search results in the background, allowing the user to interact with what they already see on the screen. This is probably one of the biggest benefits of Web 2.0.
Measuring the UI response times of such actions is very tricky. The reported time should reflect the user’s perspective, meaning the report has to ignore network operations that don’t result directly in screen rendering. If the user taps one of the hotels to see its details while the background fetch is still running, those out-of-screen operations have no effect on the UI response time.
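In other words, the clock should stop at the first render that fills the screen, and later prefetch completions must not extend the reported time. A hypothetical sketch of that bookkeeping (the event names and timestamps are invented for illustration):

```python
def ui_response_time(tap_ts: float, events: list[tuple[float, str]]):
    """Time from the user's tap to the first render that fills the screen.

    `events` is a list of (timestamp, kind) tuples. Events such as
    "prefetch_done" that arrive after "first_render" are ignored,
    because the user never waited on them.
    """
    render_times = [ts for ts, kind in events if kind == "first_render"]
    if not render_times:
        return None  # the screen never rendered; report an error, not a time
    return min(render_times) - tap_ts

events = [
    (1.00, "request_sent"),
    (1.42, "first_render"),    # screen is full: this is what the user felt
    (3.90, "prefetch_done"),   # rest of the results arrive; ignored
]
print(ui_response_time(0.55, events))
```

Here the reported time is about 0.87 seconds, even though network activity continued until 3.90, which is exactly the user-perceived number a monitoring tool should show.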
Look for a tool that monitors your mobile application’s real user experience - and more specifically, the user-perceived UI response time.
Amichai Nitsan is a Senior Architect at HP Software.