Web Performance Still Below Par
March 26, 2015

Pete Goldin
APMdigest


A new report by Radware shows that 9% of the top 100 retail web pages took ten or more seconds to become interactive, down considerably from 22% of sites the previous quarter.

Studies have shown that online shoppers will abandon a web page after waiting just three seconds for it to load. Although this improvement is promising, 10-second load times are still far from the three-second target that most users expect.

Today’s web users are likely to browse dynamic pages on mobile devices and expect a high degree of responsiveness. Unfortunately, congested networks and unoptimized web pages create frustrating lag, and many sites face an overall increase – not a decrease – in sub-optimal user experiences.

Radware’s report, entitled State of the Union: Ecommerce Page Speed & Web Performance, Spring 2015, also found that only 14% of the top 100 retail sites rendered feature content within users’ three-second wait-time threshold.

“There is no doubt that web pages have been increasing in complexity as well as in payload size. Although this trend is focused on enhancing the user experience, it can unfortunately correlate with slower load times if a page is not properly optimized,” says Kent Alstad, VP of Acceleration for Radware. “Our latest report found that the median page size is 1354 KB. As images comprise over 50% of the average page’s total weight, almost half of the top 100 sites have failed to implement core optimization techniques such as image compression. This alone can help deliver pages to the viewer more quickly.”
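Alstad’s numbers imply a simple back-of-the-envelope calculation. A minimal sketch, assuming the report’s 1354 KB median page, an image share at the low end of the 50-60% range, and a purely hypothetical 30% byte savings from image compression:

```python
# Estimate page-weight savings from image compression, using the report's
# figures plus an assumed compression ratio (hypothetical illustration).

MEDIAN_PAGE_KB = 1354       # median page weight from the report
IMAGE_SHARE = 0.50          # images are 50-60% of total weight; low end used here
COMPRESSION_SAVINGS = 0.30  # assumed: recompression trims ~30% of image bytes

def compressed_page_kb(page_kb: float,
                       image_share: float = IMAGE_SHARE,
                       savings: float = COMPRESSION_SAVINGS) -> float:
    """Return estimated page weight after compressing only the images."""
    image_kb = page_kb * image_share
    return page_kb - image_kb * savings

print(round(compressed_page_kb(MEDIAN_PAGE_KB)))  # prints 1151
```

The point is not the exact figure but the leverage: because images dominate page weight, even modest compression removes roughly 200 KB from the median page.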

Radware also lists the fastest ecommerce sites – those quickest to display actionable content. From a user experience perspective, Time to Interact (TTI) is a more meaningful performance metric than load time, as it indicates when a page begins to be usable. Among those listed is a well-known Internet-based retailer whose page took 16.3 seconds to fully load but boasted a TTI of just 1.4 seconds.

“When we examined what made sites load fast, we found that the median page was 932 KB in size and actually deferred resources that were not part of the page’s critical rendering path. These non-essential resources were mainly ‘invisible,’ such as third-party scripts that aren’t needed until a page completes its rendering. Deferral is a fundamental performance technique and should be employed to optimize the critical rendering path of websites,” added Alstad.
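The deferral Alstad describes happens in the browser (for example via deferred script loading), but the underlying partitioning logic can be sketched abstractly. A minimal illustration with hypothetical resource names:

```python
# Partition a page's resources into critical-path and deferred groups,
# mimicking the optimization Radware describes: load render-blocking assets
# first, postpone everything else until after first render. All resource
# names below are hypothetical examples.

resources = [
    {"url": "main.css",              "critical": True},
    {"url": "hero.jpg",              "critical": True},
    {"url": "analytics.js",          "critical": False},  # third-party script
    {"url": "social-widgets.js",     "critical": False},
    {"url": "below-fold-gallery.js", "critical": False},
]

def split_by_critical_path(resources):
    """Return (load_now, load_later): critical resources vs deferred ones."""
    load_now = [r["url"] for r in resources if r["critical"]]
    load_later = [r["url"] for r in resources if not r["critical"]]
    return load_now, load_later

now, later = split_by_critical_path(resources)
print(now)    # ['main.css', 'hero.jpg']
print(later)  # ['analytics.js', 'social-widgets.js', 'below-fold-gallery.js']
```

Only the first group blocks rendering; the second loads once the page is already usable, which is how a 932 KB page can reach interactivity well before it finishes loading.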

Other findings in the Spring 2015 report include:

■ Although images comprise 50-60% of the average page’s total weight, 43% of the top 100 sites failed to implement image compression, a core optimization technique.

■ Page complexity, a greater performance challenge than page size, has grown by 26% in the past two years. The more complex a page, the greater the risk of page failure.

■ Among the top 100 pages, the median Time to Interact is 5.2 seconds. Although down from 6.5 seconds the previous quarter, this is considerably slower than users’ wait-time threshold of 3 seconds.

Also outlined in the report is the “performance comeback” of two large eRetailers that showed significant changes in TTI compared with Radware’s Fall 2014 report. Time to Interact for the two online retailers was 2.4 and 2.9 seconds, down from 5.2 and 7.2 seconds respectively, demonstrating the value of optimization techniques in decreasing page load times.

Methodology: The tests in this study were conducted using an online tool called WebPagetest – an open-source project primarily developed and supported by Google – which simulates page load times from a real user’s perspective using real browsers.
Radware tested the home page of every site in the Alexa Retail 500 nine consecutive times, automatically clearing the cache between tests. The median test result for each home page was recorded and used in the calculations. The tests were conducted on February 16, 2015, via the WebPagetest.org server in Dulles, VA, using Chrome 40 on a DSL connection.

In a very few cases, WebPagetest rendered a blank page or an error in which none of the page rendered; these instances were represented as null in the test appendix. Also in a very few cases, WebPagetest.org rendered a page in more than 60 seconds (its default timeout); in these cases, 60 seconds was recorded for the result instead of null.

To identify the Time to Interact (TTI) for each page, Radware generated a timed filmstrip view of the median page load for each site in the Alexa Retail 100. Time to Interact is defined as the moment that the featured page content and primary call-to-action button or menu is rendered in the frame.
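The aggregation rules in the methodology can be sketched in a few lines. A minimal illustration with made-up run times, assuming failed runs (null) are excluded from the median and timed-out runs are clamped to the 60-second ceiling as the report describes:

```python
from statistics import median

TIMEOUT_S = 60  # WebPagetest.org default timeout used in the report

def median_load_time(runs):
    """Median over a page's test runs: drop failed runs (None) and clamp
    runs that exceeded the timeout to the 60-second ceiling."""
    valid = [min(t, TIMEOUT_S) for t in runs if t is not None]
    return median(valid) if valid else None

# Nine hypothetical runs for one home page: two failures, one timeout.
runs = [5.1, 4.8, None, 5.4, 61.2, 5.0, None, 4.9, 5.2]
print(median_load_time(runs))  # prints 5.1
```

Using the median rather than the mean keeps a single timed-out run (clamped to 60 seconds) from dragging the reported figure far away from the page's typical behavior.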

Pete Goldin is Editor and Publisher of APMdigest
