
Web Performance Still Below Par

Pete Goldin
APMdigest

A new report by Radware shows that 9% of the top 100 retail web pages took ten seconds or longer to become interactive, down considerably from 22% of sites the previous quarter.

Studies have shown that online shoppers will abandon a web page after waiting just three seconds for it to load. Although the improvement is promising, ten-second load times are still far from the three-second threshold most users expect.

Today’s web users are likely to browse dynamic pages on mobile devices and expect a high degree of responsiveness. Unfortunately, congested networks and unoptimized web pages create frustrating lag, and many sites face an overall increase, not a decrease, in sub-optimal user experiences.

Radware’s report, State of the Union: Ecommerce Page Speed & Web Performance, Spring 2015, also found that only 14% of the top 100 retail sites rendered featured content within the acceptable threshold.

“There is no doubt that web pages have been increasing in complexity as well as in payload size. Although this trend is focused on enhancing the user experience, it can unfortunately correlate with slower load times if a page is not properly optimized,” says Kent Alstad, VP of Acceleration for Radware. “Our latest report found that the median page is 1,354 KB in size. Although images comprise over 50% of the average page’s total weight, almost half of the top 100 sites have failed to implement core optimization techniques such as image compression. This alone can help deliver pages to the viewer more quickly.”
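A back-of-the-envelope calculation shows why image compression matters at these page weights. The median page size and the image share below come from the report; the compression ratio is a hypothetical assumption for illustration only.

```python
# Rough estimate of what image compression could trim from the median page.
# MEDIAN_PAGE_KB and IMAGE_SHARE are figures from the report; the 30%
# savings ratio is a hypothetical assumption, not a reported result.

MEDIAN_PAGE_KB = 1354               # median page weight, per the report
IMAGE_SHARE = 0.55                  # images are 50-60% of total weight, per the report
ASSUMED_COMPRESSION_SAVINGS = 0.30  # hypothetical: recompression saves ~30% of image bytes

image_kb = MEDIAN_PAGE_KB * IMAGE_SHARE
saved_kb = image_kb * ASSUMED_COMPRESSION_SAVINGS
new_page_kb = MEDIAN_PAGE_KB - saved_kb

print(f"Image payload: {image_kb:.0f} KB")
print(f"Estimated savings: {saved_kb:.0f} KB")
print(f"Optimized page weight: {new_page_kb:.0f} KB")
```

Even under these modest assumptions, compressing images alone would remove roughly a fifth of the total page weight, which is why the report calls it a core optimization technique.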

Radware also lists the fastest ecommerce sites, those quickest to display actionable content. From a user experience perspective, time to interact (TTI) is a more meaningful performance metric than load time, as it indicates when a page begins to be usable. Among those listed is a well-known Internet-based retailer whose page took 16.3 seconds to fully load but boasted a TTI of 1.4 seconds.

“When we looked at what made sites load fast, we found that the median page was 932 KB in size and actually deferred resources that were not part of the page’s critical rendering path. These non-essential resources were mainly ‘invisible,’ such as third-party scripts that aren’t needed until a page completes its rendering. Deferral is a fundamental performance technique and should be employed to optimize the critical rendering path of websites,” added Alstad.
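The effect Alstad describes can be sketched with a toy serial-loading model: only resources on the critical rendering path delay time to interact, while deferred resources add only to total load time. All resource names and timings below are hypothetical.

```python
# Toy model of critical-path deferral. Resources needed before the page is
# usable drive time to interact (TTI); deferred resources (e.g. third-party
# scripts) load afterward and only affect total load time.
# All names and timings here are hypothetical illustrations.

critical = {"html": 0.3, "app.css": 0.4, "hero.jpg": 0.7}   # blocks first interaction
deferred = {"analytics.js": 1.2, "social-widgets.js": 2.0}  # loaded after render

# Simplified serial model: TTI depends only on the critical resources.
tti = sum(critical.values())
total_load = tti + sum(deferred.values())

print(f"Time to interact: {tti:.1f}s, total load: {total_load:.1f}s")
```

Under this model the page becomes usable long before loading finishes, which mirrors the retailer above whose TTI was a small fraction of its full load time.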

Other findings in the Spring 2015 report include:

■ Although images comprise 50-60% of the average page’s total weight, 43% of the top 100 sites failed to implement image compression, a core optimization technique.

■ Page complexity, which is a greater performance challenge than page size, has grown by 26% in the past two years. The more complex a page, the greater the risk of page failure.

■ Among the top 100 pages, the median time to interact is 5.2 seconds. Although down from 6.5 seconds the previous quarter, this is considerably slower than users’ wait-time threshold of 3 seconds.

Also outlined in the report is the “performance comeback” of two large eRetailers that show significant changes in their TTI compared to Radware’s Fall 2014 report. Times to interact for the two online retailers were 2.4 and 2.9 seconds, down from 5.2 and 7.2 seconds respectively, demonstrating the value of implementing optimization techniques to decrease page load times.

Methodology: The tests in this study were conducted using an online tool called WebPagetest – an open-source project primarily developed and supported by Google – which simulates page load times from a real user’s perspective using real browsers.
Radware tested the home page of every site in the Alexa Retail 500 nine consecutive times. The system automatically clears the cache between tests. The median test result for each home page was recorded and used in the calculations. The tests were conducted on February 16, 2015, via the WebPagetest.org server in Dulles, VA, using Chrome 40 on a DSL connection.

In very few cases, WebPagetest rendered a blank page or an error in which none of the page rendered. These instances were represented as null in the test appendix. Also in very few cases, WebPagetest.org rendered a page in more than 60 seconds (the default timeout for WebPagetest.org). In these cases, 60 seconds was used for the result instead of null.

To identify the Time to Interact (TTI) for each page, Radware generated a timed filmstrip view of the median page load for each site in the Alexa Retail 100. Time to Interact is defined as the moment that the featured page content and primary call-to-action button or menu is rendered in the frame.

Pete Goldin is Editor and Publisher of APMdigest

