Want a Slow Website? Make it Bigger and More Complex

Kent Alstad

Is your website slow to load?

Page size and complexity are two of the main factors you need to consider.

Looking back at the trends over the last five years, the average page has ballooned from just over 700KB to 2,135KB – an increase of more than 200%.

The number of requests has grown as well, from around 70 to about 100.

Consider the data from WebPagetest.org (numbers overlap due to the number of samples):


What’s going on?

Sites Are Opting For Complexity Over Speed

It’s clear from the data that sites are built with a preference for rich, complex pages, unfortunately relegating load times to a lower tier of importance. While broadband penetration continues to climb, the payload delivered to browsers is increasing as well.

This is a similar dynamic to what’s going on in smartphones with their battery life: the amount of a phone’s “on” time is staying static, even though processors have become smaller and more efficient. Why? Because the processors have to work harder on larger, more complex applications, and there’s pressure to deliver thin, svelte devices at the expense of the physical size of the batteries. Any gains in efficiency are offset by the work the processors have to do.

So yes, people generally have more bandwidth available to them, but all the data being thrown at their browsers has to be sorted and rendered, hence the slowdown in page speed.

Images are a perfect example of this trend. The rise in ecommerce has brought with it all the visual appeal of a high-end catalogue combined with a commercial. The result: larger images – and more of them.


While the number of image requests hasn’t risen dramatically, the size of those images has: the total image payload of a typical page has grown from 418KB in 2010 to 1,348KB today, an increase of 222 percent.

You could go on and on about the impact of custom fonts, CSS transfer size and request counts, and the same metrics for JavaScript, but the trends are identical. Aside from the decline in Flash usage as sites switch to HTML5, the story always boils down to “bigger” and “more” – and for users, that means more waiting.

What Can You Do About It?

Thankfully, there are steps you can take to get things moving. For example:

Consolidate JavaScript and CSS: Consolidating JavaScript code and CSS styles into common files that can be shared across multiple pages should be a common practice. This technique simplifies code maintenance and improves the efficiency of client-side caching. In JavaScript files, be sure that the same script isn’t downloaded multiple times for one page. Redundant script downloads are especially likely when large teams or multiple teams collaborate on page development.
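As a minimal sketch (the file names here are hypothetical), consolidation often amounts to replacing a pile of per-feature tags with one shared bundle per type:

```html
<!-- Before: every page pulls several small, separately requested files -->
<script src="/js/menu.js"></script>
<script src="/js/carousel.js"></script>
<script src="/js/tracking.js"></script>
<link rel="stylesheet" href="/css/header.css">
<link rel="stylesheet" href="/css/product-grid.css">

<!-- After: one JavaScript bundle and one stylesheet, shared across pages,
     so repeat page views can serve both straight from the browser cache -->
<script src="/js/site.bundle.js" defer></script>
<link rel="stylesheet" href="/css/site.bundle.css">
```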

Sprite Images: Spriting is a CSS technique for consolidating images. Sprites are simply multiple images combined into a rectilinear grid in one large image. The page fetches the large image all at once as a single CSS background image and then uses CSS background positioning to display the individual component images as needed on the page. This reduces multiple requests to only one, significantly improving performance.
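Here is a minimal sketch, assuming a hypothetical 96×32 sprite image holding three 32×32 icons laid out left to right:

```html
<style>
  .icon {
    width: 32px;
    height: 32px;
    display: inline-block;
    background: url("/img/icons.png") no-repeat; /* one request covers all icons */
  }
  /* Shift the shared background so each element shows only its own cell */
  .icon-search { background-position: 0 0; }
  .icon-cart   { background-position: -32px 0; }
  .icon-user   { background-position: -64px 0; }
</style>

<span class="icon icon-search"></span>
<span class="icon icon-cart"></span>
<span class="icon icon-user"></span>
```

Three icons render on the page, but the browser makes a single image request.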

Compress Images: Image compression is a performance technique that minimizes the size (in bytes) of a graphics file without degrading the quality of the image to an unacceptable level. Reducing an image’s file size has two benefits: reducing the amount of time required for images to be sent over the internet or downloaded, and increasing the number of images that can be stored in the browser cache, thereby improving page render time on repeat visits to the same page.
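One way to put this into practice (file names are hypothetical, and WebP is just one of several efficient encodings) is to serve a more heavily compressed format to browsers that support it, with a standard JPEG fallback for everyone else:

```html
<picture>
  <!-- Browsers that understand WebP get the smaller file... -->
  <source srcset="/img/product-hero.webp" type="image/webp">
  <!-- ...all other browsers fall back to the JPEG -->
  <img src="/img/product-hero.jpg" alt="Product photo" width="800" height="600">
</picture>
```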

Defer Rendering “Below the Fold” Content: Let the user see the page sooner by delaying the loading and rendering of any content that sits below the initially visible area, sometimes called “below the fold.” To eliminate the need to reflow content after the remainder of the page loads, initially replace images with placeholder <img> tags that specify the correct height and width.
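A minimal sketch of that pattern (the image path is hypothetical): a tiny transparent GIF holds the image’s place – with the real width and height, so nothing reflows – and a few lines of JavaScript swap in the real source as the image nears the viewport:

```html
<img class="lazy"
     src="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
     data-src="/img/below-fold-photo.jpg"
     width="640" height="360" alt="Below-the-fold photo">

<script>
  // Swap the real source in shortly before each image scrolls into view
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        entry.target.src = entry.target.dataset.src;
        observer.unobserve(entry.target); // each image only needs one swap
      }
    }
  }, { rootMargin: "200px" }); // start loading 200px before it becomes visible
  document.querySelectorAll("img.lazy").forEach((img) => observer.observe(img));
</script>
```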

Preload Page Resources in the Browser: Auto-preloading is a powerful performance technique in which all user paths through a website are observed and recorded. Based on this massive amount of aggregated data, the auto-preloading engine can predict where a user is likely to go based on the page they are currently on and the previous pages in their path. The engine loads the resources for those “next” pages in the user’s browser cache, enabling the page to render up to 70 percent faster. Note that this is a data-intensive, highly dynamic technique that can only be performed by an automated solution.
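The browser primitive underneath this idea is available to anyone: a resource hint that asks the browser to fetch likely “next page” assets during idle time. In this sketch the URLs are hypothetical placeholders – an automated engine would derive them from observed traffic rather than hard-coding them:

```html
<!-- On a product page, hint at the resources the checkout page will need -->
<link rel="prefetch" href="/checkout">
<link rel="prefetch" href="/js/checkout.bundle.js">
<link rel="prefetch" href="/css/checkout.bundle.css">
```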

Implement an Automated Web Performance Optimization Solution: While many of the performance techniques outlined in this section can be performed manually by developers, hand-coding pages for performance is specialized, time-consuming work. It is a never-ending task, particularly on highly dynamic sites that contain hundreds of objects per page, as both browser requirements and page requirements continue to develop. Automated front-end performance optimization solutions apply a range of performance techniques that deliver faster pages consistently and reliably across the entire site.

The Bottom Line

While pages continue to grow in size and complexity, the toolset available to combat slow load times has grown as well. HTTP/2 promises protocol-level optimization, and having a powerful content optimization solution in place will help you take care of the rest.

Still – if you can – keep it simple. That’s always a great rule to follow.

Kent Alstad is VP of Acceleration at Radware.
