Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 3

Jean Tunis

This blog is the third in a 5-part series on APMdigest where I discuss web application performance and how new protocols like SPDY, HTTP/2, and QUIC will hopefully improve it so we can have happy website users.

Start with Web Performance 101: The Bandwidth Myth

Start with Web Performance 101: 4 Recommendations to Improve Web Performance

Start with Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 1

Start with Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 2

Common HTTP/1.1 Workarounds

The HTTP/1.1 limitations outlined in my last blog were well known, and it was clear that an update was needed to address them. But that update did not happen until recently. In the meantime, the need for better performance drove a number of workarounds to get around those limitations.

Open Multiple Connections

As web technology developed, it became clear that opening more than one connection to the server at the same time could improve web performance. For years, Internet Explorer allowed only 2 concurrent connections per host. As Firefox and Chrome entered the scene, this number went up to 6.

At this point, most modern browsers allow 6 concurrent TCP connections per host. Because these connections ramp up in parallel, they help reduce the impact of TCP slow start on overall page load time.
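To make this workaround concrete, here is a minimal Python sketch of a client fetching several resources in parallel, each over its own TCP connection, much as a browser does. The hostname and paths are placeholders, and the 6-worker cap mirrors the per-host connection limit mentioned above.

```python
import concurrent.futures
import http.client

# Placeholder host and resource paths -- substitute a real site to try it.
HOST = "www.example.com"
PATHS = ["/", "/style.css", "/app.js", "/logo.png", "/font.woff2", "/data.json"]

def fetch(path):
    # One TCP (and TLS) connection per resource, like early browsers.
    conn = http.client.HTTPSConnection(HOST)
    try:
        conn.request("GET", path)
        resp = conn.getresponse()
        return path, resp.status, len(resp.read())
    finally:
        conn.close()

# Cap parallelism at 6, mirroring the per-host limit of modern browsers.
with concurrent.futures.ThreadPoolExecutor(max_workers=6) as pool:
    for path, status, size in pool.map(fetch, PATHS):
        print(f"{path}: {status}, {size} bytes")
```

Each connection still pays its own TCP handshake and slow start, but because they run in parallel, the page's resources no longer queue behind a single connection's ramp-up.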

Domain Sharding

With the ability to open multiple connections to a server, developers soon realized they could improve performance further by spreading website resources across several domains, a technique known as domain sharding. The browser would then allow up to 6 concurrent connections for each of those domains.

The content for a website, domain.com, for example, could be spread across three domains - one.domain.com, two.domain.com and three.domain.com.

With this configuration, a browser can now have up to 18 concurrent connections to make HTTP requests!
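As an illustration of how sharding might be wired up, a build step or templating layer could assign each asset to a shard deterministically, so the same asset always maps to the same hostname and stays cacheable. A minimal sketch, using the shard names from the example above:

```python
import zlib

# Shard hostnames from the example above.
SHARDS = ["one.domain.com", "two.domain.com", "three.domain.com"]

def shard_url(path: str) -> str:
    # Hash the path so a given asset always lands on the same shard;
    # a random choice would defeat browser caching across page views.
    index = zlib.crc32(path.encode()) % len(SHARDS)
    return f"https://{SHARDS[index]}{path}"

for asset in ["/css/site.css", "/js/app.js", "/img/hero.png"]:
    print(shard_url(asset))
```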

Resource Inlining

What's better than having the browser open up more connections across many domains? Using those same connections to send more data.

In my previous blog, Web Performance 101: 4 Recommendations to Improve Web Performance, I mentioned that you don't want too many connections. With 18 connections from one browser to 3 domains, the client machine may run into resource issues; repeatedly opening and closing connections consumes CPU, for example.

Resource inlining, the ability to embed CSS and JavaScript content directly into the HTML, allowed the browser to download that styling and scripting along with the HTML without opening a new connection to do it. This not only reduced the number of connections, but also removed the extra round trips across the network needed to fetch that data.
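A minimal sketch of the inlining idea, using hypothetical file names (production build tools do this far more robustly):

```python
from pathlib import Path

def inline_css(html: str, link_tag: str, css_path: str) -> str:
    # Swap the external stylesheet reference for the file's contents,
    # so the browser needs no extra request (or connection) to get it.
    css = Path(css_path).read_text()
    return html.replace(link_tag, f"<style>{css}</style>")

# Hypothetical page and stylesheet for demonstration.
Path("site.css").write_text("body { margin: 0; }")
html = '<html><head><link rel="stylesheet" href="site.css"></head><body></body></html>'
print(inline_css(html, '<link rel="stylesheet" href="site.css">', "site.css"))
```

The trade-off is that inlined resources can no longer be cached separately from the HTML, which is why inlining is usually reserved for small, critical CSS and scripts.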

Enter SPDY

To help solve the HTTP/1.1 limitations, in November 2009, Google released its first draft defining a new protocol called SPDY, which is pronounced "SPeeDY".

Get it? Speedy? Haha!

The primary goal that Google stated for this protocol was to reduce page load times by at least 50%.

The plan was to achieve this goal in the following ways:

■ Multiplexing requests onto one TCP connection

■ Prioritizing requests

■ Compressing headers

■ Enabling server pushes

■ Ensuring better security with TLS

SPDY is not an outright replacement of HTTP. Instead, it runs as an application-layer protocol that sits between TCP and HTTP.
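To make the first of those goals, multiplexing, concrete, here is a conceptual sketch (not SPDY's actual wire format) of how responses can be chopped into frames tagged with a stream ID and interleaved on a single connection:

```python
from itertools import zip_longest

# Three in-flight responses, keyed by an illustrative stream ID.
responses = {
    1: b"<html>...</html>",     # the page
    3: b"body { margin: 0; }",  # a stylesheet
    5: b"console.log('hi');",   # a script
}

def frames(stream_id, payload, size=6):
    # Cut a response into small frames, each carrying its stream ID.
    for i in range(0, len(payload), size):
        yield (stream_id, payload[i:i + size])

# Interleave frames from every stream onto one "connection": no response
# has to wait for another to finish, unlike HTTP/1.1 on one connection.
for batch in zip_longest(*(frames(sid, data) for sid, data in responses.items())):
    for frame in batch:
        if frame:
            print(f"stream {frame[0]}: {frame[1]!r}")
```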

Google also wanted every request to be secure, so the SPDY implementation included TLS for data encryption by default. And Google accomplished its stated goal: across the Google properties tested, it saw page load times drop by up to 64% compared to HTTP/1.1.

Now to HTTP/2

The performance gains experienced with SPDY were so great that Google submitted the protocol to the IETF for consideration as the basis of the HTTP/1.1 upgrade. This was accepted, and when the initial draft of the HTTP/2 standard was published in 2012, it was essentially an exact copy of SPDY.

HTTP/2 is meant to be a more efficient version of the HTTP/1.1 protocol. But rather than a simple dot upgrade like HTTP/1.2, the name HTTP/2 was chosen because the upgraded protocol's binary framing layer (more on this later) breaks compatibility with HTTP/1.x's text-based format.

So SPDY was used as the starting point for building HTTP/2.

The following capabilities were set as goals:

■ Multiplexing of requests via request/response streams

■ Flow control and prioritization of multiplexed streams

■ A new interaction mode via server push

■ Data compression of HTTP headers

After many months, the updated HTTP protocol was published as a Proposed Standard (RFC 7540) in May 2015.
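To see why header compression made the goal list: HTTP/1.1 resends the same headers (cookies, user agent, and so on) with every request, while HTTP/2's HPACK (RFC 7541) indexes headers it has already seen. The toy sketch below illustrates only that indexing idea, not HPACK's actual wire format:

```python
# Dynamic table of (name, value) pairs the "connection" has already seen.
table = []

def encode(headers):
    out = []
    for pair in headers:
        if pair in table:
            # A repeat costs only a small table index on the wire.
            out.append(("index", table.index(pair)))
        else:
            # First occurrence is sent in full and remembered.
            table.append(pair)
            out.append(("literal", pair))
    return out

req = [("user-agent", "ExampleBrowser/1.0"), ("cookie", "session=abc123")]
print(encode(req))  # first request: all literals
print(encode(req))  # repeat request: just tiny index references
```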

Not Your Average SPDY

While HTTP/2 was based on Google's SPDY protocol at the outset, a couple of its capabilities changed by the time it became a standard.

SPDY was only implemented with the TLS protocol enabled for security. The HTTP/2 protocol can be implemented with or without TLS.

This means that both port 80 and port 443 can be used as default ports for the protocol. HTTP/2 defines a protocol identifier, exchanged during connection setup, so that you can verify which variant is being used:

■ h2 identifies encrypted HTTP/2, run over TLS.

■ h2c identifies unencrypted HTTP/2, run over cleartext TCP.

But h2 is the de facto standard implementation, which helps ensure that as many websites as possible are using HTTP with TLS encryption.
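If you want to see which identifier a server negotiates, the h2 token is carried in the TLS handshake's ALPN extension. A minimal Python sketch, using a placeholder hostname:

```python
import socket
import ssl

HOST = "example.com"  # placeholder -- substitute any HTTPS site

ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])  # offer HTTP/2 first

with socket.create_connection((HOST, 443)) as tcp:
    with ctx.wrap_socket(tcp, server_hostname=HOST) as tls:
        # Prints "h2" if the server speaks HTTP/2 over TLS.
        print("negotiated:", tls.selected_alpn_protocol())
```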

Read Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 4, covering HTTP/2 in more detail.
