Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 2

Jean Tunis

This blog is the second in a 5-part series on APMdigest where I discuss web application performance and how new protocols like SPDY, HTTP/2, and QUIC will hopefully improve it so we can have happy website users.

Start with Web Performance 101: The Bandwidth Myth

Start with Web Performance 101: 4 Recommendations to Improve Web Performance

Start with Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 1

The Hypertext Transfer Protocol (HTTP) is the application-layer protocol in the TCP/IP stack used to carry web traffic. The current version ratified by the Internet Engineering Task Force (IETF) is HTTP/2, finalized in May 2015 (more on that later).

But the most widely used version is the previous version, HTTP/1.1.

According to the HTTP/2 Dashboard, only about 4% of the top 2 million Alexa sites truly support HTTP/2. So we still have a ways to go.

Ratified almost 20 years ago in 1997, HTTP/1.1 was meant to address two big limitations in the previous HTTP/1.0.

HTTP/1.0 Limitations

One limitation was the lack of persistent connections. With HTTP/1.0, every HTTP request required opening a new TCP connection. As mentioned in my previous blog, this consumes resources and introduces additional latency.
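To make the difference concrete, here is a minimal sketch using Python's standard http.client module. The hostname and paths are placeholders, and the first loop only approximates HTTP/1.0 behavior by forcing the connection closed after every response; it is an illustration, not a benchmark.

```python
import http.client

paths = ["/", "/style.css", "/app.js"]   # hypothetical page resources

# HTTP/1.0-style behavior: a brand-new TCP connection for every request.
for path in paths:
    conn = http.client.HTTPConnection("example.com", 80)      # new TCP handshake each time
    conn.request("GET", path, headers={"Connection": "close"})
    conn.getresponse().read()                                  # read the body, then the socket is gone
    conn.close()

# HTTP/1.1 persistent connection: one TCP handshake, reused for every request.
conn = http.client.HTTPConnection("example.com", 80)
for path in paths:
    conn.request("GET", path)        # keep-alive is the HTTP/1.1 default
    conn.getresponse().read()        # drain each response before reusing the socket
conn.close()
```

Every pass through the first loop pays for a fresh TCP handshake (and a TLS handshake on HTTPS), which is exactly the overhead persistent connections were added to avoid.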

Another limitation was the inability to send multiple requests at once without waiting for responses from the other side. The ability to pipeline requests in HTTP/1.1 was meant to address this.

But as the web continued to advance, it became clear that HTTP/1.1 still had many limitations that needed to be worked on.

HTTP/1.1 Limitations

HTTP/1.1 has a number of limitations, but I want to talk about three of them that have been issues over the years.

Many small requests make HTTP/1.1 latency-sensitive

A web page is made up of images, HTML files, CSS files, JS files, and many other resources, so HTTP ends up making a lot of requests. Many of these requests are short-lived, for files that can be on the order of tens of KBs.

But the same process happens each time a new connection is made, and many steps occur every time a new request is made on the same connection: a DNS query, packet propagation from the browser to the server and back, encryption, compression, and so on. All of these things require time across the network, no matter how small.

So all these little requests introduce latency, thereby making HTTP latency-sensitive.
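A quick back-of-the-envelope calculation shows how fast those round trips add up. The numbers below (request count, round-trip time, parallel connections) are made-up illustrations, not measurements from this article:

```python
import math

rtt_ms = 50           # one network round trip, browser <-> server (assumed)
requests = 80         # small objects on a typical page: HTML, CSS, JS, images (assumed)
connections = 6       # parallel connections browsers typically open per host

# Without pipelining, each connection carries one request per round trip,
# so the requests are paid for in "waves" of size `connections`.
transfer_time_ms = math.ceil(requests / connections) * rtt_ms

# Each new connection also costs at least one RTT for the TCP handshake
# (more with TLS), plus a DNS lookup for the first one.
setup_time_ms = connections * rtt_ms + rtt_ms

print(f"~{setup_time_ms + transfer_time_ms} ms spent on latency alone")
# -> ~1050 ms in this example, before bandwidth even enters the picture
```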

Pipelining is not multiplexing

Pipelining was supposed to address a limitation in HTTP/1.0. But over the years, we've seen that in HTTP/1.1 it introduced limitations of its own.

For one, no matter how many requests were pipelined, the server was still required to respond to them in the order they were sent. So if one request took a long time to process, every response queued up behind it had to wait, even if those responses were ready to go. (The sketch at the end of this section makes this concrete.)

Two, TCP itself segments and reassembles data strictly in order. If a segment at the head of the stream is lost or delayed, the segments behind it cannot be handed to the application until it arrives, even if they have already been received. This is TCP head-of-line blocking.

Because of these limitations, most modern browsers disabled pipelining, defeating the purpose of having it in the standard in the first place.
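To see why in-order responses hurt, here is a minimal sketch of what pipelining looks like on the wire, using a raw socket in Python. The hostname and paths are hypothetical, and many servers and proxies simply ignore or reject pipelined requests, which is part of why browsers gave up on it:

```python
import socket

host = "example.com"                                   # placeholder host
paths = ["/slow-report", "/style.css", "/logo.png"]    # hypothetical resources

# Build three GET requests back to back; only the last one asks to close.
reqs = []
for i, path in enumerate(paths):
    closing = "Connection: close\r\n" if i == len(paths) - 1 else ""
    reqs.append(f"GET {path} HTTP/1.1\r\nHost: {host}\r\n{closing}\r\n")
pipelined = "".join(reqs).encode()

with socket.create_connection((host, 80)) as sock:
    sock.sendall(pipelined)        # all three requests leave before any response arrives
    # The server must answer in the order the requests were sent. If
    # /slow-report takes seconds to generate, the bytes for /style.css and
    # /logo.png queue up behind it: HTTP-level head-of-line blocking.
    responses = b""
    while chunk := sock.recv(4096):
        responses += chunk
```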

Short-lived requests affected by TCP slow start

As a connection-oriented protocol, TCP ensures delivery of each and every piece of data it sends. In the early days of the Internet, we didn't have a lot of bandwidth, by today's standards anyway. Remember 56K modems? TCP was designed well before then.

To prevent senders from overwhelming the network and jeopardizing TCP's operation, the concept of slow start was introduced in RFC 1122. With slow start, a sender begins by sending just a little bit of data, initially one maximum segment size (MSS) worth, waits for an ACK, and then gradually sends more per round trip, growing its congestion window until it reaches the receiver's advertised window size.

Years ago, the default initial congestion window was 3 segments. With the default TCP maximum segment size (MSS) of 1,460 bytes, that means the most data that could be sent in one round trip at the start of a connection was only about 4KB.

HTTP transfers were small, but not that small. And since HTTP connections often didn't last very long, many transfers never got out of TCP slow start before the connection was no longer required.

Since then, the initial congestion window has been increased to 10 segments, or almost 15KB. A paper published by Google in 2010 showed that 10 segments is about the sweet spot for maximizing throughput and response time, and the change was standardized in RFC 6928.
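To get a feel for what that change buys, here is a rough model of how many round trips slow start needs to deliver a response of a given size. It ignores losses, delayed ACKs, the receiver's window, and TLS, so treat the numbers as ballpark figures only:

```python
MSS = 1460   # default TCP maximum segment size, in bytes

def slow_start_rtts(response_bytes, initial_window_segments):
    """Round trips needed to push response_bytes, doubling the window each RTT."""
    cwnd, sent, rtts = initial_window_segments, 0, 0
    while sent < response_bytes:
        sent += cwnd * MSS   # one congestion window's worth per round trip
        cwnd *= 2            # slow start roughly doubles the window each RTT
        rtts += 1
    return rtts

for size_kb in (15, 50, 100):
    iw3 = slow_start_rtts(size_kb * 1024, 3)     # old initial window of 3 segments
    iw10 = slow_start_rtts(size_kb * 1024, 10)   # RFC 6928 initial window of 10
    print(f"{size_kb:>3} KB response: {iw3} RTTs with IW3 vs {iw10} RTTs with IW10")
```

On a 100 ms round trip, shaving even one or two of those round trips off every short connection adds up quickly.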

Read Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 3, covering common HTTP/1.1 workarounds, SPDY and HTTP/2.
