Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 2

Jean Tunis

This blog is the second in a 5-part series on APMdigest where I discuss web application performance and how new protocols like SPDY, HTTP/2, and QUIC will hopefully improve it so we can have happy website users.

Start with Web Performance 101: The Bandwidth Myth

Start with Web Performance 101: 4 Recommendations to Improve Web Performance

Start with Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 1

The Hypertext Transfer Protocol (HTTP) is the application-layer protocol in the TCP/IP stack used to carry web traffic. The current version ratified by the Internet Engineering Task Force (IETF) is HTTP/2 (more on that later), published in May 2015.

But the most widely used version is the previous version, HTTP/1.1.

According to the HTTP/2 Dashboard, only about 4% of the top 2 million Alexa sites truly support HTTP/2. So we still have a ways to go.

Ratified almost 20 years ago in 1997, HTTP/1.1 was meant to address two big limitations in the previous HTTP/1.0.

HTTP/1.0 Limitations

One limitation was a lack of persistent connections. With 1.0, every HTTP request required opening up a new TCP connection. As mentioned in my previous blog, this requires resources and introduces additional latency.
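To make that difference concrete, here is a minimal sketch using Python's standard library; the host name and file paths are just placeholders. The first loop mimics HTTP/1.0 behavior by opening a fresh TCP connection for every request, while the second reuses one persistent HTTP/1.1 connection.

```python
# Minimal sketch (Python 3, standard library only). "example.com" and the
# paths below are placeholders, not from the article.
import http.client

paths = ["/", "/style.css", "/app.js"]

# HTTP/1.0 style: a new TCP connection (and handshake) for every request.
for path in paths:
    conn = http.client.HTTPConnection("example.com", 80)
    conn.request("GET", path)
    conn.getresponse().read()
    conn.close()  # connection torn down after a single exchange

# HTTP/1.1 style: one persistent connection reused for all three requests.
conn = http.client.HTTPConnection("example.com", 80)
for path in paths:
    conn.request("GET", path)   # keep-alive is the HTTP/1.1 default
    conn.getresponse().read()   # same TCP connection, no extra handshakes
conn.close()
```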

Another limitation was the inability to send multiple requests at once without waiting for responses from the other side. The ability to pipeline requests in HTTP/1.1 was meant to address this.
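Here is a rough sketch of what pipelining looks like on the wire, again with a placeholder host: both requests are written to the socket before any response is read. Many servers and intermediaries handle this poorly, which is part of the story below.

```python
# Rough sketch of HTTP/1.1 pipelining over a raw socket (Python 3).
# "example.com" and the paths are placeholders.
import socket

HOST = "example.com"
request_block = (
    b"GET /a.css HTTP/1.1\r\nHost: example.com\r\n\r\n"
    b"GET /b.js HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
)

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(request_block)   # both requests leave before any response is read
    response_bytes = b""
    while True:
        chunk = sock.recv(4096)   # responses arrive back-to-back, in request order
        if not chunk:             # server closes after the second response
            break
        response_bytes += chunk
```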

But as the web continued to advance, it became clear that HTTP/1.1 still had many limitations that needed to be worked on.

HTTP/1.1 Limitations

HTTP/1.1 has a number of limitations, but I want to talk about three that have been issues over the years.

Many small requests make HTTP/1.1 latency-sensitive

With images, HTML files, CSS files, JS files, and many others, a single page load generates a lot of HTTP requests. Many of these requests are short-lived, fetching files that are often only tens of KB in size.

But the same process happens each time a new connection is made, and several steps occur every time a new request is made on an existing connection: a DNS query, packet propagation from the browser to the server and back, encryption, compression, and so on. All of these take time across the network, no matter how small each one is.

So all these little requests introduce latency, thereby making HTTP latency-sensitive.
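A quick back-of-the-envelope sketch of how that latency adds up; all of the numbers are illustrative assumptions, not measurements from the article.

```python
# Illustrative numbers only: a 50 ms RTT, one DNS lookup, and a page that
# makes 80 requests over the ~6 connections browsers typically open per host.
rtt = 0.05            # round trip between browser and server, in seconds
dns = 0.03            # one DNS lookup for the page
tcp_handshake = rtt   # SYN / SYN-ACK / ACK costs roughly one round trip
requests = 80
per_connection = 6

# With no pipelining, each request on a connection waits a full round trip.
serial_rounds = requests / per_connection
total = dns + tcp_handshake + serial_rounds * rtt
print(f"~{total:.2f} s spent on latency alone, before any content bytes arrive")
```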

Pipelining is not multiplexing

Pipelining was supposed to address a limitation of HTTP/1.0. But over the years, we've seen that in HTTP/1.1 it introduced limitations of its own.

For one, no matter how many requests were pipelined, the server was still required to send its responses in the order the requests were received. So if one response took a long time to produce, the server could not deliver later responses that were already ready; they had to wait behind the slow one.

Two, TCP itself delivers data strictly in order: segmentation and reassembly happen in sequence, so segments at the head of the stream must be processed before anything behind them. If one segment is lost or delayed, everything queued after it stalls. This is TCP head-of-line blocking.
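A toy simulation of that ordering constraint follows. The response times are made-up numbers, but the pattern holds at both the HTTP level (responses must follow request order) and the TCP level (segments must be processed in order): anything queued behind a slow item is delayed, even if it was ready much earlier.

```python
# Toy simulation of head-of-line blocking on a pipelined connection.
# Times are invented for illustration.
ready_at = {"/slow-report": 0.500, "/logo.png": 0.010, "/app.js": 0.020}

finished = 0.0
for path, ready in ready_at.items():   # responses must go out in request order
    finished = max(finished, ready)    # each one waits for everything ahead of it
    print(f"{path}: ready at {ready*1000:.0f} ms, delivered at {finished*1000:.0f} ms")
```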

Because of these limitations, most modern browsers disabled pipelining, which obviously defeats the purpose of having it in the standard in the first place.

Short-lived requests affected by TCP slow start

As a connection-oriented protocol, TCP ensures delivery of each and every piece of data it sends. In the early days of the Internet, we didn't have a lot of bandwidth, by today's standards anyway. Remember 56K modems? TCP was designed well before then.

To prevent senders from overwhelming the network and jeopardizing TCP's operation, the concept of slow start was introduced in RFC 1122. It ensures that a sender starts by transmitting only a small amount of data, initially 1 MSS, waits for an ACK, and then gradually sends more per round trip as the congestion window grows, until it reaches the receiver's advertised window size.

Years ago, the default initial congestion window was 3 segments. With the default TCP maximum segment size (MSS) of 1,460 bytes, that meant the most data that could be sent in the first round trip was only about 4 KB.
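Checking those numbers, and sketching how the window grows from there (textbook slow start roughly doubles the congestion window each round trip, which is a simplification of real TCP stacks):

```python
# MSS and the old initial window follow the article; the doubling-per-RTT
# growth is the textbook slow-start behaviour.
MSS = 1460                      # bytes per segment
initial_window = 3              # old default congestion window, in segments

print(initial_window * MSS)     # 4380 bytes, roughly 4 KB in the first round trip

window = initial_window
for rtt in range(1, 5):
    print(f"RTT {rtt}: up to {window * MSS} bytes in flight")
    window *= 2                 # slow start roughly doubles the window each RTT
```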

HTTP transfers were small, but not that small. And since HTTP connections often don't last very long, many transfers never got out of TCP slow start before the connection was no longer needed.

Since then, the initial congestion window has been increased to 10 segments, or almost 15 KB. A paper published by Google in 2010 showed that 10 segments is the sweet spot for maximizing throughput while keeping response times low, and the change became part of RFC 6928.
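A small sketch of what that change buys: the number of round trips needed to deliver a single object with an initial window of 3 versus 10 segments. The 100 KB object size is an illustrative assumption, and real transfers also depend on loss and the receiver's window.

```python
# Compare round trips needed under the old (3-segment) and new (10-segment,
# RFC 6928) initial congestion windows, assuming simple slow-start doubling.
MSS = 1460

def round_trips(object_bytes, initial_window):
    window, sent, rtts = initial_window, 0, 0
    while sent < object_bytes:
        sent += window * MSS    # send a full window, wait one RTT for the ACKs
        window *= 2             # slow start doubles the window each round
        rtts += 1
    return rtts

for iw in (3, 10):
    print(f"IW={iw}: a 100 KB response needs {round_trips(100_000, iw)} round trips")
```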

Read Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 3, covering common HTTP/1.1 workarounds, SPDY and HTTP/2.
