Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 2
May 19, 2016

Jean Tunis
RootPerformance


This blog is the second in a 5-part series on APMdigest where I discuss web application performance and how new protocols like SPDY, HTTP/2, and QUIC will hopefully improve it so we can have happy website users.

Start with Web Performance 101: The Bandwidth Myth

Start with Web Performance 101: 4 Recommendations to Improve Web Performance

Start with Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 1

The Hypertext Transfer Protocol (HTTP) is the application-layer protocol in the TCP/IP stack used for the communication of web traffic. The current version ratified by the Internet Engineering Task Force (IETF) is HTTP/2, ratified in May 2015 (more on that later).

But the most widely used version is the previous version, HTTP/1.1.

According to the HTTP/2 Dashboard, only about 4% of the top 2 million Alexa sites truly support HTTP/2. So we still have a ways to go.

Ratified almost 20 years ago in 1997, HTTP/1.1 was meant to address two big limitations in the previous HTTP/1.0.

HTTP/1.0 Limitations

One limitation was the lack of persistent connections. With 1.0, every HTTP request required opening a new TCP connection. As mentioned in my previous blog, this consumes resources and introduces additional latency.
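
To see what a difference persistent connections make, here is a minimal sketch using Python's standard http.client module. The host example.com is just a placeholder, not anything from this article: the first loop opens and tears down a TCP connection per request, HTTP/1.0 style, while the second reuses a single HTTP/1.1 connection.

```python
import http.client
import time

HOST = "example.com"  # placeholder host for illustration
PATHS = ["/", "/", "/"]

# HTTP/1.0 style: open (and tear down) a new TCP connection per request.
start = time.perf_counter()
for path in PATHS:
    conn = http.client.HTTPConnection(HOST, 80)
    conn.request("GET", path)
    conn.getresponse().read()
    conn.close()
print(f"new connection per request: {time.perf_counter() - start:.3f}s")

# HTTP/1.1 style: one persistent connection reused for every request.
start = time.perf_counter()
conn = http.client.HTTPConnection(HOST, 80)
for path in PATHS:
    conn.request("GET", path)
    conn.getresponse().read()  # draining the body frees the socket for reuse
conn.close()
print(f"one persistent connection:  {time.perf_counter() - start:.3f}s")
```

On any host with non-trivial round-trip time, the second number should be noticeably smaller, since it pays for the TCP handshake only once.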

Another limitation was the inability to send multiple requests at one time without waiting for responses from the other side. The ability to pipeline requests in HTTP/1.1 was meant to address this.

But as the web continued to advance, it became clear that HTTP/1.1 still had many limitations that needed to be worked on.

HTTP/1.1 Limitations

1.1 has a number of limitations, but I want to talk about three of them that have been issues over the years.

Many small requests make HTTP/1.1 latency sensitive

With images, HTML files, CSS files, JS files, and many others, a web page generates a lot of HTTP requests. Many of these requests are short-lived, for files that can be on the order of tens of KBs.

But the same process happens each time a new connection is made, and many steps occur every time a new request is made on the same connection: a DNS query, packet propagation from the browser to the server and back, encryption, compression, and so on. All of these take time across the network, no matter how small each one is.

So all these little requests introduce latency, thereby making HTTP latency-sensitive.
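
As a rough illustration, the sketch below (plain Python sockets; example.com is again a stand-in host) times two of those components separately, the DNS lookup and the TCP three-way handshake, plus the wait for the first response byte:

```python
import socket
import time

HOST = "example.com"  # stand-in host, not from the article

t0 = time.perf_counter()
ip = socket.getaddrinfo(HOST, 80, type=socket.SOCK_STREAM)[0][4][0]  # DNS
t1 = time.perf_counter()
s = socket.create_connection((ip, 80))  # TCP three-way handshake
t2 = time.perf_counter()
s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
s.recv(1)  # block until the first byte of the response arrives
t3 = time.perf_counter()
s.close()

print(f"DNS lookup:           {(t1 - t0) * 1000:6.1f} ms")
print(f"TCP handshake:        {(t2 - t1) * 1000:6.1f} ms")
print(f"request + first byte: {(t3 - t2) * 1000:6.1f} ms")
```

Each of those delays is paid per connection, and some per request, which is exactly why a page made of dozens of small objects is so sensitive to latency.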

Pipelining is not multiplexing

Pipelining was supposed to address a limitation of HTTP/1.0. But over the years, we've seen that it introduced limitations of its own in HTTP/1.1.

For one, no matter how many requests were pipelined, the server was still required to send its responses in the same order the requests arrived. So if one request was slow for the server to process, the responses to the requests behind it were stuck waiting, even if they were ready to go. One slow response held up all the others.
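
Here is a bare-bones sketch of what pipelining looks like on the wire, once more with plain Python sockets and a placeholder host. Note that many servers ignore or close pipelined connections, so this is illustrative only: both requests go out before any response is read, but the responses still come back strictly in request order.

```python
import socket

HOST = "example.com"  # placeholder; many servers reject or ignore pipelining
request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\n\r\n".encode()

s = socket.create_connection((HOST, 80), timeout=5)
s.sendall(request + request)  # two requests on the wire before any response

# The server must answer in request order: if the first response is slow to
# generate, the second sits behind it -- head-of-line blocking in HTTP itself.
data = b""
try:
    while True:
        chunk = s.recv(4096)
        if not chunk:
            break
        data += chunk
except socket.timeout:
    pass  # keep-alive connection went quiet after both responses
s.close()

print(data.count(b"HTTP/1.1"), "responses, delivered in request order")
```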

Two, the nature of the TCP protocol is such that data is segmented and reassembled in strict order. If a segment at the head of a stream is lost, the segments behind it cannot be delivered to the application until the lost one is retransmitted and received, even if they have already arrived. This is TCP head-of-line blocking.

Because of these limitations, most modern browsers disabled pipelining by default, obviously defeating the purpose of having it in the standard in the first place.

Short-lived requests affected by TCP slow start

As a connection-oriented protocol, TCP ensures delivery of each and every piece of data it sends. In the early days of the Internet, we didn't have a lot of bandwidth, by today's standards anyway. Remember 56K modems? TCP was designed even before that time.

To prevent applications from overwhelming the network, and jeopardizing TCP's operation, the concept of slow start was introduced in RFC 1122. It ensures that a sender starts by transmitting a little bit of data, initially one maximum segment size (MSS), waits for an ACK, and then gradually sends more via the congestion window until it reaches the receiver's maximum advertised window size.
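
A toy model makes the ramp-up concrete. The sketch below is my own illustration, assuming no packet loss and a fixed 1,460-byte MSS; it doubles the congestion window every round trip, which is roughly what classic slow start does as ACKs come back:

```python
# Toy model of TCP slow start: no loss, fixed MSS, cwnd doubles per RTT.
MSS = 1460                 # default maximum segment size, in bytes
ADVERTISED_WINDOW = 65535  # a typical receive window without window scaling

cwnd = 1 * MSS  # classic slow start begins at a single segment
rtt = 0
while cwnd < ADVERTISED_WINDOW:
    rtt += 1
    print(f"RTT {rtt}: cwnd = {cwnd // MSS:3d} segment(s), "
          f"{cwnd / 1024:5.1f} KB in flight")
    cwnd *= 2  # each ACKed segment grows cwnd by one MSS ~ doubling per RTT
```

Even with exponential growth, it takes several round trips before the window is large enough to use the available bandwidth.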

Years ago, the default initial congestion window was 3 segments. With the default TCP maximum segment size (MSS) of 1,460 bytes, this meant the maximum amount of data that could be sent at one time was only about 4KB.

HTTP requests were small, but not that small. And since they often didn't last very long, many requests never got out of TCP slow start before the connection was no longer required.

Since then, the initial congestion window has been increased to 10 segments, or almost 15KB. A paper published by Google in 2010 showed that 10 segments is the sweet spot for maximizing throughput and response time, and this change became part of RFC 6928.
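
To put numbers on that change, here is a back-of-the-envelope calculation, using the same loss-free toy model as above, of how many round trips it takes to deliver a small object starting from 3 segments versus RFC 6928's 10:

```python
MSS = 1460  # bytes per segment

def rtts_to_deliver(size_bytes, initial_segments):
    """Round trips to push size_bytes under loss-free slow start."""
    cwnd, sent, rtts = initial_segments, 0, 0
    while sent < size_bytes:
        sent += cwnd * MSS
        cwnd *= 2
        rtts += 1
    return rtts

for size_kb in (4, 15, 50):
    print(f"{size_kb:>2} KB object: "
          f"IW3 -> {rtts_to_deliver(size_kb * 1024, 3)} RTTs, "
          f"IW10 -> {rtts_to_deliver(size_kb * 1024, 10)} RTTs")
```

For a typical 15KB response, the larger initial window shaves a full round trip off the transfer, which adds up quickly across the dozens of objects on a modern page.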

Read Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 3, covering common HTTP/1.1 workarounds, SPDY and HTTP/2.

Jean Tunis is Principal Consultant and Founder of RootPerformance