Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 4
May 25, 2016

Jean Tunis
RootPerformance


This blog is the fourth in a 5-part series on APMdigest where I discuss web application performance and how new protocols like SPDY, HTTP/2, and QUIC will hopefully improve it so we can have happy website users.

Start with Web Performance 101: The Bandwidth Myth

Start with Web Performance 101: 4 Recommendations to Improve Web Performance

Start with Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 1

Start with Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 2

Start with Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 3

The new HTTP/2 protocol includes a number of capabilities that did not exist in HTTP before:

Uses only one TCP connection

In HTTP/1.1, we needed many connections, but not too many, because of resource constraints and latency considerations. In HTTP/2, the standard calls for only one TCP connection per origin. This reduces the overhead of opening and closing TCP connections and cuts out the extra round trips (RTTs) to the server that setting up numerous connections requires.

Requests are multiplexed

What allows the one-connection approach to work without hurting performance is the ability to multiplex requests. Each request and response becomes a stream, the streams are broken into frames, and frames from many streams are interleaved down the single connection. This is what HTTP/1.1 pipelining hoped to achieve, but never did.
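To make this concrete, here is a minimal sketch in Python using the third-party httpx library with its optional HTTP/2 support (pip install 'httpx[http2]'); the URLs are placeholders. All three requests share one TCP connection and are multiplexed as separate streams:

```python
import asyncio
import httpx

async def main():
    # http2=True requires the optional HTTP/2 extra (the h2 package)
    async with httpx.AsyncClient(http2=True) as client:
        urls = [
            "https://example.com/page",
            "https://example.com/app.js",
            "https://example.com/style.css",
        ]
        # gather() issues the requests concurrently; over HTTP/2 they
        # travel as separate streams multiplexed on one connection
        responses = await asyncio.gather(*(client.get(u) for u in urls))
        for r in responses:
            print(r.http_version, r.status_code, r.url)

asyncio.run(main())
```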

It's binary, not text-based to allow for multiplexing

The ability to multiplex HTTP requests is enabled by the fact that the protocol is now binary. HTTP/1.1 is a text-based protocol, which makes it difficult to break HTTP data into the cleanly delimited chunks that multiplexing needs.
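As an illustration of how uniform the binary format is, here is a standard-library sketch of the 9-byte header that starts every HTTP/2 frame (RFC 7540, Section 4.1). Fixed, length-prefixed fields like these are what let a receiver slice interleaved frames apart cheaply:

```python
import struct

def pack_frame_header(length: int, frame_type: int, flags: int,
                      stream_id: int) -> bytes:
    # 24-bit payload length, 8-bit type, 8-bit flags,
    # then 1 reserved bit + 31-bit stream identifier
    return (struct.pack(">I", length)[1:]
            + struct.pack(">BBI", frame_type, flags,
                          stream_id & 0x7FFFFFFF))

# A DATA frame (type 0x0) with the END_STREAM flag (0x1) set,
# carrying 5 bytes of payload on stream 1:
header = pack_frame_header(length=5, frame_type=0x0, flags=0x1, stream_id=1)
print(header.hex())  # 000005000100000001
```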

Compresses headers

One of the standard recommendations for improving performance is to enable caching on the server. Since web browsers generally support caching, a returning browser does not have to re-download the same data it previously downloaded. This saves round trips, and users get their content almost instantaneously, depending on the performance of their PC.

The drawback of all this caching is the data carried in the HTTP headers to identify users and their cached state, typically via cookies. Cookies have gotten bigger and bigger over the years; most browsers allow a cookie of about 4KB, and at that size an HTTP request can sometimes consist mostly of cookie data in the header.

Header compression uses a new format called HPACK, defined in RFC 7541. HPACK replaces GZIP, which was abandoned for this purpose after a security risk, the CRIME attack, was discovered against that form of compression in 2012.

Compressing the headers helps rein in this header growth.
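A minimal sketch of HPACK at work, assuming the third-party hpack library (pip install hpack); the header values, cookie included, are made up for illustration:

```python
from hpack import Decoder, Encoder

headers = [
    (":method", "GET"),
    (":path", "/index.html"),
    (":scheme", "https"),
    (":authority", "example.com"),
    ("cookie", "session=abc123; theme=dark; tracker=xyz789"),
]

encoded = Encoder().encode(headers)
# Rough plaintext size of the same names and values, for comparison
plain = sum(len(name) + len(value) for name, value in headers)
print(f"plain: ~{plain} bytes, HPACK-encoded: {len(encoded)} bytes")

decoded = Decoder().decode(encoded)  # the peer recovers the original list
```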

Has different frame types: headers and data

At the core of the performance gains expected of HTTP/2 is the new binary framing layer. Each HTTP message is encoded in binary format, and HTTP/2 introduces distinct types of frames that make up a message. Instead of one frame carrying both the headers and the payload, there are frames only for data and frames only for header information. There are ten frame types in total in HTTP/2, which enable the new capabilities.
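For reference, the ten frame types and their wire codes from RFC 7540, Section 6, written out as a small Python enum:

```python
from enum import IntEnum

class FrameType(IntEnum):
    DATA          = 0x0  # message payload
    HEADERS       = 0x1  # opens a stream and carries header blocks
    PRIORITY      = 0x2  # advises stream priority
    RST_STREAM    = 0x3  # terminates a single stream
    SETTINGS      = 0x4  # connection-level parameters
    PUSH_PROMISE  = 0x5  # announces a server push
    PING          = 0x6  # liveness check and RTT measurement
    GOAWAY        = 0x7  # starts graceful connection shutdown
    WINDOW_UPDATE = 0x8  # grants flow-control credit
    CONTINUATION  = 0x9  # continues an oversized header block
```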

Prioritizes requests sent

HTTP/2 allows the browser to prioritize the requests it sends. Higher-priority requests can go ahead of other requests via the multiplexing mechanism. This is done with the PRIORITY frame type.
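A small sketch of the 5-byte PRIORITY frame payload (RFC 7540, Section 6.3), with illustrative stream numbers: an exclusive bit, a 31-bit stream dependency, and a one-byte weight:

```python
import struct

def priority_payload(dependency: int, weight: int,
                     exclusive: bool = False) -> bytes:
    # 1-bit exclusive flag + 31-bit stream dependency, then the
    # weight, transmitted as weight - 1 so 1..256 fits in one byte
    first = (0x80000000 if exclusive else 0) | (dependency & 0x7FFFFFFF)
    return struct.pack(">IB", first, weight - 1)

# Hint that stream 5 depends on stream 3 and should get a large
# share of resources:
payload = priority_payload(dependency=3, weight=220)
```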

Can reset HTTP/2 stream instead of TCP connection

In HTTP/1.1, when a request is complete, the connection can be reset and closed by either end. The problem is that if you want to use that connection again, you have to reopen it, which costs another trip to the server.

With HTTP/2, we can now reset an individual HTTP stream inside the TCP connection. A stream can be closed and another one used without tearing down the TCP connection, so no extra trip to the server is needed the next time we have data to send. This is done with the RST_STREAM frame type.
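Here is a sketch using the third-party h2 library (pip install h2), which takes care of the frame encoding; the request itself is illustrative:

```python
import h2.connection
import h2.errors

conn = h2.connection.H2Connection()
conn.initiate_connection()
conn.send_headers(1, [(":method", "GET"), (":path", "/big-download"),
                      (":scheme", "https"), (":authority", "example.com")])

# The user navigated away: cancel stream 1 only; the connection and
# any other streams on it are unaffected
conn.reset_stream(1, error_code=h2.errors.ErrorCodes.CANCEL)
to_wire = conn.data_to_send()  # preface + HEADERS + RST_STREAM bytes
```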

Servers can push data to browser

Web servers now have the ability to push content directly to client browsers, even when it has not been explicitly requested. When a client requests a particular page, for example, the server can automatically push the additional resources, such as JavaScript or CSS files, required to properly render the page. This removes the need for the browser to make more requests for those files, which would create additional round trips.

The server must specify to the client that it will be pushing content to it before it does so. This is done via the PUSH_PROMISE frame type.
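A sketch of a PUSH_PROMISE frame body (RFC 7540, Section 6.6), reusing the hpack library from earlier: the promised stream ID followed by the HPACK-encoded request the server is answering on the client's behalf. The path and stream IDs are illustrative:

```python
import struct
from hpack import Encoder

# The request the server promises to answer, as if the client had
# sent it; pushed (server-initiated) streams use even IDs
promised_request = [
    (":method", "GET"),
    (":path", "/style.css"),
    (":scheme", "https"),
    (":authority", "example.com"),
]
body = struct.pack(">I", 2 & 0x7FFFFFFF)    # promised stream ID 2
body += Encoder().encode(promised_request)  # header block fragment
```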

Controls the flow of data

The TCP protocol controls the flow of data by growing and shrinking the receive window it advertises. When the receiver needs to slow the other side down, it does so by reducing that window.

With HTTP/2, we have only one connection, so if that window shrinks, everything on it slows down.

But with multiplexed streams, HTTP/2 was given its own flow control at both the stream and the connection level. This way, if one stream of data needs to be slowed down, other streams are not impacted, and the TCP connection continues to operate normally.

This is done via the WINDOW_UPDATE frame type.
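A standard-library sketch of those mechanics: a WINDOW_UPDATE frame carries just a 4-byte increment scoped to one stream, or to the whole connection when the stream ID is 0. Apart from the RFC 7540 default initial window, the numbers are illustrative:

```python
import struct

def window_update_frame(stream_id: int, increment: int) -> bytes:
    # 9-byte frame header: 4-byte payload, type 0x8, no flags
    header = (struct.pack(">I", 4)[1:]
              + struct.pack(">BBI", 0x8, 0, stream_id & 0x7FFFFFFF))
    return header + struct.pack(">I", increment & 0x7FFFFFFF)

stream_window = 65_535   # RFC 7540 default initial window size
stream_window -= 16_384  # the sender spends credit on a DATA frame
# After consuming the data, the receiver hands credit back to stream 1
# alone; stream ID 0 would adjust the whole connection's window instead
frame = window_update_frame(stream_id=1, increment=16_384)
stream_window += 16_384
```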

Read Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 5, the last installment in this blog series, taking a final look at HTTP/2.

Jean Tunis is Principal Consultant and Founder of RootPerformance
