So, What About HTTP/2?

Rabby Hossain
Apr 23, 2021

Introduction

HTTP stands for Hypertext Transfer Protocol, an application-level protocol that has been used for communication on the World Wide Web (WWW) since the web’s invention in 1989.

HTTP itself was initiated in 1989 by Tim Berners-Lee, and HTTP/1.1 was released in 1997. In HTTP/1.1, a client sends a text-based request to a server using a method like GET or POST. In response, the server sends a resource, such as an HTML page, back to the client.

HTTP/1.1 remained the standard for almost two decades, but in 2015 a new major version came into use. Developed primarily from Google’s SPDY protocol, HTTP/2 offers several advantages and overcomes known issues with HTTP/1.1. Today, HTTP/2 is supported by almost all modern web browsers and is used by roughly half of all websites.

In this article, I am going to give an overview of the main features of HTTP/2 and the differences between HTTP/1.1 and HTTP/2.

Request and Response Multiplexing

In a traditional web application, a client (normally a browser) sends an HTTP request to a web server, and the server responds with the requested resource, such as an HTML page, text, or JSON data.

For example, let’s say you are visiting a website at the domain www.example.com. When you navigate to this URL, the web browser on your computer sends an HTTP request in the form of a text-based message, similar to the one shown here:
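The original article shows this request as an image; a minimal HTTP/1.1 request for that page would look roughly like this (header values are illustrative):

```
GET / HTTP/1.1
Host: www.example.com
Accept: text/html
User-Agent: Mozilla/5.0 ...
```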

For this request, a TCP connection is established between your browser and the web server.

The server may return the HTML page in response, but to render the full web page in front of you, the browser needs additional resources such as stylesheets, images, and JavaScript files. All of that information cannot be transferred in a single HTTP/1.1 call, since only one response can be delivered at a time per connection, and HTTP/1.1 keeps all requests and responses in plain-text format. So, to fetch everything, the browser has to establish several TCP connections.

But this approach is very costly in terms of time, bandwidth, and resources, and it leads to a bad user experience where internet speed is slow. HTTP/2 solves this problem by sending all the information over a single TCP connection, using a different technique. Instead of plain text, HTTP/2 uses a binary framing layer to encapsulate all messages in binary format. Within a single TCP connection, data is transferred as streams: the binary framing layer breaks each HTTP message into independent frames, interleaves them, and reassembles them on the other end.
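The framing idea can be sketched in a few lines of Python. This is a toy model, not the real HTTP/2 wire format (real frames carry 9-byte headers, frame types, and flags), but it shows how two messages share one connection as interleaved frames and are reassembled by stream ID:

```python
import struct
from collections import defaultdict
from itertools import zip_longest

def to_frames(stream_id, message, frame_size=4):
    """Split one message into toy frames: 4-byte stream ID, 1-byte
    end-of-stream flag, then a chunk of payload."""
    chunks = [message[i:i + frame_size]
              for i in range(0, len(message), frame_size)] or [b""]
    return [struct.pack("!IB", stream_id, int(i == len(chunks) - 1)) + chunk
            for i, chunk in enumerate(chunks)]

def interleave(*streams):
    """Interleave frames from several streams onto one connection, round-robin."""
    return [f for group in zip_longest(*streams) for f in group if f is not None]

def reassemble(wire):
    """Receiver side: group frames by stream ID and rebuild each message."""
    buffers, messages = defaultdict(bytes), {}
    for frame in wire:
        stream_id, end = struct.unpack("!IB", frame[:5])
        buffers[stream_id] += frame[5:]
        if end:
            messages[stream_id] = buffers[stream_id]
    return messages

html = to_frames(1, b"<html>...</html>")  # stream 1
css = to_frames(3, b"body{color:red}")    # stream 3
wire = interleave(html, css)              # one connection carries both
assert reassemble(wire) == {1: b"<html>...</html>", 3: b"body{color:red}"}
```

Even though the frames of the two responses arrive mixed together, each message comes out whole on the other end.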

This approach makes your application load faster, reduces extra HTTP requests, optimizes resource usage, and improves the user experience.

Performance test for a web page consisting of 100 images

A point to remember is that HTTP/2 still maintains HTTP semantics, such as methods, status codes, and headers. This means web applications created before HTTP/2 can continue functioning as normal when interacting with the new protocol.

Header Compression

We know that each HTTP request carries a set of headers, sent as plain text, that describe the transferred resource and its properties. Those headers add an extra 500–800 bytes of overhead per transfer, and sometimes kilobytes if HTTP cookies are being used. Most of the time these headers are redundant. For example, take the following two requests:

Request #1

method:     GET
scheme:     https
host:       example.com
path:       /resource
accept:     image/jpeg
user-agent: Mozilla/5.0 ...

Request #2

method:     GET
scheme:     https
host:       example.com
path:       /new_resource
accept:     image/jpeg
user-agent: Mozilla/5.0 ...

The various fields in these requests, such as method, scheme, host, accept, and user-agent, have the same values; only the path field uses a different value. HTTP/1.1 has no mechanism for eliminating these redundant headers.

In this scenario, HTTP/2 compresses the request and response headers using the HPACK compression format. Internally, it encodes header values using static Huffman coding, which reduces their size drastically, and it maintains a shared compression context that keeps track of previously transmitted values to avoid sending duplicates.

As a result, when sending request #2, the client can use HPACK to send only the indexed values needed to reconstruct the common fields, newly encoding just the path field. The resulting header frames are much smaller.
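Conceptually, the shared compression context works like the toy encoder below: both sides remember previously transmitted header pairs, so a repeated pair is sent as a small table index instead of the full text. This is a simplified sketch; real HPACK also uses a predefined static table and Huffman coding:

```python
class ToyHeaderTable:
    """A toy version of HPACK's shared dynamic table (no Huffman coding)."""

    def __init__(self):
        self.table = []  # (name, value) pairs both sides have seen

    def encode(self, headers):
        out = []
        for name, value in headers:
            if (name, value) in self.table:
                # Already transmitted: send just a table index.
                out.append(("index", self.table.index((name, value))))
            else:
                # New pair: send it literally and add it to the table.
                out.append(("literal", name, value))
                self.table.append((name, value))
        return out

request1 = [("method", "GET"), ("scheme", "https"), ("host", "example.com"),
            ("path", "/resource"), ("accept", "image/jpeg")]
request2 = [("method", "GET"), ("scheme", "https"), ("host", "example.com"),
            ("path", "/new_resource"), ("accept", "image/jpeg")]

enc = ToyHeaderTable()
frame1 = enc.encode(request1)  # everything is a literal: the table is empty
frame2 = enc.encode(request2)  # mostly indices; only path is sent literally
```

For request #2, only the path header goes over the wire in full; the other four fields shrink to small index references.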

So, by compressing headers with HPACK, HTTP/2 provides one more feature that reduces client-server latency.

Stream Prioritization

At this point, we know that we can make one HTTP request instead of multiple HTTP requests to get multiple resources. Now, let’s consider one situation: you, as a client, may need some resources earlier than others. For example, you need the product name, price, and thumbnail images earlier than product ratings, reviews, and so on. So, how can we achieve this in HTTP/2?

As we know, data is transferred as streams over an HTTP/2 connection, and we can prioritize a stream by assigning it a weight between 1 and 256 and by declaring its dependency on another stream. A client can construct a “dependency tree” from these weights and dependencies to express how it would prefer to receive the responses. The server can then use this dependency tree to decide which streams should get priority when allocating resources such as CPU time, processing, and delivery.

In our case, we can assign a higher weight to the product name, price, and thumbnail images, and make ratings and reviews dependent on the product name, like this:

Dependency Tree

According to the above picture, streams NA (product name), PR (price), and IM (images) are root streams, as they don’t depend on any other stream, and they have the same weight of 1. So, they should get equal priority from the server and should not have to wait for any other stream to complete. Streams RA (ratings) and RE (reviews) are siblings, as they share the same parent stream NA (product name). This means RA (ratings) and RE (reviews) should be processed after the parent stream NA (product name). Between the two, RE (reviews) should get higher priority, as its weight of 2 is higher than RA’s (ratings’) weight of 1.
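Using the example above, a server could translate the dependency tree and weights into resource shares roughly like this. This is a simplified sketch of the model from the HTTP/2 prioritization scheme: siblings split their parent’s share in proportion to their weights, and children inherit the parent’s share once the parent completes. Real servers are free to implement their own strategy:

```python
def allocate(tree, weights, parent=None, share=1.0, out=None):
    """Split `share` of server capacity among `parent`'s children in
    proportion to their weights; each child's own children in turn
    split that child's share."""
    if out is None:
        out = {}
    children = tree.get(parent, [])
    total = sum(weights[c] for c in children)
    for child in children:
        out[child] = share * weights[child] / total
        allocate(tree, weights, child, out[child], out)
    return out

# The dependency tree from the example: NA, PR, IM are roots;
# RA and RE depend on NA.
tree = {None: ["NA", "PR", "IM"], "NA": ["RA", "RE"]}
weights = {"NA": 1, "PR": 1, "IM": 1, "RA": 1, "RE": 2}
shares = allocate(tree, weights)
```

Here the three root streams each get a third of the capacity, and once NA completes, its third is split 1:2 between RA and RE, so reviews (RE) are delivered roughly twice as fast as ratings (RA).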

So, with the combination of dependency trees and stream weights in HTTP/2, we can improve browsing performance when we need resources with different priorities.

Server Push

This capability of HTTP/2 allows the server to send additional resources to the client that weren’t requested, because the server knows the client will need them in the future. For clarity, let’s take an example: a client sends a request to the server for information about a product. The server sends the requested information, and it also pushes information about the shipment, since it knows this will be requested by the client soon after.

The pushed resources can be cached, reused, or declined by the client and prioritized by the server.

To achieve this, the server sends PUSH_PROMISE frames ahead of the response DATA frames (the requested resource) to express its intention to push additional information. At this point, the client can decline a pushed stream (for example, if it already has the resource in its cache) by sending an RST_STREAM frame. The client can also control the number of concurrently pushed streams via SETTINGS frames.


This feature allows the client to reduce the number of requests and increase performance.

Flow Control

In any TCP connection between two machines, both the client and the server have a limited amount of buffer space available to hold incoming data that has not yet been processed. For example, when a client uploads a huge image or video to a server, the server’s buffer may overflow, causing additional packets to be lost. HTTP/2 has its own flow-control mechanism to prevent the sender from overwhelming the receiver with data it may not want or be able to process: the receiver may be busy, under heavy load, or only willing to allocate a fixed amount of resources to a particular stream.

When an HTTP/2 connection is established, the client and the server exchange SETTINGS frames, which set the flow-control window sizes in both directions. The default flow-control window is 65,535 bytes, but the receiver can advertise a larger window (up to 2^31−1 bytes) and maintain it by sending a WINDOW_UPDATE frame whenever received data has been processed. The window shrinks each time the sender emits a DATA frame.
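The sender-side bookkeeping can be sketched as follows. This is a simplified model: real HTTP/2 keeps separate windows per stream and per connection, and this sketch tracks only one:

```python
class FlowControlWindow:
    """Sender-side view of one flow-control window (default 65,535 bytes)."""

    def __init__(self, size=65_535):
        self.window = size

    def can_send(self, n):
        """A DATA frame of n bytes may only be sent if the window allows it."""
        return n <= self.window

    def on_data_sent(self, n):
        """Sending a DATA frame shrinks the window by its payload size."""
        assert self.can_send(n), "sender must wait for a WINDOW_UPDATE"
        self.window -= n

    def on_window_update(self, increment):
        """A WINDOW_UPDATE from the receiver replenishes the window."""
        self.window += increment

w = FlowControlWindow()
w.on_data_sent(60_000)         # window drops to 5,535 bytes
assert not w.can_send(10_000)  # sender must pause until the receiver catches up
w.on_window_update(30_000)     # receiver processed data; window is now 35,535
assert w.can_send(10_000)
```

Once the window reaches zero, the sender stalls entirely until the receiver signals, via WINDOW_UPDATE, that it has freed up buffer space.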

As we see, HTTP/2 provides simple building blocks for flow control and defers the implementation to the client and server. This allows them to implement custom strategies to regulate resource use and allocation, as well as new delivery capabilities that may improve both the real and perceived performance of our web applications.

Browser Compatibility

Though most modern browsers fully support the HTTP/2 protocol, UC Browser for Android and Opera Mini (all versions) don’t support it.

Conclusion

As you can see, HTTP/2 offers many features that fix the issues of HTTP/1.1 and improve the performance of an application. Besides, adopting it is painless, and it is supported by almost all browsers. So, you can safely use it for your web application. At least, give it a try!

References:

  1. https://hpbn.co/http2/
  2. https://factoryhr.medium.com/http-2-the-difference-between-http-1-1-benefits-and-how-to-use-it-38094fa0e95b
  3. https://www.digitalocean.com/community/tutorials/http-1-1-vs-http-2-what-s-the-difference
  4. https://developers.google.com/web/fundamentals/performance/http2

For more articles, you can follow me on LinkedIn.


Rabby Hossain

An enthusiastic software engineer who likes to solve problems.