Improvements in HTTP/2 over HTTP/1.1

Harshit Sharma
May 12, 2021

HTTP is an application-layer protocol used for exchanging information between a client and a server. HTTP/2 brings some significant improvements over HTTP/1.1 that improve the user experience.


In HTTP/1.1, the client sends a text-based request to the server, and the server sends a text-based response (for example, an HTML page) back to the client.

HTTP/2 offers several improvements over HTTP/1.1 aimed at speeding up page loads and delivering content to clients faster: it reduces latency and accelerates content download for web pages.

Here are the improvements offered by HTTP/2 over HTTP/1.1.

Binary framing instead of text-based requests and responses

In HTTP/2, all data is encapsulated in binary frames while the HTTP semantics, such as methods and headers, are preserved. So it does not break the functionality offered by HTTP/1.1; rather, binary framing offers several advantages over text-based requests and responses.
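As a rough sketch of what this looks like in practice, the Python h2 library can serialize a request described with the usual HTTP semantics into binary frames. The host example.com is just a placeholder, and no real socket is involved here.

```python
import h2.config
import h2.connection

# Client-side HTTP/2 connection state machine (no socket attached).
config = h2.config.H2Configuration(client_side=True)
conn = h2.connection.H2Connection(config=config)
conn.initiate_connection()

# The request keeps familiar HTTP semantics: method, path, headers ...
conn.send_headers(
    stream_id=1,
    headers=[
        (":method", "GET"),
        (":path", "/index.html"),
        (":scheme", "https"),
        (":authority", "example.com"),  # placeholder host
    ],
    end_stream=True,
)

# ... but what would go on the wire is binary: the connection preface,
# a SETTINGS frame, and a HEADERS frame.
wire_bytes = conn.data_to_send()
print(type(wire_bytes), len(wire_bytes))
```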

A single TCP connection can carry multiple streams of data, so requests and responses can be interleaved and run in parallel without blocking each other. This is called multiplexing: only one persistent TCP connection is used for multiple requests and responses.
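For illustration, here is a minimal sketch using the httpx client (installed with the http2 extra). When the server supports HTTP/2, the concurrent requests below become separate streams multiplexed over a single connection; the URLs are placeholders.

```python
import asyncio

import httpx


async def fetch_all():
    # One AsyncClient = one connection pool; with http2=True the requests
    # below are multiplexed as streams on a single TCP connection
    # (requires: pip install "httpx[http2]").
    async with httpx.AsyncClient(http2=True) as client:
        urls = [
            "https://example.com/styles.css",  # placeholder URLs
            "https://example.com/app.js",
            "https://example.com/logo.png",
        ]
        responses = await asyncio.gather(*(client.get(u) for u in urls))
        for r in responses:
            print(r.http_version, r.status_code, r.url)


asyncio.run(fetch_all())
```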

In HTTP/1.1, using one persistent TCP connection for multiple requests and responses is far less efficient because data flows in the order of the requests, so responses travelling to the same client cannot pass each other. A slow request at the head of the queue blocks the requests behind it. This is known as head-of-line blocking (HOL blocking), and it is a significant problem in HTTP/1.1.

A single TCP connection also improves the performance of HTTPS: the client and server can use the same secured connection for multiple requests and responses, so the TLS handshake does not have to happen repeatedly between them.

Prioritization

In HTTP/2, the client can send multiple requests to the server simultaneously over a single TCP connection, and it can prioritize those requests by assigning them weights between 1 and 256. The server uses the weights passed in the binary stream to build a dependency tree, which gives it an understanding of how data should be sent back to the client. The user experience improves because the content with the most impact for the user can be delivered first instead of waiting for everything to arrive.
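The specification does not dictate an exact scheduling algorithm, but sibling streams are commonly served in proportion to their weights. Here is a small sketch of that arithmetic, with made-up resource names and weights:

```python
def bandwidth_shares(weights):
    """Approximate share of the parent's capacity each sibling stream gets,
    proportional to its HTTP/2 weight (1-256)."""
    total = sum(weights.values())
    return {name: weight / total for name, weight in weights.items()}


# Hypothetical sibling streams and weights chosen by the client.
shares = bandwidth_shares({"main.css": 256, "app.js": 128, "analytics.js": 32})
for name, share in shares.items():
    print(f"{name}: {share:.1%}")
# main.css: 61.5%, app.js: 30.8%, analytics.js: 7.7%
```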

Server push

A typical web page contains various resources to be retrieved from the server, including HTML, JavaScript, images, CSS, and so on. When the client requests the index page, it is a given that it will also request the additional resources required to display the page completely. With HTTP/2, a server can send those additional resources to the client along with the requested resource, providing them before the client asks for them. This reduces the overall round-trip time and is called server push. The client retains full control over whether to use the server push mechanism: clients can decline it entirely.
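The client-side opt-out is easy to show in isolation. In the Python h2 library, a client declines push by advertising ENABLE_PUSH = 0 in its settings; this is only a sketch of the connection preface, with no socket attached.

```python
import h2.connection
import h2.settings

# Client-side connection that declines server push entirely.
conn = h2.connection.H2Connection()
conn.initiate_connection()
conn.update_settings({h2.settings.SettingCodes.ENABLE_PUSH: 0})

# These bytes (connection preface + SETTINGS frames) would be written to the
# socket; a compliant server must then stop sending PUSH_PROMISE frames.
print(len(conn.data_to_send()))
```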

Header Compression

HTTP/2 splits headers from the data, resulting in separate header and data frames. It also provides a mechanism for compressing the headers using HPACK, which reduces their overall size and transmission cost and improves efficiency.
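A minimal sketch with the Python hpack library (the header values are made up): because HPACK keeps a dynamic index shared between the two sides, a header list that repeats on the next request shrinks to a few index bytes.

```python
from hpack import Decoder, Encoder

encoder = Encoder()
decoder = Decoder()

headers = [
    (":method", "GET"),
    (":path", "/index.html"),
    ("user-agent", "example-client/1.0"),  # illustrative value
    ("cookie", "session=abc123"),          # illustrative value
]

first = encoder.encode(headers)   # full literals, Huffman-coded and indexed
second = encoder.encode(headers)  # repeat request: mostly 1-byte index refs

print(len(first), len(second))    # the second encoding is much smaller
print(decoder.decode(first))      # round-trips back to the header list
```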
