HTTP stands for Hypertext Transfer Protocol, and it is the basis for almost all web applications. More specifically, HTTP is the protocol clients and servers use to request and send information.
If you want to learn about HTTP in detail, you can refer to my post below:
How does HTTP/1.X work? What problems do we have with HTTP/1.X?
As we can see in the above image, with HTTP/1.X the browser first sets up a TCP connection with the server and sends the first request, GET /index.html. The browser then sits idle, waiting until the server responds to that request. Only once the first request completes does the browser start the second request, GET /style.css, and likewise it starts GET /script.js only after GET /style.css completes.
Looking at this process, we can easily see that with HTTP/1.X the browser wastes a lot of time waiting, even though these requests could easily be made in parallel.
HTTP/1.X has a problem called “head-of-line blocking,” where effectively only one request can be outstanding on a connection at a time.
This is the problem that HTTP/2 corrects, as we will see below.
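To put rough numbers on the waiting, here is a back-of-the-envelope comparison; all latencies and transfer times below are made-up illustrative values, not measurements:

```python
# Back-of-the-envelope comparison of sequential vs. multiplexed loading.
# All numbers are invented for illustration.
RTT = 0.05          # 50 ms round trip to the server
DOWNLOAD = {        # hypothetical transfer time per resource
    "index.html": 0.02,
    "style.css": 0.01,
    "script.js": 0.03,
}

# HTTP/1.X on one connection: each request waits for the previous response.
sequential = sum(RTT + t for t in DOWNLOAD.values())

# Idealized multiplexing: one round trip, transfers overlap on the wire.
# (Real bandwidth is shared, but requests no longer queue behind each other.)
multiplexed = RTT + max(DOWNLOAD.values())

print(f"sequential:  {sequential * 1000:.0f} ms")   # 210 ms
print(f"multiplexed: {multiplexed * 1000:.0f} ms")  # 80 ms
```

Even with generous simplifications, the idle round trips dominate the sequential case.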
Solution: HTTP/2
HTTP/2 changes how requests and responses travel on the wire, removing a key limitation of the prior versions of HTTP. HTTP/2 makes a single connection to the server and then “multiplexes” multiple requests over that connection, receiving multiple responses at the same time.
The browser is still using a single connection, but it no longer requests items one at a time. Here we see the browser receive the response headers for Stream 1 (maybe an image), then the response body for Stream 3. Next it starts getting the response headers for Stream 2 before continuing on to Stream 3 or Stream 1.
There is no more “make request; do nothing while waiting; download response” loop. Network connections no longer sit idle while you wait on a single resource to finish downloading. For example, instead of waiting for one image to finish downloading before starting the next, your browser could actually finish downloading image 2 before image 1 even completes.
This also prevents what is known as head-of-line blocking: when a large or slow resource (say, a 1 MB background image) blocks all other resources from downloading until it completes. Under HTTP/1.X, browsers would only download one resource at a time per connection. HTTP/2’s multiplexing approach allows browsers to download smaller resources (say, 5 KB images) in parallel over the same connection and display them as they become available.
With HTTP/2, the process for loading a website with resources such as index.html, style.css, and script.js is shown below:
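The concurrent behavior can be simulated (this is not a real network client; asyncio.sleep stands in for time on the wire, and the durations are invented):

```python
import asyncio

async def fetch(name, transfer_time, done):
    # Simulated download: asyncio.sleep stands in for wire transfer time.
    await asyncio.sleep(transfer_time)
    done.append(name)

async def main():
    done = []
    # With multiplexing, all three requests are in flight at once;
    # the durations are made up for illustration.
    await asyncio.gather(
        fetch("index.html", 0.03, done),
        fetch("style.css", 0.01, done),
        fetch("script.js", 0.02, done),
    )
    return done

order = asyncio.run(main())
print(order)  # ['style.css', 'script.js', 'index.html']
```

Note that the smaller resources finish first, even though index.html was requested first; under HTTP/1.X the completion order would simply be the request order.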
Binary Framing in HTTP/2
The question that comes to mind now is how HTTP/2 achieves multiplexing over one TCP connection. At the core of all of HTTP/2’s performance enhancements is the new binary framing layer, which dictates how HTTP messages are encapsulated and transferred between the client and server.
The “layer” refers to a design choice to introduce a new, optimized encoding mechanism between the socket interface and the higher-level HTTP API. Unlike the newline-delimited plaintext HTTP/1.X protocol, all HTTP/2 communication is split into smaller messages and frames, each of which is encoded in binary format.
HTTP/2 breaks down the HTTP protocol communication into an exchange of binary-encoded frames, which are then mapped to messages that belong to a particular stream, all of which are multiplexed within a single TCP connection. This is the foundation that enables all other features and performance optimizations provided by the HTTP/2 protocol.
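As a concrete illustration, every HTTP/2 frame starts with a fixed 9-byte binary header (defined in RFC 7540, section 4.1). The sketch below packs and parses one such header; the frame values are arbitrary examples:

```python
import struct

# HTTP/2 frame header layout (RFC 7540, section 4.1):
#   24-bit payload length, 8-bit type, 8-bit flags,
#   1 reserved bit + 31-bit stream identifier.
DATA_FRAME = 0x0
HEADERS_FRAME = 0x1

def pack_frame_header(length, frame_type, flags, stream_id):
    # ">I" yields 4 bytes; drop the top byte to get the 24-bit length field.
    return (struct.pack(">I", length)[1:]
            + struct.pack(">BB", frame_type, flags)
            + struct.pack(">I", stream_id & 0x7FFFFFFF))

def unpack_frame_header(header):
    length = int.from_bytes(header[0:3], "big")
    frame_type, flags = header[3], header[4]
    stream_id = int.from_bytes(header[5:9], "big") & 0x7FFFFFFF  # mask reserved bit
    return length, frame_type, flags, stream_id

raw = pack_frame_header(16384, DATA_FRAME, 0x0, 3)
print(len(raw), unpack_frame_header(raw))  # 9 (16384, 0, 0, 3)
```

Because every frame carries its stream identifier, frames from different streams can be interleaved on the wire and reassembled into complete messages on the other side.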
Another great performance benefit of HTTP/2 is the “Server Push” feature: it allows the server to proactively push content to a visitor without the visitor requesting it. For example, when a browser visits your website, your server can actually “push” your logo image down to the browser before the browser even knows it needs it. By proactively pushing needed resources from the server, the browser can load pages much quicker than was previously possible.
Small files load more quickly than large ones. To speed up web performance, both HTTP/1.1 and HTTP/2 compress HTTP messages to make them smaller. However, HTTP/2 uses a more advanced compression method called HPACK that eliminates redundant information in HTTP headers. This shaves a few bytes off every HTTP packet. Given the volume of packets involved in loading even a single webpage, those bytes add up quickly, resulting in faster loading.
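The intuition behind HPACK’s savings can be sketched with a toy model. This is not the real HPACK wire format (which also uses a static table and Huffman coding, per RFC 7541); it only shows why repeated headers shrink once both sides share an index table:

```python
# Toy sketch of HPACK-style indexing (NOT the real HPACK encoding):
# the first time a header is sent, it goes over as a full literal and is
# added to a shared table; later occurrences send only a small index.
def encode(headers, table):
    cost = 0
    for header in headers:  # header is a (name, value) pair
        if header in table:
            cost += 1       # ~1 byte for an index reference
        else:
            cost += len(header[0]) + len(header[1]) + 2  # literal encoding
            table.append(header)
    return cost

table = []
request = [("user-agent", "Mozilla/5.0 (X11; Linux x86_64)"),
           ("accept-encoding", "gzip, deflate, br")]

first = encode(request, table)    # full literals: 77 bytes in this model
second = encode(request, table)   # index references only: 2 bytes
print(first, second)
```

Since headers like User-Agent and Accept-Encoding repeat identically on nearly every request a browser sends, the per-request header cost collapses after the first request.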
HTTP/2 also offers a feature called weighted prioritization. The HTTP/2 standard allows each stream to have an associated weight and dependency:
- Each stream may be assigned an integer weight between 1 and 256.
- Each stream may be given an explicit dependency on another stream.
The combination of stream dependencies and weights allows the client to construct and communicate a “prioritization tree” that expresses how it would prefer to receive responses. In turn, the server can use this information to prioritize stream processing by controlling the allocation of CPU, memory, and other resources, and once the response data is available, allocation of bandwidth to ensure optimal delivery of high-priority responses to the client.
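One simple policy a server could apply is splitting bandwidth among sibling streams in proportion to their weights. The sketch below is a minimal illustration of that rule; the stream IDs and weights are made up:

```python
# Sketch of weight-proportional bandwidth allocation among sibling
# streams: each sibling gets weight / sum(weights) of the shared capacity.
# Stream IDs and weights are invented for illustration.
def allocate(bandwidth, weights):
    total = sum(weights.values())
    return {sid: bandwidth * w / total for sid, w in weights.items()}

# Three sibling streams; the HTML (stream 1) is weighted highest.
shares = allocate(100, {1: 16, 3: 8, 5: 8})  # percent of link capacity
print(shares)  # {1: 50.0, 3: 25.0, 5: 25.0}
```

With dependencies added on top, a stream would only receive bandwidth once the streams it depends on have been served, letting the client express a full priority tree rather than a flat weighting.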
Difference between HTTP/2 and HTTP/1.X
- HTTP/2 is a binary framing protocol, whereas HTTP/1.X sends a textual data stream.
- HTTP/2 is fully multiplexed, whereas HTTP/1.X suffers from head-of-line blocking.
- HTTP/2 compresses headers (via HPACK) to reduce the transmitted data size, whereas HTTP/1.X has no such mechanism.
- HTTP/2 has a server push mechanism, whereas HTTP/1.X does not.
That is all for this post. If you have any thoughts or find anything incorrect, please leave a comment in the comment box below.