
What You Need To Know About HTTP/3



HTTP/3 is the next generation of the HTTP protocol. It’s powered by QUIC, which replaces TCP at the transport layer and cuts down on the number of round trips a client must make to establish a connection.

What Makes It Better?

If you can’t tell from the acronym “QUIC,” HTTP/3 is much faster.

HTTP is just one layer of the OSI model, the conceptual stack that describes how the internet as we know it works. Each layer of the model serves a different purpose, from high-level APIs like HTTP at the very top (the application layer) all the way down to the physical wires and connections that plug into routers.


But there’s a bottleneck in this model—and despite the new name, the HTTP standard itself isn’t the problem.

TCP (the transport layer) is the culprit here; it was designed back in the ’70s, and as such was not built to handle real-time communication very well. HTTP-over-TCP has reached its limit. Google and the rest of the tech space have been working on a replacement for TCP.

In 2012, Google created SPDY, a protocol built on top of TCP that fixed many of its common issues. SPDY itself is deprecated, but parts of it made their way into HTTP/2, which is currently used by 40% of the web.

QUIC is a new standard, much like SPDY, but it’s built on top of UDP rather than TCP. UDP is much faster than TCP, but is generally less reliable as it doesn’t have the same error checking and loss prevention as TCP does. It’s commonly used in applications that don’t require packets to be in the exact right order, but care about latency (such as live video calling).
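
To make that difference concrete, here is a minimal Python sketch (using example.com purely as a stand-in host): the TCP socket has to finish its handshake before any application data can move, while the UDP socket simply fires off a datagram with no connection setup at all.

import socket

# TCP: connect() completes a handshake with the server before any
# application data can flow, costing at least one round trip up front.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.settimeout(5)
tcp.connect(("example.com", 80))
tcp.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
print(tcp.recv(64))
tcp.close()

# UDP: no handshake -- the datagram is simply sent. Delivery and
# ordering are not guaranteed; that's the gap QUIC fills itself.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ("example.com", 9999))
udp.close()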

QUIC is still reliable, but it implements its error checking and reliability on top of UDP, so it gets the best of both protocols. The first time a user connects to a QUIC-enabled site, they'll do so over TCP; the server uses that first connection to advertise its QUIC support, and later connections switch over.
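
As a rough illustration of what "reliability on top of UDP" means (a toy sketch, not QUIC's actual loss-recovery logic), a sender can number each datagram and retransmit until the receiver acknowledges it. The 127.0.0.1:5000 peer here is a hypothetical receiver that echoes the sequence number back.

import socket

PEER = ("127.0.0.1", 5000)  # hypothetical receiver that echoes the sequence number

def send_reliably(sock, seq, payload, retries=3, timeout=0.5):
    # Prefix the payload with a sequence number, then resend until acknowledged.
    sock.settimeout(timeout)
    packet = seq.to_bytes(4, "big") + payload
    for _ in range(retries):
        sock.sendto(packet, PEER)
        try:
            ack, _ = sock.recvfrom(4)
            if int.from_bytes(ack, "big") == seq:
                return True          # peer confirmed receipt
        except socket.timeout:
            continue                 # assume the packet was lost; try again
    return False

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
print(send_reliably(sock, 1, b"hello over unreliable UDP"))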

The main problem with TCP that QUIC fixes is head-of-line blocking. Once a connection is made between server and client, the server sends data packets to the client. If the connection is bad and one packet is lost, the client withholds all packets received after that until the server retransmits the lost packet. HTTP/2 mitigates this somewhat by multiplexing multiple transfers over the same TCP connection, but a single lost packet still stalls every stream on that connection, so on high-loss connections it can actually be slower than HTTP/1.
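
The effect is easy to picture with a small sketch of an in-order receiver, which is how TCP hands data to the application: packets that arrive after a gap just sit in the buffer until the missing one is retransmitted.

def deliver_in_order(arrivals):
    # A TCP-like receiver: data is only released to the application
    # in sequence, so everything behind a missing packet waits.
    expected, buffer, delivered = 0, {}, []
    for seq, data in arrivals:
        buffer[seq] = data
        while expected in buffer:
            delivered.append(buffer.pop(expected))
            expected += 1
    return delivered

# Packet 1 is lost and only arrives at the end, via retransmission;
# packets 2 and 3 are stuck behind it the whole time.
print(deliver_in_order([(0, "a"), (2, "c"), (3, "d"), (1, "b")]))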

QUIC fixes this issue, and deals with high-loss connections much better. Early tests from Google showed improvements of around 15% in high-latency scenarios, and up to 30% improvements in video buffering on mobile connections. Because QUIC cuts down on the number of handshakes that must be made, there will be latency improvements across the board.

Is It Hard To Implement?

While QUIC is a new standard, it's built on top of UDP, which is already supported nearly everywhere. It won't require new kernel updates, which can be a problem to roll out on servers. QUIC should work out of the box on any system that supports UDP.

HTTP-over-QUIC should be a drop-in replacement for HTTP-over-TCP once it’s readily available. At the time of writing, Chrome has support for QUIC, but it’s disabled by default. You can enable it for testing by going to:

chrome://flags

and turning on the “Experimental QUIC protocol” flag. Firefox will add support later this fall, and with Edge moving to Chromium, it will pick up support soon as well.

On the server end, if you’re using CloudFlare as your CDN, you can already enable the option in your dashboard, though you won’t have many clients actually using it until mobile browsers have it on by default. Fastly is actively working on support. If you want to enable it on your own web server, though, you’ll have to wait a bit: early support for QUIC is slated to arrive during the nginx 1.17 development cycle, but Apache support is nowhere in sight just yet.
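
If you've already flipped the switch on your CDN, a quick sanity check is to look for the Alt-Svc response header, which is how QUIC-enabled servers advertise HTTP/3. The Python sketch below still talks to the server over TCP; it only reads the advertisement. Swap in your own site's URL.

import urllib.error
import urllib.request

url = "https://www.cloudflare.com/"   # replace with your own site
req = urllib.request.Request(url, method="HEAD")
try:
    resp = urllib.request.urlopen(req, timeout=10)
except urllib.error.HTTPError as err:
    resp = err                        # error responses still carry headers
print("Alt-Svc:", resp.headers.get("Alt-Svc", "not advertised"))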

Once nginx and Apache are updated to support it, adding QUIC to your webpage or web app will be as simple as updating your web server and enabling the option. You won’t have to make any changes to your app or your code, as everything is handled at the infrastructure level. It’s not here yet, but it’s coming very soon, and you will definitely want to enable it once it’s supported by default.


