RPC must handle lost, corrupted, and duplicated messages; it needs an ID space to match requests with responses; and it should support message segmentation and reassembly, to name a few requirements. Out-of-order delivery, which a reliable byte stream goes to great lengths to prevent, is also perfectly acceptable for RPC. This is probably part of the reason so many RPC frameworks were born in the 1980s and 1990s: people building distributed systems needed an RPC mechanism, but there was no readily available standard for one in the TCP/IP protocol suite. (RFC 1045 did define an experimental RPC-oriented transport, but it never caught on.) Nor was it clear at the time that TCP/IP would become as dominant as it is today, which is why some RPC frameworks (such as DCE) were designed to be independent of the underlying network protocol.
The lack of RPC support in the TCP/IP stack laid the foundation for QUIC.
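To make those transport requirements concrete, here is a minimal, purely illustrative sketch (all names and field layouts are hypothetical, not taken from any real RPC framework) of the sort of header an RPC layer must carry when it runs directly over datagrams: a message ID to match responses with requests and to detect duplicates, plus fragment fields for segmentation and reassembly. A request whose ID is still pending after a timeout would simply be retransmitted.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// rpcHeader is a hypothetical wire header showing what an RPC transport
// must carry when it cannot lean on TCP: an ID to match responses to
// requests (and to discard duplicates), plus fragment fields for
// segmenting and reassembling messages larger than one datagram.
type rpcHeader struct {
	MsgID     uint32 // matches a response to its request; detects duplicates
	FragNo    uint16 // which fragment of the message this datagram carries
	FragCount uint16 // total fragments, for reassembly
}

func (h rpcHeader) marshal() []byte {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint32(buf[0:4], h.MsgID)
	binary.BigEndian.PutUint16(buf[4:6], h.FragNo)
	binary.BigEndian.PutUint16(buf[6:8], h.FragCount)
	return buf
}

func unmarshal(buf []byte) rpcHeader {
	return rpcHeader{
		MsgID:     binary.BigEndian.Uint32(buf[0:4]),
		FragNo:    binary.BigEndian.Uint16(buf[4:6]),
		FragCount: binary.BigEndian.Uint16(buf[6:8]),
	}
}

func main() {
	// pending maps outstanding request IDs to the call waiting on them.
	pending := map[uint32]string{42: "getAttr(/home/user)"}

	// A response datagram arrives (possibly out of order, possibly a duplicate).
	resp := unmarshal(rpcHeader{MsgID: 42, FragNo: 0, FragCount: 1}.marshal())

	if call, ok := pending[resp.MsgID]; ok {
		fmt.Printf("response %d matches request %q\n", resp.MsgID, call)
		delete(pending, resp.MsgID) // a later duplicate will now be ignored
	} else {
		fmt.Println("duplicate or unknown response, dropped")
	}
}
```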
When HTTP came along in the early 1990s, it wasn’t trying to solve the RPC problem; it was trying to solve the information-sharing problem. But it did implement request/response semantics. The HTTP designers chose to run HTTP over TCP, apparently for lack of better options, and early versions were notorious for poor performance because a new TCP connection was opened for every “GET”.
Various changes were made to HTTP to improve its performance, such as pipelining, persistent connections, and the use of parallel connections, but TCP’s reliable byte-stream model was never a good fit for HTTP.
With the introduction of the Transport Layer Security (TLS) protocol, which added its own back-and-forth exchange of cryptographic information, the mismatch between what HTTP needed and what TCP provided became increasingly evident. This is well explained in Jim Roskind’s 2012 QUIC design paper: head-of-line blocking, poor congestion responses, and the additional RTTs introduced by TLS were all identified as inherent problems with HTTP over TCP.
One way to frame what happened here: the “narrow waist” of the Internet was originally just the Internet Protocol, intended to support the various protocols above it. But somehow the waist came to include TCP and UDP as well, because those were the only transports available. If you just need a datagram service, UDP will do. If you need some form of reliable delivery, TCP is the answer. But if you want something that doesn’t map neatly onto either unreliable datagrams or a reliable byte stream, you’re out of luck. So many higher-level protocols ended up being forced onto TCP even though it wasn’t quite what they needed.
QUIC does a lot. Its definition spans three RFCs, covering the base protocol (RFC 9000), its use of TLS (RFC 9001), and its congestion control mechanisms (RFC 9002). But at its heart, it is an implementation of the Internet’s missing third paradigm: RPC.
If you really do need a reliable stream of bytes, say for downloading a multi-gigabyte OS update, TCP serves well. But HTTP(S) looks much more like RPC than like a reliable byte stream, and one way to look at QUIC is as finally bringing the RPC model into the IP suite.
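To illustrate that framing with a small sketch (the URL is just a placeholder): from the application’s point of view, an HTTPS fetch already behaves like a remote procedure call, one bounded request and one bounded reply, with the reliable byte stream underneath acting only as plumbing.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// From the application's point of view this is a remote procedure call:
	// send one self-contained request, wait for one self-contained reply.
	// Whether the transport is a byte stream is an implementation detail.
	resp, err := http.Get("https://example.com/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("status=%s, %d bytes returned\n", resp.Status, len(body))
}
```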
That is a clear benefit to the applications that run over HTTP(S), including gRPC and all the RESTful APIs we depend on.
When I wrote about QUIC earlier, I said it was a good case study in how to rethink a layered system as requirements become more explicit (the requirements of HTTP rather than those of a generic byte stream), with congestion control algorithms continuing to evolve to meet those requirements.
QUIC actually meets quite a diverse set of requirements. And since HTTP is so central to today’s Internet that it has been said (here and here) to have become the new “narrow waist”, QUIC may well become the dominant transport protocol, because it meets the needs of the most important applications. ®