
APIs, or application programming interfaces, define and implement rules that allow applications to communicate and interact with one another. An API specifies the types of calls and requests that one application can make to another, how to make those requests, the data formats that must be used, and the conventions that clients must adhere to.

REST (Representational State Transfer) and gRPC (gRPC Remote Procedure Calls) are two distinct architectural styles for constructing APIs.

RPC (Remote Procedure Call) is the oldest and most traditional API style. It requests a service from a remote server using procedure calls. RPC APIs are difficult to integrate and risk leaking internal implementation details.

REST was introduced in 2000 to address these issues and make APIs more accessible. It provided a consistent way to access data indirectly, through resources, using generic HTTP methods such as GET, POST, PUT, and DELETE. Because the resources and verbs are predictable, REST APIs are largely self-explanatory. This was the primary distinction between RPC and REST: RPC exposes procedures, and it is difficult to predict which procedures may exist on different systems.

gRPC modernizes the old RPC design method by making it interoperable, modern, and efficient.

The REST API has been a pillar of web programming for a long time. However, gRPC has recently begun to encroach on its territory. It turns out that there are some very good reasons for that. In this tutorial, we will learn about the gRPC API and how it differs from the REST API.

Protobuf vs. JSON

One of the biggest differences between REST and gRPC is the format of the payload. REST messages typically contain JSON. This is not a strict requirement, and in theory, you can send a response in any format, but in practice the whole REST ecosystem is focused on JSON. It is safe to say that, with very few exceptions, REST APIs accept and return JSON. 

gRPC, on the other hand, accepts and returns Protobuf messages. Protocol Buffers (Protobuf) is a binary data serialization format developed by Google. It stores structured data efficiently and compactly, allowing for faster transfer over network connections. JSON, by contrast, is a textual format. JSON contains only messages and no schema, whereas Protobuf pairs its messages with a schema and a set of rules that define them. You can compress JSON, but then you lose the benefit of a textual format that you can easily inspect.


    {
        "Customer" : {
            "name" : "Jane Doe",
            "id" : 1001,
            "email" : ""
        }
    }

    message Customer {
      required string name = 1;
      optional int32 id = 2;
      optional string email = 3;
    }

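The size difference is easy to demonstrate. The sketch below encodes the customer record above as JSON and as a hand-rolled binary packing (length-prefixed strings plus a fixed-width integer). The binary layout is invented for illustration — real Protobuf uses field tags and varints — but it shows why dropping the field names and textual syntax shrinks the payload:

```python
import json
import struct

# The same customer record as in the examples above.
customer = {"name": "Jane Doe", "id": 1001, "email": ""}

# Textual encoding: JSON repeats every field name inside the message.
as_json = json.dumps(customer).encode("utf-8")

# Illustrative binary encoding: length-prefixed strings and a 4-byte int,
# no field names on the wire. (Not real Protobuf wire format.)
name = customer["name"].encode("utf-8")
email = customer["email"].encode("utf-8")
as_binary = struct.pack(
    f"<B{len(name)}sIB{len(email)}s",
    len(name), name, customer["id"], len(email), email,
)

print(len(as_json), len(as_binary))  # → 45 14
```

Even for this tiny record, the binary form is roughly a third of the JSON size, and the gap grows with repeated and nested fields.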
HTTP/2 vs. HTTP 1.1

Let's compare the transfer protocols used by REST and gRPC. As stated previously, REST relies heavily on HTTP (typically HTTP 1.1) and the request-response model. In contrast, gRPC uses the more recent HTTP/2 protocol. HTTP (Hypertext Transfer Protocol) is a top-level application protocol that exchanges information between a client computer and a local or remote web server. It has been in use on the web since the early 1990s.

There are several problems that plague HTTP 1.1 that HTTP/2 fixes. Here are the significant ones:

HTTP 1.1 Is Too Big and Complicated

HTTP 1.0 RFC 1945 is a 60-page RFC. HTTP 1.1 was originally described in RFC 2616, which ballooned up to 176 pages. However, later the IETF split it up into six different documents—RFC 7230, 7231, 7232, 7233, 7234, and 7235—with an even higher combined page count. HTTP 1.1 allows for many optional parts that contribute to its size and complexity.

The Growth of Page Size and Number of Objects

The trend for web pages is to increase both the total size of the page (1.9 MB on average) and the number of objects on the page that require individual requests. Since each object requires a separate HTTP request, this multiplication of separate objects increases the load on web servers significantly and slows down page load times for users.

Latency Issues

HTTP 1.1 is sensitive to latency, the amount of time it takes for a packet of data to travel from one location to another. For each request, a TCP handshake is needed, and the more requests there are, the longer it takes for a page to load. HTTP/2 outperforms HTTP/1.1 on high-latency connections because its binary framing and header compression make communication faster and require fewer round trips. The ongoing improvement in available bandwidth doesn't solve these latency issues in most cases.
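A back-of-the-envelope model makes the round-trip cost concrete. The numbers below are invented for illustration, not measurements:

```python
# Toy latency model: each queued request costs roughly one round trip.
rtt_ms = 100     # round-trip time on a high-latency link (invented)
objects = 30     # separate objects the page needs (invented)
connections = 6  # typical HTTP 1.1 browser limit per hostname

# HTTP 1.1: objects queue up across a handful of connections,
# so latency is paid once per queue slot.
http11_ms = (objects / connections) * rtt_ms

# HTTP/2: one multiplexed connection with requests in flight concurrently,
# so latency is paid roughly once.
http2_ms = rtt_ms

print(http11_ms, http2_ms)  # → 500.0 100
```

The model ignores bandwidth entirely, which is the point: adding bandwidth doesn't change either figure, but cutting round trips does.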

Head of Line Blocking

Head of Line blocking refers to the fact that each client has a limited number of TCP connections to a server and must wait for the previous request on the same connection to complete before making a new request. The number of connections per hostname used to be limited to two, but it is now between 6 and 8.

HTTP/1.1 introduced the "pipelining" feature, which allows a client to send multiple HTTP requests over the same TCP connection without waiting for each response, creating a queue. However, HTTP/1.1 still requires responses to arrive in order, so pipelining does not completely solve the HOL problem: if a request gets stuck behind a slow one, every response queued after it is delayed.

There are other concerns as well, such as performance and resource penalties in managing the queues. As a result, HTTP pipelining is still not widely enabled today.

HTTP/2 addresses the HOL problem by multiplexing requests over the same TCP connection, allowing a client to make multiple requests to a server without having to wait for the previous ones to complete, as responses can arrive in any order.

How HTTP/2 Addresses the Problems

HTTP/2, which came out of Google's SPDY protocol, maintains the basic premises and paradigms of HTTP:

  • request-response model over TCP
  • resources and verbs
  • http:// and https:// URL schemes

But the optional parts of HTTP 1.1 were removed.

Because HTTP/2 shares its URL schemes with HTTP 1.1, an Upgrade header is used to negotiate which protocol to speak. Additionally, HTTP/2 is binary! If you've been around internet protocols, then you know that textual protocols are considered king because they are easier for humans to troubleshoot and construct requests with manually. But, in practice, most servers today use encryption and compression anyway. The binary framing goes a long way toward reducing complexity compared with parsing HTTP 1.1's textual messages.

However, the major improvement of HTTP/2 is that it uses multiplexed streams. A single HTTP/2 TCP connection can support many bidirectional streams. These streams can be interleaved (no queuing), and multiple requests can be sent at the same time without a need to establish new TCP connections for each one. Servers can also now push notifications to clients via the established connection (HTTP/2 push).
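The interleaving can be pictured with a toy sketch, where plain Python lists stand in for HTTP/2 frames and no real networking is involved:

```python
from itertools import zip_longest

# Frames from two logical streams sharing one TCP connection.
stream_a = [("A", i) for i in range(3)]  # 3 frames on stream A
stream_b = [("B", i) for i in range(2)]  # 2 frames on stream B

# Round-robin interleaving: neither stream has to wait for the
# other to finish before its frames go on the wire.
wire = [frame
        for pair in zip_longest(stream_a, stream_b)
        for frame in pair
        if frame is not None]

print(wire)
# → [('A', 0), ('B', 0), ('A', 1), ('B', 1), ('A', 2)]
```

Contrast this with HTTP 1.1, where all of stream A's responses would have to complete before stream B's could begin on the same connection.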

Messages vs. Resources and Verbs

REST is an interesting API. It is built very tightly on top of HTTP. It doesn't just use HTTP as a transport; it embraces all of its features and builds a consistent conceptual framework on top of them. In theory, it sounds great. In practice, it's been very difficult to implement REST properly. 

REST APIs have been and still are very successful, but most implementations don't fully follow the REST philosophy and only use a subset of its principles. The reason is that it's quite challenging to map business logic and operations into the strict REST world.

The conceptual model used by gRPC is to have services with clear interfaces and structured messages for requests and responses. This model is directly translated from programming language concepts such as interfaces, functions, methods, and data structures. It also allows gRPC to automatically generate client libraries for you. 
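For example, a gRPC service contract might look like the following sketch, in the same Protobuf style as the Customer message earlier (the service and method names are invented for illustration):

```protobuf
// A hypothetical service contract; code generators produce client
// stubs and server skeletons directly from this interface.
service CustomerService {
  // Unary call: one request message in, one response message out.
  rpc GetCustomer (GetCustomerRequest) returns (Customer);
}

message GetCustomerRequest {
  optional int32 id = 1;
}
```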

Streaming vs. Request-Response

REST supports only the request-response model available in HTTP 1.x. gRPC, however, takes full advantage of the capabilities of HTTP/2 and enables the continuous streaming of data. It enables real-time communication through binary framing, which divides each stream into frames that can be prioritized and run over a single TCP connection, reducing network utilization and processing load.

There are several types of streaming:

  1. server-side streaming
  2. client-side streaming
  3. bidirectional streaming

Server-Side Streaming

In server-side streaming, the client sends a single request to the server and receives a stream from which to read a sequence of messages. The client reads from the returned stream until there are no more messages. After sending all of its responses, the server sends its status details and optional trailing metadata to complete its side; the client completes once it has received all of the server's responses.

Client-Side Streaming

In client-side streaming, the client creates a stream containing a sequence of messages and sends it to the server. When the client has finished writing the messages, it waits for the server to read them and return a single response with its status details and optional trailing metadata.

Bidirectional Streaming

In bidirectional streaming, both the client and the server send a series of messages via a read-write stream. The two streams function independently, allowing clients and servers to read and write in any order they choose. For instance, the server could wait until all client messages are received before writing its responses, or it could alternately read a message and then write a message, or perform some other combination of reads and writes.

In this scenario, the client and the server send information to each other in pretty much free form. The client usually initiates the bidirectional streaming and also closes the connection.
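The three shapes can be mimicked with plain Python generators — no gRPC involved, this only illustrates the message flow in each direction:

```python
def server_streaming(count):
    # One request in, a stream of responses out.
    for i in range(count):
        yield f"chunk-{i}"

def client_streaming(requests):
    # A stream of requests in, a single response out.
    return sum(requests)

def bidirectional(requests):
    # Read a message, write a message, alternating.
    for message in requests:
        yield message * 2

print(list(server_streaming(3)))    # → ['chunk-0', 'chunk-1', 'chunk-2']
print(client_streaming([1, 2, 3]))  # → 6
print(list(bidirectional([1, 2])))  # → [2, 4]
```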

Strong Typing vs. Serialization

The REST paradigm doesn't mandate any structure for the exchanged payload. It is typically JSON. Consumers don't have a formal mechanism to coordinate the format of requests and responses. On both the server side and the client side, the JSON needs to be serialized and translated into the programming language that will be used. Serialization is another step in the chain that introduces the possibility of errors as well as performance overhead. 

The gRPC service contract has strongly typed messages that are converted automatically from their Protobuf representation to your programming language of choice both on the server and on the client.

On the other hand, JSON is more flexible in theory because you can send dynamic data and don't have to follow a strict structure. 
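The validation that a JSON consumer must write by hand — and that a generated gRPC stub does for you — can be sketched like this (the schema and field names are invented for illustration):

```python
import json

# A hand-written schema check; a gRPC stub generates the
# equivalent type enforcement from the .proto contract.
CUSTOMER_SCHEMA = {"name": str, "id": int, "email": str}

def decode_customer(payload: bytes) -> dict:
    data = json.loads(payload)
    for field, expected in CUSTOMER_SCHEMA.items():
        if not isinstance(data.get(field), expected):
            raise TypeError(f"{field!r}: expected {expected.__name__}")
    return data

ok = decode_customer(b'{"name": "Jane Doe", "id": 1001, "email": ""}')

try:
    # A common real-world slip: the id arrives as a string.
    decode_customer(b'{"name": "Jane Doe", "id": "1001", "email": ""}')
except TypeError as err:
    print(err)  # → 'id': expected int
```

Without the explicit check, the stringly-typed `id` would flow silently into the application and fail somewhere far from the decoding step.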

The gRPC Gateway

Support for gRPC in the browser is not as mature. Today, gRPC is used primarily for internal services, which are not exposed directly to the world. 

If you want to consume a gRPC service from a web application or from a language not supported by gRPC, then gRPC offers a REST API gateway to expose your service. The gRPC gateway plugin generates a full-fledged REST API server with a reverse proxy and Swagger documentation. 

Most of the benefits of gRPC are lost with this method, but if you need to give access to an existing service, you can do so without having to implement your service twice.
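With the grpc-gateway plugin, the REST mapping is declared as an option on the RPC itself, along these lines (the service and message names are invented; the `google.api.http` option is the annotation the plugin reads):

```protobuf
import "google/api/annotations.proto";

service CustomerService {
  rpc GetCustomer (GetCustomerRequest) returns (Customer) {
    // Expose this RPC as GET /v1/customers/{id} on the generated
    // REST reverse proxy.
    option (google.api.http) = {
      get: "/v1/customers/{id}"
    };
  }
}
```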


In the world of internal microservices, gRPC will become dominant very soon, and it is considered by some to be the API of the future. The performance benefits and ease of development are just too good to pass up. However, REST APIs will still be around for a long time. REST still excels for publicly exposed APIs, and backward compatibility matters.

This post has been updated with contributions from Mary Okosun.
