
Mastering gRPC in Node.js: A Comprehensive Guide in 2024

The goal of this article is to give you, the reader, a chance to build a gRPC channel from scratch. We will cover:

  • Remote Procedure Calls
  • Node.js EventEmitters
  • Google’s Protocol Buffers
  • How to write your own gRPC channel using these pieces
    1. Load the protobuf into JavaScript
    2. Write a chattyMicroservice client (to invoke RPCs)
    3. Write a server (to execute RPCs)
    4. Run the files!

And before we begin, note that the file structure we will be using in this article looks like the following:

  • your_workspace_folder/
    • proto/
      • exampleAPI.proto
      • package.js
    • clients/
      • chattyMicroservice.js
    • servers/
      • server.js

Introduction to gRPC

gRPC is a high-performance Remote Procedure Call (RPC) framework that allows developers to build scalable and efficient APIs. Developed by Google, gRPC is now widely used in production environments due to its robustness and efficiency. At its core, gRPC uses Protocol Buffers (protobuf) as its interface definition language (IDL) and message format. Protocol Buffers provide a flexible and efficient way to define and serialize data, making it easier to build and maintain APIs.

In this tutorial, we will explore how to use gRPC with Node.js to build a scalable and efficient API. By leveraging the power of gRPC and Node.js, you can handle millions of requests per minute with ease. Let’s dive in and see how it’s done!

Remote Procedure Calls

Imagine a computer wants to invoke a function, and then execute that function on a different computer! This is a Remote Procedure Call, or RPC - one computer invokes or calls a function, and another computer executes that function.

How in the world can one computer remotely call a function on another computer? Generally, a channel is set up between the invoker and the executor. This channel relays a request message to alert the executor that a function should be run; the executor runs the function and returns the result to the invoker as a response message.

To review: for an RPC to complete, one computer invokes the RPC, the remote computer executes it, and a response message is returned to the invoker.

Node.js EventEmitters

Node.js features a non-blocking, event-driven model, which pairs naturally with gRPC implementations in Node.js. A key aspect of this event-driven model is the frequent use of EventEmitters. Simply put, an EventEmitter object binds together the ability to emit (write) events and the ability to run functionality upon receiving a subscribed-to event. Essentially, each EventEmitter can both listen and speak, or rather, read and write.

The two methods we rely on most are .on and .write. The .on method registers an event listener: it maps an event name (a string) to a queue of callback functions to run whenever that named event is heard. The .write method comes from Node.js streams (which are themselves EventEmitters); calling it emits a 'data' event, and any EventEmitter listening for 'data' will hear it and run its callbacks with the written message.
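To make this concrete, here is a minimal sketch, using nothing but Node's built-in stream module (no gRPC yet), of .write on one end producing a 'data' event that .on picks up on the other. The variable names and message shape are just for illustration.

// sketch.js - a minimal .on / .write demonstration, no gRPC involved
const { PassThrough } = require('stream');

// an object-mode stream is an EventEmitter that can also read and write
const channel = new PassThrough({ objectMode: true });

// .on registers a callback to run every time a 'data' event is heard
channel.on('data', message => {
  console.log('heard:', message);
});

// .write emits a 'data' event carrying the message to any listener
channel.write({ requests: 1, responses: 0 });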

Protocol Buffers: An Overview

In Google’s own words, “Protocol buffers are a flexible, efficient, automated mechanism for serializing structured data – think XML, but smaller, faster, and simpler. You define how you want your data to be structured once, then you can use special generated source code to easily read and write your structured data to and from a variety of data streams.”

A Protocol Buffer, or protobuf, is a way to define all the data that your entire application or distributed system will be sending and receiving. A protobuf acts as a schema for all of your services. In our proto3 protobuf below, we define the name of a package, the services within that package, the RPCs in each service, and the messages used by each RPC.

// proto/exampleAPI.proto
syntax = "proto3";

// define the name of the package
package exampleAPI;

// define the name of the service(s)
service ChattyMicroservice {
  // define the rpc method and what it returns
  //            invocation                 execution
  rpc BidiMath (stream Benchmark) returns (stream Benchmark);
}

// define the name of the message(s)
message Benchmark {
  // define the type and the index of the field
  // note that protobuf message fields are not 0-indexed, but start at 1!
  double requests = 1;
  double responses = 2;
}
In our example, the RPC method BidiMath is fully bidirectional: both the client and the server stream Benchmark messages to each other.

The Benchmark message will arrive in JavaScript as an Object with the properties requests and responses. The values of requests and responses are doubles, so they can hold potentially very large numbers.

In some cases, an empty message can be used as a parameter for RPC methods that do not require additional data.
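Concretely, once loaded into JavaScript, a Benchmark message surfaces as a plain object whose property names match the field names; the very first message our client writes later in this tutorial looks exactly like this:

// a Benchmark message as seen from JavaScript
const benchmark = { requests: 1, responses: 0 };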

Defining the Service Definition

The first step in building a gRPC API is to define the service definition. This is done using a .proto file, which contains the interface definition language (IDL) for the service. The .proto file defines the service methods, request and response messages, and any other relevant details.

Here is an example of a simple .proto file:

// greeter.proto
syntax = "proto3";

package example;

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloResponse) {}
}

message HelloRequest {
  string name = 1;
}

message HelloResponse {
  string message = 1;
}

In this example, we define a service called Greeter with a single method, SayHello, that takes a HelloRequest message as input and returns a HelloResponse message. The HelloRequest message contains a single field, name, and the HelloResponse message contains a single field, message. This simple service definition will serve as the foundation for our gRPC API. With our service definition in place, let's now move on to setting up our Node.js project to build the gRPC channel.

Setting up the Project

To set up a new gRPC project in Node.js, we need to install the necessary dependencies. We will use the @grpc/proto-loader package to load the .proto file and generate the necessary code. We will also use the grpc package to create the gRPC server and client.

First, let's install the dependencies:

npm install --save @grpc/proto-loader grpc

(Note that the grpc package has since been deprecated in favor of the pure-JavaScript @grpc/grpc-js package. This article uses the legacy grpc API; @grpc/grpc-js is nearly identical, the main difference for our code being that its Server exposes bindAsync instead of bind.)

Once the dependencies are installed, we can create a new file called greeter.proto and add the service definition to it. This file will contain the same content as the example .proto file we defined earlier.
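If you would like to see the full loop on this simpler service before building the streaming channel, here is a minimal, self-contained sketch of a unary SayHello server and client in one file. The file name greeter_demo.js and the port 50051 are arbitrary choices for illustration, and the sketch assumes greeter.proto sits in the same folder.

// greeter_demo.js - a minimal unary Greeter sketch (file name and port are placeholders)
const grpc = require('grpc');
const { loadSync } = require('@grpc/proto-loader');

// compile and load greeter.proto, then pull the example package off the descriptor
const definition = loadSync(__dirname + '/greeter.proto');
const { Greeter } = grpc.loadPackageDefinition(definition).example;

// server side: route the SayHello RPC to a handler function
const server = new grpc.Server();
server.addService(Greeter.service, {
  // unary handlers receive the call and a callback for the single response
  SayHello: (call, callback) => {
    callback(null, { message: `Hello, ${call.request.name}!` });
  },
});
server.bind('0.0.0.0:50051', grpc.ServerCredentials.createInsecure());
server.start();

// client side: a Stub bound to the same address invokes the RPC
const client = new Greeter('localhost:50051', grpc.credentials.createInsecure());
client.SayHello({ name: 'gRPC' }, (err, response) => {
  if (err) return console.error(err);
  console.log(response.message); // logs: Hello, gRPC!
});

Running node greeter_demo.js should log Hello, gRPC! once the round trip completes.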

Tutorial: How to Write a Node.js gRPC Channel

Load the protobuf into JavaScript

Now that we have our protobuf written, we have to do the work of loading this protobuf into our desired target language – in this case, JavaScript. Just like any other file, the .proto files must be compiled into JavaScript before we can interact with them meaningfully.

But before we do that, we need to install our two dependencies. Open up a new workspace in your favorite IDE with Node.js installed, and let’s run the terminal commands:

npm init

npm install grpc @grpc/proto-loader

Now that we have the two npm libraries installed and saved in our package.json, let's write the JavaScript to synchronously load our protobuf: we load the package definition into a descriptor Object, and from that descriptor Object we extract the one package we wrote, named exampleAPI.

// proto/package.js

const { loadSync } = require('@grpc/proto-loader');
const { loadPackageDefinition } = require('grpc');

const PROTO_PATH = __dirname + '/exampleAPI.proto';
const CONFIG_OBJECT = {
  // represents 64-bit `long` values as Numbers rather than Strings
  // (double fields, like our Benchmark's, already arrive as plain JavaScript Numbers)
  longs: Number,
};

// synchronously compiles and loads the .proto file into a definition
const definition = loadSync(PROTO_PATH, CONFIG_OBJECT);

// generates a descriptor Object from the loaded API definition
const descriptor = loadPackageDefinition(definition);

// the descriptor Object contains a lot of data; all we need is the package
const package = descriptor.exampleAPI;

// export the package we named in the .proto file
module.exports = package;

And now that we have exported that package, the rest of our local codebase can require it, or import it, wherever necessary.
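For example, from either the clients or servers folder in our file structure, a file can grab the ChattyMicroservice constructor off the exported package like so:

// e.g. inside clients/chattyMicroservice.js or servers/server.js
const { ChattyMicroservice } = require('../proto/package.js');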

Create a chattyMicroservice client (to invoke RPCs)

Now we can begin implementing our chattyMicroservice gRPC service which will ping-pong our eventual server with RPCs. The secret to using RPCs in Node.js - as you may have guessed - is EventEmitters!

In our chattyMicroservice.js file below, we first import two dependencies: the credentials Object from the 'grpc' module, and the ChattyMicroservice service from the package we just exported in package.js. Next, we construct a gRPC channel Stub by invoking new ChattyMicroservice(), passing it the server address to bind to and a credentials security level. Finally, we invoke our RPC to get an EventEmitter and write out the logic for it to 'ping pong': we want this bidiClientEventEmitter to volley a Benchmark message back and forth with the Server, incrementing the requests and responses on each return.

// clients/chattyMicroservice.js

const { credentials } = require('grpc');
const { ChattyMicroservice } = require('../proto/package.js');

// the Stub is constructed from the package.ServiceName()
// the Stub has on it every RPC method
// the Stub is one half of a gRPC channel
const Stub = new ChattyMicroservice(
  // binds it to the Server address
  'localhost:3000',
  // defines the security level
  credentials.createInsecure(),
);

// RPC invocations
/* the Stub has every RPC method, each of which, when invoked, returns an EventEmitter
with the ability to write messages to the server at the bound address - in this case,
'localhost:3000' - and listen for returned messages from that server. Also in this case,
it is a bidirectional EventEmitter, able to both listen and write continuously. */
const bidiClientEventEmitter = Stub.BidiMath();

// let's initialize some mutable variables
let start;
let current;
let perResponse;
let perSecond;

// the Client must write the first message to the Server
bidiClientEventEmitter.write({ requests: 1, responses: 0 });

// adds a listener for metadata - metadata is sent only once, at the beginning of a channel
bidiClientEventEmitter.on('metadata', metadata => {
  // highly accurate Node nanosecond timer, converted to an integer with Number()
  start = Number(process.hrtime.bigint());
  // returns the special Metadata object as a plain Object
  console.log(metadata.getMap());
});

// adds a listener for errors
bidiClientEventEmitter.on('error', err => console.error(err));

/* adds a listener for message data; the Benchmark message received is passed to the callback,
and the callback is run on every message received */
bidiClientEventEmitter.on('data', benchmark => {
  // writes a message to the Server
  bidiClientEventEmitter.write(
    // properties match the message fields for Benchmark
    {
      requests: benchmark.requests + 1,
      responses: benchmark.responses,
    }
  );

  // console logs every 100,000 responses
  if (benchmark.responses % 100000 === 0) {
    // highly accurate Node nanosecond timer, converted to an integer with Number()
    current = Number(process.hrtime.bigint());
    // nanoseconds to milliseconds, averaged over total responses
    perResponse = ((current - start) / 1000000) / benchmark.responses;
    // inverting milliseconds per response to responses per second
    perSecond = 1 / (perResponse / 1000);
    // adds new lines with \n
    console.log(
      '\nRPC Invocations:',
      '\nserver address:', bidiClientEventEmitter.getPeer(),
      '\ntotal number of responses:', benchmark.responses,
      '\navg millisecond speed per response:', perResponse,
      '\nresponses per second:', perSecond,
    );
  }
});
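One thing this client never does is hang up: it volleys forever until you kill the process. If you want a graceful shutdown, here is a minimal sketch, with an arbitrary 1,000,000-response cutoff chosen purely for illustration, of the stream-ending method and events these gRPC EventEmitters also expose. Note that for the call to close fully, the server handler we write next would need a matching 'end' listener that calls .end() on its own EventEmitter.

// optional: a second 'data' listener that hangs up after an arbitrary cutoff
bidiClientEventEmitter.on('data', benchmark => {
  if (benchmark.responses >= 1000000) {
    // signals that this client will write no more messages
    bidiClientEventEmitter.end();
  }
});

// fires once the server has also finished writing messages
bidiClientEventEmitter.on('end', () => console.log('server finished writing'));

// fires once with the final status of the RPC when the call closes
bidiClientEventEmitter.on('status', status => console.log('final status:', status));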

Create a server (to execute RPCs)

And now that we have created the client, we need a Node.js server listening at localhost:3000. We can break our server-side code into two main pieces. The first is our BidiMathExecution function, which will run as soon as the client invokes the RPC for the first time. The second is the Server Object imported from grpc: we add to it the ChattyMicroservice service from our package, bind the server to listen on our designated socket, and start the server!

// servers/server.js

const { Server, ServerCredentials } = require('grpc');
const { ChattyMicroservice } = require('../proto/package.js');

// RPC execution: is passed an RPC-specific EventEmitter automatically
function BidiMathExecution(bidiServerEventEmitter) {
  // highly accurate Node nanosecond timer, converted to an integer with Number()
  let start = Number(process.hrtime.bigint());
  let current;
  let perRequest;
  let perSecond;

  /* adds a listener for message data; the Benchmark message received is passed to the callback,
  and the callback is run on every message received from the Client */
  bidiServerEventEmitter.on('data', benchmark => {
    // writes a message back to the Client
    bidiServerEventEmitter.write(
      // properties match the message fields for Benchmark
      {
        requests: benchmark.requests,
        responses: benchmark.responses + 1,
      }
    );

    // console logs every 100,000 requests
    if (benchmark.requests % 100000 === 0) {
      // highly accurate Node nanosecond timer, converted to an integer with Number()
      current = Number(process.hrtime.bigint());
      // nanoseconds to milliseconds, averaged over total requests
      perRequest = ((current - start) / 1000000) / benchmark.requests;
      // inverting milliseconds per request to requests per second
      perSecond = 1 / (perRequest / 1000);
      // adds new lines with \n
      console.log(
        '\nRPC Executions:',
        '\nclient address:', bidiServerEventEmitter.getPeer(),
        '\nnumber of requests:', benchmark.requests,
        '\navg millisecond speed per request:', perRequest,
        '\nrequests per second:', perSecond,
      );
    }
  });
}

// creates a new instance of the Server Object
const server = new Server();

// adds a service as defined in the .proto; takes two Objects as arguments
server.addService(
  // the service Object is the package.ServiceName.service
  ChattyMicroservice.service,
  /* the rpc method and its attached function for execution - effectively this Object
  is how we handle server routing; each property is like an endpoint */
  { BidiMath: BidiMathExecution }
);

// binds the server to a socket with a security level
server.bind('0.0.0.0:3000', ServerCredentials.createInsecure());

// starts the server listening on the designated socket(s)
server.start();

The BidiMath method is an example of a bidirectional streaming RPC, where both the client and server can send and receive multiple messages independently over a single RPC call.
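With both files written, it's time to run the files! Start the server first, then the client, each in its own terminal from your workspace folder:

node servers/server.js

node clients/chattyMicroservice.js

Because the console logs only fire every 100,000 messages, give the volley a few seconds before the first benchmark lines appear on each side.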

More on gRPC in Node.js

What is gRPC Node?

gRPC Node is the Node.js implementation of gRPC, Google's high-performance framework for Remote Procedure Calls (RPC). It allows Node.js applications to efficiently communicate with services using Google's Protocol Buffers.

Why is gRPC not widely used?

gRPC is not as widely used as REST because of its steeper learning curve, more involved setup, lack of native browser support, and reliance on Protocol Buffers, all of which make it less straightforward than REST for simpler applications.

Is gRPC better than WebSocket?

gRPC is better for structured, bi-directional communication with clear service definitions, while WebSocket is simpler for unstructured, real-time messaging. The choice depends on the use case.

What is the weakness of gRPC?

gRPC's primary weakness is its lack of native browser support, which requires workarounds like gRPC-Web. It’s also complex to implement compared to simpler REST-based systems.

Is gRPC faster than REST?

Yes, gRPC is generally faster than REST due to its use of binary Protocol Buffers and HTTP/2, resulting in lower payload sizes and faster communication.

Why use gRPC over HTTP?

gRPC uses HTTP/2 for efficient multiplexing, bidirectional streaming, and lower latency. It’s ideal for high-performance, real-time, and low-latency applications.

What is the difference between gRPC and Kafka?

gRPC is for real-time RPC communication between services, while Kafka is a distributed messaging system for event streaming and processing. Kafka focuses on handling large-scale message ingestion.

gRPC Node: Next Steps

Congratulations! You've now written a fully functional gRPC channel that benchmarks itself on both ends! Bask in the glory of sending thousands of messages per second - depending on your computer, you can send 6,000-18,000 messages per second with RPCs! Just imagine how much faster it might be on even more powerful hardware - a cloud server within a distributed system, say - and you quickly start to see the combined power of gRPC and Node.js!

As you may have noticed, the demo app you wrote has folders named clients and servers. This is your invitation to add more clients and more servers! You can run multiple clients and servers on any number of sockets, even share sockets among multiple servers, as well as run multiple concurrent services, multiple concurrent RPCs, and any number of message types. The HTTP/2 multiplexing limit is the limit! For more information on gRPC in Node.js, refer to the official Node.js tutorials and the full API reference on the gRPC website (grpc.io).