gRPC
gRPC is an open-source RPC framework that makes communication over the network performant and scalable: it leverages HTTP/2 features for transport and uses protocol buffers (out of the box) as an efficient binary serialization format. It supports generating server and client bindings in many languages across many platforms. Common use cases include connecting microservices, connecting mobile devices to backend services, and optimizing container-to-container communication. Learn more here.
Prisma 2
Prisma 2 is an open-source project that provides an ecosystem for interacting with your data. Anytime you need to connect to a data source or work with a database, Prisma 2 will be helpful since it provides tools that abstract your data and allow for scalable, explicit, and declarative data access.
It features Photon (which auto-generates a type-safe database client with a highly optimized query engine embedded) and Lift (which lets you declaratively model your database schema and get migrations auto-generated). These data tools can be used standalone or together and are compatible with many popular databases. They are a great fit for building REST & gRPC APIs, where they can be used in place of traditional ORMs.
gRPC and Prisma2 in combination
In this tutorial, we will build a basic gRPC server and client for a simple blog service in TypeScript with the help of Prisma 2! We will use Lift for database migrations and Photon JS for database access.
Goals
This tutorial will teach you how to:
1. Install and use the tools in the Prisma 2 ecosystem
2. Define the gRPC service and message types
3. Define a Prisma schema file
4. Migrate your database schema using Lift
5. Generate Photon JS to interact with your database
6. Write a basic gRPC server and client
7. Set up our TypeScript project
Prerequisites
This tutorial assumes that you have some basic familiarity with:
- TypeScript
- Node.js
We will use TypeScript with a MySQL database in this tutorial. You can set up your MySQL database locally or use a hosting provider such as Heroku or Digital Ocean.
Make sure that your database server is running.
If you are running Docker, here is how you can get a MySQL server running quickly:
export MYSQL_ROOT_PASSWORD=password
export MYSQL_PORT=33123
docker run --name prisma-mysql \
-e MYSQL_ROOT_PASSWORD \
-d -p $MYSQL_PORT:3306 mysql
The username defaults to root and the host defaults to localhost.
Now you can create a database by using the MySQL CLI tool that is included in the Docker container you just started:
docker exec prisma-mysql mysql \
-p"$MYSQL_ROOT_PASSWORD" \
-e "CREATE DATABASE database_name"
To check that it worked:
docker exec prisma-mysql mysql \
-p"$MYSQL_ROOT_PASSWORD" \
-e "SHOW DATABASES"
Note: If you don’t want to set up a MySQL database, you can still follow along by choosing SQLite in the beginning. One of Prisma’s main benefits is that it lets you easily swap out the data sources your application connects to by adjusting a few lines in your Prisma schema file.
Here is an overview of how it all fits together:
We will go through each section in detail. The example code for this tutorial is located in this repository.
1. Install the Prisma 2 CLI
The Prisma 2 CLI is available as the prisma2 package on npm. You can install it as a global package on your machine with the following command:
npm install -g prisma2
Alternatively, you can do a local install with:
npm install prisma2
You can then run npx prisma2 instead of prisma2. npx is a wrapper to execute package binaries: it will check whether the command exists in $PATH, or in the local project binaries, and execute it.
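For example, with a local install, the generate step we run later in this tutorial would be invoked as:
npx prisma2 generate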
Note: You can set up a new project with prisma2 init, follow the init process, and create a gRPC boilerplate, which provides a gRPC API example. But we will go through a more stripped-down version in this tutorial and walk through the details.
We will make use of the Prisma 2 CLI utilities after defining our gRPC service.
2. Define the gRPC service and message types
Like many RPC systems, gRPC is based around the idea of defining a service and specifying the methods that can be called remotely with their parameters and return types.
We need to first use protocol buffers to define our gRPC service and method request and response types in a .proto file. You can follow along with the example here. We will use the proto3 version of the protocol buffers language.
2.1 Define the service
Let’s define a blog service interface:
service Blog {
...
}
Then define your RPC methods inside your blog service definition. During our implementation later, both the server and the client will have a ShowFeed RPC method that takes a ShowFeedRequest parameter from the client and returns a Feed response from the server:
rpc ShowFeed(ShowFeedRequest) returns (Feed) {};
Let’s add another method:
rpc CreatePost(CreatePostRequest) returns (Post) {}
2.2 Define the message type
We also need to define the structure of the message payload for all the request and response types used in our service methods.
Let’s define a message format for Post, where each message has the following fields:
message Post {
string id = 1;
string createdAt = 2;
string updatedAt = 3;
string title = 4;
string content = 5;
User author = 6;
}
Note that the type comes first and that User is a composite type. Each field in the message definition has a unique number, which is used to identify the field in the message binary format and should not be changed once your message type is in use.
Let’s define our Feed message type:
message Feed {
repeated Post feed = 1;
}
Note that it has the field rule repeated for the Post type, which signifies that this field can be repeated any number of times. So our Feed will contain any number of Posts.
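Putting the pieces together, a complete service.proto might look like the sketch below. The GetPost and SignupUser methods, and the exact shapes of the request and response messages, are assumptions based on the server implementation later in this tutorial; see the example repository for the exact definitions:
syntax = "proto3";

service Blog {
  rpc ShowFeed(ShowFeedRequest) returns (Feed) {}
  rpc GetPost(GetPostRequest) returns (GetPostResponse) {}
  rpc CreatePost(CreatePostRequest) returns (Post) {}
  rpc SignupUser(SignupUserRequest) returns (User) {}
}

message ShowFeedRequest {}

message GetPostRequest {
  string id = 1;
}

message GetPostResponse {
  Post post = 1;
}

message CreatePostRequest {
  string title = 1;
  string content = 2;
  string authorEmail = 3;
}

message SignupUserRequest {
  string email = 1;
  string password = 2;
  string name = 3;
}

message User {
  string id = 1;
  string email = 2;
  string name = 3;
}

message Post {
  string id = 1;
  string createdAt = 2;
  string updatedAt = 3;
  string title = 4;
  string content = 5;
  User author = 6;
}

message Feed {
  repeated Post feed = 1;
}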
3. Define a Prisma schema file
The schema file (example here) holds the configurations for your Prisma setup and consists of data sources, a generator, and data model definitions.
3.1 Add a data source and a generator
In schema.prisma, let’s specify a data source (MySQL), which determines what native database type each of our data model types maps to, and a generator (Photon JS), which determines what type in the target programming language each of these types maps to. Learn more here.
datasource mysql {
provider = "mysql"
url = "mysql://USER:PASSWORD@HOST:3306/DATABASE"
}
generator photon {
provider = "photonjs"
}
You can give your file any name, but schema.prisma will be automatically detected by the Prisma 2 CLI. It’s also possible to provision the url as an environment variable.
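For example, assuming your version of the Prisma 2 CLI supports the env() function in the schema language, the connection string could reference an environment variable (the variable name MYSQL_URL is our own choice):
datasource mysql {
  provider = "mysql"
  url      = env("MYSQL_URL")
}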
3.2 Add data model definitions
Next, we define our model definitions. Models represent the entities of our application domain, define the underlying database schema, and are the foundation for the auto-generated CRUD operations of the database client.
Let’s define simple User and Post models in our schema file:
model Post {
id String @default(cuid()) @id @unique
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
title String
content String?
author User?
}
model User {
id String @default(cuid()) @id @unique
email String @unique
password String
name String?
posts Post[]
}
Post and User will each be mapped to database tables. There is a bidirectional relation between User and Post via the author and posts fields. The fields will be mapped to columns of the tables.
Things to note:
- The @default directive sets a default value
- @default(cuid()) sets a default value for the field by generating a cuid
- The @id and @updatedAt directives are managed by Prisma and read-only in the exposed Prisma API
- The @id directive indicates that this field is used as the primary key
- The @unique directive expresses a unique constraint, which means that Prisma enforces that no two records will have the same value for that field
- ? is a type modifier indicating that this field is optional
If you change your datamodel, you can just regenerate your Prisma client and all typings will be updated.
4. Migrate your database schema using Lift
Now that we have defined our data model, we can map the data model to our database schema with Lift!
Every schema migration with Lift follows this process:
- [x] Adjust data model: change your data model definition to match your desired database schema.
- [ ] Save migration: run prisma2 lift save to create your migration files on the file system.
- [ ] Run migration: run prisma2 lift up to perform the migration against your database.
Note: If you run into any issues, try switching on the debug mode with export DEBUG=*. To switch off the debug mode, try export DEBUG="".
4.1. Save the migration files
With every database schema migration, Lift generates migration files and saves them on your file system. This allows you to maintain a migration history of your database and to roll back and replay as needed. A _Migration table is also generated in your database, which stores the details of every migration.
Let’s create and save our migration files and give the migration a name:
prisma2 lift save --name 'init'
This reads the data sources and data model definitions from our schema.prisma file and creates a migrations folder (tagged with ‘init’) on our system holding the first set of migration files:
prisma2-grpc
└── prisma
├── migrations
│ ├── 20190703131441-init
│ │ ├── README.md
│ │ ├── datamodel.prisma
│ │ └── steps.json
│ └── lift.lock
└── schema.prisma
4.2. Perform the database migration
Now that we have our migration files, we can run the migration (create/write to our database):
prisma2 lift up
This maps your data model to the underlying database schema.
In our example, these are some of the SQL queries that were executed:
CREATE TABLE `prisma2`.`Post`(
`id` varchar(1000) NOT NULL ,
`createdAt` datetime(3) NOT NULL ,
`updatedAt` datetime(3) NOT NULL DEFAULT '1970-01-01 00:00:00' ,
`title` varchar(1000) NOT NULL DEFAULT '' ,
`content` varchar(1000),
PRIMARY KEY (`id`));
CREATE TABLE `prisma2`.`User`(
`id` varchar(1000) NOT NULL ,
`email` varchar(1000) NOT NULL DEFAULT '' ,
`password` varchar(1000) NOT NULL DEFAULT '' ,
`name` varchar(1000) ,
PRIMARY KEY (`id`));
CREATE UNIQUE INDEX `User.id._UNIQUE` ON `prisma2`.`User`(`id`);
Note: updatedAt is initialized to the Unix epoch start time.
You can check the database steps in the README file in the prisma/migrations directory.
We can now access our database programmatically by generating a Photon API.
5. Generate the database client (Photon JS)
Photon provides a type-safe data access API for our data model and is generated from our Prisma schema file with the generator definition:
generator photonjs {
provider = "photonjs"
}
Run the following command to generate Photon JS:
prisma2 generate
This parses our Prisma schema file to generate the right data source client code and creates a photon directory inside node_modules/@generated:
├── node_modules
│ └── @generated
│ └── photon
│ └── runtime
│ ├── index.d.ts
│ └── index.js
This is the default path but can be customized. It is best not to change the files in the generated directory because they will get overwritten.
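For instance, to have Photon generated into a directory inside your project instead of node_modules, you can set an output path on the generator (the path below is only an illustration; your import path would then change accordingly):
generator photon {
  provider = "photonjs"
  output   = "./generated/photon"
}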
Now we can import Photon in our code like this:
import Photon from '@generated/photon'
6. Write our gRPC server and client
In Node.js, we can generate the code from our protocol buffers dynamically at runtime with protobuf.js, or statically with the protocol buffer compiler, protoc. The behaviour in the end will be the same. Since we are using TypeScript, we can safely leverage dynamically generated code, and that is what we will use in this tutorial.
6.1 Write our server
For our server, we will implement the methods declared by the service and run a gRPC server to handle client calls.
We will use the @grpc/proto-loader library to load our .proto file with its loadSync method and pass the output to the gRPC library’s loadPackageDefinition method. The protobuf.js library dynamically generates service descriptors and client definitions from .proto files loaded at runtime.
import Photon from '@generated/photon'
import * as protoLoader from '@grpc/proto-loader'
import * as grpc from 'grpc'

const PROTO_PATH = __dirname + '/service.proto'

const photon = new Photon()

const packageDefinition = protoLoader.loadSync(PROTO_PATH, {
  keepCase: true,
  longs: String,
  enums: String,
  defaults: true,
  oneofs: true,
})

// loadPackageDefinition returns the package hierarchy itself
const protoDescriptor = grpc.loadPackageDefinition(packageDefinition) as any
Our protoDescriptor object now has the full package hierarchy.
Let’s implement the service interface from our service definition. We’ll start with a simple one and write the showFeed method, which accepts a callback to which we can return all the blog posts that have been fetched from the database with findMany, a method exposed for the Post model by the generated Photon API. Note that all of these handlers are asynchronous, so we can await the results of the database operations before invoking the callback.
const showFeed = async (call: any, callback: any) => {
  // Fetch all posts with Photon and return them as the feed
  const feed = await photon.posts.findMany()
  callback(null, { feed })
}
Now let’s implement the getPost method, which gets passed a call object carrying the id parameter as a property of its request, and a callback to which we can pass the post returned by Photon’s findOne method for the Post model, filtered by the id, with a null first argument to indicate that there is no error:
const getPost = async (call: any, callback: any) => {
  const { id } = call.request
  const post = await photon.posts.findOne({
    where: {
      id,
    },
  })
  callback(null, { post })
}
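The server we start below also registers signupUser and createPost handlers. As a sketch, createPost might use Photon’s create method with a nested connect to link the new post to an existing author; the title, content, and authorEmail request fields are the assumptions from our .proto sketch above:
const createPost = async (call: any, callback: any) => {
  const { title, content, authorEmail } = call.request
  // Create the post and connect it to an existing user by email
  const post = await photon.posts.create({
    data: {
      title,
      content,
      author: {
        connect: { email: authorEmail },
      },
    },
  })
  callback(null, post)
}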
After implementing the methods defined in our interface, let’s run a gRPC server to listen for requests and return responses for our blog service:
const server = new grpc.Server()
server.addService(protoDescriptor.Blog.service, {
showFeed,
getPost,
signupUser,
createPost,
})
server.bind('0.0.0.0:50051', grpc.ServerCredentials.createInsecure())
server.start()
Things to note:
- we created an instance of the server by calling the gRPC Server constructor
- our service descriptor, available as protoDescriptor.Blog.service, is used to add our service to the server
- we implemented the service methods
- we specified the address and port we want to listen on for client requests using the server instance’s bind method
6.2 Write our client
For our client, we have a local object that implements the same methods as the service. The client can then just call those methods on the local object and gRPC looks after sending the requests to the server and returning the server’s protocol buffer responses.
We will load our .proto file the same way as we did for our server.
To call service methods, we first need to create a client. To do this, we just need to call the Blog client constructor, specifying the server address and port.
const client = new protoDescriptor.Blog('localhost:50051', grpc.credentials.createInsecure())
Let’s call our getPost RPC method on our local client, passing an id object and a callback into the request:
const id = ''
client.getPost({ id }, (err: any, response: any) => {
  if (err) {
    console.error(err)
    return
  }
  console.log(response)
})
If there is no error, we can get the response from the server in our callback object.
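Calling showFeed works the same way. Since ShowFeedRequest carries no fields in our sketch, we can pass an empty object and log the returned feed:
client.showFeed({}, (err: any, response: any) => {
  if (err) {
    console.error(err)
    return
  }
  console.log(response.feed)
})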
7. Set up our TypeScript project
7.1. Initialize our project and install dependencies
Let’s set up a basic TypeScript app with npm.
Initialize a new npm project: npm init -y
Install typescript and ts-node locally: npm install --save-dev typescript ts-node
7.2. Add TypeScript configuration
Create tsconfig.json in our project root and add:
{
"compilerOptions": {
"sourceMap": true,
"outDir": "dist",
"lib": ["esnext", "dom"],
"strict": true
}
}
7.3. Add a start script to package.json
In our package.json, let’s add some scripts:
"scripts": {
"seed": "ts-node prisma/seed.ts",
"server": "ts-node server.ts",
"client": "ts-node client.ts"
}
7.4. Access your database with Photon
You can interact with your database through Photon using a basic script that follows this pattern:
import Photon from '@generated/photon'
const photon = new Photon()
async function main() {
await photon.connect() // Open connection to database
// Access your database with Photon
await photon.disconnect() // Close connection to database
}
main()
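A minimal seed script (prisma/seed.ts) following this pattern might look like the sketch below; the user and post data are just example values:
import Photon from '@generated/photon'

const photon = new Photon()

async function main() {
  await photon.connect() // Open connection to database
  // Create a user with one related post (example data)
  await photon.users.create({
    data: {
      email: 'alice@prisma.io',
      password: 'secret',
      name: 'Alice',
      posts: {
        create: { title: 'Hello gRPC', content: 'Seeded with Photon' },
      },
    },
  })
  await photon.disconnect() // Close connection to database
}

main().catch(console.error)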
7.5 Run the project
With everything in place, you can run the project!
- Seed your database:
npm run seed
- Run your gRPC server:
npm run server
- Run your gRPC client:
npm run client
If you prefer a GUI client, try BloomRPC.
If you run into problems with this tutorial or spot any mistakes, feel free to make a pull request! :)