Posted on February 12, 2020
Written by Leigh Halliday
Many people only think of Next.js as a frontend React framework, providing server-side rendering, built-in routing, and a number of performance features. All of this is still true, but since version 9, Next.js also supports API routes, an easy way to provide a backend to your React frontend code, all within the same package and setup.
In this article, we will learn how to use API routes to set up a GraphQL API within a Next.js app. We'll start with the basic setup and then cover some more in-depth concepts, such as CORS, loading data from Postgres via the Knex package, and improving performance with the DataLoader package and pattern to avoid costly N+1 queries.
Full source code can be found here.
Setting up Next.js
The easiest way to set up Next.js is to run the command npx create-next-app. If you don't have npx installed, you can install it globally by running npm i -g npx.
There is even an example setup for the very thing we are going to cover today, setting up Next.js with a GraphQL API: npx create-next-app --example api-routes-graphql. That said, we're going to set things up ourselves, focusing on a number of additional concepts, so I have chosen to go with the bare-bones starter app.
Adding an API route
With Next.js setup, we’re going to add an API (server) route to our app. This is as easy as creating a file within the pages/api
folder called graphql.js
. For now, its contents will be:
export default (_req, res) => {
  res.end("GraphQL!");
};
Done! Just joking… if only it were that easy! The above code simply responds with the text “GraphQL!”, but with this setup we could respond with any JSON we want, reading query params, headers, etc. from the req (request) object.
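To illustrate, here is a minimal sketch of a JSON response from a hypothetical API route; the name param and greeting field are made up for this example, but req.query, res.status, and res.json are the standard Next.js API route helpers:
// pages/api/hello.js (hypothetical example route)
export default (req, res) => {
  // req.query holds the parsed query-string params, e.g. /api/hello?name=Leigh
  const { name = "world" } = req.query;
  res.status(200).json({ greeting: `Hello ${name}!` });
};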
What we want to produce
At the end of this example, we want to be able to perform the following query of albums and artists, loaded efficiently from our Postgres database:
{
  albums(first: 5) {
    id
    name
    year
    artist {
      id
      name
    }
  }
}
Producing output which might resemble:
{
  "data": {
    "albums": [
      {
        "id": "1",
        "name": "Turn It Around",
        "year": "2003",
        "artist": {
          "id": "1",
          "name": "Comeback Kid"
        }
      },
      {
        "id": "2",
        "name": "Wake the Dead",
        "year": "2005",
        "artist": {
          "id": "1",
          "name": "Comeback Kid"
        }
      }
    ]
  }
}
Basic GraphQL setup
Setting up a GraphQL server involves four steps:
- Defining type definitions, which describe the GraphQL schema
- Creating resolvers: the functions that produce a response to a query or mutation
- Creating an Apollo Server
- Creating a handler that ties things into the Next.js API request and response lifecycle
After installing apollo-server-micro (yarn add apollo-server-micro), we can import the gql function from it and define our type definitions, describing the schema of our GraphQL server. Eventually we'll expand on this, but for now we have a single field we can query, called hello, which responds with a String.
import { ApolloServer, gql } from "apollo-server-micro";

const typeDefs = gql`
  type Query {
    hello: String!
  }
`;
With our schema defined, we need to write the code that enables our server to respond to queries and mutations. These are called resolvers, and each field (such as hello) requires a function that produces some result. The result of a resolver function must line up with the type defined above.
Each resolver function receives three arguments:
- parent: typically ignored at the Query (topmost) level, but used once we eventually tackle albums and artists
- arguments: in the first example, which included albums(first: 5), this arrives at our resolver function as {first: 5}, giving us access to the field's arguments
- context: global state, such as who the authenticated user is, or, in our case, the global instance of DataLoader
const resolvers = {
  Query: {
    hello: (_parent, _args, _context) => "Hello!"
  }
};
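A resolver can return a plain value, as above, or a Promise that resolves to one; we'll rely on that later, when our resolvers fetch rows from the database.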
Passing the typeDefs and resolvers to a new instance of ApolloServer gets us up and running:
const apolloServer = new ApolloServer({
  typeDefs,
  resolvers,
  context: () => {
    return {};
  }
});
From the apolloServer we can access a handler, which is in charge of handling the request and response lifecycle. There is an additional config we need to export that stops the body of incoming HTTP requests from being parsed, a requirement for GraphQL to work correctly:
const handler = apolloServer.createHandler({ path: "/api/graphql" });

export const config = {
  api: {
    bodyParser: false
  }
};

export default handler;
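At this point we can smoke-test the endpoint from the browser console or a Node script. A quick sketch, assuming the dev server is running at localhost:3000:
// POST a GraphQL query to the API route and log the JSON response
fetch("http://localhost:3000/api/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query: "{ hello }" })
})
  .then(response => response.json())
  .then(result => console.log(result)); // { data: { hello: "Hello!" } }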
Adding CORS support
If we would like to enable or limit cross-origin requests via CORS, we can add the micro-cors package (installed with yarn add micro-cors):
import Cors from "micro-cors";

const cors = Cors({
  allowMethods: ["POST", "OPTIONS"]
});

export default cors(handler);
In this case, I have limited the cross-origin HTTP methods to POST and OPTIONS, changing the default export so that our handler is passed to the cors function.
Dynamic data with Postgres and Knex
Hard-coding data can get boring… it's time to load it from the database! There's a tiny bit of setup to get this up and running. First, install the required packages: yarn add knex pg.
Create a knexfile.js file to configure Knex so it knows how to talk to our database. In this setup, an ENV variable is required in order to know how to connect to the database. If you are using Now along with Next.js, I have an article which talks about setting up secrets. My local ENV variable looks like PG_CONNECTION_STRING="postgres://leighhalliday@localhost:5432/next-graphql":
// knexfile.js
module.exports = {
  development: {
    client: "postgresql",
    connection: process.env.PG_CONNECTION_STRING,
    migrations: {
      tableName: "knex_migrations"
    }
  },
  production: {
    client: "postgresql",
    connection: process.env.PG_CONNECTION_STRING,
    migrations: {
      tableName: "knex_migrations"
    }
  }
};
Next, we can create database migrations to set up our artists and albums tables. Empty migration files are created with the command yarn run knex migrate:make create_artists (and a similar one for albums). The migration for artists looks like this:
exports.up = function(knex) {
  return knex.schema.createTable("artists", function(table) {
    table.increments("id");
    table.string("name", 255).notNullable();
    table.string("url", 255).notNullable();
  });
};

exports.down = function(knex) {
  return knex.schema.dropTable("artists");
};
And the migration for albums looks like this:
exports.up = function(knex) {
  return knex.schema.createTable("albums", function(table) {
    table.increments("id");
    table.integer("artist_id").notNullable();
    table.string("name", 255).notNullable();
    table.string("year").notNullable();

    table.index("artist_id");
    table.index("name");
  });
};

exports.down = function(knex) {
  return knex.schema.dropTable("albums");
};
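With both migration files written, apply them by running yarn run knex migrate:latest.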
With our tables in place, I ran the following insert statements in Postico to set up a few dummy records:
INSERT INTO artists("name", "url") VALUES('Comeback Kid', 'http://comeback-kid.com/');
INSERT INTO albums("artist_id", "name", "year") VALUES(1, 'Turn It Around', '2003');
INSERT INTO albums("artist_id", "name", "year") VALUES(1, 'Wake the Dead', '2005');
The last step before updating our GraphQL API to load data from the database is to create a connection to our DB within the graphql.js file.
import knex from "knex";

const db = knex({
  client: "pg",
  connection: process.env.PG_CONNECTION_STRING
});
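If you want to confirm the connection works before wiring it into GraphQL, a throwaway check like this (a sketch, not part of the final file) logs whatever is currently in the albums table:
// temporary sanity check: query the albums table and log the rows
db.select("*")
  .from("albums")
  .then(rows => console.log(rows))
  .catch(error => console.error(error));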
New definitions and resolvers
Let’s remove our hello
query and resolvers, replacing them with definitions for loading albums and artists from the database:
const typeDefs = gql`
  type Query {
    albums(first: Int = 25, skip: Int = 0): [Album!]!
  }

  type Artist {
    id: ID!
    name: String!
    url: String!
    albums(first: Int = 25, skip: Int = 0): [Album!]!
  }

  type Album {
    id: ID!
    name: String!
    year: String!
    artist: Artist!
  }
`;
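Notice that first and skip have default values, so clients can omit either argument and still get a sensibly paginated result. Now for the resolvers: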
const resolvers = {
  Query: {
    albums: (_parent, args, _context) => {
      return db
        .select("*")
        .from("albums")
        .orderBy("year", "asc")
        .limit(Math.min(args.first, 50))
        .offset(args.skip);
    }
  },

  Album: {
    id: (album, _args, _context) => album.id,
    artist: (album, _args, _context) => {
      return db
        .select("*")
        .from("artists")
        .where({ id: album.artist_id })
        .first();
    }
  },

  Artist: {
    id: (artist, _args, _context) => artist.id,
    albums: (artist, args, _context) => {
      return db
        .select("*")
        .from("albums")
        .where({ artist_id: artist.id })
        .orderBy("year", "asc")
        .limit(Math.min(args.first, 50))
        .offset(args.skip);
    }
  }
};
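Note that Math.min(args.first, 50) caps each page at 50 rows, no matter how large a first value the client supplies.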
You’ll have noticed that I didn’t define every single field for the Album
and Artist
resolvers. If it is going to simply read an attribute from an object, you can avoid defining the resolver for that field. This is why the artist doesn’t have a name
resolver, for example. To be honest we could remove the id
resolver as well!
Avoiding N+1 queries with DataLoader
There is a hidden problem with the above resolvers… specifically, loading the artist for each album. That SQL query runs once per album, meaning that if you have fifty albums to display, you will perform fifty additional SQL queries to load the artists. Marc-André Giroux has a great article on this problem, and we're going to discover how to solve it right now!
The first step is to define a loader. The purpose of a loader is to collect IDs (of artists, in our case) and load them all at once in a single batch, rather than loading each one on its own:
import DataLoader from "dataloader";

const loader = {
  artist: new DataLoader(ids =>
    db
      .table("artists")
      .whereIn("id", ids)
      .select()
      .then(rows => ids.map(id => rows.find(row => row.id === id)))
  )
};
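The then at the end isn't just cosmetic: DataLoader requires the batch function to return results in the same order as the IDs it received, and a whereIn query makes no such guarantee, so we map over ids to put the rows back in order.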
We need to pass our loader to our GraphQL resolvers, which can be done via context:
const apolloServer = new ApolloServer({
  typeDefs,
  resolvers,
  context: () => {
    return { loader };
  }
});
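One caveat: because loader lives at module scope, DataLoader's per-key cache can outlive a single request. That's fine for this demo, but in an app where rows change frequently, you would typically construct a fresh DataLoader inside the context function so each request gets its own cache.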
With the loader available in context, we can update the artist resolver on Album to use it:
const resolvers = {
  // ...
  Album: {
    id: (album, _args, _context) => album.id,
    artist: (album, _args, { loader }) => {
      return loader.artist.load(album.artist_id);
    }
  }
  // ...
};
The end result is a single query to the database to load all the artists at once… N+1 problem solved!
Conclusion
In this article, we created a GraphQL server with CORS support, loaded data from Postgres, and stomped out N+1 performance issues using DataLoader. Not bad for a day's work! The next step might involve adding mutations along with some authentication, enabling users to create and modify data with the correct permissions. As you can see, Next.js is no longer just for the frontend: it has first-class support for server endpoints and is the perfect place to put your GraphQL API.