Using Fauna with Gatsby Serverless Functions: Part One
Rodney Lab
Posted on September 1, 2021
Blog Post Comments
Whether you're responsible for a corporate blog or run a micro blog for a side hustle, comments are a fantastic way to increase user engagement. Not only do you get direct feedback on your latest ideas, products and offerings; answering user questions often provides an extra opportunity to show readers your domain knowledge or services. What's more, user questions can spawn ideas for new blog posts, and coming up with ideas for what to write about is probably one of the hardest aspects of maintaining a blog. In this post we will see how you can use Fauna to host and manage your site comments in conjunction with Gatsby serverless functions. In a follow-up, we will look at more advanced Fauna features like user authentication.
Why Fauna for your Site Comments System?
You could use an existing dedicated comments service, so why use Fauna? Firstly, you can customize the comments with just the features you need, which is ideal if you have a fast, static site and want to keep it lean. Although dedicated services can help with aspects like spam filtering, we will see that with serverless functions and Fauna it is no problem at all to provide these features ourselves. We can also integrate Fauna into our static site build. This means comments become part of our fast-loading site, with no need to call external services to view them: visitors won't have to wait while comments load, and the whole page loads faster. Finally, the flexibility of Fauna's NoSQL model lets us create our database with just the fields we want.
Why use Gatsby Serverless Functions?
Gatsby released serverless functions only recently. Serverless functions offer a secure way to interact with services like Fauna; you do not have to expose secret API keys to the client. Traditionally, you would need a server running 24/7, ready to handle requests like new comments. By coupling services like Fauna with serverless functions, we can bring this functionality to our static sites without having to manage a server ourselves. As a bonus, we don't need to worry about provisioning enough server capacity for busy periods; serverless functions run on demand and scale naturally to cover your back when demand spikes.
What we are building
We will build out a comments system hosted in Fauna, using Gatsby serverless functions. We will “bake in” existing comments to our site build. When users leave new comments, our serverless function will run a spam check and then add them to our Fauna database. We'll add a little magic on top of all that: we trigger a fresh static site build whenever users leave new comments. This provides a great experience for site users; once we review new comments, they are integrated into the page straight away. Throughout, the site stays static, with minimal extra client JavaScript required to add all these features. The upside is we keep our site fast and stay on the right side of Core Web Vitals, helping with SEO and keeping our page ranking high in search engine results!
Creating a Fauna Database
Create a Fauna Account
If you don't yet have a Fauna account, let's get you signed up before we create our comments database.
You might already have a Netlify account; if you do, you can use it to sign up. We will use Netlify for hosting later, when we see how to trigger live site rebuilds. If you use a different hosting service, check its documentation on how to trigger rebuilds via webhooks.
Open up the Fauna Dashboard
Now you have an account, let's get started by creating a comments database and generating the API keys we will use later. Open up the Fauna dashboard and click CREATE DATABASE. Keep the SIMPLE tab selected and enter `gatsby-serverless-functions-comments` as the name. You can choose the United States (US) region from the dropdown list, or another region closer to home if you prefer. Leave the `Use demo data` box unchecked and click CREATE.
Next we will create a collection. If you are new to NoSQL databases, a collection is just the equivalent of a table in an SQL database. When a user creates a new comment, we will add it as a document to this collection. Click NEW COLLECTION and enter `comments` as the Collection Name in the box that appears. The defaults here will work fine, so click SAVE once you have entered the name. You will see a message saying we don't yet have any documents. Don't worry about that; we will create some shortly from our app.
API Keys
One final thing on the dashboard: generating an API key. Click Security in the menu on the left, then NEW KEY. Choose `Server` from the Role dropdown list. A Server key only has access to this database, while an Admin key can access and manage all of your databases and keys; protect all of your keys carefully. You can add a Key Name if you want to, then click SAVE. When the dashboard displays your new secret key, copy it, as we will need it in a moment. That completes the initial config. Next, let's create a skeleton app.
Fauna Comments Gatsby Serverless Functions App
To save time we will clone a blog starter to get going. Open up the terminal and type the following commands:
git clone --branch start https://github.com/rodneylab/fauna-serverless-blog-comments.git
cd fauna-serverless-blog-comments
npm install
cp .env.EXAMPLE .env.development
cp .env.EXAMPLE .env.production
The first command clones a starter; we then change into the newly created directory, install packages and copy the example environment variables needed to get the app up and running. Customize the dummy content in `.env.development` and `.env.production` with your own details.
Next we add our new Fauna credentials. Add the following environment variables to the bottom of each of two files:
FAUNA_COLLECTION="comments"
FAUNA_SECRET="ENTER_YOUR_FAUNA_SECRET_HERE"
Finally spin up the dev server with the Terminal command:
gatsby develop
Jump to localhost:8000 to take a look through the site pages. You will see there are already some dummy blog posts. We will use these when we create comments. If you are building a new blog from scratch, you will eventually delete these posts and add your own content.
What's coming up:
- next we will add a form to the bottom of each of our blog posts by adding a new Comments component to the blog post template,
- then we will create the serverless function which adds new comments to our Fauna collection,
- once that is working, we will see how to pull comments from Fauna during the site build.
Comment Form
React Hook Form will provide a form for users to enter their comments. To submit the form to our Gatsby serverless function, we will use axios. Let's install these two packages:
npm install axios react-hook-form
Create a new component file in the project at `src/components/CommentForm.jsx` and paste in the following content:
import axios from 'axios';
import PropTypes from 'prop-types';
import React, { useState } from 'react';
import { useForm } from 'react-hook-form';
import {
container,
formButton,
formError,
formInput,
successText,
} from './CommentForm.module.scss';
import FormInput from './FormInput';
import FormInputArea from './FormInputArea';
import { ExternalLink } from './Link';
const CommentForm = ({ slug }) => {
const [serverState, setServerState] = useState({ ok: true, message: '' });
const [showForm, setShowForm] = useState(true);
const [submitting, setSubmitting] = useState(false);
const {
register,
handleSubmit,
formState: { errors },
} = useForm();
const handleServerResponse = (ok, message) => {
setServerState({ ok, message });
};
const getIP = async () => {
try {
const response = await axios({
url: '/.netlify/functions/client-ip',
method: 'GET',
});
return response.data;
} catch (error) {
handleServerResponse(
false,
'There was an error processing your comment. Please try again later.',
);
}
return '';
};
const onSubmit = async (data, event) => {
try {
const ip = await getIP();
setSubmitting(true);
const { Email: email, Name: name, Comments: text } = data;
await axios({
url: '/api/submit-comment',
method: 'POST',
data: {
email,
ip,
name,
slug,
text,
parentCommentId: null,
},
});
handleServerResponse(true, 'Thanks for your comment it will be reviewed and posted shortly.');
setSubmitting(false);
event.target.reset();
setShowForm(false);
} catch (error) {
handleServerResponse(
false,
'There was an error processing your comment. Please try again later.',
);
}
};
const emailRegex =
/^(([^<>()[\]\\.,;:\s@"]+(\.[^<>()[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;
if (!showForm) {
return (
<div className={successText}>
<p>{serverState.message}</p>
</div>
);
}
return (
<form className={container} onSubmit={handleSubmit(onSubmit)}>
<h3>Leave a comment:</h3>
<div className={formInput}>
<FormInput
ariaInvalid={!!errors.Name}
ariaLabel="Enter your name"
id="comment-name"
label="Name"
maxLength={64}
register={register}
required
/>
{errors.Name ? (
<span className={formError}>
<small>Please let us know your name, it will appear along with your comment.</small>
</span>
) : null}
</div>
<div className={formInput}>
<FormInput
ariaInvalid={!!errors.Email}
ariaLabel="Enter your email address"
id="comment-email"
label="Email"
maxLength={64}
pattern={emailRegex}
register={register}
required
/>
{errors.Email ? (
<span id="comment-email-error" className={formError}>
<small>
We use your email address for spam detection purposes only. It is not stored on our
database and does not appear alongside your comment.
</small>
</span>
) : null}
</div>
<div className={formInput}>
<FormInputArea
ariaInvalid={!!errors.Comments}
ariaLabel="Enter your comment"
id="comment"
label="Comments"
maxLength={512}
register={register}
required
/>
{errors.Comments ? (
<span className={formError}>
<small>Please enter a comment. Limit your text to 512 characters.</small>
</span>
) : null}
</div>
<div className={formButton}>
<small>
This site uses Akismet to reduce spam.{' '}
<ExternalLink
aria-label="Learn how Akismet process comment data"
href="https://akismet.com/privacy/"
>
Learn how your comment data is processed
</ExternalLink>
. We pass your comment, name, email, IP address and{' '}
<ExternalLink
aria-label="Learn more about browser user agent from M D N"
href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/User-Agent"
>
browser user agent
</ExternalLink>{' '}
to Akismet for spam detection. Neither your email address, IP address or user agent is
stored in our database.
</small>
<input
type="submit"
aria-disabled={submitting}
disabled={submitting}
value="Submit your comment"
/>
{serverState.message ? (
<small className={serverState.ok ? '' : formError}>{serverState.message}</small>
) : null}
</div>
</form>
);
};
CommentForm.propTypes = {
slug: PropTypes.string.isRequired,
};
export { CommentForm as default };
That's a little bit of code! Let's take a look at a few of the methods we have added. The `getIP` method helps us get the client IP address. We only need this for spam detection purposes and will not store it in the database. Currently, Gatsby serverless functions are not able to tell us the IP address of the client, so we use a Netlify serverless function, included in the repo at `netlify/functions/client-ip.js`. I requested this feature for Gatsby Functions and there is an open issue, so it might be possible by the time you read this!
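For reference, a minimal Netlify Function for this job might look like the sketch below. It assumes Netlify's `x-nf-client-connection-ip` request header; the helper shipped in the starter repo may differ in detail.

```javascript
// Minimal sketch of a Netlify Function that echoes the caller's IP address.
// Assumes Netlify populates the x-nf-client-connection-ip header.
// In the real function file this would be exported as exports.handler.
const handler = async (event) => ({
  statusCode: 200,
  body: event.headers['x-nf-client-connection-ip'] || '',
});
```

The client-side `getIP` method simply issues a GET request to this function's route and returns the response body.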
Calling Gatsby Cloud Serverless Functions
The most interesting method as regards serverless functions is `onSubmit`. As you might expect, it collates the form data submitted by the user and sends it to our serverless function. We will see shortly that to create a serverless function we just need a JavaScript file in the `src/api` directory. Here, in our `onSubmit` method, we are submitting the form to an endpoint on our own app: `/api/submit-comment`. The name of our serverless function file will need to match `submit-comment` when we create it. We will see that we can use Gatsby serverless functions on our dev server, which makes debugging easier (we don't need to push to the hosting server and test on a staging build). Most of the rest of the file renders the form.
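As an aside, you can exercise the email pattern from the component in isolation to get a feel for what it accepts:

```javascript
// The same email pattern used in CommentForm.
const emailRegex =
  /^(([^<>()[\]\\.,;:\s@"]+(\.[^<>()[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;

// A plain address passes; a string without a valid domain part does not.
console.log(emailRegex.test('jane@example.com')); // true
console.log(emailRegex.test('jane@example')); // false
```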
Lastly, before creating our serverless function, we will render the new form component at the bottom of each blog post. To do this, open up `src/components/PureBlogPost.jsx` and first import the `CommentForm` component:
import { MDXProvider } from '@mdx-js/react';
import { Link } from 'gatsby';
import PropTypes from 'prop-types';
import React from 'react';
import { Helmet } from 'react-helmet';
import BannerImage from './BannerImage';
import CommentForm from './CommentForm';
import { PureLayout as Layout } from './Layout';
import { ExternalLink, TwitterMessageLink } from './Link';
import { PureSEO as SEO } from './SEO';
then add a new section to the template, containing the component:
<Layout data={data}>
<article>
<h1>{postTitle}</h1>
<BannerImage imageData={bannerImage} alt={featuredImageAlt} />
<section itemProp="articleBody">
<MDXProvider components={shortcodes}>{children}</MDXProvider>
</section>
<section>
<CommentForm slug={slug} />
</section>
</article>
</Layout>
Open up a blog post in your browser and you will see the form rendered. You can try filling out the form and submitting, but you will get an error because we still have to create the serverless function listening on the endpoint. Let's create it now.
Gatsby Serverless Function
We place our Gatsby serverless functions in the `src/api` folder. The name of the file tells Gatsby which endpoint to listen for requests on. We will create the file `src/api/submit-comment.js`, which means Gatsby will listen on the `/api/submit-comment` route: exactly the one we used in the comment form submission above. Let's create the `src/api` directory and add `submit-comment.js` in our new folder:
import { AkismetClient } from 'akismet-api';
import axios from 'axios';
import faunadb from 'faunadb';
const TRIGGER_REBUILD_ON_NEW_COMMENT = true;
const createComment = async ({ name, parentCommentId, text, markedSpam, slug }) => {
try {
const client = new faunadb.Client({
secret: process.env.FAUNA_SECRET,
domain: 'db.us.fauna.com',
scheme: 'https',
});
const q = faunadb.query;
const response = await client.query(
q.Create(q.Collection(process.env.FAUNA_COLLECTION), {
data: {
date: new Date().toISOString(),
markedSpam,
name,
parentCommentId,
slug,
text,
},
}),
);
return { successful: true, message: response };
} catch (error) {
return { successful: false, message: error };
}
};
const spamCheck = async ({ email, ip, name, text, userAgent }) => {
const client = new AkismetClient({
key: process.env.AKISMET_API_KEY,
blog: process.env.SITE_URL,
});
return client.checkSpam({
user_ip: ip,
useragent: userAgent,
content: text,
email,
name,
});
};
const triggerRebuild = async () => {
if (!process.env.NETLIFY_BUILD_HOOK_ID) {
return { successful: false, message: 'Netlify build hook ID is not defined.' };
}
try {
const response = await axios({
url: `https://api.netlify.com/build_hooks/${process.env.NETLIFY_BUILD_HOOK_ID}`,
method: 'POST',
});
return { successful: true, message: response };
} catch (error) {
let message;
if (error.response) {
message = `Server responded with non 2xx code: ${error.response.data}`;
} else if (error.request) {
message = `No response received: ${error.request}`;
} else {
message = `Error setting up response: ${error.message}`;
}
return { successful: false, message };
}
};
export default async function handler(req, res) {
if (req.method !== 'POST') {
res.status(405).send('Method not allowed');
} else {
const { email, ip, name, parentCommentId, slug, text } = req.body;
const userAgent = req.headers['user-agent'];
let markedSpam;
let akismetError;
try {
markedSpam = await spamCheck({
email,
name,
ip,
text,
userAgent,
});
} catch (error) {
akismetError = error.message;
}
if (akismetError) {
res.status(400).send(akismetError);
} else {
const createCommentResult = await createComment({
name,
parentCommentId,
text,
markedSpam,
slug,
});
if (!createCommentResult.successful) {
res.status(400).send(createCommentResult.message);
} else {
if (TRIGGER_REBUILD_ON_NEW_COMMENT && !markedSpam) {
await triggerRebuild();
}
res.status(200).send('All is well that ends well.');
}
}
}
}
Let's look at the functions we have defined here. In the `createComment` function, we first set up a Fauna client instance. This uses the credentials stored in our `.env.development` or `.env.production` file. We will need to remember to define them on our hosting server too. If you selected a region other than US when you created your database, you may need to change the `domain` value passed to the Fauna client; you can find more details in the Fauna docs.
Next, within the `createComment` function, we see how to set up a query using the Fauna API. If you are used to GraphQL, don't get confused by the naming here: although we are mutating the database (adding a document), we use a `faunadb.query` object to assist us. The data we supply can be any key-value pairs we like; we are not restricted to a particular schema. In addition to the fields provided by the user, we also add `date` and `markedSpam` fields. The `markedSpam` value is generated by our `spamCheck` function.
spamCheck Function
The `spamCheck` function passes the comment data to the Akismet service, which reviews it and lets us know whether it considers the comment to be spam. Next, there is a function for triggering a rebuild whenever a visitor leaves a non-spam comment. This will use up build minutes, so depending on your plan, you may want to keep it switched off, or add some logic to limit the number of times a rebuild can happen in a day. An extra Fauna collection keeping track of the last build time would help here.
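As a rough illustration of that throttling idea, a helper along these lines (the name and shape are hypothetical, not part of the repo) could compare the last build timestamp stored in Fauna against the current time:

```javascript
// Hypothetical helper: only allow a rebuild if at least minIntervalHours have
// passed since the last recorded build. lastBuildISO would come from a Fauna
// document; pass null when no build has been recorded yet.
const shouldTriggerRebuild = (lastBuildISO, nowISO, minIntervalHours) => {
  if (!lastBuildISO) return true;
  const elapsedMs = new Date(nowISO) - new Date(lastBuildISO);
  return elapsedMs >= minIntervalHours * 60 * 60 * 1000;
};
```

`triggerRebuild` could then be gated on this check, updating the stored timestamp after each successful rebuild.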
handler Function
The final function in the file is what links everything together. This is the function which responds when the client hits our `/api/submit-comment` endpoint. It takes the client request as input and responds with a status code and a body. We see a good example in the first two lines, where we check that the client submitted a `POST` request and respond with a `405` error code if they did not.
Moving on, in the `else` branch we destructure the form data from the request body, then get the user agent from the header. The rest of the `handler` function just calls the functions we mentioned above, passing in the data each needs.
We are almost ready to test out this code. You probably already noticed we have a couple of missing dependencies. To access the Fauna API from our Gatsby serverless function, we will use the `faunadb` package. On top of that, for spam detection, we will use the Akismet service via the `akismet-api` package. Let's install both so we can test our new function:
npm install faunadb akismet-api
Akismet API Key
Finally, we will need an Akismet API key. Go to the Akismet site to register for one. Once you have your key, add it to `.env.development` and `.env.production`:
AKISMET_API_KEY="ENTER_YOUR_AKISMET_API_KEY_HERE"
SITE_URL="https://example.com"
`SITE_URL` is a parameter required by the Akismet API. It is just the URL of your blog site.
We can now test our new database. Try adding a test comment from the comment form on a blog post in the browser. If all is well, you will see a new document created straight away in the comments collection in the Fauna dashboard.
Now that we have a live comment in our database, we will see how to source it during the site build process. That will let us display all existing comments at the bottom of the relevant blog post.
Pulling Fauna Database Comments into Gatsby
You might not know that it is also pretty easy to add your own data to the Gatsby GraphQL data layer. We will do that here so that you will be able to see comments in the GraphiQL explorer. If you're not sure what that is, sit tight; we will get to it shortly.
Before that, we will create an Index using the Fauna Shell. An Index is just an interface which helps us define the exact data we want Fauna to return from a database query. Here we want to return all documents in our `comments` collection which are not marked as spam and haven't been moved to trash. You will see we can also specify which fields to return for the matching documents.
Let's create an Index using the Fauna Shell from the web dashboard. Open up our database then select Shell from the menu on the left. You can run queries using Fauna's own query language here. In the bottom window, paste the following code:
CreateIndex({
name: 'get-comments',
unique: false,
serialized: true,
source: Collection('comments'),
terms: [
{
field: ['data', 'markedSpam'],
},
{
field: ['data', 'movedToTrash'],
},
],
values: [
{
field: ['ref'],
},
{
field: ['data', 'date'],
},
{
field: ['data', 'name'],
},
{
field: ['data', 'slug'],
},
{
field: ['data', 'text'],
},
],
})
This creates an Index which returns a ref (essentially an ID), as well as the date, name, slug and text fields. We can filter on the `markedSpam` and `movedToTrash` fields. To see the new Index in the dashboard, click Indexes in the left-hand menu. Try querying with the new Index from the dashboard. First we need to specify values for the `markedSpam` and `movedToTrash` terms: from the dropdown list, choose FQL, type `false` and `undefined` in the boxes below, then press the search button. You should see your test comment returned. Click to expand it and see the details.
We are just scratching the surface on what Fauna Indexes can do. See the docs to learn more!
gatsby-node.js
That's all the Fauna setup we need. Next let's install a helper package for sourcing data in Gatsby:
npm install gatsby-node-helpers
Let's create a `gatsby-node.js` file in the project root. We will add a function to it to query Fauna using our new Index. Add the following code:
const faunadb = require('faunadb');
const { createNodeHelpers } = require('gatsby-node-helpers');
const { FAUNA_SECRET } = process.env;
const FAUNA_COMMENTS_INDEX = 'get-comments';
const getComments = async ({ secret, reporter }) => {
try {
const q = faunadb.query;
const client = new faunadb.Client({
secret,
domain: 'db.us.fauna.com',
});
const results = await client.query(
q.Paginate(q.Match(q.Index(FAUNA_COMMENTS_INDEX), false, undefined)),
);
return results.data.map(([ref, date, name, slug, text]) => ({
commentId: ref.id,
date,
name,
slug,
text,
}));
} catch (error) {
reporter.warn(`Error setting up fauna fetch: ${error.message}`);
}
return [];
};
As before (in the serverless function), if you used a different region when you set up the database, be sure to update the `domain` field.
The first part here does not look too different from what we had in our serverless function. Next we use our Index to read the comments from the database, in the line:
const results = await client.query(q.Paginate(q.Match(q.Index(FAUNA_COMMENTS_INDEX), false, undefined)));
At the heart of the line is our query, in the Match function call, using the newly created Index. The `false` argument refers to the `markedSpam` field and `undefined` to the `movedToTrash` field: we are telling Fauna only to return comments which are not marked as spam (and not moved to trash). The query is wrapped in a utility function which paginates the result. This is handy if you have a popular blog which has received many comments: instead of pulling hundreds of comments in a single operation, Fauna's pagination splits the result into smaller, more manageable chunks.
Fauna Pagination
Although pagination is helpful for sending and receiving the comments over the network, in our own `getComments` function it is more convenient to have a single object containing all of the comments, rather than iterating through pages. The `q.Paginate` function takes care of this for us.
If you run this command in the dashboard shell:
Paginate(Match(Index('get-comments'), false))
you will get back something like this:
{
data: [
[
Ref(Collection("comments"), "306552151776165954"),
"2021-08-10T15:36:06.630Z",
"John",
"best-medium-format-camera-for-starting-out/",
"Test comment",
],
];
}
`data` is an array containing an element for each matching document (a matching row, if we were using an SQL database). Each document is itself represented by an array rather than an object: there are no object keys, just the values, in the same order they appear in our Index. In the map call in `getComments` above, we destructure the array returned for each element, then convert it into an object with keys.
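Pulling that mapping step out of `getComments` as a pure helper makes the conversion easier to see:

```javascript
// Each Index result is an array of values in the order declared in the
// Index's `values` section; destructure it and rebuild an object with keys.
const toCommentObject = ([ref, date, name, slug, text]) => ({
  commentId: ref.id,
  date,
  name,
  slug,
  text,
});
```

This is exactly what the `results.data.map` call in `gatsby-node.js` does for each element.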
Gatsby sourceNodes API
We will use Gatsby's sourceNodes API to add our comments to the data layer. Add the following code to the end of `gatsby-node.js`:
exports.sourceNodes = async ({ actions, createNodeId, createContentDigest, reporter }) => {
const { createNode, createTypes } = actions;
const commentsNodeHelpers = createNodeHelpers({
typePrefix: 'Comment',
createNodeId,
createContentDigest,
});
const CommentEntryNode = commentsNodeHelpers.createNodeFactory('Entry');
const commentsTypeDefs = `
type CommentEntry implements Node {
id: String
commentId: String
date: Date @dateformat
name: String
parentCommentId: String
text: String
slug: String
verified: Boolean
}
`;
createTypes(commentsTypeDefs);
const comments = await getComments({
secret: FAUNA_SECRET,
reporter,
});
if (comments !== null) {
comments.forEach(async (element) => {
const { commentId } = element;
const stringCommentId = commentId.toString();
const node = CommentEntryNode({
...element,
commentId: stringCommentId,
id: stringCommentId,
});
createNode(node);
});
}
};
To add the comments data to Gatsby's GraphQL data layer, we need to associate each field with a type. Once that is done, the code calls our getComments
function and then creates nodes using the API to make the comments data accessible in our regular Gatsby components.
To see the data, save `gatsby-node.js` and restart your dev server. Go to localhost:8000/___graphql in your browser and replace the contents of the middle pane with the following query:
query FaunaQuery {
allCommentEntry {
edges {
node {
commentId
name
slug
text
}
}
}
}
Run the query by pressing the play button and you will see your comment on the right. Now that we have our comments in the data layer, we can use them in our blog posts. We will do this next.
Rendering comments
The GraphQL query which pulls data into blog posts is in the file `src/pages/{Mdx.slug}.mdx`. Edit this file, adding the comments query near the bottom:
...
bannerImage: featuredImage {
...BannerImageFragment
}
}
}
comments: allCommentEntry(
sort: { fields: date, order: DESC }
filter: { slug: { eq: $slug } }
) {
edges {
node {
id
name
slug
text
commentId
parentCommentId
date(formatString: "YYYY-MM-DDTHH:mm:ssZ")
}
}
}
}
`;
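For intuition, the sort and filter arguments in that query behave like this plain JavaScript over the comment edges (illustrative only; Gatsby evaluates the GraphQL at build time):

```javascript
// Keep only comments for the current post, newest first: the equivalent of
// filter: { slug: { eq: $slug } } and sort: { fields: date, order: DESC }.
const commentsForPost = (edges, slug) =>
  edges
    .filter(({ node }) => node.slug === slug)
    .sort((a, b) => new Date(b.node.date) - new Date(a.node.date));
```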
With that done, let's move on to the Comments component, which will render existing comments. Create a `src/components/Comments.jsx` file and add the following content:
import dayjs from 'dayjs';
import 'dayjs/locale/en-gb';
import localizedFormat from 'dayjs/plugin/localizedFormat';
import relativeTime from 'dayjs/plugin/relativeTime';
import PropTypes from 'prop-types';
import React from 'react';
import Card from './Card';
import { container, dateText, footer } from './Comments.module.scss';
dayjs.extend(localizedFormat);
dayjs.extend(relativeTime);
dayjs.locale('en-gb');
const Comments = ({ comments }) => (
<div className={container}>
<h2>Comments</h2>
<ul>
{comments.map((element) => {
const { commentId, date, name, text } = element.node;
const dayjsDate = dayjs(date);
const dateString = dayjsDate.fromNow();
return (
<li key={commentId}>
<Card>
<h3>{name}</h3>
<p>{text}</p>
<div className={footer}>
<small>
<span className={dateText}>{dateString}</span>
</small>
</div>
</Card>
</li>
);
})}
</ul>
</div>
);
Comments.propTypes = {
  comments: PropTypes.arrayOf(
    PropTypes.shape({
      node: PropTypes.shape({
        commentId: PropTypes.string,
        date: PropTypes.string,
        name: PropTypes.string,
        text: PropTypes.string,
      }),
    }),
  ).isRequired,
};
export { Comments as default };
When we render the Comments component, we will pass in the comments as a prop. Let's do this now by editing `src/components/PureBlogPost.jsx`. First we will import our new `Comments` component:
import { MDXProvider } from '@mdx-js/react';
import { Link } from 'gatsby';
import PropTypes from 'prop-types';
import React from 'react';
import { Helmet } from 'react-helmet';
import BannerImage from './BannerImage';
import Comments from './Comments';
import CommentForm from './CommentForm';
If you recall, we added the comments data to the blog post query earlier, in the Mdx template file. This makes comments available in the data prop. To access the comments data in the `PureBlogPost` component, we just need to destructure them from the `data` object:
const PureBlogPost = ({ children, data }) => {
const { comments } = data;
const { frontmatter, slug } = data.post;
const {
bannerImage, featuredImageAlt, seoMetaDescription, postTitle,
} = frontmatter;
const { siteUrl } = data.site.siteMetadata;
We will render the existing comments just below the comment form we added earlier:
<section>
<CommentForm slug={slug} />
{comments.edges.length > 0 ? <Comments comments={comments.edges} /> : null}
</section>
Finally, we can add `comments` to the prop types:
PureBlogPost.propTypes = {
data: PropTypes.shape({
site: PropTypes.shape({
siteMetadata: PropTypes.shape({
siteUrl: PropTypes.string,
}),
}),
comments: PropTypes.shape({
edges: PropTypes.arrayOf(
PropTypes.shape({
node: PropTypes.shape({
commentId: PropTypes.string,
date: PropTypes.string,
name: PropTypes.string,
text: PropTypes.string,
}),
}),
),
}),
post: PropTypes.shape({
That was a little work! However, if you go to the page where you added the test comment earlier and scroll down to the bottom, you should see your test comment rendered.
Automatic Site Rebuild
We will use a Netlify Build Hook to trigger a site rebuild automatically whenever a visitor leaves a non-spam comment. In the Netlify console, click Site settings, then Build & deploy in the side menu. Scroll down to Build hooks; for a name you can enter `new-comment`, then click SAVE. Once saved, the console will show you the URL, something like https://api.netlify.com/build_hooks/abcdef0123456789abcdef01. Add the final part as an environment variable in `.env.development`:
NETLIFY_BUILD_HOOK_ID="abcdef0123456789abcdef01"
Don't forget to update this with your own ID!
Also add this as an environment variable in the Netlify console, or, if you already have the Netlify CLI configured, import it with the command:
netlify env:import .env.development
Finally, enable automatic rebuilds by setting the `TRIGGER_REBUILD_ON_NEW_COMMENT` variable in `src/api/submit-comment.js` to `true`. Note that rebuilds will use up your build minutes. If you have a popular site or your site builds slowly, you may want to keep the feature switched off or add some logic to throttle the number of times it can run in a day.
What's Next?
In this article, we have built out the functionality for adding comments and viewing other site visitors' comments. This is just the first part of a two-part article. In the follow-up article, we will see:
how you can use Fauna to authenticate users,
the process for updating documents in Fauna,
how to create a dashboard for blog administrators to change spam flags on comments and delete comments.
For additional functionality, you might want to see Fauna's detailed docs on the JavaScript driver.