Getting started with Rails API and Fauna
Pablo Acuña
Posted on November 4, 2021
Introduction
This article describes the integration between a Ruby on Rails API service and the serverless database Fauna. Even though a typical Ruby on Rails service is centered around relational SQL databases and the ActiveRecord ORM, we can still use its fast prototyping features and conventions to build REST APIs while deviating a bit from the usual architecture. There's no official support for Fauna in Rails, and the Ruby driver is no longer supported, which means we will need to create all the basic database interaction methods ourselves. This won't be hard, though, since Fauna supports GraphQL, which has a pretty nice syntax that somewhat resembles an ORM.
Our proof of concept will be a straightforward service that shortens URLs via a JSON API. We will implement just two actions: create a short version of a URL, and retrieve the original URL when given the short version.
Prerequisites
To follow along with this tutorial, you will need:
- Docker and Docker Compose
- A Fauna account
- Basic knowledge of Ruby on Rails
- Basic understanding of NoSQL databases and GraphQL
Coding
Let's jump right into the code.
Setting up a new Rails API service
We can quickly spin up a new Rails API service by using Docker Compose. There's a quick start guide for Compose and Rails that we can use as a starting point. We just need to make minor modifications to work with the latest versions of Ruby and Ruby on Rails.
Let's start with this simple Dockerfile that pulls our desired Ruby version and references the name we will give our project (shortener):
# syntax=docker/dockerfile:1
FROM ruby:3.0.2
WORKDIR /shortener
COPY Gemfile /shortener/Gemfile
COPY Gemfile.lock /shortener/Gemfile.lock
RUN bundle install
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Configure the main process to run when running the image
CMD ["rails", "server", "-b", "0.0.0.0"]
Now we need a Gemfile to bootstrap the Rails application:
source 'https://rubygems.org'
gem 'rails', '~>6'
We also need a Gemfile.lock. We can use an empty file for this:
$ touch Gemfile.lock
The quick start guide also provides a helpful entrypoint.sh script that prevents errors when the server restarts. Create an entrypoint.sh file with the following content:
#!/bin/bash
set -e
# Remove a potentially pre-existing server.pid for Rails.
rm -f /shortener/tmp/pids/server.pid
# Then exec the container's main process (what's set as CMD in the Dockerfile).
exec "$@"
Finally, we need a docker-compose.yml file to manage our service:
version: "3.9"
services:
api:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/shortener
ports:
- "3000:3000"
Now we can create the Rails service with Docker Compose. We will pass a couple of custom parameters to the rails new command. We want a Rails API-only service, and since we are not using a supported database, we don't need ActiveRecord:
$ docker-compose run api rails new . --force --skip-active-record --api
Once that's done, we can build the project:
$ docker-compose build
After the build is done, you can run docker-compose up and visit http://localhost:3000 or run curl -I localhost:3000 to check if your new Rails service is running.
Creating the Fauna database
We can interact with our Fauna account using the web UI or via the fauna-shell. It's always a good idea to know how to use CLI tools and APIs to automate the creation and management of resources, so we will go with the shell for this tutorial. To install it, you'll need Node's package manager npm available on your machine. Then you can run:
$ npm install -g fauna-shell
Then you can configure a new account by running:
$ fauna cloud-login
Use my-cloud as the alias prefix for this configuration and then enter the email and password credentials you used previously to create your account. Finally, choose the my-cloud-us endpoint so you can replicate the commands we will run later.
$ fauna cloud-login
> ? The endpoint alias prefix (to combine with a region): my-cloud
> ? How do you prefer to authenticate? Email and Password
> ? Email address: youremail@example.com
> ? Password: [hidden]
> ? Endpoints created. Would you like to set one of them as default? my-cloud-us
> Endpoint 'my-cloud-us' set as default endpoint.
If the configuration was successful, you can run fauna list-databases:
$ fauna list-databases
> listing databases
> No databases created
No databases, as expected. Let's create a new one and name it shortener-db:
$ fauna create-database shortener-db
The way we authenticate against our database is via a secret key. We can generate a key for our database with:
fauna create-key shortener-db
Make sure you save that key somewhere safe, since you won't be able to retrieve it later.
Now, if you run fauna list-databases again, you should see the new database.
Creating the database schema
Let's create a simple schema for our application. We are building a basic URL shortener service where the user sends a URL, and our back end creates a shorter string that maps to the original URL. We will only use two fields for our schema, shortURL and longURL.
Create a new config/schema.gql file and add the following GraphQL code:
type Url {
  shortURL: String!
  longURL: String!
}
To import this schema into our database, we need to make an HTTP call to one of Fauna's GraphQL endpoints. If you followed the previous commands, we used the us region, so we should use the endpoint https://graphql.us.fauna.com/import. We can import the schema by executing an HTTP request to that endpoint using a tool such as cURL.
To import the schema, run the following command, replacing the placeholder [APP_KEY] with the secret application key for your database. Don't forget the : at the end of the key:
curl -u [APP_KEY]: https://graphql.us.fauna.com/import --data-binary "@config/schema.gql"
Schema imported successfully.
Use the following HTTP header to connect to the FaunaDB GraphQL API:
{ "Authorization": "Bearer [API_KEY]" }%
We will use the HTTP header from the output to make requests to our GraphQL database endpoint.
If you check the schema Fauna generated, either in the Fauna GraphQL Playground UI or by sending an introspection request to the GraphQL endpoint, you'll see the following mutations and query that were created for us:
type Mutation {
  createUrl(data: UrlInput!): Url!
  updateUrl(
    id: ID!
    data: UrlInput!
  ): Url
  deleteUrl(id: ID!): Url
}

type Query {
  findUrlByID(id: ID!): Url
}

input UrlInput {
  shortURL: String!
  longURL: String!
}
With these mutations and this query, we can quickly start interacting with our Url collection.
Setting up the GraphQL client
Having a GraphQL client library will make running requests much simpler. For Ruby, we have the graphql-client gem by GitHub, which works pretty well and is easy to set up. Let's add this gem to our Gemfile:
gem 'graphql-client'
And build the project again:
docker-compose build
Once that's done, we will need an initializer to configure the client. Create a new file config/initializers/fauna_api.rb and add the following content, replacing the [API_KEY] placeholder with your key:
require "graphql/client"
require "graphql/client/http"
module FaunaAPI
HTTP = GraphQL::Client::HTTP.new("https://graphql.us.fauna.com/graphql") do
def headers(context)
{ "Authorization": "Bearer [API_KEY]" }
end
end
Schema = GraphQL::Client.load_schema(HTTP)
Client = GraphQL::Client.new(schema: Schema, execute: HTTP)
end
There are some improvements you can make here. First, it's never a good idea to hard-code API secrets in your code; a better option is to use Rails credentials or environment variables. Second, you can load the schema just once and dump it to disk so subsequent boots can read it faster. There are examples in the library documentation on how to make these improvements. For our little project, we will keep it simple.
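As a rough sketch of both ideas, the initializer could look something like the following. The environment variable name (FAUNA_API_KEY) and the dump path (db/fauna_schema.json) are arbitrary choices for this example, and the dump_schema/load_schema calls come from the graphql-client documentation:

require "graphql/client"
require "graphql/client/http"

module FaunaAPI
  HTTP = GraphQL::Client::HTTP.new("https://graphql.us.fauna.com/graphql") do
    def headers(context)
      # Read the secret from the environment instead of hard-coding it
      { "Authorization": "Bearer #{ENV.fetch('FAUNA_API_KEY')}" }
    end
  end

  schema_path = Rails.root.join("db", "fauna_schema.json")
  # Dump the remote schema to disk once, then reuse the local copy on later boots
  GraphQL::Client.dump_schema(HTTP, schema_path.to_s) unless File.exist?(schema_path)

  Schema = GraphQL::Client.load_schema(schema_path.to_s)
  Client = GraphQL::Client.new(schema: Schema, execute: HTTP)
end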
To ensure the configuration is working correctly, you can run docker-compose up and verify your service starts with no issues by visiting http://localhost:3000.
Our first action: Create a URL
Right now, we don't have any routes defined for our API. Let's scaffold a controller, so we have some boilerplate code to work on:
docker-compose run api rails generate scaffold_controller url
This command will create a new controller with all the common REST actions we use in Rails. In this project, we will only use show and create.
Now that we have a controller for our url resource, let's talk about using the GraphQL client. The first step to make a request is to parse a GraphQL statement. We will use the FaunaAPI::Client.parse method, which takes a string containing our GraphQL request. Remember, we created the FaunaAPI module as part of the configuration initializer. One requirement in this step is that the parsed request must be assigned to a static constant, so we need to declare it outside of our controller action. Once we have our GraphQL request ready, we can execute it using the FaunaAPI::Client.query method, which receives the parsed request and a variables dictionary with values we can inject into the request. These values replace the variables starting with $ in our GraphQL statement.
Now that we know the basics of running GraphQL requests, let's write the create action. We only need the user to provide the long_url field to generate the short version. Every time a new create request comes in, we can take the first six characters of a random UUID and save them as the short version. Some improvements you could make are ensuring you don't store duplicate URLs and ensuring you don't have collisions when you generate the short version. As usual, we won't worry about these details here.
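If you did want to guard against collisions, one rough approach is to keep regenerating the code until a lookup comes back empty. The sketch below assumes the findUrlByShortURL query we register later in this article (and the SHOW_QUERY constant that wraps it):

# Hypothetical helper: retries until the generated code is unused.
def generate_unique_short_url
  loop do
    candidate = SecureRandom.uuid[0..5]
    lookup = FaunaAPI::Client.query(SHOW_QUERY, variables: { shortURL: candidate })
    # Keep the candidate only if no existing record uses it
    return candidate if lookup.data.to_h["findUrlByShortURL"].nil?
  end
end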
The complete create action looks like the following:
CREATE_MUTATION = FaunaAPI::Client.parse <<-'GRAPHQL'
  mutation($shortURL: String!, $longURL: String!) {
    createUrl(data: { shortURL: $shortURL, longURL: $longURL }) {
      _id
      longURL
      shortURL
    }
  }
GRAPHQL

def create
  long_url = params[:url][:long_url]
  short_url = SecureRandom.uuid[0..5]
  response = FaunaAPI::Client.query(CREATE_MUTATION, variables: { shortURL: short_url, longURL: long_url })

  if response.errors.any?
    render json: response.errors.to_h[:data].join(", "), status: :unprocessable_entity
  else
    render json: response.data.to_h, status: :created
  end
end
As you can see, we are using two variables for the long and short URL in the mutation that we can inject when we execute the request. We also return the Fauna _id identifier along with the other fields.
To test the action, start the service with docker-compose up and then execute a request using your favorite HTTP client. For example, if you use cURL, you can run:
curl 'http://localhost:3000/urls' -d "url[long_url]=http://www.example.com"
Output:
{"createUrl":{"_id":"314252391726710852","longURL":"http://www.example.com","shortURL":"6512ca"}}%
Get URL
Right now, the schema only has a query to find a URL by its ID (findUrlByID). Fauna created this query when we imported our schema. What we really need is to find a URL by its short version instead: a user creates a shortened URL and then shares that link, so other users can click on it and be redirected to the original version.
We can easily register a new query by importing a new version of the schema. Open the config/schema.gql file and add the new query:
type Url {
  shortURL: String!
  longURL: String!
}

type Query {
  findUrlByShortURL(shortURL: String!): Url
}
And now import the new schema, using mode=replace so the previously imported schema is replaced:
curl -u [API_KEY]: "https://graphql.us.fauna.com/import?mode=replace" --data-binary "@config/schema.gql"
Output:
Schema imported successfully.
Use the following HTTP header to connect to the FaunaDB GraphQL API:
{ "Authorization": "Bearer [API_SECRET]" }%
Now we can create our show action:
SHOW_QUERY = FaunaAPI::Client.parse <<-'GRAPHQL'
  query($shortURL: String!) {
    findUrlByShortURL(shortURL: $shortURL) {
      _id
      longURL
      shortURL
    }
  }
GRAPHQL

def show
  short_url = params[:id]
  response = FaunaAPI::Client.query(SHOW_QUERY, variables: { shortURL: short_url })

  if response.errors.any?
    render json: response.errors.to_h[:data].join(", "), status: :unprocessable_entity
  else
    render json: response.data.to_h['findUrlByShortURL'], status: :ok
  end
end
We are reusing the show route for this action, which is OK since it's the primary way to retrieve a URL, but we should rewrite it using a short_url parameter instead of id to make the code more readable and correct.
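One way to do that, building on the routes sketch above (again, a suggestion rather than something the generators produce), is to rename the route parameter and read params[:short_url] in the action:

# config/routes.rb
Rails.application.routes.draw do
  # Use :short_url instead of :id in the generated routes and params
  resources :urls, param: :short_url, only: [:show, :create]
end

# app/controllers/urls_controller.rb
def show
  short_url = params[:short_url]
  response = FaunaAPI::Client.query(SHOW_QUERY, variables: { shortURL: short_url })
  # ...the rendering logic stays the same as before
end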
Before testing it, remove or comment out the set_url before_action from the UrlsController.
# before_action :set_url, only: [:show, :update, :destroy]
Also, restart your service (stop Docker and run docker-compose up again) so the service picks up the new schema.
Now you can try making a GET request to your API. First, we need a short URL we can test, so let's create a new record and then query it back:
curl 'http://localhost:3000/urls' -d "url[long_url]=http://www.anotherexample.com"
Output:
{"createUrl":{"_id":"314287535668854849","longURL":"http://www.anotherexample.com","shortURL":"c3048a"}}%
Now we can make a GET request using this new short URL, c3048a:
curl 'http://localhost:3000/urls/c3048a'
Output:
{"_id":"314287535668854849","longURL":"http://www.anotherexample.com","shortURL":"c3048a"}
Conclusion
More people are starting to use serverless tools every day. Having a serverless database is great, but it's not helpful if we can't integrate it easily with our other tools.
Even though Fauna doesn't provide drivers for all major languages, it gives us a much more powerful tool: GraphQL. Now you only need a GraphQL client, which almost every major language has, and you can talk to your database using a much nicer syntax.
This tutorial went through all the basic steps to integrate Fauna into a Rails API service by interfacing with its GraphQL endpoints. We saw that we can still use Rails' nicest features to quickly write an API without a database driver or Rails' ORM, relying only on the controller and routing layers. We also saw how easy it was to register a GraphQL schema and use a GraphQL client to execute our mutations and queries.
Thanks for reading!