Building Rust Web API with Warp and Diesel


Szymon Gibała

Posted on August 3, 2020


Introduction

In this article, I would like to share with you my experience of writing a very simple Web API in Rust using Warp and Diesel.

As I am still a Rust newbie, please let me know of any mistakes you spot, and of course, any feedback is appreciated.

Prerequisites

  • Basic knowledge of Rust
  • Basic knowledge of how web APIs work

Project overview

To avoid creating yet another Todo List, we are going to build a simple book catalog (I know, I know, it is almost as original).

API

We will start by defining our API. It will consist of the following endpoints:

  • POST /api/v1/books - to add a book to the catalog.
  • GET /api/v1/books - to list all our books.
  • PUT /api/v1/books/:id - to update the status of a book, for example: WantToRead, Reading, Finished, Rereading.
  • DELETE /api/v1/books/:id - to delete a book from our collection.

As mentioned before, we will use Warp as our web framework. It is based on composable Filters and I have found it quite easy to work with.

Database

To manage our database and connect it with our application, we will use Diesel, which is probably the most popular Rust ORM.

Diesel not only allows us to read and write to the database from our code but also provides a CLI tool to manage migrations.

We will use Postgres as a database but Diesel also supports other drivers like MySQL or SQLite.

Let's implement it!

Setup the project

First, we will create a new project with cargo:

cargo new rust-api-warp-and-diesel

Now let's declare dependencies for our application in Cargo.toml:

...

[dependencies]
tokio = { version = "0.2", features = ["macros"] }
warp = "0.2"
serde_derive = "1.0"
serde = "1.0"
log = "0.4"
pretty_env_logger = "0.3"
diesel = { version = "1.4.4", features = ["postgres", "r2d2"] }

To explain things quickly:

  • Warp uses tokio as its async runtime, therefore we need it as a dependency.
  • We will also need serde to work with JSON.
  • For diesel we need the postgres and r2d2 features to work with the Postgres database and create a connection pool.
  • For some basic logging, we will use log and pretty_env_logger.

Setup database with Diesel

Now that our project is set up, we can go ahead and start preparing our database. For that, we will need to install the Diesel CLI. You can find detailed instructions in the Diesel getting started guide.

To set up Diesel for our project, we need to provide it with the DATABASE_URL environment variable or a .env file. Let's create the latter now:

echo DATABASE_URL=postgres://postgres:password@localhost:5432/book_catalog > .env

To continue the setup we will need a running database. You can use a local Postgres or spin up an instance in a Docker container:

docker run -p 5432:5432 --rm -e POSTGRES_PASSWORD=password postgres:12

And now we can run the setup:

diesel setup

This will create a book_catalog database in our Postgres and add some files to our project:

  • migrations directory is the place where our migrations live.
  • diesel.toml is a configuration file for diesel-cli for our project.

Now, let's add our first migration:

diesel migration generate book_catalog_initial_schema

Every migration is a subdirectory of the migrations directory, and its name is a timestamp joined with the name we passed to the command. The migration consists of two SQL files:

  • up.sql for performing the migration.
  • down.sql for reverting it.

Database schema

Our database will be stupid simple, with just one table representing our books. It is obviously far from perfect, but it is enough for demonstration purposes.

In the up.sql we will simply create the table:

CREATE TABLE books (
    id BIGSERIAL PRIMARY KEY,
    title varchar(256) NOT NULL,
    author varchar(256) NOT NULL,
    status varchar(256) NOT NULL
);

and in the down.sql we will drop it:

DROP TABLE books;

You may ask: why not use an enum for the book status? Unfortunately, Diesel does not support database enums out of the box, so to keep things simple we will just use varchar and map it to a Rust enum in our code.
If you really need enums, you can check out this crate, which makes it possible to use them directly with Diesel.

The last step will be to run our migration against the database and generate the schema.rs file:

diesel migration run

The file contains the table! macro which creates code based on our database schema to represent tables and columns.

If you would like to adjust the file name or its location, you can do so by modifying diesel.toml. For our case, the default is perfectly fine.
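
For reference, with the migration above the generated schema.rs should look more or less like this (the exact output may differ slightly depending on your Diesel version):

table! {
    books (id) {
        id -> Int8,
        title -> Varchar,
        author -> Varchar,
        status -> Varchar,
    }
}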

Define model

Before we start operating on the database we need to have an internal representation of our data. We will create our structs in the model.rs file.

use serde_derive::{Deserialize, Serialize};
use crate::schema::books;

#[derive(Serialize, Debug, Clone, Queryable)]
pub struct BookDTO {
    pub id: i64,
    pub title: String,
    pub author: String,
    pub status: BookStatus,
}

// Struct for creating Book
#[derive(Debug, Clone, Insertable)]
#[table_name = "books"]
pub struct CreateBookDTO {
    pub title: String,
    pub author: String,
    pub status: BookStatus,
}

This part is pretty straightforward. We declare two structs: CreateBookDTO will be used to create books, as it does not have an id field (which will be assigned by Postgres), while BookDTO represents the whole book object and will be used for queries.

Besides that, we specify the table_name and derive some of the Diesel traits: Queryable for performing database queries and Insertable for performing inserts.

You may have noticed that in the case of the BookDTO struct we do not actually need to specify the table_name. That is because structs implementing Queryable are not tied to a specific table: they just represent the result of a query with a specific type signature, and can therefore be used with multiple tables.
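
To illustrate, here is a hypothetical snippet (not something we will use in the project): since Queryable only cares about the number, order, and types of the selected columns, the same rows could just as well be loaded into a plain tuple instead of BookDTO.

use diesel::prelude::*;
use diesel::pg::PgConnection;
use crate::schema::books;

// Loads the same rows as a Vec of (id, title, author, status) tuples,
// matching the column order and types from schema.rs.
fn load_books_as_tuples(
    connection: &PgConnection,
) -> Result<Vec<(i64, String, String, String)>, diesel::result::Error> {
    books::table.load(connection)
}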

We are still missing one thing: the BookStatus enum. As mentioned before, enums are not supported by Diesel out of the box, so for it to be stored as a text field (varchar(256) in our case), we need to implement two traits:

  • ToSql - to convert Rust enum value to text stored in the database.
  • FromSql - to match text from the database to Rust enum value.

Let's add it to our model.rs:

...
use diesel::serialize::{ToSql, Output, IsNull};
use diesel::pg::Pg;
use std::io::Write;
use diesel::{serialize, deserialize};
use diesel::deserialize::FromSql;
use diesel::sql_types::Text;

#[derive(Serialize, Deserialize, Debug, Copy, Clone, AsExpression, FromSqlRow)]
#[sql_type = "Text"]
pub enum BookStatus {
    WantToRead,
    Reading,
    Finished,
    Rereading,
}

impl ToSql<Text, Pg> for BookStatus {
    fn to_sql<W: Write>(&self, out: &mut Output<W, Pg>) -> serialize::Result {
        match *self {
            BookStatus::WantToRead => out.write_all(b"WANT_TO_READ")?,
            BookStatus::Reading => out.write_all(b"READING")?,
            BookStatus::Finished => out.write_all(b"FINISHED")?,
            BookStatus::Rereading => out.write_all(b"REREADING")?, 
        }
        Ok(IsNull::No)
    }
}

impl FromSql<Text, Pg> for BookStatus {
    fn from_sql(bytes: Option<&[u8]>) -> deserialize::Result<Self> {
        match not_none!(bytes) {
            b"WANT_TO_READ" => Ok(BookStatus::WantToRead),
            b"READING" => Ok(BookStatus::Reading),
            b"FINISHED" => Ok(BookStatus::Finished),
            b"REREADING" => Ok(BookStatus::Rereading),
            _ => Err("Unrecognized enum variant".into()),
        }
    }
}

Custom Errors

The last step for our model will be the custom error type. We will add it to a new errors.rs file. Let's define a new enum - ErrorType - and a new struct - AppError.

use std::fmt;
use warp::reject::Reject;

#[derive(Debug)]
pub enum ErrorType {
    NotFound,
    Internal,
    BadRequest,
}

#[derive(Debug)]
pub struct AppError {
    pub err_type: ErrorType,
    pub message: String,
}

impl AppError {
    pub fn new(message: &str, err_type: ErrorType) -> AppError {
        AppError { message: message.to_string(), err_type }
    }

    pub fn to_http_status(&self) -> warp::http::StatusCode {
        match self.err_type {
            ErrorType::NotFound => warp::http::StatusCode::NOT_FOUND,
            ErrorType::Internal => warp::http::StatusCode::INTERNAL_SERVER_ERROR,
            ErrorType::BadRequest => warp::http::StatusCode::BAD_REQUEST,
        }
    }

    pub fn from_diesel_err(err: diesel::result::Error, context: &str) -> AppError {
        AppError::new(
            format!("{}: {}", context, err.to_string()).as_str(),
            match err {
                diesel::result::Error::DatabaseError(db_err, _) => {
                    match db_err {
                        diesel::result::DatabaseErrorKind::UniqueViolation => ErrorType::BadRequest,
                        _ => ErrorType::Internal,
                    }
                }
                diesel::result::Error::NotFound => ErrorType::NotFound,
                // Here we can handle other cases if needed
                _ => {
                    ErrorType::Internal
                }
            },
        )
    }
}

impl std::error::Error for AppError {}

impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "{}", self.message)
    }
}

impl Reject for AppError {}

ErrorType will help us differentiate between different kinds of errors and map them to the proper HTTP status codes in the to_http_status() method. For our application we will only use three error types, but you can add more if needed.

We will also need to convert errors from Diesel to our AppError, and for that we have from_diesel_err(...). Note that we map Diesel errors to a specific ErrorType, so if we get diesel::result::Error::NotFound from the database, our API will properly respond with a 404 status code.

Furthermore, AppError implements standard traits like Display and Error, but also one specific to Warp - Reject. This trait allows us to pass an AppError to the warp::reject::custom(...) function so that we can later use it while handling rejections.

Implement data access

Now that we have our database and a model representing the entities, we can go ahead and write some code that will allow us to access the DB. The heavy lifting here is done by Diesel, so we will just need a couple of simple methods, which we will wrap in a DBAccessManager struct.

Let's create a new file for that and call it data_access.rs.

First, we will add the required imports and define the struct. It will contain a database connection object which we will get from the connection pool - more on that later.

use diesel::prelude::*;
use diesel::r2d2::{ConnectionManager, PooledConnection};
use crate::model::{CreateBookDTO, BookDTO, BookStatus};
use crate::errors::{AppError,ErrorType};

type PooledPg = PooledConnection<ConnectionManager<PgConnection>>;

pub struct DBAccessManager {
    connection: PooledPg,
}

Now let's implement the first method.

impl DBAccessManager {
    pub fn new(connection: PooledPg) -> DBAccessManager {
        DBAccessManager {connection}
    }

    pub fn create_book(&self, dto: CreateBookDTO) -> Result<BookDTO, AppError> {
        use super::schema::books;

        diesel::insert_into(books::table) // insert into books table
            .values(&dto) // use values from CreateBookDTO
            .get_result(&self.connection) // execute query
            .map_err(|err| {
                AppError::from_diesel_err(err, "while creating book")
            }) // if error occurred map it to AppError
    }
}

For inserting data into the database we use the insert_into function, passing it the books::table generated by the table! macro in schema.rs. Then we set the values from our CreateBookDTO struct and finally execute the query.

As a result we expect to get either a BookDTO or a diesel::result::Error, so if an error occurs we use the previously prepared AppError::from_diesel_err function to map it to an AppError.

Let's add remaining methods for listing, updating, and deleting books.

impl DBAccessManager {

    ...

    pub fn list_books(&self) -> Result<Vec<BookDTO>, AppError> {
        use super::schema::books::dsl::*;

        books
            .load(&self.connection)
            .map_err(|err| {
                AppError::from_diesel_err(err, "while listing books")
            })
    }

    pub fn update_book_status(&self, book_id: i64, new_status: BookStatus) -> Result<usize, AppError> {
        use super::schema::books::dsl::*;

        let updated = diesel::update(books)
            .filter(id.eq(book_id))
            .set(status.eq(new_status))
            .execute(&self.connection)
            .map_err(|err| {
                AppError::from_diesel_err(err, "while updating book status")
            })?;

        if updated == 0 {
            return Err(AppError::new("Book not found", ErrorType::NotFound))
        }
        return Ok(updated)
    }

    pub fn delete_book(&self, book_id: i64) -> Result<usize, AppError> {
        use super::schema::books::dsl::*;

        let deleted = diesel::delete(books.filter(id.eq(book_id)))
            .execute(&self.connection)
            .map_err(|err| {
                AppError::from_diesel_err(err, "while deleting book")
            })?;

        if deleted == 0 {
            return Err(AppError::new("Book not found", ErrorType::NotFound))
        }
        return Ok(deleted)
    }
}

The code is pretty similar. We use filter([COLUMN_NAME].eq([VALUE])) as an equivalent of the SQL WHERE clause and set([COLUMN_NAME].eq([NEW_VALUE])) for column updates. We use load() for querying multiple rows and execute() to run queries like update or delete.

In the case of the update_book_status() and delete_book() methods we additionally check whether any rows were affected, and if not, we return a new error with the NotFound type.
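
If you are curious what SQL these builder calls actually produce, you can print it with diesel::debug_query. Below is a standalone sketch, not part of our application - print_update_sql is just a throwaway helper name made up for illustration:

use diesel::debug_query;
use diesel::pg::Pg;
use diesel::prelude::*;
use crate::model::BookStatus;
use crate::schema::books::dsl::*;

fn print_update_sql() {
    // Same builder chain as in update_book_status, with hard-coded values.
    let query = diesel::update(books)
        .filter(id.eq(1_i64))
        .set(status.eq(BookStatus::Finished));

    // Prints something along the lines of:
    // UPDATE "books" SET "status" = $1 WHERE "books"."id" = $2 -- binds: [...]
    println!("{}", debug_query::<Pg, _>(&query));
}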

We can now import macros from the diesel crate in our main.rs, as well as declare our modules:

#[macro_use]
extern crate diesel;

mod model;
mod errors;
mod data_access;
mod schema;

Create API

Before we create our awesome Books API, let's start with something simple to get a taste of Warp.

We will start with a simple Hello World handler, so let's replace our main function with the following:

...

use std::env;
use warp::{Filter, reject};
use log::{info};

#[tokio::main]
async fn main() {
    if env::var_os("RUST_LOG").is_none() {
        env::set_var("RUST_LOG", "info");
    }
    pretty_env_logger::init();

    let routes = warp::path!("hello").map(|| "Hello World!".to_string());

    info!("Starting server on port 3030...");

    // Start up the server...
    warp::serve(routes).run(([127, 0, 0, 1], 3030)).await;
}

We initialize our Filter using the warp::path! macro and specify the path as hello. Then we extend it with the map function, which simply returns the Hello World! string. By default, the response will have a 200 status code.

Then we simply start our server on port 3030.

We can now run it with cargo:

cargo run

And verify that it is working correctly using curl:

curl localhost:3030/hello -v

We should get a 200 response with:

Hello World!

Add database connection pool

To access the database we need a database connection, and we will need it when handling every request. Initializing a connection every time someone calls our API would be expensive, so as mentioned in the previous sections, we will use a connection pool.

To create a connection pool we will use the r2d2 feature of Diesel. First, we need a function to create our connection pool. Let's add it to main.rs:

use diesel::pg::PgConnection;
use diesel::r2d2::{ConnectionManager, Pool};

type PgPool = Pool<ConnectionManager<PgConnection>>;

fn pg_pool(db_url: &str) -> PgPool {
    let manager = ConnectionManager::<PgConnection>::new(db_url);
    Pool::new(manager).expect("Postgres connection pool could not be created")
}

Instead of passing the connection object itself, we will wrap it with the DBAccessManager that we have created earlier.

use crate::data_access::DBAccessManager;
use crate::errors::{AppError, ErrorType};

fn with_db_access_manager(pool: PgPool) -> impl Filter<Extract = (DBAccessManager,), Error = warp::Rejection> + Clone {
    warp::any()
        .map(move || pool.clone())
        .and_then(|pool: PgPool| async move {  match pool.get() {
            Ok(conn) => Ok(DBAccessManager::new(conn)),
            Err(err) => Err(reject::custom(
                AppError::new(format!("Error getting connection from pool: {}", err.to_string()).as_str(), ErrorType::Internal))
            ),
        }})
}

This function gets a connection from the pool, uses it to create a DBAccessManager, and appends it to the Filter's parameters tuple. We will see this in action when we set up our filters.

Creating handlers

Before we stitch everything together let's create structs and handlers for our endpoints. We will do it in a dedicated file api.rs.

First, we need structs that will represent the JSON objects that our API will be receiving and responding with:

use serde_derive::{Deserialize, Serialize};
use crate::model::{BookStatus, CreateBookDTO};

#[derive(Debug, Deserialize, Clone)]
pub struct AddBook {
    pub title: String,
    pub author: String,
    pub status: BookStatus,
}

#[derive(Debug, Deserialize, Clone)]
pub struct UpdateStatus {
    pub status: BookStatus,
}

#[derive(Debug, Serialize, Clone)]
pub struct IdResponse {
    pub id: i64,
}

impl IdResponse {
    pub fn new(id: i64) -> IdResponse {
        IdResponse{id}
    }
}

We can also add a method to the AddBook struct to convert it to the CreateBookDTO that we will use later:

...
impl AddBook {
    pub fn to_dto(&self) -> CreateBookDTO {
        CreateBookDTO{
            title: self.title.clone(),
            author: self.author.clone(),
            status: self.status.clone(),
        }
    }
}

Before adding the handler methods, let's add one more function that takes a Result and, based on it, responds either with an object serialized to JSON or with an error. Here we will leverage the Reject trait implemented for our AppError:

...
use crate::AppError;

fn respond<T: Serialize>(result: Result<T, AppError>, status: warp::http::StatusCode) -> Result<impl warp::Reply, warp::Rejection> {
    match result {
        Ok(response) => {
            Ok(warp::reply::with_status(warp::reply::json(&response), status))
        }
        Err(err) => {
            log::error!("Error while trying to respond: {}", err.to_string());
            Err(warp::reject::custom(err))
        }
    }
}

To serialize the struct to JSON it needs to implement the Serialize trait, therefore T: Serialize.

Now we can use it in every handler method we create. Let's add them to api.rs:

...
use crate::data_access::DBAccessManager;
use serde::Serialize;

pub async fn add_book(
    db_manager: DBAccessManager,
    new_book: AddBook,
) -> Result<impl warp::Reply, warp::Rejection> {
    log::info!("handling add book");

    let create_book_dto = new_book.to_dto();

    let id_response = db_manager.create_book(create_book_dto).map(|book|
        { IdResponse::new(book.id) }
    );

    respond(id_response, warp::http::StatusCode::CREATED)
}

pub async fn update_status(
    book_id: i64,
    db_manager: DBAccessManager,
    status_update: UpdateStatus,
) -> Result<impl warp::Reply, warp::Rejection> {
    log::info!("handling update status");

    let id_response = db_manager.update_book_status(book_id, status_update.status).map(|_|
        { IdResponse::new(book_id) }
    );

    respond(id_response, warp::http::StatusCode::OK)
}

pub async fn delete_book(
    book_id: i64,
    db_manager: DBAccessManager,
) -> Result<impl warp::Reply, warp::Rejection> {
    log::info!("handling delete book");

    let result = db_manager.delete_book(book_id).map(|_| ());

    respond(result, warp::http::StatusCode::NO_CONTENT)
}

pub async fn list_books(
    db_manager: DBAccessManager,
) -> Result<impl warp::Reply, warp::Rejection> {
    log::info!("handling list books");

    let result = db_manager.list_books();

    respond(result, warp::http::StatusCode::OK)
}


We have four simple methods:

  • add_book to add a new book to our collection.
  • update_status to update the status of the specified book.
  • delete_book to delete the book.
  • list_books to list all of our books.

As you can see all of our handlers are async functions and their logic is quite simple:

  • Log that the method is called.
  • Call a method from DBAccessManager.
  • Map the result to the desired struct.
  • Respond with a JSON object or an error.

We could get away without defining separate functions for our handlers, as their logic is quite trivial, but I find it useful to decouple it from all the Filter setup that we will do in our main file. This would be much more apparent in a more complex application.

Before we move on we need to declare a new module in our main.rs file.

...
mod api;
...

Handling rejections

We will add one more function that will help us handle rejections. Because we implemented the Reject trait for AppError, we can now extract it from the warp::Rejection struct. We will do just that in the handle_rejection function. Let's add it to errors.rs:

...

use std::convert::Infallible;
use warp::{Rejection, Reply};
use serde_derive::Serialize;

#[derive(Serialize)]
struct ErrorMessage {
    code: u16,
    message: String,
}

pub async fn handle_rejection(err: Rejection) -> Result<impl Reply, Infallible> {
    let code;
    let message;

    if err.is_not_found() {
        code = warp::http::StatusCode::NOT_FOUND;
        message = "Not Found";
    } else if let Some(app_err) = err.find::<AppError>() {
        code = app_err.to_http_status();
        message = app_err.message.as_str();
    } else if let Some(_) = err.find::<warp::filters::body::BodyDeserializeError>() {
        code = warp::http::StatusCode::BAD_REQUEST;
        message = "Invalid Body";
    } else if let Some(_) = err.find::<warp::reject::MethodNotAllowed>() {
        code = warp::http::StatusCode::METHOD_NOT_ALLOWED;
        message = "Method Not Allowed";
    } else {
        // In case we missed something - log and respond with 500
        eprintln!("unhandled rejection: {:?}", err);
        code = warp::http::StatusCode::INTERNAL_SERVER_ERROR;
        message = "Unhandled rejection";
    }

    let json = warp::reply::json(&ErrorMessage {
        code: code.as_u16(),
        message: message.into(),
    });

    Ok(warp::reply::with_status(json, code))
}


Here we try to extract different errors from the warp::Rejection struct and map them to the proper HTTP status codes.

For serializing the error response to JSON we use a simple struct - ErrorMessage - and we use warp::reply::with_status(...) to respond with the proper HTTP status code.

Connecting the pieces

Before we make use of our handlers we need to add one more filter, to decode the request body from JSON and append it to the parameters tuple.

...
use serde::de::DeserializeOwned;

fn with_json_body<T: DeserializeOwned + Send>(
) -> impl Filter<Extract = (T,), Error = warp::Rejection> + Clone {
    // When accepting a body, we want a JSON body
    // (and to reject huge payloads)...
    warp::body::content_length_limit(1024 * 16).and(warp::body::json())
}

We will create every route as a separate function:

...
use crate::api::{AddBook, UpdateStatus};

/// POST /books
fn add_book(
    pool: PgPool
) -> impl Filter<Extract = impl warp::Reply, Error = warp::Rejection> + Clone {
    warp::path!("books")                    // Match /books path
        .and(warp::post())                  // Match POST method
        .and(with_db_access_manager(pool))  // Add DBAccessManager to params tuple
        .and(with_json_body::<AddBook>())   // Try to deserialize JSON body to AddBook
        .and_then(api::add_book)            // Pass the params tuple to the handler function
}

The rest of the methods follow a similar structure:

...
/// GET /books
fn list_books(
    pool: PgPool
) -> impl Filter<Extract = impl warp::Reply, Error = warp::Rejection> + Clone {
    warp::path!("books")
        .and(warp::get())
        .and(with_db_access_manager(pool))
        .and_then(api::list_books)
}

/// PUT /books/:id
fn update_status(
    pool: PgPool
) -> impl Filter<Extract = impl warp::Reply, Error = warp::Rejection> + Clone {
    warp::path!("books" / i64 )
        .and(warp::put())
        .and(with_db_access_manager(pool))
        .and(with_json_body::<UpdateStatus>())
        .and_then(api::update_status)
}

/// DELETE /books/:id
fn delete_book(
    pool: PgPool
) -> impl Filter<Extract = impl warp::Reply, Error = warp::Rejection> + Clone {
    warp::path!("books" / i64 )
        .and(warp::delete())
        .and(with_db_access_manager(pool))
        .and_then(api::delete_book)
}

Now we will add the final function to combine all the previously created filters into a single one that will be passed to warp::serve.

...
fn api_filters(
    pool: PgPool
) -> impl Filter<Extract=impl warp::Reply, Error=warp::Rejection> + Clone  {
    warp::path!("api" / "v1" / ..)   // Add path prefix /api/v1 to all our routes
        .and(
            add_book(pool.clone())
                .or(update_status(pool.clone()))
                .or(delete_book(pool.clone()))
                .or(list_books(pool))
        )
}

Finally, let's update our main function to finalize our API.

We will read the database connection string from the DATABASE_URL environment variable using env::var("DATABASE_URL"). Thanks to that we can reuse the .env file created for Diesel.

We will also use the previously prepared functions to create our database connection pool and the combined filter with the API endpoints.

Last but not least, we will call the recover function on the filter and pass it handle_rejection, so that it is invoked when a request does not match any filter or when an error is returned.

...

#[tokio::main]
async fn main() {
    if env::var_os("RUST_LOG").is_none() {
        env::set_var("RUST_LOG", "info");
    }
    pretty_env_logger::init();

    let database_url = env::var("DATABASE_URL").expect("DATABASE_URL env not set");

    let pg_pool = pg_pool(database_url.as_str());

    let routes = api_filters(pg_pool)
        .recover(errors::handle_rejection);

    info!("Starting server on port 3030...");

    // Start up the server...
    warp::serve(routes).run(([127, 0, 0, 1], 3030)).await;
}
...

Now we should be able to successfully compile the application.

Run the application

First, let's make sure we still have our database up and running. If that is the case, we need to set the DATABASE_URL environment variable. We can do it manually or leverage the existing .env file that we created for Diesel:

export $(cat .env | xargs)

Now we can run the application again using cargo:

cargo run

We should see some logs indicating that the application has started:

 INFO  rust_api_warp_and_diesel > Starting server on port 3030...
 INFO  warp::server             > Server::run; addr=V4(127.0.0.1:3030)
 INFO  warp::server             > listening on http://127.0.0.1:3030

Use the API

Now that everything is up and running let's make some calls!

Add a book that we are currently reading:

curl localhost:3030/api/v1/books -X POST -d '{"title":"Game of Thrones", "author": "George R.R. Martin", "status":"Reading"}' -H "Content-Type: application/json"
{"id":1}

We should get back the id, which we can now use to update the book status:

curl localhost:3030/api/v1/books/1 -X PUT -d '{"status":"Finished"}' -H "Content-type: application/json"
{"id":1}

Let's list books to see that the status was updated:

curl localhost:3030/api/v1/books
[{"id":1,"title":"Game of Thrones","author":"George R.R. Martin","status":"Finished"}]

And finally, we can also delete it:

curl localhost:3030/api/v1/books/1 -X DELETE

Summing up

Obviously our application is very simplistic and far from perfect; there are tons of things we would have to do to bring it even close to production quality, but it is enough to get started and learn some fundamentals of Warp and Diesel.

If you have any suggestions or feedback please let me know!

Want to get your hands dirty?

A good way to learn new things in the software space (at least for me) is to take an existing piece of code and add something to it, as it exercises both your code reading and comprehension skills as well as your writing skills.

If you are up for the challenge and want to get your hands a little dirty, try to implement another endpoint (GET /api/v1/books/:id) that returns the book with the specified ID.

The following curl:

curl localhost:3030/api/v1/books/1

should return the book with ID 1, for example:

{"id":1,"title":"Game of Thrones","author":"George R.R. Martin","status":"Finished"}

You can clone the code from the repository.
If you struggle with doing it by yourself, don't worry! The answer is available on the get-book-endpoint branch.
