Rust GraphQL APIs for NodeJS Developers: Introduction

tugascript

Afonso Barracha

Posted on February 8, 2024


Series Intro

Although I'm a fan of NodeJS, it has a significant limitation: its single-threaded event loop, which makes CPU-bound work hard to scale. Furthermore, TypeScript/JavaScript can be slow and resource-intensive when executing processor-heavy operations.

I ran into these limitations with my usual NodeJS tech stack of GraphQL, NestJS, and SQL (predominantly PostgreSQL with MikroORM). To overcome them, I've put together a comparable stack in Rust that still offers a reasonable developer experience:

  • Async-GraphQL: A server-side GraphQL library implemented in Rust. It complies fully with the GraphQL specification and most of its extensions, providing type safety and high performance.
  • SQL with SeaORM: SQL, or Structured Query Language, is the foundation for popular databases like PostgreSQL. SeaORM is a relational ORM designed to facilitate the building of web services in Rust.
  • Actix-Web: A powerful, pragmatic, and extremely fast web framework for Rust.
  • Tokio: An asynchronous runtime for the Rust programming language, enabling efficient and scalable non-blocking I/O operations.

Introduction

In this article, I will explore and compare the NodeJS frameworks I typically use against my experiences with Rust, providing an overview of how to utilize them effectively.

Why Use Rust?

For applications with processor-intensive tasks such as cryptographic algorithms, keyword extraction, or PDF generation, Rust emerges as an ideal language for server development. Its performance and efficiency in handling such tasks are hard to beat.

NestJS vs. Actix-Web

For general I/O-heavy applications, I prefer NestJS, a progressive Node.js framework designed for building efficient, reliable, and scalable server-side applications.

However, it's important to note Node.js's limitation: its single-threaded nature. Processor-intensive tasks can block the event loop for every user, and resource usage does not scale well.

Transitioning to Actix-Web

Actix-web, while not as feature-rich as NestJS, resembles lighter-weight frameworks such as Fastify or Express. This means much of the folder structure and architectural decisions are left to the developer. I recommend adopting a Model-Service-Resolver (MSR) structure (a possible folder layout is sketched after the list below), consisting of:

  • Model: Represents the Data Transfer Objects (DTOs) and GraphQL Objects that expose API data.
  • Service: Manages data processing for specific models, encapsulating the business logic.
  • Resolver: Handles GraphQL queries and mutations, interacting with specific models.
  • Providers: External services utilized by the application, such as PostgreSQL or Redis.
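
As a rough sketch (the folder and file names below are my own placeholders, not anything Actix-web requires), a project following this structure might be laid out like this:

src/
├── main.rs
├── common/          # shared error handling and helpers
├── dtos/            # GraphQL objects and input types (Model)
├── providers/       # external resources (database.rs, cache.rs, ...)
├── resolvers/       # GraphQL queries and mutations (Resolver)
├── services/        # business logic (Service)
└── startup/         # app, schema and telemetry setup
entities/            # SeaORM entities (separate workspace crate)
migrations/          # SeaORM migrations (separate workspace crate)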

Model

For our models, we use SeaORM, and I like to keep the entities in their own crate inside a Cargo workspace. Begin by creating a library named entities:

$ cargo new entities --lib

Then, include it in your main Cargo.toml file:

# ...

[workspace]
members = [".", "entities"]

In the entities library Cargo.toml, add the necessary dependencies:

[dependencies]
serde = { version = "1.0", features = ["derive"] }
chrono = "0.4"
sea-orm = { version = "0.12", features = [
    "sqlx-postgres",
    "runtime-actix-native-tls",
] }

Creating an entity

Follow the SeaORM documentation to create an entity. For example, to define a User entity, create a user.rs file and declare it in your lib.rs (pub mod user;):

use chrono::Utc;
use sea_orm::{entity::prelude::*, ActiveValue};

#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "users")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: i32,
    pub name: String,
    // Referenced later by the user service and the GraphQL DTO.
    pub username: String,
    pub date_of_birth: String,
    pub created_at: DateTime,
    pub updated_at: DateTime,
}

// Define a relationship enum if needed (empty for this example)
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}

// Implement `ActiveModelBehavior` for auto-updating timestamps
#[async_trait::async_trait]
impl ActiveModelBehavior for ActiveModel {
    async fn before_save<C: ConnectionTrait>(mut self, _: &C, insert: bool) -> Result<Self, DbErr> {
        let current_time = Utc::now().naive_utc();
        self.updated_at = ActiveValue::Set(current_time);
        if insert {
            self.created_at = ActiveValue::Set(current_time);
        }
        Ok(self)
    }
}

This approach is slightly more complex than using MikroORM for entity definition, which would typically look like this in TypeScript:

import { Entity, PrimaryKey, Property } from '@mikro-orm/core';
import { IsString, Length, Matches } from 'class-validator';

@Entity({ tableName: 'users' })
export class UserEntity implements IUser {
  @PrimaryKey()
  public id: number;

  @Property({ columnType: 'varchar', length: 100 })
  @IsString()
  @Length(3, 100)
  @Matches(NAME_REGEX, {
    message: 'Name must not have special characters',
  })
  public name: string;

  @Property({ columnType: 'varchar', length: 10 })
  @IsString()
  @Length(10, 10)
  public dateOfBirth: string;

  @Property({ onCreate: () => new Date() })
  public createdAt: Date = new Date();

  @Property({ onUpdate: () => new Date() })
  public updatedAt: Date = new Date();
}

Migrations

For migrations, both schema-first and entity-first approaches are viable. Here, we'll focus on entity-first. Begin by installing necessary CLI tools and setting up the migration folder:

$ cargo install sqlx-cli
$ cargo install sea-orm-cli
$ sea-orm-cli migrate init -d migrations

Add the new migrations directory to the workspace members in the root Cargo.toml, and add the dependencies the main crate will need:

# ...

[workspace]
members = [".", "entities", "migrations"]

[dependencies]
# ...
anyhow = "1"
actix-web = "4"
async-graphql-actix-web = "7"
async-graphql = { version = "7", features = ["default", "dataloader"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
sea-orm = { version = "0.12", features = [
    "sqlx-postgres",
    "runtime-actix-native-tls",
] }
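The migrations crate has its own Cargo.toml as well. As a rough sketch (the versions are assumptions on my part), it needs the sea-orm-migration crate plus a path dependency on the entities library so the migrations can reference your entities:

[dependencies]
async-trait = "0.1"
entities = { path = "../entities" }
sea-orm-migration = { version = "0.12", features = [
    "sqlx-postgres",
    "runtime-actix-native-tls",
] }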

Then generate migration files as required (for example with sea-orm-cli migrate generate create_users) and edit them to create or drop tables based on your entities:

use sea_orm_migration::{
    prelude::*,
    sea_orm::{DbBackend, Schema},
};

use entities::user::Entity;

#[derive(DeriveMigrationName)]
pub struct Migration;

#[async_trait::async_trait]
impl MigrationTrait for Migration {
    async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        let schema = Schema::new(DbBackend::Postgres);
        manager
            .create_table(
                schema
                    .create_table_from_entity(Entity)
                    .if_not_exists()
                    .to_owned(),
            )
            .await
    }

    async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
        manager
            .drop_table(Table::drop().table(Entity).to_owned())
            .await
    }
}
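The migrations crate's lib.rs then registers each migration through the MigratorTrait. A minimal sketch, assuming the migration above lives in a module named m20240101_000001_create_users (the module name is a placeholder for whatever sea-orm-cli generated):

use sea_orm_migration::prelude::*;

pub mod m20240101_000001_create_users;

pub struct Migrator;

#[async_trait::async_trait]
impl MigratorTrait for Migrator {
    fn migrations() -> Vec<Box<dyn MigrationTrait>> {
        // Register migrations in the order they should run.
        vec![Box::new(m20240101_000001_create_users::Migration)]
    }
}

With DATABASE_URL set, the migrations can then be applied with sea-orm-cli migrate up -d migrations.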

DTOs (Data Transfer Objects)

In async-graphql, DTOs are essential for constructing objects based on entities. Implement the From trait for entity-to-DTO conversions, and define complex GraphQL object fields and resolvers as needed for enhanced API functionality.

use async_graphql::{ComplexObject, Error, Result, SimpleObject};
use chrono::{NaiveDate, Utc};

use entities::user::Model;

#[derive(SimpleObject, Debug, Clone)]
#[graphql(complex)]
pub struct User {
    pub id: i32,
    pub name: String,
    pub username: String,
    #[graphql(skip)]
    pub date_of_birth: String,
    pub created_at: i64,
    pub updated_at: i64,
}

impl From<Model> for User {
    fn from(value: Model) -> Self {
        Self {
            id: value.id,
            name: value.name,
            username: value.username,
            date_of_birth: value.date_of_birth,
            created_at: value.created_at.timestamp(),
            updated_at: value.updated_at.timestamp(),
        }
    }
}

#[ComplexObject]
impl User {
    pub async fn age(&self) -> Result<u32> {
        let date_of_birth = NaiveDate::parse_from_str(&self.date_of_birth, "%Y-%m-%d")
            .map_err(|_| Error::from("Invalid date of birth"))?;

        if let Some(age) = Utc::now().date_naive().years_since(date_of_birth) {
            Ok(age)
        } else {
            Err(Error::from("Invalid date of birth"))
        }
    }
}

Providers

Providers act as the bridge between your application and external resources. Typically, you might find yourself needing providers for the following:

  • Database: Manages connections to your database system.
  • Redis Cache: Handles caching mechanisms to improve response times and reduce database load.
  • Object Storage Service: Facilitates interactions with object storage solutions for managing binary data, such as images or documents.

Creating a Database Provider

The first step in setting up your providers is to create a dedicated providers folder. Within this folder, you'll manage the various external resource providers your application uses. For the database provider, begin by creating a database.rs file:

use std::env;

use anyhow::Result;
use sea_orm::DatabaseConnection;

#[derive(Clone, Debug)]
pub struct Database {
    connection: DatabaseConnection,
}

impl Database {
    pub async fn new() -> Result<Self> {
        let database_url =
            env::var("DATABASE_URL").expect("Missing the DATABASE_URL environment variable.");
        let connection = sea_orm::Database::connect(&database_url).await?;

        Ok(Self { connection })
    }

    pub fn get_connection(&self) -> &DatabaseConnection {
        &self.connection
    }
}

This example defines a Database struct that encapsulates the logic for connecting to your database using the DATABASE_URL environment variable. The async new function initializes a Database instance by connecting to the database, and the get_connection method returns a reference to the established DatabaseConnection.

Next Steps

Following the database provider setup, you would similarly create providers for Redis Cache and Object Storage Service. The implementation details would vary based on the specific libraries and services you choose to use. Remember, the goal of these providers is to centralize and abstract the logic for interacting with external resources, making your application's main logic cleaner and more maintainable.
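
As an illustration only, here is a minimal sketch of what a Redis provider might look like, assuming the redis crate; the shape mirrors the database provider, while details such as connection pooling are left out:

use std::env;

use anyhow::Result;
use redis::Client;

#[derive(Clone)]
pub struct Cache {
    client: Client,
}

impl Cache {
    pub fn new() -> Result<Self> {
        let redis_url =
            env::var("REDIS_URL").expect("Missing the REDIS_URL environment variable.");
        // `Client::open` only validates the URL; actual connections are created lazily.
        let client = Client::open(redis_url)?;
        Ok(Self { client })
    }

    pub fn get_client(&self) -> &Client {
        &self.client
    }
}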

Service

Before we dive into creating services, it's essential to establish a robust error handling mechanism. Unlike languages that rely on try/catch blocks, Rust uses Result types, which are propagated with the ? operator. This approach calls for custom error types to manage the different kinds of failures effectively.

Error Handling

Setting Up Tracing

First, ensure your application can trace errors by adding the necessary dependencies to your Cargo.toml file:

# ...
tracing = "0.1"
tracing-opentelemetry = "0.22"
tracing-actix-web = "0.7"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
tracing-bunyan-formatter = "0.3"
tracing-log = "0.2"
derive_more = "0.99"

Then, initialize a telemetry system in a startup/telemetry.rs module:

use tracing::{subscriber::set_global_default, Subscriber};
use tracing_bunyan_formatter::{BunyanFormattingLayer, JsonStorageLayer};
use tracing_log::LogTracer;
use tracing_subscriber::{layer::SubscriberExt, EnvFilter, Registry};

pub struct Telemetry;

impl Telemetry {
    pub fn get_subscriber(name: &str, env_filter: &str) -> impl Subscriber + Send + Sync {
        let env_filter = EnvFilter::try_from_default_env().unwrap_or(EnvFilter::new(env_filter));
        let formatting_layer = BunyanFormattingLayer::new(name.into(), std::io::stdout);
        Registry::default()
            .with(env_filter)
            .with(JsonStorageLayer)
            .with(formatting_layer)
    }

    pub fn init_subscriber(subscriber: impl Subscriber + Send + Sync) {
        LogTracer::init().expect("Failed to set logger");
        set_global_default(subscriber).expect("Failed to set subscriber");
    }
}

Custom Error Handling

Create a common/error_handling.rs module to define a structure for internal errors and an enumeration for categorizing different types of service errors:

use derive_more::Display;

#[derive(Debug, Display)]
pub struct InternalCause(String);

impl InternalCause {
    pub fn new(cause: &str) -> Self {
        Self(cause.to_string())
    }
}

#[derive(Debug, Display)]
pub enum ServiceError {
    InternalServerError(String),
    BadRequest(String),
    Unauthorized(String),
    NotFound(String),
    Forbidden(String),
    Conflict(String),
}

Implement methods for generating specific error types and integrating with tracing for logging:

pub const INTERNAL_SERVER_ERROR: &'static str = "Internal Server Error";
pub const INTERNAL_SERVER_ERROR_STATUS_CODE: u16 = 500;
pub const BAD_REQUEST: &'static str = "Bad Request";
pub const BAD_REQUEST_STATUS_CODE: u16 = 400;
pub const UNAUTHORIZED: &'static str = "Unauthorized";
pub const UNAUTHORIZED_STATUS_CODE: u16 = 401;
pub const NOT_FOUND: &'static str = "Not Found";
pub const NOT_FOUND_STATUS_CODE: u16 = 404;
pub const FORBIDDEN: &'static str = "Forbidden";
pub const FORBIDDEN_STATUS_CODE: u16 = 403;
pub const CONFLICT: &'static str = "Conflict";
pub const CONFLICT_STATUS_CODE: u16 = 409;
pub const SOMETHING_WENT_WRONG: &'static str = "Something went wrong";

impl ServiceError {
    pub fn to_str_name(&self) -> &'static str {
        match self {
            ServiceError::InternalServerError(_) => INTERNAL_SERVER_ERROR,
            ServiceError::BadRequest(_) => BAD_REQUEST,
            ServiceError::Unauthorized(_) => UNAUTHORIZED,
            ServiceError::NotFound(_) => NOT_FOUND,
            ServiceError::Forbidden(_) => FORBIDDEN,
            ServiceError::Conflict(_) => CONFLICT,
        }
    }

    pub fn get_status_code(&self) -> u16 {
        match self {
            ServiceError::InternalServerError(_) => INTERNAL_SERVER_ERROR_STATUS_CODE,
            ServiceError::BadRequest(_) => BAD_REQUEST_STATUS_CODE,
            ServiceError::Unauthorized(_) => UNAUTHORIZED_STATUS_CODE,
            ServiceError::NotFound(_) => NOT_FOUND_STATUS_CODE,
            ServiceError::Forbidden(_) => FORBIDDEN_STATUS_CODE,
            ServiceError::Conflict(_) => CONFLICT_STATUS_CODE,
        }
    }

    pub fn internal_server_error<T: std::fmt::Display + std::fmt::Debug>(
        message: &str,
        cause: Option<T>,
    ) -> Self {
        let error = Self::InternalServerError(message.to_string());

        if let Some(cause) = cause {
            tracing::error!(INTERNAL_SERVER_ERROR, %message, %cause);
        } else {
            tracing::error!(INTERNAL_SERVER_ERROR, %message);
        }

        error
    }

    pub fn bad_request<T: std::fmt::Display + std::fmt::Debug>(
        message: &str,
        cause: Option<T>,
    ) -> Self {
        let error = Self::BadRequest(message.to_string());

        if let Some(cause) = cause {
            tracing::error!(BAD_REQUEST, %message, %cause);
        } else {
            tracing::error!(BAD_REQUEST, %message);
        }

        error
    }

    pub fn unauthorized<T: std::fmt::Display + std::fmt::Debug>(
        message: &str,
        cause: Option<T>,
    ) -> Self {
        let error = Self::Unauthorized(message.to_string());

        if let Some(cause) = cause {
            tracing::error!(UNAUTHORIZED, %message, %cause);
        } else {
            tracing::error!(UNAUTHORIZED, %message);
        }

        error
    }

    pub fn not_found<T: std::fmt::Display + std::fmt::Debug>(
        message: &str,
        cause: Option<T>,
    ) -> Self {
        let error = Self::NotFound(message.to_string());

        if let Some(cause) = cause {
            tracing::error!(NOT_FOUND, %message, %cause);
        } else {
            tracing::error!(NOT_FOUND, %message);
        }

        error
    }

    pub fn forbidden<T: std::fmt::Display + std::fmt::Debug>(
        message: &str,
        cause: Option<T>,
    ) -> Self {
        let error = Self::Forbidden(message.to_string());

        if let Some(cause) = cause {
            tracing::error!(FORBIDDEN, %message, %cause);
        } else {
            tracing::error!(FORBIDDEN, %message);
        }

        error
    }

    pub fn conflict<T: std::fmt::Display + std::fmt::Debug>(
        message: &str,
        cause: Option<T>,
    ) -> Self {
        let error = Self::Conflict(message.to_string());

        if let Some(cause) = cause {
            tracing::error!(CONFLICT, %message, %cause);
        } else {
            tracing::error!(CONFLICT, %message);
        }

        error
    }
}

Finally, extend the error handling to integrate with actix-web and async-graphql by implementing the ResponseError trait and converting ServiceError into async-graphql's Error through an intermediate GraphQLError enum:

use actix_web::{error, http::StatusCode, HttpResponse};
use async_graphql::{Error, ErrorExtensions};
// ...

impl error::ResponseError for ServiceError {
    fn status_code(&self) -> StatusCode {
        match *self {
            ServiceError::InternalServerError(_) => StatusCode::INTERNAL_SERVER_ERROR,
            ServiceError::BadRequest(_) => StatusCode::BAD_REQUEST,
            ServiceError::Unauthorized(_) => StatusCode::UNAUTHORIZED,
            ServiceError::NotFound(_) => StatusCode::NOT_FOUND,
            ServiceError::Forbidden(_) => StatusCode::FORBIDDEN,
            ServiceError::Conflict(_) => StatusCode::CONFLICT,
        }
    }

    fn error_response(&self) -> HttpResponse {
        match *self {
            ServiceError::InternalServerError(ref message) => {
                HttpResponse::InternalServerError().json(message)
            }
            ServiceError::BadRequest(ref message) => HttpResponse::BadRequest().json(message),
            ServiceError::Unauthorized(ref message) => HttpResponse::Unauthorized().json(message),
            ServiceError::NotFound(ref message) => HttpResponse::NotFound().json(message),
            ServiceError::Forbidden(ref message) => HttpResponse::Forbidden().json(message),
            ServiceError::Conflict(ref message) => HttpResponse::Conflict().json(message),
        }
    }
}

// Start by creating an identical enum so there are no conflicts
#[derive(Debug)]
pub enum GraphQLError {
    InternalServerError(String),
    BadRequest(String),
    Unauthorized(String),
    NotFound(String),
    Forbidden(String),
    Conflict(String),
}

impl From<ServiceError> for GraphQLError {
    fn from(error: ServiceError) -> Self {
        match error {
            ServiceError::InternalServerError(message) => {
                GraphQLError::InternalServerError(message)
            }
            ServiceError::BadRequest(message) => GraphQLError::BadRequest(message),
            ServiceError::Unauthorized(message) => GraphQLError::Unauthorized(message),
            ServiceError::NotFound(message) => GraphQLError::NotFound(message),
            ServiceError::Forbidden(message) => GraphQLError::Forbidden(message),
            ServiceError::Conflict(message) => GraphQLError::Conflict(message),
        }
    }
}

// Implement `Into<async_graphql::Error>` for the new enum
impl Into<Error> for GraphQLError {
    fn into(self) -> Error {
        match self {
            GraphQLError::InternalServerError(message) => {
                Error::new(message).extend_with(|_, e| {
                    e.set("type", "Internal Server Error");
                    e.set("code", "500");
                })
            }
            GraphQLError::BadRequest(message) => Error::new(message).extend_with(|_, e| {
                e.set("type", "Bad Request");
                e.set("code", "400");
            }),
            GraphQLError::Unauthorized(message) => Error::new(message).extend_with(|_, e| {
                e.set("type", "Unauthorized");
                e.set("code", "401");
            }),
            GraphQLError::NotFound(message) => Error::new(message).extend_with(|_, e| {
                e.set("type", "Not Found");
                e.set("code", "404");
            }),
            GraphQLError::Forbidden(message) => Error::new(message).extend_with(|_, e| {
                e.set("type", "Forbidden");
                e.set("code", "403");
            }),
            GraphQLError::Conflict(message) => Error::new(message).extend_with(|_, e| {
                e.set("type", "Conflict");
                e.set("code", "409");
            }),
        }
    }
}

Creating a basic User Service

Creating a user

Define an input structure for creating a user within the dtos/inputs directory:

use async_graphql::InputObject;

#[derive(InputObject, Debug)]
pub struct CreateUser {
    pub name: String,
    pub username: String,
    pub date_of_birth: String,
}
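As an aside, async-graphql can validate input fields directly on the input object, which loosely mirrors the class-validator decorators from the MikroORM example. The rules below are placeholders of my own, not something mandated by the article's schema:

use async_graphql::InputObject;

#[derive(InputObject, Debug)]
pub struct CreateUser {
    #[graphql(validator(min_length = 3, max_length = 100))]
    pub name: String,
    #[graphql(validator(min_length = 3, max_length = 100))]
    pub username: String,
    // Expected as an ISO date, e.g. "1990-01-31".
    #[graphql(validator(min_length = 10, max_length = 10))]
    pub date_of_birth: String,
}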

Implement the service method to create a user, leveraging your database provider and handling potential errors:

use sea_orm::{ActiveModelTrait, ColumnTrait, EntityTrait, PaginatorTrait, QueryFilter, Set};

use entities::user::{ActiveModel, Column, Entity, Model};

use crate::common::error_handling::ServiceError;
use crate::dtos::inputs::CreateUser;
use crate::providers::Database;

pub async fn create_user(db: &Database, input: CreateUser) -> Result<Model, ServiceError> {
    let username = input.username.to_lowercase();
    tracing::info!(%username, "users_service::create_user");

    // Ensure the username is not already taken.
    let count = Entity::find()
        .filter(Column::Username.eq(&username))
        .count(db.get_connection())
        .await?;

    if count > 0 {
        return Err(ServiceError::conflict::<&str>("User already exists", None));
    }

    // `?` relies on a `From<DbErr> for ServiceError` conversion (see the sketch below).
    let user = ActiveModel {
        name: Set(input.name),
        date_of_birth: Set(input.date_of_birth),
        username: Set(username),
        ..Default::default()
    }
    .insert(db.get_connection())
    .await?;

    tracing::info!(user_id = user.id, "User created");
    Ok(user)
}
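One detail the snippet above glosses over: the ? operator on SeaORM calls only compiles if DbErr can be converted into ServiceError. A minimal sketch of such a conversion, which I would place in the error handling module (my own addition rather than code from the original project):

use sea_orm::DbErr;

impl From<DbErr> for ServiceError {
    fn from(error: DbErr) -> Self {
        // Treat any database failure as an internal server error and log its cause.
        Self::internal_server_error(SOMETHING_WENT_WRONG, Some(error))
    }
}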

Finding a User by ID

Implement a method to find a user by their ID, which demonstrates handling potential not found errors:

// ...

const USER_NOT_FOUND: &str = "User not found";

pub async fn find_one_by_id(db: &Database, id: i32) -> Result<Model, ServiceError> {
    tracing::info!(%id, "users_service::find_one_by_id");
    let user = Entity::find_by_id(id).one(db.get_connection()).await?;
    match user {
        Some(value) => {
            tracing::info!("User found");
            Ok(value)
        }
        None => Err(ServiceError::not_found::<&str>(USER_NOT_FOUND, None)),
    }
}

Through these implementations, your service layer will efficiently handle business logic, interact with the database, and manage errors. These foundations ensure that your application is robust, maintainable, and scalable.

Resolvers

In a GraphQL server, resolvers are crucial as they determine how each query or mutation translates into an operation that fetches or manipulates data. This section outlines how to implement resolvers that interact with our previously defined services.

Structuring Queries and Mutations

For organization, we give each module its own structs for queries and mutations, which are later merged into the schema's root objects. This keeps a clear boundary between data retrieval (queries) and data manipulation (mutations).

use async_graphql::{Context, Object, Result};

use crate::dtos::inputs::CreateUser;
use crate::providers::Database;
use crate::services::users_service;

#[derive(Default)]
pub struct UsersQuery;

#[derive(Default)]
pub struct UsersMutation;

Implementing Resolver Methods

Within these structs, we define methods corresponding to each query and mutation defined in our GraphQL schema. These methods utilize the services layer to interact with the database and perform the necessary business logic.

Query Resolvers

For fetching user data by ID, we implement a method in the UsersQuery struct:

// ...

#[Object]
impl UsersQuery {
    async fn user_by_id(&self, ctx: &Context<'_>, id: i32) -> Result<User> {
        Ok(
            users_service::find_one_by_id(ctx.data::<Database>()?, id)
                .await?
                .into(),
        )
    }
}

This resolver accesses the Database provider from the context, passes it to the users_service::find_one_by_id method, and maps the service result into the GraphQL User object.
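
Assuming async-graphql's default camelCase renaming, a client could then fetch a user with a query along these lines (the field names follow from the resolver and DTO above):

query {
  userById(id: 1) {
    id
    name
    username
    age
    createdAt
    updatedAt
  }
}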

Mutation Resolvers

For creating a new user, we define a method in the UsersMutation struct:

// ...

#[Object]
impl UsersMutation {
    async fn create_user(&self, ctx: &Context<'_>, input: CreateUser) -> Result<User> {
        Ok(
            users_service::create_user(ctx.data::<Database>()?, input)
                .await?
                .into(),
        )
    }
}

Similar to the query resolver, this mutation accesses the database through the context and utilizes the users_service::create_user method to add a new user. The result is then mapped to the GraphQL User object.
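
Again assuming the default camelCase renaming, the corresponding mutation would look roughly like this:

mutation {
  createUser(
    input: { name: "Jane Doe", username: "jane", dateOfBirth: "1990-01-31" }
  ) {
    id
    username
    age
  }
}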

Schema Configuration

We define the schema by merging query and mutation roots. This setup allows your GraphQL server to understand how to process incoming queries and mutations.

Create a file called startup/schema_builder.rs with the query root and mutation root:

use async_graphql::{EmptySubscription, MergedObject, Schema};

use crate::providers::Database;
use crate::resolvers::users_resolver;


#[derive(MergedObject, Default)]
pub struct MutationRoot(users_resolver::UsersMutation);

#[derive(MergedObject, Default)]
pub struct QueryRoot(users_resolver::UsersQuery);

pub fn build_schema(database: &Database) -> Schema<QueryRoot, MutationRoot, EmptySubscription> {
    Schema::build(
        QueryRoot::default(),
        MutationRoot::default(),
        EmptySubscription,
    )
    .data(database.to_owned())
    .finish()
}

GraphQL Configuration

Next, we configure Actix Web to serve the GraphQL API and Playground, facilitating both API interaction and a user-friendly interface for testing queries and mutations.

Create the functions for the GraphQL API POST (the main API) and GET (the playground) routes; I keep them alongside the schema builder in startup/schema_builder.rs, since the app setup below imports them from there:

use actix_web::{web::Data, HttpRequest, HttpResponse, Result};
use async_graphql::{
    http::{playground_source, GraphQLPlaygroundConfig},
    // ...
};
use async_graphql_actix_web::{GraphQLRequest, GraphQLResponse};

// ...

pub async fn graphql_request(
    schema: Data<Schema<QueryRoot, MutationRoot, EmptySubscription>>,
    _req: HttpRequest,
    gql_req: GraphQLRequest,
) -> GraphQLResponse {
    schema
        .execute(gql_req.into_inner())
        .await
        .into()
}

pub async fn graphql_playground() -> Result<HttpResponse> {
    let source = playground_source(GraphQLPlaygroundConfig::new("/api/graphql"));
    Ok(HttpResponse::Ok()
        .content_type("text/html; charset=utf-8")
        .body(source))
}

Application Initialization

For the main application setup, we focus on integrating the schema with Actix Web and configuring the server to listen for incoming requests.

Create a file called startup/app.rs:

use std::{env, io, net::TcpListener};

use actix_web::guard;
use actix_web::{dev::Server, web, App, HttpServer};
use anyhow::Error;
use tracing_actix_web::TracingLogger;

use crate::providers::Database;

use super::schema_builder::{build_schema, graphql_playground, graphql_request};

pub struct ActixApp {
    port: u16,
    server: Server,
}

impl ActixApp {
    pub async fn new() -> Result<Self, Error> {
        if let Err(e) = dotenvy::dotenv() {
            tracing::warn!("Failed to load .env file: {}", e);
            tracing::warn!("Using default environment variables");
        }

        let host = env::var("HOST").unwrap_or_else(|_| "127.0.0.1".to_string());
        let port = env::var("PORT")
            .unwrap_or_else(|_| "8080".to_string())
            .parse::<u16>()
            .unwrap_or(8080);
        let listener = TcpListener::bind(format!("{}:{}", &host, &port))?;
        let port = listener.local_addr().unwrap().port();
        let db = Database::new().await?;
        let server = HttpServer::new(move || {
            App::new()
                .wrap(TracingLogger::default())
                .configure(Self::build_app_config(&db))
        })
        .listen(listener)?
        .run();
        tracing::info!("Server running on port {}", port);
        Ok(Self { port, server })
    }

    pub fn port(&self) -> u16 {
        self.port
    }

    pub async fn start_server(self) -> Result<(), io::Error> {
        self.server.await
    }

    pub fn build_app_config(db: &Database) -> impl Fn(&mut web::ServiceConfig) {
        let db = db.clone();
        move |cfg: &mut web::ServiceConfig| {
            cfg.app_data(web::Data::new(build_schema(&db)))
                .service(
                    web::resource("/api/graphql")
                        .guard(guard::Post())
                        .to(graphql_request),
                )
                .service(
                    web::resource("/api/graphql")
                        .guard(guard::Get())
                        .to(graphql_playground),
                );
        }
    }
}

Running the API

To get the API up and running, we'll utilize the tokio::main macro for our main function, facilitating an asynchronous runtime necessary for our Actix web server. This setup involves initializing telemetry for application insights and starting the Actix application.

Here's how you can do it:

use std::fmt::{Debug, Display};
use tokio::task::JoinError;
use your_project_name::startup::{ActixApp, Telemetry};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Initialize telemetry for structured logging.
    let subscriber = Telemetry::get_subscriber("your_project_name", "info"); // Customize the application name and log level as needed.
    Telemetry::init_subscriber(subscriber);

    // Create and start the Actix application.
    let application = ActixApp::new().await?;
    let application_task = tokio::spawn(application.start_server());

    // Monitor the application's exit status.
    tokio::select! {
        outcome = application_task => report_exit("API", outcome),
    };

    Ok(())
}

// Helper function to log the outcome of the application task.
fn report_exit(task_name: &str, outcome: Result<Result<(), impl Debug + Display>, JoinError>) {
    match outcome {
        Ok(Ok(())) => {
            tracing::info!("{} has exited", task_name)
        }
        Ok(Err(e)) => {
            tracing::error!(
                error.cause_chain = ?e,
                error.message = %e,
                "{} failed",
                task_name
            )
        }
        Err(e) => {
            tracing::error!(
                error.cause_chain = ?e,
                error.message = %e,
                "{}' task failed to complete",
                task_name
            )
        }
    }
}

Conclusion

Throughout this article, we compared the NestJS framework with its Rust counterpart, looking not only at the theoretical differences but also at the practical side of using them in real-world applications. From setting up the project structure and defining data models to implementing services, resolvers, and finally running the API, we covered a broad guide to the strengths and nuances of building server-side applications in Rust as an alternative to NodeJS with NestJS.

As we conclude, it's evident that the choice between NestJS and Rust depends on specific project requirements, performance considerations, and developer proficiency. While NestJS offers a quick and efficient way to build server-side applications with JavaScript/TypeScript, Rust provides an avenue for achieving unparalleled performance and reliability, albeit with a steeper learning curve.

About the Author

Hey there! I am Afonso Barracha, a back-end developer with a soft spot for GraphQL. If you enjoyed reading this article, why not show some love by buying me a coffee?

Lately, I have been diving deep into more advanced subjects. As a result, I have switched from sharing my thoughts every week to posting once or twice a month. This way, I can make sure to bring you the highest quality content possible.

Do not miss out on any of my latest articles – follow me here on dev and LinkedIn to stay updated. I would be thrilled to welcome you to our ever-growing community! See you around!
