Livestream platform backend — Detailed architecture

Bastien R. (teyz)

Posted on July 23, 2024


Previously

In the previous article, dflmnq presented a backend architecture for a streaming platform use case.

This article follows on from that one: Backend Architecture — Use Case: Live stream Platform.
I invite you to read it if you haven’t already.

Who am I?

I’m Bastien, a 26-year-old backend developer (at the time of writing this article).

I’m passionate about streaming and video games, so I was delighted when I had the opportunity to join Kick. There I learned a lot about architecture, good practices, and more, and it’s my experience at Kick that prompted me to write this article.

Let’s go into more detail

We finished the previous article with a description of the global architecture of the backend, which you can see below.

Global backend architecture

I added the contracts component (the purple one).

Any communication between services, synchronous or not, involving a gateway or not, should be defined in this contracts repository, which is the only source of truth.
Respecting this rule ensures that payloads stay consistent, versioning is much easier, and we neither miss nor hide any information.

How is contracts organized?

In short, it’s the glue between gateways and services. Without it, our services wouldn’t be able to communicate with each other, and that would be a real shame 😅.

Architecture of contracts

As stated above, there are two types of contracts:

  • The internal ones, between our services.
  • The external ones, between the gateways and the internet.

The internal ones live inside the services, generated, and clients folders, while the external ones live inside models and docs. How does it work?

Internal contracts

The first folder is services.

This one contains the protobuf definitions for each of our services.
They are versioned, and each service is independent here. Any time we modify services, we run buf generate to update the Go interfaces matching the protobuf definitions.
The generated code is pushed into the generated folder.
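To give you an idea, here is roughly what would land in generated for the UserStoreSvc service defined later in this article. This is a simplified sketch of protoc-gen-go / protoc-gen-go-grpc output with an assumed package name; the request and response structs are generated alongside from user.proto.

package userstorev1

import (
    "context"

    "google.golang.org/grpc"
)

// Client-side interface: the clients wrappers are built on top of this.
type UserStoreSvcClient interface {
    CreateUser(ctx context.Context, in *CreateUserRequest, opts ...grpc.CallOption) (*CreateUserResponse, error)
}

// Server-side interface: the service's handlers implement this.
type UserStoreSvcServer interface {
    CreateUser(context.Context, *CreateUserRequest) (*CreateUserResponse, error)
}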

Then, the final part is the clients folder.
We use clients to write wrappers around the generated code, adding unified monitoring, error handling, payload translation, and so on.
In the end, the only thing used outside of contracts is the code from clients.

External contracts

Whenever a gateway needs to build a response for an endpoint, the entities it uses should come from the ones available in the models folder.
This folder contains every entity, versioned.

The second folder, docs, contains all the OpenAPI documents for the Public, Private, and Admin APIs. It always has to be in sync with the models.

Now, let’s dive deep into clients

clients is the most important folder in this repository; as I said before, it’s where we find everything we need to create our gRPC wrappers.

Here is an example of the integration of user-store-svc in contracts:

Architecture of clients

entities is the folder where we can retrieve all the entity definitions for user-store-svc.

grpc: in this folder, we find the gRPC logic of our service. All methods need to be defined in interface.go.

mocks is auto-generated from interface.go.
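To make this concrete, here is a minimal sketch of what interface.go and the wrapper could look like for user-store-svc, reusing the generated code from above. The import paths, the entity type, and the mockery usage are illustrative assumptions, not the repo’s exact API.

package userstoresvc

import (
    "context"
    "fmt"
    "time"

    userstorev1 "github.com/Golerplate/contracts/generated/user/store/svc/v1" // assumed import path
    "google.golang.org/protobuf/types/known/wrapperspb"
)

// User mirrors the entity that would live in the entities subfolder.
type User struct {
    ID        string
    Username  string
    Email     string
    CreatedAt time.Time
}

// Client is what interface.go defines; the mocks folder is generated from it
// (for example with mockery). Consumers depend on this interface, never on
// the generated gRPC code directly.
type Client interface {
    CreateUser(ctx context.Context, username, email string) (*User, error)
}

type client struct {
    grpc userstorev1.UserStoreSvcClient // generated client from the generated folder
}

func NewClient(grpcClient userstorev1.UserStoreSvcClient) Client {
    return &client{grpc: grpcClient}
}

// CreateUser translates payloads to and from the generated types and unifies
// error handling; monitoring hooks would live here too.
func (c *client) CreateUser(ctx context.Context, username, email string) (*User, error) {
    resp, err := c.grpc.CreateUser(ctx, &userstorev1.CreateUserRequest{
        Username: wrapperspb.String(username),
        Email:    wrapperspb.String(email),
    })
    if err != nil {
        return nil, fmt.Errorf("user-store-svc: create user: %w", err)
    }

    u := resp.GetUser()
    return &User{
        ID:        u.GetId().GetValue(),
        Username:  u.GetUsername().GetValue(),
        Email:     u.GetEmail().GetValue(),
        CreatedAt: u.GetCreatedAt().AsTime(),
    }, nil
}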

What about models?

models is composed of 4 folders; here is the architecture:

Architecture of models

entities must list all the entities in our various gateways/services.

kafka must list all our events/constants.

public must list all our entities and everything else related to our public API.

ws must list all our models/entities for our websockets events.
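To make it concrete, here is a hypothetical, heavily simplified sketch of what could live in those folders; the names are illustrative, not the repo’s exact contents.

package models

// entities: a versioned entity shared by gateways and services.
type UserV1 struct {
    ID       string `json:"id"`
    Username string `json:"username"`
}

// kafka: event names are constants so producers and consumers never drift.
const EventUserCreatedV1 = "user.created.v1"

// ws: the envelope pushed to websocket clients for every event.
type WebsocketEventV1 struct {
    Type    string      `json:"type"`
    Payload interface{} `json:"payload"`
}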

Best for last: Services

In this last folder, we retrieve all the definitions of our protobuf services.
These protobuf files define the contracts between our gateways and the services.

contracts/
├─ services/
│ ├─ user/
│ │ ├─ store/
│ │ │ ├─ svc/
│ │ │ │ ├─ v1/
│ │ │ │ │ ├─ user.proto
│ │ │ │ │ ├─ service.proto

service.proto is like an interface in Go, but for protobuf.
For our example, it should look like this:

service UserStoreSvc {
  rpc CreateUser(CreateUserRequest) returns (CreateUserResponse);
}

user.proto must define the messages used by every method of our user-store-svc. For our example, it should look like this:

message User {
  google.protobuf.StringValue id = 1;
  google.protobuf.StringValue username = 2;
  google.protobuf.StringValue email = 3;
  google.protobuf.Timestamp created_at = 4;
  google.protobuf.Timestamp updated_at = 5;
}

message CreateUserRequest {
  google.protobuf.StringValue username = 1;
  google.protobuf.StringValue email = 2;
}

message CreateUserResponse {
  User user = 1;
}

You can find the GitHub repository here: https://github.com/Golerplate/contracts

Note: It is very important to version all our files in contracts.

That’s about it for contracts, and it’s a very good transition to the next topic: the code organization of a service. We will see that contracts comes in very handy 😀.

What will a service look like in terms of code organization?

Our services are built the way we might play with Legos. Let me explain what this means.

Architecture of the microservice

The first folder, cmd, contains the file main.go, where we define how the server/database/Redis runs and start the gRPC and HTTP servers if we need them.
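As a rough sketch, main.go wires everything together like this; the package paths, constructors, and the loadConfig helper are assumptions based on the folders described below.

package main

import (
    "context"
    "log"

    grpcserver "github.com/Golerplate/pkg/grpc"                // assumed path
    postgres "example.com/svc/internal/database/postgres"      // assumed path
    handlerv1 "example.com/svc/internal/handlers/grpc/user/v1" // assumed path
    servicev1 "example.com/svc/internal/services/user/v1"      // assumed path
)

func main() {
    ctx := context.Background()

    // Parse env vars into the service Config struct (shown later).
    cfg, err := loadConfig()
    if err != nil {
        log.Fatalf("config: %v", err)
    }

    // Wire the layers innermost first: database -> service -> handler.
    db, err := postgres.NewDatabase(ctx, cfg.PostgresConfig)
    if err != nil {
        log.Fatalf("database: %v", err)
    }

    svc := servicev1.NewUserStoreService(db)
    handler := handlerv1.NewUserStoreServiceHandler(svc)

    // Start the gRPC server; an HTTP server would be started the same way.
    srv := grpcserver.NewServer(cfg.GRPCServerConfig, handler)
    if err := srv.Start(ctx); err != nil {
        log.Fatalf("grpc server: %v", err)
    }
}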

The second folder, internal, is where the fun begins; inside, we find 5 main folders.

  • config: At the beginning, config was always redefined in each service, and it looked like this:

type Config struct {
    HTTPServerConfig
    GRPCServerConfig
    PlanetScaleConfig
    CacheConfig
    GeneralConfig
}

type GeneralConfig struct {
    Environment string `env:"ENVIRONMENT"`
}

type CacheConfig struct {
    Host     string `env:"CACHE_HOST"`
    Port     uint16 `env:"CACHE_PORT"`
    Password string `env:"CACHE_PASSWORD"`
}

type PlanetScaleConfig struct {
    WriterHost string `env:"DB_HOST_WRITER"`
    ReaderHost string `env:"DB_HOST_READER"`
    Username   string `env:"DB_USER"`
    Password   string `env:"DB_PASSWORD"`
    DBName     string `env:"DB_NAME"`
    Port       uint16 `env:"DB_PORT"`
}

type HTTPServerConfig struct {
    Port uint16 `env:"HTTP_SERVER_PORT"`
}

type GRPCServerConfig struct {
    Port uint16 `env:"GRPC_SERVER_PORT"`
}

It was painful and we had a lot of duplicated code, so we decided to move this logic into pkg (I’ll come back to it later in this article).

So now, config looks like:

type Config struct {
    ServiceConfig       config.Config
    GRPCServerConfig    grpc.GRPCServerConfig
    PostgresConfig      database_postgres.Config
    RedisConfig         redis.Config
    KafkaProducerConfig pkg_kafka_producer.Config
}

We’ve taken our inspiration from the Lego system, so we can build our service and configuration as we wish using the libraries in pkg.

  • database: The database folder is a little more complex than the config one; there is more logic in there.

First of all, the migrations folder lists all the migration files for our database; it’s very important for maintaining consistency between our prod and local environments.

Then, we have interface.go, where we define all our methods.

Finally, we have the postgres folder (we chose postgres as the name since we use Postgres as our database, but we are free to change it).
Inside this folder are all the files implementing the methods defined in interface.go.
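A minimal sketch of what interface.go could look like, assuming the entity types from the entities folder described just below:

package database

import (
    "context"

    entities_user_v1 "example.com/svc/internal/entities/user/v1" // assumed path
)

// Database is the only thing the service layer knows about; the postgres
// folder provides the implementation, so swapping databases means writing a
// new folder that satisfies this interface, nothing more.
type Database interface {
    CreateUser(ctx context.Context, username, email string) (*entities_user_v1.User, error)
    GetUserByEmail(ctx context.Context, email string) (*entities_user_v1.User, error)
}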

  • entities: Do we need to describe this one? Joking aside, we’ll need to describe the entities we’ll be using in the service, paying particular attention to versioning.

  • handlers:

Architecture of handlers

handlers is composed of 2 main elements. The first one is interface.go, which describes the methods to launch our server.

In the grpc folder, we have server.go, where we find a bunch of functions to set up, start, and stop our gRPC server.

When we zoom into user/v1/, we find init.go; this file describes the struct and the function that creates a NewUserStoreServiceHandler (it will be used in main.go in cmd).
As you can imagine, user.go contains all the gRPC methods related to User.
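Here is a hedged sketch of those two files, reusing the generated types from the contracts example; the service interface and the ToProto helper are assumptions.

package userhandlerv1

import (
    "context"

    userstorev1 "github.com/Golerplate/contracts/generated/user/store/svc/v1" // assumed path
    servicev1 "example.com/svc/internal/services/user/v1"                     // assumed path
)

// init.go: the struct and constructor called from cmd/main.go.
type UserStoreServiceHandler struct {
    userstorev1.UnimplementedUserStoreSvcServer
    service servicev1.Service
}

func NewUserStoreServiceHandler(svc servicev1.Service) *UserStoreServiceHandler {
    return &UserStoreServiceHandler{service: svc}
}

// user.go: each gRPC method unwraps the request, delegates to the service
// layer, and wraps the result back into contract types.
func (h *UserStoreServiceHandler) CreateUser(ctx context.Context, req *userstorev1.CreateUserRequest) (*userstorev1.CreateUserResponse, error) {
    user, err := h.service.CreateUser(ctx, req.GetUsername().GetValue(), req.GetEmail().GetValue())
    if err != nil {
        return nil, err
    }
    // ToProto is an assumed helper converting the entity to the contract type.
    return &userstorev1.CreateUserResponse{User: user.ToProto()}, nil
}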

  • services:

Architecture of services/v1

init.go plays the same role as init.go in handlers, and if we have added a cache to the service, this is the file where we define our cache durations and cache keys:

const (
    userCacheDuration = time.Hour * 24
)

func generateUserCacheKeyWithEmail(email string) string {
    return fmt.Sprintf("user-store-svc:user:email:%v", email)
}

interface.go: we’re getting the hang of it by now; this is where we define the interface for the service’s methods.

The last one, user.go, is the main file where we write all the methods of the service. This layer is a middleman between the gRPC layer and the database layer: it takes parameters from the gRPC layer, calls the database layer, and formats the retrieved data before passing it back to the gRPC layer.
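Putting the pieces together, a read method in user.go could look like this sketch; the cache client, the Database interface, and the entity types are the assumed ones from the earlier examples, and the cache key helper comes from init.go above.

package servicev1

import (
    "context"
    "encoding/json"

    "example.com/svc/internal/database"                          // assumed path
    entities_user_v1 "example.com/svc/internal/entities/user/v1" // assumed path
    "github.com/Golerplate/pkg/cache"                            // assumed path
)

type UserStoreService struct {
    database database.Database
    cache    cache.Cache
}

func (s *UserStoreService) GetUserByEmail(ctx context.Context, email string) (*entities_user_v1.User, error) {
    key := generateUserCacheKeyWithEmail(email)

    // Try the cache first.
    if raw, err := s.cache.Get(ctx, key); err == nil && raw != "" {
        var user entities_user_v1.User
        if err := json.Unmarshal([]byte(raw), &user); err == nil {
            return &user, nil
        }
    }

    // Fall back to the database layer.
    user, err := s.database.GetUserByEmail(ctx, email)
    if err != nil {
        return nil, err
    }

    // Refresh the cache for the next call; a cache write failure is non-fatal.
    if raw, err := json.Marshal(user); err == nil {
        _ = s.cache.Set(ctx, key, string(raw), userCacheDuration)
    }

    return user, nil
}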

Where will the shared code be? (pkg)

The pkg repository contains all the libraries we use across the different microservices. These libraries are non-product ones: they can be anything from a client connecting to our database to a utility function improving the way we handle errors.

For our streaming platform, we need several libraries.

  • config: We define our global config for all services, for example ServiceName and Environment (dev, prod, etc.).

  • cache: As we process tens of thousands of requests per second, we need to add some caching to relieve our database.

  • constants: All our common constants. For example, our id prefixes.

  • database: We define what type of database we use, along with its config and connection methods.
    Its errors package defines custom database errors for our services.

  • grpc: Here we have the gRPC config (Port) and the custom errors for our gRPC methods.

  • http: Same as gRPC, but for the HTTP layer (errors, responses, and config).
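As one example of such a Lego brick, a pkg/cache library could be as simple as this sketch (a hypothetical API; the real repo may differ):

package cache

import (
    "context"
    "time"
)

// Config is the piece each service embeds in its own Config struct, exactly
// like the Lego-style config shown earlier.
type Config struct {
    Host     string `env:"CACHE_HOST"`
    Port     uint16 `env:"CACHE_PORT"`
    Password string `env:"CACHE_PASSWORD"`
}

// Cache is the interface services depend on; a Redis implementation lives in
// the same package so any service can plug it in.
type Cache interface {
    Get(ctx context.Context, key string) (string, error)
    Set(ctx context.Context, key string, value string, ttl time.Duration) error
}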
You can find the GitHub repository here: https://github.com/Golerplate/pkg
