Ednah
Posted on March 10, 2024
In this blog post, we will guide you through the process of running a simple React application inside a Docker container. You can find the React app on our GitHub repository here. We will start by preparing the React application for Docker and then move on to setting up Elastic Beanstalk to deploy the application. Finally, we will configure GitHub Actions for automated deployment.
Here’s the blog outline:
- Preparing the React Application: Using a Dockerfile to define the environment and dependencies and running the React application inside a Docker container.
- Setting up Elastic Beanstalk: Setting up necessary IAM roles, attaching policies and choosing the appropriate configurations for the Dockerized React application.
- Configuring GitHub Actions: Setting up GitHub Actions for automated deployment, and configuring environment variables in the GitHub repository secrets.
- Testing the Application
Follow along to learn how to deploy your React applications with ease.
Preparing the React application
We will be running a simple React application inside a Docker container. You can find the React app on my GitHub repo here.
Create a new folder at a location of your choosing for the React application. To create a Docker container for our application, we will run the following command in the folder:
docker run -it -v ${PWD}:/app -p 3000:3000 node:18 sh
This command starts a Docker container based on the node:18 image (node version the application was built on), mounts the current directory to /app inside the container, maps port 3000, and opens an interactive shell session inside the container.
Let's break down the command further:
- `docker run`: the command to run a Docker container.
- `-it`: two flags combined. `-i` stands for interactive, which allows you to interact with the container's shell; `-t` allocates a pseudo-TTY, which helps keep the session open and receiving input.
- `-v ${PWD}:/app`: a volume mount flag. It mounts the current directory (`${PWD}`) on the host machine to the `/app` directory inside the container, so the container can access and modify files in the current directory.
- `-p 3000:3000`: maps port 3000 on the host machine to port 3000 inside the container. This is typically used for accessing services running inside the container from outside.
- `node:18`: the Docker image to use for creating the container; here, the official Node.js image tagged `18` (the Node version the application was built on).
- `sh`: the command to run inside the container. It starts a shell session (`sh` is a common Unix/Linux shell) so that you can interact with the container's file system and execute commands.
If Docker doesn’t find the image locally, it will pull it from the remote registry.
Since we started a shell session, after running the command, you will be inside the active shell session. Now cd into the app directory, which is where our application will be.
cd app
We have mapped this directory in the container to our current directory on the host, so any file you create inside it will also appear in the host directory.
We can now clone our react project into this directory.
git clone https://github.com/Ed-Neema/simpleTodoApp.git
Note that when you interact with a remote Git repository over HTTPS, Git will typically prompt you for your username and password to authenticate (if your GitHub credentials aren’t configured globally). However, due to changes in GitHub’s authentication mechanisms, a personal access token (PAT) is now required in place of your password for increased security. You may read more about this here.
You can find more detailed information about how to create your personal token here. In summary, here are the steps:
- Go to your GitHub account settings.
- Navigate to "Developer settings" > "Personal access tokens" > "Tokens (classic)".
- Click on "Generate new token."
- Give your token a descriptive name, select the scopes or permissions you'd like to grant this token, and click "Generate token."
- Important: Copy your new personal access token. Once you leave or refresh the page, you won’t be able to see it again.
Now that the application has been cloned, let’s create a Dockerfile and a docker compose file.
A Dockerfile is a text file that contains instructions for building a Docker image. It defines the environment inside a Docker container, including the base image to use, any additional dependencies to install, environment variables to set, and commands to run when the container starts. In our case, this is the Dockerfile we need:
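A minimal version, matching the step-by-step breakdown that follows, looks like this:

```dockerfile
# Build stage: install dependencies and build the static assets
FROM node:18 as builder
WORKDIR /app/simpleTodoApp
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

# Serve stage: Nginx serves the built files
FROM nginx
EXPOSE 80
COPY --from=builder /app/simpleTodoApp/dist /usr/share/nginx/html
```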
This Dockerfile uses a multi-stage build to first build a Node.js application and then sets up an Nginx server to serve the built static files.
Here's a breakdown of each part:
- `FROM node:18 as builder`: sets the base image to `node:18` and gives this stage the alias `builder`. This stage will be used for building the application.
- `WORKDIR /app/simpleTodoApp`: sets the working directory inside the container to `/app/simpleTodoApp`. All subsequent commands will be executed relative to this directory.
- `COPY package.json .`: copies the `package.json` file from the host machine to the `/app/simpleTodoApp` directory in the container. This is done before running `npm install` to take advantage of Docker's layer caching mechanism.
- `RUN npm install`: installs the dependencies listed in the `package.json` file.
- `COPY . .`: copies the rest of the application files from the host machine to the `/app/simpleTodoApp` directory in the container.
- `RUN npm run build`: runs the build script specified in the `package.json` file, which produces the production build of the application.
- `FROM nginx`: starts a new build stage using the `nginx` base image. This stage will be used for the final image that runs the application.
- `EXPOSE 80`: exposes port 80 on the container, the default port Nginx uses for serving web content.
- `COPY --from=builder /app/simpleTodoApp/dist /usr/share/nginx/html`: copies the build output from the `builder` stage (the `/app/simpleTodoApp/dist` directory) to the Nginx web root directory (`/usr/share/nginx/html`). This effectively sets up Nginx to serve the static files generated by the Node.js build process. (Note: since we are using Vite, the builder outputs the built files to the `dist` directory. A plain React app created with Create React App would have its build files in a `build` directory, while a Next.js application would have them in `.next`.)
Next, we will create a docker-compose.yml file.
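A minimal compose file for our setup, matching the breakdown that follows, looks like this:

```yaml
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '80:80'
```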
This Docker Compose file defines a service named web that builds a Docker image using the Dockerfile in the current directory and exposes port 80 on the host machine to the container.
Here's a breakdown of each part:
- `version: '3'`: the version of the Docker Compose file format. Version 3 is a widely used version that supports most features.
- `services`: the key under which you define the services that make up your application. Each service is a containerized application.
- `web`: the name of the service. You can use any name you like to identify your services.
- `build`: specifies how to build the Docker image for the `web` service.
- `context: .`: the build context, i.e. the path to the directory containing the Dockerfile and any other files needed for the build. In this case, it's set to `.` (the current directory).
- `dockerfile: Dockerfile`: the name of the Dockerfile to use for building the image, relative to the build context.
- `ports`: the ports to map between the host machine and the container.
- `'80:80'`: maps port 80 on the host machine to port 80 on the container, which means you can access the service running in the container on port 80 of the host machine.
Let’s now test that what we have done works before we deploy to Elastic Beanstalk.
Run the following command in a new terminal (outside the shell of your container) in your app’s directory:
docker-compose up --build
The docker-compose up --build command is used to build the images for the services defined in your docker-compose.yml file and start the containers. In summary, the command will build the Docker image for the web service using the Dockerfile in the current directory. It will then start the container for the web service and expose port 80 on the host machine, mapping it to port 80 on the container.
This will take a while to build the application’s assets. After it’s done, you should be able to access your application at http://localhost/
Setting up Elastic Beanstalk
We will first start with creating an instance profile.
Creating an EC2 instance profile and attaching the specified policies is necessary when setting up an Elastic Beanstalk environment that uses EC2 instances to run your application. So, navigate to IAM and create an IAM role. The policies that can be attached are the following:
- AWSElasticBeanstalkWebTier: This policy provides permissions necessary for the EC2 instances to serve web traffic. It includes permissions to create and manage Elastic Load Balancers (ELB), which are used to distribute incoming traffic to your application across multiple EC2 instances.
- AWSElasticBeanstalkWorkerTier: This policy is used for worker environments in Elastic Beanstalk, which are used for background processing or tasks that don't require direct handling of web requests. It provides permissions needed for worker environments, such as reading from and writing to SQS queues.
- AWSElasticBeanstalkMulticontainerDocker: This policy is specifically for multicontainer Docker environments in Elastic Beanstalk. It provides permissions for managing Docker containers and interacting with the Docker daemon on the EC2 instances.
By attaching these policies to the IAM role associated with your EC2 instances, Elastic Beanstalk can manage the resources (such as ELBs and Docker containers) required to run your application. This allows Elastic Beanstalk to automatically scale your application, handle load balancing, and manage the underlying infrastructure, making it easier to deploy and manage your application in a scalable and fault-tolerant manner.
AWS provides several predefined roles that you can use for common tasks and services. These predefined roles are known as AWS managed policies. In our case, AWS already has a role for Elastic Beanstalk called aws-elasticbeanstalk-service-role. You may select this role and add the above three permissions.
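If you prefer creating the role from the AWS CLI or infrastructure-as-code rather than the console, note that an EC2 instance-profile role uses the standard EC2 trust policy below (this is generic AWS boilerplate, not specific to this tutorial):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The console generates this trust relationship for you automatically when you pick EC2 as the use case.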
When creating the role, choose AWS service as the trusted entity type and EC2 as the use case, so the role can be used as an instance profile.
You may now navigate to the Elastic Beanstalk console and click on create new application. This will take you through a series of steps in creating the application.
Step 1:
We will use a Web Server environment, since the aim is to deploy a sample web application.
Next, an environment name will be generated for you. You may also enter a domain name, otherwise, it will be automatically generated for you.
Here, we select the appropriate configurations for the application we aim to run. In our case, since we are deploying a dockerized react application, we will choose docker.
We will start by deploying the sample application given to us by AWS, then modify it to our application.
Step 2:
This step concerns the IAM roles that give Elastic Beanstalk the permissions it needs to set up and run our application. Under existing service roles, select the IAM role we created previously for Elastic Beanstalk.
Even though Elastic Beanstalk sets up and manages the environment, you can still log into the EC2 instance once it’s running. Create an EC2 key pair and specify it under EC2 key pair.
For the EC2 instance profile, you may create a role for the EC2 instances and select it here (Give it a try!).
Step 3
The next section is about setting up some network and database configurations.
You may select the VPC and the subnets that you want your instances to run in.
Since we will not be using a database for this application, we can leave it as disabled.
For Step 4 and 5, you may configure the instance traffic settings and monitoring and logging as desired.
After review and creation, you can view the running instance of your web application at the given URL.
Configuring GitHub Actions
To configure GitHub Actions as needed, we can use instructions from this repo.
Essentially, your deploy yaml file will look something like this:
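A sketch of such a workflow is below. It assumes the popular einaregilsson/beanstalk-deploy community action and hypothetical secret names; adapt both to the instructions in the repo you are following:

```yaml
# Sketch only: the action and the secret names are assumptions.
name: Deploy to Elastic Beanstalk

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Zip the source; Elastic Beanstalk builds the image from the Dockerfile.
      - name: Generate deployment package
        run: zip -r deploy.zip . -x '*.git*'

      - name: Deploy to Elastic Beanstalk
        uses: einaregilsson/beanstalk-deploy@v21
        with:
          aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          application_name: ${{ secrets.EB_APP_NAME }}
          environment_name: ${{ secrets.EB_ENV_NAME }}
          version_label: ${{ github.sha }}
          region: ${{ secrets.AWS_REGION }}
          deployment_package: deploy.zip
```

Every value the workflow reads via `secrets.*` must exist as a repository secret.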
To configure the environment variables, navigate to your repository’s Settings tab > Secrets and variables > Actions. There, create a new repository secret for each value your deploy workflow reads via `secrets` (for example, the AWS access key and secret key).
After this, push the changes to GitHub.
git add .
git commit -m "commit message"
git push
After pushing the changes, you will see a new action triggered.
Once the environment is fully updated and set up on Elastic Beanstalk, you should see the todo application deployed.
Congratulations on your first deployment!
Conclusion
In summary, deploying a ReactJS application on AWS Elastic Beanstalk using GitHub Actions simplifies deployment and enhances application lifecycle management efficiency. GitHub Actions' integration automates build and deployment steps, ensuring consistency and enabling rapid, reliable releases. AWS Elastic Beanstalk provides scalability and managed environments, allowing seamless application scaling based on demand. This combination offers a robust solution for deploying and maintaining ReactJS applications, empowering developers to focus on delivering exceptional user experiences while ensuring deployment reliability and scalability.
Stay tuned for more deployment tutorials!