Automating a 3-Tier Application Deployment with Docker & Jenkins
Avesh
Posted on October 25, 2024
Here's a guide to implementing a 3-tier application project using Docker and Jenkins. We'll walk through the components of a 3-tier application, create Docker containers for each tier, and set up a Jenkins pipeline to automate the deployment.
Overview of a 3-Tier Architecture
A 3-tier architecture typically comprises:
- Presentation Layer (Frontend) - Handles the user interface.
- Application Layer (Backend) - Contains the business logic.
- Data Layer (Database) - Stores and manages the application’s data.
Each layer will be deployed in a Docker container, and we'll use Jenkins to manage the CI/CD pipeline for the application.
Project Roadmap
1. Setup Environment:
   - Install Docker and Docker Compose.
   - Set up Jenkins for CI/CD.
2. Build Docker Images:
   - Create Docker images for each layer (frontend, backend, database).
3. Configure Docker Compose:
   - Use Docker Compose to define the multi-container application.
4. Develop Jenkins Pipeline:
   - Create Jenkins jobs and scripts to build, test, and deploy the Docker containers.
5. Deploy and Test:
   - Deploy the application and test functionality across the three layers.
Tools Required
- Docker: To containerize the application components.
- Docker Compose: To manage multi-container Docker applications.
- Jenkins: For CI/CD automation.
- Git: Version control.
- A Code Editor: Visual Studio Code or any preferred IDE.
Step 1: Setting up Docker and Jenkins
- Install Docker and Docker Compose on your local machine or server (see the example commands after this list).
- Install Jenkins on your local machine or a server, then set up the necessary plugins:
  - Docker Pipeline Plugin
  - Git Plugin
  - Pipeline Plugin
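Installation specifics vary by platform; as a rough sketch on an Ubuntu host (the package names and the containerized Jenkins setup below are assumptions, and running Jenkins as a container is just one option), the tooling can be set up along these lines:

# Install Docker Engine and Docker Compose from the distro packages (Ubuntu)
sudo apt-get update
sudo apt-get install -y docker.io docker-compose
sudo systemctl enable --now docker

# One option: run Jenkins itself as a container using the official LTS image
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts

If Jenkins runs inside a container, it will also need access to the host's Docker daemon (for example by mounting /var/run/docker.sock) to build and run images.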
Step 2: Structure of the 3-Tier Application
The directory structure for a 3-tier application project can look like this:
3-tier-app/
├── frontend/
│   ├── Dockerfile
│   └── app/
│       └── index.html
├── backend/
│   ├── Dockerfile
│   └── app/
│       └── server.js
├── database/
│   ├── Dockerfile
│   └── data/
├── docker-compose.yml
└── Jenkinsfile
Frontend (Presentation Layer)
This layer can be a simple HTML file served by Nginx.
Frontend Dockerfile (frontend/Dockerfile):
FROM nginx:alpine
COPY app /usr/share/nginx/html
EXPOSE 80
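The page itself isn't shown in the original structure; a minimal, purely illustrative placeholder for frontend/app/index.html could be:

<!DOCTYPE html>
<html>
  <head>
    <title>3-Tier App</title>
  </head>
  <body>
    <h1>Hello from the Frontend!</h1>
  </body>
</html>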
Backend (Application Layer)
For this example, the backend will be a Node.js application.
Backend Dockerfile (backend/Dockerfile):
FROM node:alpine
WORKDIR /app
COPY app /app
RUN npm install
EXPOSE 3000
CMD ["node", "server.js"]
Backend Code (backend/app/server.js):
const express = require('express');
const app = express();
app.get('/', (req, res) => res.send('Hello from Backend!'));
app.listen(3000, () => console.log('Backend server running on port 3000'));
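Since the Dockerfile runs npm install, the app directory also needs a package.json declaring Express as a dependency; a minimal example (the version pin is illustrative) for backend/app/package.json:

{
  "name": "backend",
  "version": "1.0.0",
  "main": "server.js",
  "dependencies": {
    "express": "^4.18.2"
  }
}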
Database (Data Layer)
For the database layer, we can use MySQL with a custom Dockerfile.
Database Dockerfile (database/Dockerfile):
FROM mysql:5.7
ENV MYSQL_ROOT_PASSWORD=rootpassword
ENV MYSQL_DATABASE=app_db
EXPOSE 3306
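If the database should start with a schema or seed data, the official MySQL image executes any .sql files placed in /docker-entrypoint-initdb.d on first startup. An optional, illustrative addition (the init.sql file and its contents are assumptions, not part of the original project) to database/Dockerfile:

# Copy an init script that MySQL runs on first startup
COPY init.sql /docker-entrypoint-initdb.d/

where database/init.sql might contain something like:

-- Illustrative schema created when the container first initializes
CREATE TABLE IF NOT EXISTS users (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100) NOT NULL
);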
Step 3: Configuring Docker Compose
Docker Compose file (docker-compose.yml):
version: '3'
services:
  frontend:
    build: ./frontend
    ports:
      - "80:80"
    networks:
      - app-network
  backend:
    build: ./backend
    ports:
      - "3000:3000"
    depends_on:
      - database
    networks:
      - app-network
  database:
    build: ./database
    environment:
      MYSQL_ROOT_PASSWORD: rootpassword
      MYSQL_DATABASE: app_db
    ports:
      - "3306:3306"
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
This setup defines each service and links them in a common Docker network, app-network, which enables inter-container communication.
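Before wiring this into Jenkins, the stack can be verified locally with Docker Compose:

# Build the images and start all three services in the background
docker-compose up -d --build

# Confirm the containers are running
docker-compose ps

# Tear the stack down when finished
docker-compose down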
Step 4: Creating the Jenkins Pipeline
Create a Jenkinsfile in the root of the project directory. This file will define the stages for the CI/CD pipeline.
Jenkinsfile:
pipeline {
    agent any

    environment {
        DOCKER_HUB_CREDENTIALS = credentials('dockerhub')
    }

    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Build Docker Images') {
            steps {
                script {
                    docker.build("frontend", "./frontend")
                    docker.build("backend", "./backend")
                    docker.build("database", "./database")
                }
            }
        }
        stage('Push Images to Docker Hub') {
            steps {
                script {
                    // 'dockerhub' is the ID of the Docker Hub credentials stored in Jenkins
                    docker.withRegistry('https://index.docker.io/v1/', 'dockerhub') {
                        docker.image("frontend").push("latest")
                        docker.image("backend").push("latest")
                        docker.image("database").push("latest")
                    }
                }
            }
        }
        stage('Deploy with Docker Compose') {
            steps {
                sh 'docker-compose down'
                sh 'docker-compose up -d'
            }
        }
    }

    post {
        always {
            echo 'Pipeline Complete!'
        }
    }
}
Explanation of Jenkinsfile Stages:
- Checkout: Pulls the latest code from the repository.
- Build Docker Images: Builds Docker images for frontend, backend, and database.
- Push Images to Docker Hub: Pushes the images to Docker Hub (ensure Jenkins has access to Docker credentials).
- Deploy with Docker Compose: Stops and removes the running containers (if any) and redeploys the new versions.
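One caveat: pushing to Docker Hub normally requires images tagged with your Docker Hub namespace (for example yourusername/frontend rather than plain frontend). A hedged variant of the build step, where DOCKER_HUB_USER is a placeholder and not part of the original pipeline, might look like this:

script {
    // Replace DOCKER_HUB_USER with your Docker Hub account name
    def user = "DOCKER_HUB_USER"
    docker.build("${user}/frontend:latest", "./frontend")
    docker.build("${user}/backend:latest", "./backend")
    docker.build("${user}/database:latest", "./database")
}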
Jenkins Configuration
- Add Docker Hub credentials to Jenkins (credentials ID: dockerhub, matching the Jenkinsfile) by navigating to Manage Jenkins > Manage Credentials.
- Set up a Jenkins pipeline job:
  - Point the job to the repository containing the Jenkinsfile.
  - Trigger the job manually or set it to trigger on commits.
Step 5: Testing the Application
Once the pipeline is complete:
- Access the frontend by navigating to http://localhost.
- Access the backend via http://localhost:3000.
- Ensure the backend is connected to the database by checking logs for successful queries.
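A quick smoke test from the command line (assuming the default ports from docker-compose.yml):

# The frontend should return the static HTML page
curl http://localhost

# The backend should respond with "Hello from Backend!"
curl http://localhost:3000

# Inspect the backend and database logs for errors
docker-compose logs backend
docker-compose logs database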
Security and Best Practices
- Limit Container Permissions: Run containers as non-root users (see the sketch after this list).
- Network Segmentation: Use Docker networks to limit container communication.
- Secrets Management: Use tools like Docker Secrets or environment variables managed through Jenkins.
- Regular Backups: Set up regular backups for the database container’s data volume.
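As an illustration of the first point, the backend image could drop root privileges using the unprivileged node user that ships with the official Node image; this is a minimal sketch, not a change the original project makes:

FROM node:alpine
WORKDIR /app
COPY app /app
RUN npm install
# Switch to the unprivileged "node" user provided by the official image
USER node
EXPOSE 3000
CMD ["node", "server.js"]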
Final Thoughts
This 3-tier architecture using Docker and Jenkins provides a reliable, isolated environment for each layer of the application. With Jenkins managing the pipeline, deployments are automated and can be easily triggered on code changes. This approach reduces manual intervention, enhances consistency, and enables fast iterations, making it ideal for modern applications.