Install a Second Instance of Nginx via Docker.
darwish
Posted on October 8, 2024
Recently, I faced a use case where I needed to install a Django app on a server that already had a deployed instance of Frappe ERPNext. If you don't know, Frappe has a CLI called Bench that generates the Nginx config files automatically, so I thought it was a bad idea to inject my own config into that setup just to deploy a new app. Instead, I decided to install another instance of Nginx via Docker and bind it to different ports on my server.
Assumptions
To follow this tutorial, I assume that you already have a cloud server running anywhere like AWS, GCP, Digital Ocean, etc., and you can SSH to your server.
Setting Up a New User and SSH
Please note that I use an Ubuntu instance, so you can follow these steps as long as your package manager is apt.
First, let's create a new user to keep everything separate. I will SSH into the server as root and run this command:
adduser dev
This should create a new username “dev” on my cloud instance. After this, I will go to my normal terminal and generate a new SSH key for the new user I’ve created with this command.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/digital
This should generate two files: the private key ~/.ssh/digital and the public key ~/.ssh/digital.pub.
I will copy the contents of the public key by running this:
xclip -sel clip < ~/.ssh/digital.pub
And after that, I will go to my server and open the authorized_keys file.
vim /home/dev/.ssh/authorized_keys
and add the contents of my public key.
You can install Vim, if it's not already installed, by running:
apt install vim
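If the dev user doesn't have a ~/.ssh directory yet, you may need to create it and set the usual SSH permissions before the key is accepted; a quick sketch, run as root:
mkdir -p /home/dev/.ssh
chmod 700 /home/dev/.ssh
touch /home/dev/.ssh/authorized_keys
chmod 600 /home/dev/.ssh/authorized_keys
chown -R dev:dev /home/dev/.ssh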
With that, I can run this command from my local terminal to connect directly as the new user I've created:
ssh dev@ipaddress -i ~/.ssh/digital
Install Docker and Give Permissions to the New User
Now we need to install Docker and add the new user to the Docker group to be able to run all Docker commands without sudo. We can do this by following the documentation page: https://docs.docker.com/engine/install/ubuntu/
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
After adding the repository source, you can run:
apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Remember that I am still logged in as root, so I don't need sudo here; you will, of course, need sudo if you are not logged in as root. The reason I do this from the root account is that I don't want to give the dev account sudo access, both to limit its permissions and to avoid any conflicts with the running Frappe instance.
Now we can test our Docker installation by running this.
docker run hello-world
If you see the hello-world greeting in the output, the installation is successful.
However, if you log in as the "dev" user and run the same command, you will get a permission error. To solve this, we need to run this command from our root account:
usermod -aG docker dev
This should add our user "dev" to the docker group, which enables that user to use Docker; the new group membership takes effect the next time "dev" logs in.
Log In with the New User and Create the Folders
After all this, I will SSH to the new user and try to run “docker run hello-world” to validate that Docker is working. Also, I can open my ~/.ssh/config file on my local computer and add this block.
Host digital
    HostName 68.183.214.170
    User dev
    IdentityFile ~/.ssh/digital
This lets me easily SSH to my server with:
ssh digital
I prefer to create this structure
src
├── apps
└── common
I put everything in the “src” directory, and the second level includes “src/apps” and “src/common”. In the “common” directory, I install every container that will be used across different apps, such as nginx, postgres, or redis. In the “apps” directory, I install my apps to avoid installing multiple instances of the common containers.
So let's create our folders by running these commands:
mkdir -p src/common/nginx
mkdir -p src/apps/omdabus.esolvelabs.com
Notice here, I used “omdabus.esolvelabs.com” as the folder name to easily memorize my folders by the domain they are deployed at later on.
And then we need to go to the nginx folder and create the docker-compose.yml file.
cd src/common/nginx
vim docker-compose.yml
nginx/docker-compose.yml
services:
  nginx:
    image: nginx
    container_name: nginx
    restart: always
    volumes:
      - ./conf.d:/etc/nginx/conf.d/:ro
    ports:
      - "81:80"
      - "4433:443"

networks:
  default:
    external: true
    name: nginx
Notice here we used 81 and 4433 for our ports, and we added a volume to bind our config folder into the container. We also pointed the default network at an external network named "nginx" so we can connect our Django app to it later on.
But we need to create this network first by running this:
docker network create nginx
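With the network in place, we can bring up the Nginx container. I suggest creating the conf.d folder first so Docker doesn't create it as a root-owned directory when it sets up the bind mount:
# run from the src/common/nginx folder
mkdir -p conf.d
docker compose up -d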
Now let’s create our Django container.
For the sake of this tutorial, I won’t delve deeper into how I dockerized the Python app. Instead, I will focus on Nginx and how to configure it.
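For context only, a minimal Dockerfile for such an app might look roughly like this; the requirements.txt and Gunicorn setup are assumptions about my image, and the only detail that matters for this tutorial is that the app listens on port 8000, which the Nginx upstream expects:
# hypothetical Dockerfile for the Django app; adjust paths and dependencies to your project
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt gunicorn
COPY . .
# serve the "pizza" project on port 8000
CMD ["gunicorn", "pizza.wsgi:application", "--bind", "0.0.0.0:8000"]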
So here is an example of the docker-compose.yml file inside src/apps/omdabus.esolvelabs.com.
services:
  elomda_bus:
    image: exploremelon/elomda_bus:0.0.11
    container_name: elomda_bus
    restart: always
    networks:
      - nginx

networks:
  nginx:
    external: true
After this, I will run my container and make sure that it connects to the same network “nginx”.
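Running it is the usual Compose routine from the app folder:
cd ~/src/apps/omdabus.esolvelabs.com
docker compose up -d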
You can run this to validate the network and see which containers are attached to it:
docker network inspect nginx
Now, we can go back to our Nginx folder and add the config file. First, we need to add a DNS record to connect the domain to our app. I have a domain managed by Cloudflare, so I just need to open my Cloudflare account and add a new A record with the name of the subdomain I need to use, along with the IP address of your server.
You can get your IP by running this command:
curl ifconfig.me
Now we can add a new file called “omdabus.esolvelabs.com.conf” under nginx/conf.d and add this content.
upstream bus {
    server elomda_bus:8000;
}

server {
    listen 80;
    server_name omdabus.esolvelabs.com;

    location / {
        proxy_pass http://bus;
    }
}
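Nginx only reads its configuration at startup, so after adding the file we need to tell the running container to pick it up; the container is named nginx, as set in the compose file:
docker exec nginx nginx -t         # check the configuration for syntax errors
docker exec nginx nginx -s reload  # reload it without restarting the container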
If you try to visit http://omdabus.esolvelabs.com:81 now, you will see an error: Django rejects the request with "400 Bad Request" because the host it receives isn't listed in ALLOWED_HOSTS.
So, we need to edit the settings.py file in Django to update the allowed hosts. To do this, we will mount settings.py as a new volume in our app's docker-compose file, so we don't have to rebuild and re-upload the image each time we change the configuration. We can do this by going to the app folder,
cd src/apps/omdabus.esolvelabs.com
and then creating the "settings.py" file:
vim settings.py
I will paste in the contents of settings.py from my Python repo:
"""
Generated by 'django-admin startproject' using Django 2.0.
For more information on this file, see
https://docs.djangoproject.com/en/2.0/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.0/ref/settings/
"""
import os
STRIPE_SECRET_KEY = os.getenv('STRIPE_SECRET_KEY')
STRIPE_PUBLISHABLE_KEY = os.getenv('STRIPE_PUBLISHABLE_KEY')
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
SECRET_KEY = os.getenv('DJANGO_SECRET_KEY', 'i0&iq&e9u9h6(4_7%pt2s9)f=c$kso=k$c$w@fi9215s=1q0^d')
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = False
ALLOWED_HOSTS = ['0.0.0.0' , '192.168.1.6', 'localhost', '127.0.0.1']
CSRF_TRUSTED_ORIGINS = []
CSRF_COOKIE_SECURE = True
CSRF_COOKIE_HTTPONLY = True
SESSION_COOKIE_SECURE = True
SECURE_BROWSER_XSS_FILTER = True
SECURE_CONTENT_TYPE_NOSNIFF = True
X_FRAME_OPTIONS = 'DENY'
SECURE_SSL_REDIRECT = False
SECURE_HSTS_SECONDS = 31536000
# Application definition
INSTALLED_APPS = [
    'orders.apps.OrdersConfig',
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.contrib.humanize',
    # 'import_export',
]
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'pizza.urls'
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]
WSGI_APPLICATION = 'pizza.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.0/ref/settings/#databases
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}
# Password validation
# https://docs.djangoproject.com/en/2.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
    {
        'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
    },
]
# Internationalization
# https://docs.djangoproject.com/en/2.0/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.0/howto/static-files/
STATIC_URL = '/static/'
But I will change the ALLOWED_HOSTS line to add "bus" to it:
ALLOWED_HOSTS = ['0.0.0.0' , '192.168.1.6', 'bus', 'localhost', '127.0.0.1']
Notice that "bus" here is the name of the upstream we defined in the Nginx config file. Since we didn't override the Host header, proxy_pass forwards the upstream name as the Host, and that is what Django checks against ALLOWED_HOSTS.
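As an aside, an alternative I didn't take here is to forward the original Host header from Nginx, in which case you would add the real domain to ALLOWED_HOSTS instead of the upstream name. A sketch of what that location block could look like:
location / {
    proxy_pass http://bus;
    # pass the original host and client address through to Django
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}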
After this, I need to edit the docker-compose file of my app to add the new volume:
services:
  elomda_bus:
    image: exploremelon/elomda_bus:0.0.11
    container_name: elomda_bus
    volumes:
      - ./settings.py:/app/pizza/settings.py
    restart: always
    networks:
      - nginx

networks:
  nginx:
    external: true
Then recreate the container so it picks up the new volume:
docker compose up -d
and now it should be working.
But if you try to submit any form, you will get a "CSRF verification failed" error.
To solve this, we need to add our URLs to CSRF_TRUSTED_ORIGINS in settings.py:
CSRF_TRUSTED_ORIGINS = ["http://omdabus.esolvelabs.com:81" , "https://omdabus.esolvelabs.com:4433"]
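Since settings.py is bind-mounted into the container, Django only picks up this change after a restart; from the app folder:
docker compose restart elomda_bus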
At this point, everything should work if we visit http://omdabus.esolvelabs.com:81
Set Up TLS via Certbot and dns-cloudflare Credentials
Let's set up TLS with Certbot. I prefer to use the dns-cloudflare plugin with a credentials file for its simplicity, so all you need is an API token from the Cloudflare account that manages your domain. You can create one at https://dash.cloudflare.com/profile/api-tokens
(make sure you are logged in first). Then, you can choose one of the provided templates.
For the sake of this tutorial, I will use the Edit zone DNS template. After this, you need to install Certbot, if it's not already installed, along with the Cloudflare plugin, by running this as the root user:
apt install certbot python3-certbot-dns-cloudflare
Finally, to generate our certificates, we need to obtain the generated API token and save it in a file inside our nginx folder. Personally, I use a file named dns-credentials/cloudflare.ini.
dns_cloudflare_api_token=yourtoken
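Because this file holds an API token, it's worth restricting its permissions; Certbot will warn you if other users can read the credentials file:
chmod 600 dns-credentials/cloudflare.ini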
Now we can run this command to generate our certificates:
certbot certonly --dns-cloudflare --dns-cloudflare-credentials dns-credentials/cloudflare.ini -d omdabus.esolvelabs.com --email info@esolvelabs.com --agree-tos --non-interactive --force-renewal --cert-name omdabus.esolvelabs.com --keep-until-expiring --rsa-key-size 4096
Notice that you need to replace the domain name and email with your own.
This should create a folder under /etc/letsencrypt/live, and Certbot's output should end with something like:
Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/omdabus.esolvelabs.com/fullchain.pem
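Certbot on Ubuntu installs a timer that renews certificates automatically, but the dockerized Nginx won't see a renewed certificate until it reloads. One simple option, sketched here, is to drop a deploy hook into Certbot's renewal-hooks directory and then dry-run a renewal to confirm everything works (run as root):
# this script runs after every successful renewal
cat > /etc/letsencrypt/renewal-hooks/deploy/reload-nginx.sh <<'EOF'
#!/bin/sh
docker exec nginx nginx -s reload
EOF
chmod +x /etc/letsencrypt/renewal-hooks/deploy/reload-nginx.sh
certbot renew --dry-run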
So let’s add a volume to our container, and let’s edit the config file for the last time.
nginx/docker-compose.yml
services:
  nginx:
    image: nginx
    container_name: nginx
    restart: always
    volumes:
      - ./conf.d:/etc/nginx/conf.d/:ro
      - /etc/letsencrypt/:/etc/letsencrypt
    ports:
      - "81:80"
      - "4433:443"

networks:
  default:
    external: true
    name: nginx
The new volume mounts the letsencrypt folder from the host into the container, and we should edit conf.d/omdabus.esolvelabs.com.conf:
upstream bus {
    server elomda_bus:8000;
}

server {
    listen 80;
    server_name omdabus.esolvelabs.com;
    return 301 https://$server_name:4433$request_uri;
}

server {
    listen 443 ssl http2;
    server_name omdabus.esolvelabs.com;

    ssl_certificate /etc/letsencrypt/live/omdabus.esolvelabs.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/omdabus.esolvelabs.com/privkey.pem;

    location / {
        proxy_pass http://bus;
    }
}
Notice here, we redirected the traffic on port 80 to the HTTPS server, using port 4433 in the redirect URL because of how we mapped the ports on our host. We also added the SSL server block and pointed it at the generated certificates. After recreating the Nginx container so it picks up the new volume and config (commands below), visiting https://omdabus.esolvelabs.com:4433 should show everything working properly.
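Recreating it is just another docker compose up from the nginx folder; since the compose file changed, Docker will recreate the container with the new volume:
# run from the src/common/nginx folder
docker compose up -d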
Conclusion
This guide has shown you how to set up a second Nginx server using Docker. This is useful when you want to run multiple web applications on the same server without them interfering with each other.
Here are the main things we did:
1. Created a new user: this helps keep your different applications separate.
2. Installed Docker: Docker is a tool that makes it easy to run applications in containers.
3. Set up Nginx: Nginx is a web server that helps people access your website.
4. Created a network: this allows your Django app and Nginx to communicate.
5. Configured Nginx: we told Nginx to serve your Django app on a specific port.
6. Secured the website: we added HTTPS to your website using Certbot.
Here's a GitHub repository that demonstrates the steps outlined in this post: