Use NGINX as a database proxy
Maxime Guilbert
Posted on March 6, 2023
In our everyday life as developers, we can temporarily lose access to a database from our workstation while it remains reachable from our Kubernetes cluster. In this kind of situation, we can use a proxy running in the cluster to reach the database and work around the network issue.
So today, we will see how to do it with Nginx.
What is NGINX?
For the ones who don't already know what Nginx is, it's an open-source piece of software that can be used as a web server, load balancer or proxy.
Configure NGINX
In our case, the Nginx configuration should look like this.
user nginx;
worker_processes auto;
error_log /dev/stdout info;
pid /var/run/nginx.pid;
events {
  worker_connections 1024; ## Default: 1024
}

http {
  access_log /dev/stdout combined;
  log_format logger-json escape=json '{"source": "nginx", "time": $msec, "resp_body_size": $body_bytes_sent, "host": "$http_host", "address": "$remote_addr", "request_length": $request_length, "method": "$request_method", "uri": "$request_uri", "status": $status, "user_agent": "$http_user_agent", "resp_time": $request_time, "upstream_addr": "$upstream_addr"}';

  server {
    listen 8080 so_keepalive=on;

    location = /health {
      auth_basic off;
      return 200;
    }
  }
}

stream {
  upstream database {
    server [database endpoint]:[database port];
  }

  server {
    listen [database port] so_keepalive=on;
    proxy_connect_timeout 60s;
    proxy_socket_keepalive on;
    proxy_pass database;
  }
}
Before using it, some elements must be replaced:
- [database endpoint]: the URL used to reach the database
- [database port]: the port used to reach the database
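For example, for a hypothetical PostgreSQL instance reachable at my-database.internal on port 5432 (both values are placeholders for illustration), the stream block would become:

stream {
  upstream database {
    server my-database.internal:5432;   # hypothetical endpoint and port, replace with your own
  }

  server {
    listen 5432 so_keepalive=on;
    proxy_connect_timeout 60s;
    proxy_socket_keepalive on;
    proxy_pass database;
  }
}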
Details of the configuration
http
This part is only here to expose a /health endpoint, used for the readiness and liveness probes.
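Once the proxy is deployed, you can check this endpoint yourself, for example through a port forward to the service (described in the Usage section below):

kubectl port-forward service/nginx 8080:8080
curl -i http://localhost:8080/health    # expect an HTTP 200 with an empty body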
stream
This part is the most important one for our use case.
upstream is here to create a server group. (We only have a single instance here, but it makes the configuration easier to read.)
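For reference, if the database had several read replicas (the hostnames below are purely hypothetical), the same block could list all of them and Nginx would balance new connections across them; this mostly makes sense for read-only traffic:

upstream database {
  server db-replica-1.internal:5432;   # hypothetical replica endpoints
  server db-replica-2.internal:5432;
}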
The server part will define:
- on which port Nginx must listen -> listen
- where the traffic will be redirected -> proxy_pass
There are also other settings like so_keepalive=on and proxy_socket_keepalive on; which are really important to keep the connection alive.
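Note that so_keepalive also accepts explicit values when the kernel defaults don't fit your network; the values below are only an illustration of the keepidle:keepintvl:keepcnt syntax:

# start TCP keepalive probing after 30 minutes of idle time,
# keep the system default probe interval, send up to 10 probes
listen [database port] so_keepalive=30m::10;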
Deploy on Kubernetes
Once the Nginx configuration is ready, we can now deploy an Nginx instance in our cluster.
Here is a template to create the configmap, deployment & service.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    user nginx;
    worker_processes auto;
    error_log /dev/stdout info;
    pid /var/run/nginx.pid;

    events {
      worker_connections 1024; ## Default: 1024
    }

    http {
      access_log /dev/stdout combined;
      log_format logger-json escape=json '{"source": "nginx", "time": $msec, "resp_body_size": $body_bytes_sent, "host": "$http_host", "address": "$remote_addr", "request_length": $request_length, "method": "$request_method", "uri": "$request_uri", "status": $status, "user_agent": "$http_user_agent", "resp_time": $request_time, "upstream_addr": "$upstream_addr"}';

      server {
        listen 8080 so_keepalive=on;

        location = /health {
          auth_basic off;
          return 200;
        }
      }
    }

    stream {
      upstream postgres {
        server database:port;
      }

      server {
        listen port so_keepalive=on;
        proxy_connect_timeout 60s;
        proxy_socket_keepalive on;
        proxy_pass postgres;
      }
    }
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
    name: http
  - port: port
    protocol: TCP
    targetPort: port
    name: database
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      name: nginx
    spec:
      containers:
      - args:
        - nginx
        - -g
        - daemon off;
        - -c
        - /nginx/etc/config/nginx.conf
        image: nginx:1.23.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /health
            port: nginx
            scheme: HTTP
          initialDelaySeconds: 10
        name: nginx
        ports:
        - containerPort: 8080
          name: nginx
          protocol: TCP
        - containerPort: port
          name: database
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /health
            port: nginx
            scheme: HTTP
          initialDelaySeconds: 10
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 50m
            memory: 200Mi
        volumeMounts:
        - mountPath: /nginx/etc/config
          name: nginx-config-volume
      volumes:
      - configMap:
          name: nginx-config
        name: nginx-config-volume
Before using it, be sure to put the correct information in the ConfigMap and to define the port correctly in the Service & Deployment definitions!
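Assuming the three manifests are saved in a single file (the file name below is arbitrary), you can then apply them and check that the pods start correctly:

kubectl apply -f nginx-db-proxy.yaml
kubectl get pods -l app=nginx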
Usage
Once deployed, you can expose the deployment with an Ingress or simply do a port forward from your workstation to be able to do all the manipulations you want locally!
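For example, assuming the proxy listens on port 5432 (use the port you defined) and the database is PostgreSQL, a port forward could look like this (the user and database names are placeholders):

# forward local port 5432 to the nginx service inside the cluster
kubectl port-forward service/nginx 5432:5432

# then, in another terminal, connect as if the database were local
psql -h localhost -p 5432 -U myuser mydb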
I hope it will help you! 🍺