Caddy, Go, Docker and a Single Page App
Chris Rowley
Posted on April 3, 2023
On a recent project I was tasked with creating a Go-based web service and a Single Page App to go with it. The company hadn't settled on a deployment strategy, so I decided to package things in a way that would best simulate a production environment while retaining the ability to launch and test the SPA from any machine. The big sticking point in this simulation was https connectivity. While the Caddy server is something I'm familiar with, running it alongside the Go API server would require two shells and a platform-specific build of Caddy. With more than a few questions lingering, I decided to try out Docker as a single deployment point for the project.
This article isn't a tutorial on any of the technologies involved. I am trying to revisit my roots and write a comprehensive tutorial on the project; there is a work-in-progress version as well as the repository on GitHub. Here, though, I'm going to assume the reader is familiar with Go, SPAs, Caddy and Docker, and is looking for a way to tie them all together in a localhost environment. Non-standard ports are used to avoid competing with other web services. These steps have been tested on Windows but should be adaptable to other operating systems.
We will be serving our SPA from a public folder in the root of the project. The root also contains our Go-based API. We need to configure Caddy to generate TLS certificates, reverse proxy our API and serve our SPA's static files. This Caddyfile handles our needs:
{
	admin :2019 # listen on all interfaces so the admin API is reachable through Docker's port mapping
	http_port 2010
	servers :2015 {
		listener_wrappers {
			http_redirect
			tls
		}
	}
}
https://localhost:2015 {
	encode zstd gzip
	handle /api/* {
		# "go" is the Compose service name of our API container
		reverse_proxy go:3000
	}
	handle {
		# /srv is where docker-compose.yml mounts our public folder
		root * /srv
		try_files {path} index.html
		file_server
	}
}
Caddy needs an http_port setting so it doesn't try to bind to :80, while the http_redirect listener wrapper ensures that only https is served on port :2015. The admin directive makes Caddy's admin API listen on all interfaces rather than only on the container's loopback, so we can reach it from the host through Docker's port mapping (more on that later). Caddy will proxy calls on the /api path to our Go API, addressed as go:3000 via its service name on the Docker Compose network, will use the mounted public folder as the root of the site, and will deliver index.html as the SPA whenever a named resource isn't available.
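The Caddyfile above assumes an API server listening on port 3000 inside the go container. The project's actual API isn't shown in this article, but a minimal sketch that would satisfy the reverse proxy (with a hypothetical /api/hello endpoint) looks like this:

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()

	// Hypothetical endpoint; Caddy forwards anything under /api/* to this server
	// with the original path intact, so the route is registered as /api/hello.
	mux.HandleFunc("/api/hello", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]string{"message": "hello from the Go API"})
	})

	// Listen on all interfaces so the caddy container can reach this server
	// over the Compose network as go:3000.
	log.Fatal(http.ListenAndServe(":3000", mux))
}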
After installing Docker Desktop and making sure it is running, we can write a Dockerfile to execute the commands needed to construct our environment, as well as a docker-compose.yml file to define how the two containers, Caddy and our API, are built and exposed.
The final addition to our formula is ☁️ Air - Live reload for Go apps. With this or a similar tool we can keep the containers running while making changes to our API server; Air rebuilds and restarts the binary inside its container whenever the source changes. To configure Air we need a .air.toml file:
[build]
cmd = "go build -o ./tmp/spa ."
bin = "tmp/spa"
exclude_dir = ["public","docs"]
This is fairly basic: issuing a go build command and directing the output to a tmp folder; pointing Air at that location for the executable it should run; and excluding the folders that are not part of the Go-based API server.
While Air provides its own image, we're going to use the basic Go image instead, so we need to establish the environment using a Dockerfile:
FROM golang:1.19
WORKDIR /app
RUN go install github.com/cosmtrek/air@latest
COPY go.mod ./
RUN go mod download
CMD ["air", "-c", ".air.toml"]
Here we're requesting the Go 1.19 image and setting a working folder for the container. Into this folder we're installing Air, copying go.mod and downloading the relevant modules; since docker-compose.yml will bind-mount the whole project into /app at runtime, these steps mainly pre-warm the module cache when the image is built. Finally we're starting up Air and loading our configuration file. These actions will serve as the basis for our Go container. For a better explanation of each of these directives please check out the official Dockerfile reference documentation.
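For completeness, the go.mod that the Dockerfile copies is simply the project's module definition; a minimal one (with a hypothetical module path) would be:

module example.com/spa

go 1.19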
We're almost there; only the Docker Compose configuration, the docker-compose.yml file, remains:
version: "3.8"
services:
  go:
    build: ./
    ports:
      - "3000"
    volumes:
      - ./:/app
  caddy:
    image: caddy:latest
    restart: on-failure
    ports:
      - "2010:2010"
      - "2015:2015"
      - "2019:2019"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./public:/srv
      - caddy_data:/data
      - caddy_config:/config
volumes:
  caddy_data:
    external: true
  caddy_config:
Our two containers, described as services here, demonstrate both a Dockerfile build and a prebuilt Docker image. Our go service uses the build directive pointed at the root of our project, so it is built from the Dockerfile above and exposes our API server on port 3000, reachable from the caddy container as go:3000 over the Compose network. Our caddy service uses the official image, requesting the latest version. As this image is defined by another party, it has specific parameters necessary for its container to operate. The ports correspond to our definitions in the Caddyfile with one addition, 2019:2019, which maps to Caddy's admin API. This is needed so we can install Caddy's locally generated certificate on the host once Docker is up and running.
Caddy uses a number of volumes. Two point directly at files within our project: first our Caddyfile, then our public folder, from which Caddy will serve live files. The other two are named volumes that Docker will create as defined by the top-level volumes section. The caddy_config volume is where Caddy keeps its active configuration; it isn't discussed in detail on the Caddy Docker Official Image page, so we're copying that parameter exactly. The caddy_data volume, however, needs some extra discussion. It is used to store a number of things, including TLS certificates. Volumes created by Compose can be removed along with the rest of the stack (for example by docker compose down -v), and since we want our certificate to persist across sessions we can take advantage of an external Docker volume, which Compose never creates or deletes itself. An external volume has to exist before starting the Compose session for the first time. It can be created from the command line (as shown below) or, more easily, from within the Docker Desktop app: simply choose "Volumes", click the "Create" button and specify caddy_data.
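If you prefer the command line, the built-in docker volume command does the same thing:

docker volume create caddy_data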
We're now ready to start up our new environment by entering this in a terminal:
docker compose up
Here's where the external volume and Caddy's admin API come into play. Make sure you have a local copy of the Caddy binary, navigate to its folder and execute:
caddy trust --address localhost:2019
This should present a certificate confirmation dialog asking you to add Caddy's locally generated root certificate to your system's trust store.
The certificate information is stored in the Docker caddy_data volume, so it will be available any time we start up our package. The fruits of our labor aren't readily visible when we visit https://localhost:2015 again, but under the hood we can now develop any part of our project and have the changes automatically reflected in a local approximation of a production environment.
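To confirm that the proxying works end to end, you can hit the API through Caddy; the path below assumes the hypothetical /api/hello endpoint from the earlier sketch:

curl https://localhost:2015/api/hello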
Now it's time to take a break, knowing that at any point you can run docker compose up and continue to develop your API or SPA with live reloading in a secure, https-enabled local environment.