Microservices with Go modules
NightGhost
Posted on November 8, 2019
Problem
I've seen a lot of guides on how to use Go modules to manage dependencies in applications. All of them describe a single-module project with the go.mod file in the project root. In this case, all you need to do is copy your go.mod and go.sum files into the application image and run go mod download.
But most Go applications consist of several microservices. Let's consider a project with the structure presented below:
.
├── tracker
│ ├── tracking
│ │ └── tracker.go
│ ├── Dockerfile
│ └── main.go
├── data
│ ├── packets.go
│ └── messages.go
├── docker-compose.yml
├── tgram-bot
│ ├── conf
│ │ └── config.yml
│ ├── Dockerfile
│ ├── main.go
│ └── bot
│ ├── loadconf.go
│ └── bot.go
├── util
│ ├── cache.go
│ ├── storage.go
│ └── converters.go
└── web-server
├── api
│ ├── controller.go
│ ├── adminController.go
│ └── middlewares.go
├── Dockerfile
└── main.go
In the tree above, tracker, tgram-bot and web-server are the actual microservices, whereas data and util are packages used by all of them. This is a so-called single-repository, multi-service project (a monorepo), a pretty common thing in the world of microservice architecture. And we need to somehow manage the dependencies of such a project.
Considerations
Everyone I've seen on forums and elsewhere tried to turn each shared package or microservice into a separate Go module, which is a somewhat wrong approach. Go modules just don't work that way. Every Go module is a little GOPATH, so you can build your application without putting it in the actual GOPATH (read this for a more detailed explanation). The toolchain won't find your cross-shared modules unless you put each of them in its own git repository. Here is a tutorial on how to do that. Once this procedure is done, you will have a go.mod file for each microservice and each cross-shared package. That's pretty good, because every microservice will have its own isolated set of dependencies to download into its Docker image.
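To illustrate, here is roughly what tracker's go.mod would look like under that one-module-per-package approach, assuming the shared packages were extracted into hypothetical repositories github.com/user/data and github.com/user/util:

module github.com/user/tracker

go 1.12

require (
	github.com/user/data v0.1.0
	github.com/user/util v0.1.0
)

Every change to data or util would then have to be pushed and tagged in its own repository before any microservice could pick it up.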
But this is neither a convenient nor a straightforward approach. Moreover, it violates the Go modules pattern: each repository should have only one go.mod file, placed in the repository root.
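Applied to our project, that means a single go.mod at the repository root, and every microservice imports the shared packages through that module path. A minimal sketch, with a hypothetical module path:

module github.com/user/project

go 1.12

// e.g. in web-server/main.go or tracker/tracking/tracker.go:
import "github.com/user/project/data"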
Solution
What I suggest you do is the exact opposite of what is described above. We'll put all the dependencies of our application into a single pool. For that, we've got to write this dependencies.Dockerfile:
FROM golang:1.12 AS dep
# Add the module files and download dependencies.
ENV GO111MODULE=on
COPY ./go.mod /go/src/app/go.mod
COPY ./go.sum /go/src/app/go.sum
WORKDIR /go/src/app
RUN go mod download
# Add the shared packages.
COPY ./data /go/src/app/data
COPY ./util /go/src/app/util
You should build the image with the project root as the build context:
docker build -t dependencies -f ./dependencies.Dockerfile .
This image will keep all the dependencies of the project. We'll use it as a base image for each microservice of the application:
FROM dependencies AS builder
# Copy the application source code.
COPY ./web-server /go/src/app/web-server
# Build the application.
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
go build -o /go/bin/web-server /go/src/app/web-server/
ENTRYPOINT [ "/go/bin/web-server" ]
FROM alpine:latest
COPY --from=builder /go/bin/web-server /bin/web-server
ENTRYPOINT [ "/bin/web-server" ]
All the other microservices have similar containerization code. Now we have a single image that keeps all our dependencies inside. This doesn't violate the Go modules pattern and gives us a small optimization: if two microservices use the same library, we no longer download it twice and store a copy in each of their images; with a single dependency pool it is downloaded once and stored only in the shared image. That saves us a bit of build time and disk space. Besides, we can put anything else that's shared by all our microservices into this image.
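For completeness, here is a minimal docker-compose.yml sketch that matches this layout; the service names come from the directory tree, everything else is an assumption:

version: "3"
services:
  tracker:
    build:
      context: .
      dockerfile: ./tracker/Dockerfile
  tgram-bot:
    build:
      context: .
      dockerfile: ./tgram-bot/Dockerfile
  web-server:
    build:
      context: .
      dockerfile: ./web-server/Dockerfile

Each service uses the project root as its build context, so the Dockerfiles above work unchanged.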
And the finishing touch - extending our Makefile:
all: build run

build-dependencies:
	docker build -t dependencies -f ./dependencies.Dockerfile .

build: build-dependencies
	docker-compose build

run:
	docker-compose up
This set of targets rebuilds the dependencies image before the services, so new dependencies are picked up every time we rebuild the application.
Thank you for reading! I hope this article was helpful for you.