Uendi Hoxha
Posted on October 7, 2024
Docker allows developers to package applications with their dependencies into a lightweight, portable container. However, creating efficient and secure Docker images is crucial, especially in production environments where performance and security are paramount. In this article, we’ll explore best practices to help you write optimized and secure Dockerfiles, ensuring your containers are small, fast, and robust.
I. Choose the Right Base Image
The base image sets the foundation of your container. Opting for a lightweight base image can significantly reduce the size of your image and minimize security vulnerabilities.
- Use official Docker images whenever possible, as they are maintained and regularly updated.
- Prefer lightweight images like alpine over full OS images like ubuntu or debian. Alpine is only around 5 MB, compared to 100+ MB for Ubuntu:
FROM node:20-alpine
II. Leverage Multistage Builds for Smaller Images
Multistage builds allow you to separate the build environment from the final production image, so the final image contains only the necessary runtime files. This reduces the image size and keeps build-time dependencies out of production.
- Use multistage builds to compile or build your application in one stage and copy only the necessary artifacts into the next stage, as sketched below.
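A minimal sketch of the pattern for a Node.js app (the complete example at the end of this article expands on it):
# Stage 1: build with the full toolchain
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: ship only the build output
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]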
III. Minimize Layers
Each command in a Dockerfile adds a new layer to the final image. Reducing the number of layers and consolidating commands can lead to a more efficient image.
- Combine multiple RUN commands into a single layer.
- Avoid adding unnecessary files to the image.
# Instead of this:
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get clean
# Use this:
RUN apt-get update && \
    apt-get install -y curl && \
    apt-get clean
IV. Use .dockerignore
Just like .gitignore, .dockerignore helps exclude unnecessary files from your Docker image, reducing its size and preventing sensitive files (like env files or Git directories) from being included in the build context.
Add unnecessary files like documentation, .git directories, and local configuration files to .dockerignore.
# .dockerignore
node_modules
.git
.env
README.md
V. Set User Permissions
By default, Docker containers run as the root user, which can pose security risks. It’s a good practice to run your containers with a non-root user wherever possible.
- Use the USER directive to switch to a non-root user.
- Create a user in the Dockerfile if one doesn’t exist in the base image.
# Add a user and switch to it
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
CMD ["./myapp"]
VI. Optimize Caching with Build Arguments
Docker caches each layer during the build process, which can speed up subsequent builds. However, a poorly ordered Dockerfile can invalidate the cache too often or serve stale dependencies. Ordering instructions from least to most frequently changing keeps cache hits high, and build arguments give you a way to invalidate the cache deliberately when needed.
- Add frequently changing commands (for example, COPY for source code) after more stable ones (like dependency installation). For example:
# First install dependencies (cacheable)
COPY package.json .
RUN npm install
# Then add source code (likely to change)
COPY . .
CMD ["npm", "start"]
By copying the package.json file before the source code, you allow Docker to cache the dependencies layer, saving time on rebuilds.
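Build arguments can also force a cache refresh on demand. A minimal sketch, assuming a CACHE_BUST argument passed at build time (for example, docker build --build-arg CACHE_BUST=$(date +%s) .); any RUN step that uses the argument is rebuilt whenever its value changes:
ARG CACHE_BUST=1
# Using the ARG value here invalidates the cache for this layer (and all later ones)
# whenever a new value is passed, forcing a fresh npm install
RUN echo "cache bust: ${CACHE_BUST}" && npm install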
VII. Use Official Docker Image Scanning Tools
Docker images can contain security vulnerabilities. Regularly scan your images using tools like Docker Scout (the successor to docker scan) or Amazon ECR image scanning to detect and fix potential issues.
- Integrate security scanning into your CI/CD pipeline to catch vulnerabilities early.
- Run scans locally before pushing, as shown below.
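For example, with Docker Scout (bundled with recent Docker Desktop releases), a local scan might look like this; the image name myapp:latest is a placeholder:
# List known CVEs in a locally built image (image name is a placeholder)
docker scout cves myapp:latest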
VIII. Avoid Hardcoding Secrets
Avoid adding sensitive information (like API keys, passwords, or tokens) directly into your Dockerfile. Instead, pass them securely using environment variables or Docker Secrets.
- Use ARG and ENV for dynamic configurations, but ensure they are passed securely.
- Utilize Docker Secrets or other secret management tools for production deployments.
# Passed at build time with --build-arg API_KEY=...
# Note: both ARG and ENV values remain visible in the image metadata
# (docker history / docker inspect), so avoid this pattern for real secrets
ARG API_KEY
ENV API_KEY=$API_KEY
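For build-time secrets that must never persist in the image, BuildKit secret mounts are a safer option. A minimal sketch, assuming a secret registered under the id api_key (passed with docker build --secret id=api_key,src=./api_key.txt .) and a hypothetical configure-app.sh script that reads it:
# syntax=docker/dockerfile:1
# The secret is mounted only for this RUN step and is not stored in any layer
RUN --mount=type=secret,id=api_key \
    API_KEY=$(cat /run/secrets/api_key) ./configure-app.sh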
IX. Clean Up After Installing Dependencies
After installing packages or dependencies, ensure you clean up the temporary files and cache to keep the image lean.
- Use apt-get clean or equivalent commands for other package managers.
- Remove any temporary files after installation.
RUN apt-get update && \
    apt-get install -y curl && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
X. Use COPY Instead of ADD
While ADD can be used to copy files and fetch remote URLs, it's safer and more explicit to use COPY for local file transfers. Use ADD only when you need to extract tar archives or download remote files.
- Use COPY for local files to avoid unintended behavior.
- Use ADD only for advanced use cases like extracting archives or fetching remote files, as illustrated below.
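A quick illustration of the difference (the archive name is a placeholder):
# COPY transfers the file as-is; nothing is extracted or fetched
COPY app.tar.gz /tmp/app.tar.gz
# ADD auto-extracts local tar archives into the destination directory
ADD app.tar.gz /opt/app/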
Here’s a complete example Dockerfile, along with its companion .dockerignore, that incorporates these best practices:
# Stage 1: Build Stage - Using multistage builds for smaller images
FROM node:20-alpine AS builder
# Set working directory
WORKDIR /app
# Install all dependencies (cacheable layer); dev dependencies are needed for the build step
COPY package.json package-lock.json ./
RUN npm ci && \
    npm cache clean --force
# Copy source files
COPY . .
# Build the application
RUN npm run build
# Drop dev dependencies so only runtime packages are copied into the final stage
RUN npm prune --production && \
    npm cache clean --force && \
    rm -rf /tmp/*
# Stage 2: Production Stage - Creating a lightweight final image
FROM node:20-alpine
# Set working directory
WORKDIR /app
# Copy necessary files from build stage
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
# Add a non-root user for security
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
# Expose the port the app runs on
EXPOSE 3000
# Start the application
CMD ["node", "dist/index.js"]
# .dockerignore
node_modules
.git
.env
README.md
Dockerfile