I use Docker to share software. Here's how:
## Summary

- I use human-readable `Dockerfile`s
- I use explicit image tags in my `FROM` statements
- I greatly prefer official images over community images
- I prefer to configure the container to run as the `application` user, UID and GID 1000
## Usage
I create Dockerfiles at the `./Dockerfile` path.

I build Dockerfiles like `docker build -t foo .`

I run Docker images like `docker run --rm -it foo`

If I need to mount files into the container, I run `docker run --rm -it -v $PWD/file.txt:/path/file.txt foo` or `docker run --rm -it -v $PWD/dir:/path/dir foo`

I check what containers are running with `docker ps -a`. `-a` makes sure that stopped containers are included in the output.

Frequently, I run `docker container prune` to remove stopped containers that I don't need anymore.

Every so often, I run `docker image prune -a` to remove old images that I don't use anymore.

To remove everything that isn't running right now, I run `docker system prune`.
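Put together, that inspection-and-cleanup routine looks like this:

```sh
docker ps -a             # list containers, including stopped ones
docker container prune   # remove stopped containers
docker image prune -a    # remove images not used by any container
docker system prune      # remove everything that isn't in use right now
```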
## Building in CI
For my public projects, I build my Docker containers using Github Actions and push them to the Github Container Registry. See How I Use: Github Actions for Docker.
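The core of that workflow boils down to something like the following sketch. The image name is a placeholder, and the token handling is an assumption; the real workflow is in the linked post:

```sh
# Sketch: build and push to the Github Container Registry.
# Assumes a token with write:packages scope in $GHCR_TOKEN;
# "albinodrought/foo" is a placeholder image name.
echo "$GHCR_TOKEN" | docker login ghcr.io -u albinodrought --password-stdin
docker build -t ghcr.io/albinodrought/foo:latest .
docker push ghcr.io/albinodrought/foo:latest
```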
## Sample Dockerfiles

### Sample Single-Stage Dockerfile
This is from my AlbinoDrought/daily-shuffle project:
```dockerfile
FROM node:18-alpine3.15
RUN addgroup -S application && adduser -S application -G application
USER application
COPY --chown=application:application . /app
WORKDIR /app
RUN npm ci
```
I specified a specific version of the `node` image. Otherwise, my application may fail to build in the future when Node releases breaking changes.

I specified the `alpine` variant of the `node` image. The `alpine` variant is smaller. If I need to use native C libraries, I usually avoid `alpine` (glibc vs musl issues).
I would build this using a command like `docker build -t foo .`

This container doesn't have a custom entrypoint. I would run it like `docker run --rm -it foo npm run shuffle`.
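As a quick sanity check that the unprivileged `application` user is in effect, something like this should report a non-root UID (the exact IDs depend on what users the base image already contains):

```sh
# Sketch: confirm the container runs as the "application" user, not root.
docker run --rm foo id
# expect output along the lines of: uid=100(application) gid=101(application)
```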
### Sample Multi-Stage Dockerfile: Go, Slimmer Final Stage
This is from my AlbinoDrought/creamy-videos project:
```dockerfile
# Build binary
FROM golang:1.21 as builder

ENV CGO_ENABLED=0 \
  GOOS=linux \
  GOARCH=amd64

WORKDIR $GOPATH/src/github.com/AlbinoDrought/creamy-videos

# install dependencies
COPY go.mod go.sum $GOPATH/src/github.com/AlbinoDrought/creamy-videos/
RUN go mod download \
  && go install github.com/a-h/templ/cmd/templ@v0.2.334

COPY . $GOPATH/src/github.com/AlbinoDrought/creamy-videos

# generate latest assets,
# compress source for later downloading,
# shove compressed source into static dist,
# build full binary
RUN go generate ./... \
  && tar -zcvf /tmp/source.tar.gz . \
  && mv /tmp/source.tar.gz ui2/static/source.tar.gz \
  && go build -a -installsuffix cgo -o /go/bin/creamy-videos

# start from ffmpeg for thumbnail gen
FROM jrottenberg/ffmpeg:4.0-alpine
RUN apk add --no-cache tini

# Copy our static executable
COPY --from=builder /go/bin/creamy-videos /go/bin/creamy-videos

ENTRYPOINT ["/sbin/tini"]
CMD ["/go/bin/creamy-videos", "serve"]
```
I build the Golang code using the official `golang` image. This image contains many things that aren't needed by the application during runtime, so I want to use a multi-stage build here.

Because I'm using a multi-stage build, there's no guarantee that I will have the same libc version, or any libc at all, in the final stage, so I specify `CGO_ENABLED=0`. Obviously, this flag breaks the build if your application requires CGO.
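One way to sanity-check the result, as a generic sketch with a local Go toolchain rather than this exact project (`/tmp/app` is an arbitrary output path):

```sh
# Sketch: check that a CGO_ENABLED=0 build is statically linked.
CGO_ENABLED=0 GOOS=linux go build -o /tmp/app .
ldd /tmp/app   # a static binary prints "not a dynamic executable" or similar
```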
Around `# install dependencies`, I copy the go.mod and go.sum files and download the dependencies. This helps use Docker's build cache: the dependency step only runs again if go.mod or go.sum change.

Around `# start from ffmpeg for thumbnail gen`, I switch to a different alpine-based image that contains ffmpeg. At the time of writing, ffmpeg was not available in the Alpine package repository, and it was easier to use this community image.
### Sample Multi-Stage Dockerfile: Node, Go, Slimmer Final Stage
This is from my AlbinoDrought/np-scanner project:
```dockerfile
# Build SPA
FROM node:16-alpine3.12 AS SPA
COPY ./ui /ui
COPY Makefile /
WORKDIR /
RUN apk add --no-cache make && make ui

# Build binary
FROM golang:1.17.1-alpine AS builder
RUN apk add --no-cache make git
COPY . $GOPATH/src/go.albinodrought.com/neptunes-pride
WORKDIR $GOPATH/src/go.albinodrought.com/neptunes-pride

## Embed SPA
COPY --from=SPA /ui/dist $GOPATH/src/go.albinodrought.com/neptunes-pride/ui/dist

ENV CGO_ENABLED=0 \
  GOOS=linux \
  GOARCH=amd64
RUN make dist/np-scanner && mv dist/np-scanner /np-scanner

# Lightweight Runtime Env
FROM gcr.io/distroless/base-debian10
COPY --from=builder /np-scanner /np-scanner
CMD ["/np-scanner", "serve"]
```
I build the UI using the `node` image.

I build the backend using the `golang` image. I copy the build artifacts from the `node` stage into this stage, since I embed them into the Go binary using `//go:embed`.
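The embedding side looks roughly like this. This is a sketch, not the project's exact code; the package and variable names are illustrative:

```go
// Sketch: embed the SPA build output into the Go binary.
// Assumes the compiled UI was copied to ui/dist before `go build` runs.
package ui

import "embed"

//go:embed dist
var Dist embed.FS
```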
I use `gcr.io/distroless/base-debian10` for the final stage. The Google distroless images are slim like Alpine but still use glibc. I don't have a reason for choosing distroless over Alpine here. However, I do have a reason for choosing distroless/alpine over `FROM scratch` here (workarounds are sketched after this list):
- `FROM scratch` does not include `ca-certificates`. If your application makes outbound TLS calls, your application may fail to validate certificates in a `FROM scratch` image
- `FROM scratch` does not have a `/tmp` dir. If your application writes to a temp dir, it may fail in a `FROM scratch` image
- For Golang applications in particular: if you're receiving multipart requests that contain files or are otherwise large (`r.ParseForm`, `r.ParseMultipartForm`), Golang will write to the temp dir after a certain amount of bytes. This will fail in a `FROM scratch` image.
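If you do want `FROM scratch` anyway, both gaps can be patched by copying the missing pieces out of the build stage. A minimal sketch, assuming a static binary at `/go/bin/app`; the paths and names are illustrative, not from either project above:

```dockerfile
FROM golang:1.21 AS builder
# ... build a static (CGO_ENABLED=0) binary at /go/bin/app ...
RUN mkdir -p /scratch-tmp && chmod 1777 /scratch-tmp

FROM scratch
COPY --from=builder /go/bin/app /app
# CA certificates, so outbound TLS calls can validate certificates
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# a writable /tmp, so temp-dir writes (r.ParseMultipartForm, etc.) don't fail
COPY --from=builder /scratch-tmp /tmp
CMD ["/app"]
```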