I ran Nginx Proxy Manager for years on my home server. It works fine — GUI for SSL certs, easy reverse proxy rules, click and done. But when you're managing configs with code and deploying with Docker Compose, a GUI becomes the odd one out.
Caddy does the same job with a config file. Automatic HTTPS, simple syntax, and it plays well with Docker networks — containers talk to each other by name, no published ports needed.
The migration
The switch was straightforward. NPM had six subdomains proxied to various Docker containers. Each became a short Caddy config block:
```caddyfile
app.example.com {
    reverse_proxy app-container:3000
}
```
The only tricky part was building a custom Caddy image. I needed the replace-response plugin for rewriting URLs in proxied responses. A two-stage Dockerfile handles it:
```dockerfile
FROM caddy:2-builder AS builder
RUN xcaddy build --with github.com/caddyserver/replace-response

FROM caddy:2-alpine
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```
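In Compose, the custom image can be built in place instead of pulled; the service name, build directory, and volume paths below are illustrative, not from the original setup:

```yaml
services:
  caddy:
    build: ./caddy        # directory containing the two-stage Dockerfile above
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data  # persists ACME certificates across restarts

volumes:
  caddy_data:
```

Persisting /data matters: without it, Caddy re-requests certificates on every container recreation and can hit Let's Encrypt rate limits.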
Docker network setup
The key insight: Caddy joins every backend's Docker network. Instead of publishing ports to the host and proxying to localhost:PORT, Caddy sits on the same network and proxies to container-name:PORT.
This means backend containers don't expose any ports to the host. Only Caddy publishes 80 and 443. Cleaner, more secure.
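A minimal Compose sketch of that topology, with hypothetical service and network names: the backend has no ports section at all, and Caddy reaches it over the shared network by container name.

```yaml
services:
  caddy:
    image: caddy:2-alpine
    ports:               # the only host-published ports in the stack
      - "80:80"
      - "443:443"
    networks:
      - app_net          # Caddy joins each backend's network

  app:
    image: my-app:latest # hypothetical backend image
    networks:
      - app_net
    # no "ports:" -- only reachable inside app_net, e.g. as app:3000

networks:
  app_net:
```

Docker's embedded DNS resolves the service name, so the Caddyfile can say reverse_proxy app:3000 with no host port in between.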
Local development with mkcert
For local development on .local domains, Caddy uses certificates generated by mkcert. One script generates a wildcard cert covering all the local domains, and the Caddyfile references it:
```caddyfile
app.example.local {
    tls /etc/caddy/certs/local.pem /etc/caddy/certs/local-key.pem
    root * /srv/app
    file_server
}
```
The mkcert CA gets installed in the system trust store, so Chrome trusts the local certs without warnings.
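The cert-generation script boils down to two mkcert invocations; the domain and output paths here are assumptions, so match them to whatever the tls directive points at:

```shell
# One-time: create a local CA and install it into the system trust store
mkcert -install

# Wildcard cert covering the apex and every *.example.local subdomain
mkcert -cert-file certs/local.pem -key-file certs/local-key.pem \
  "example.local" "*.example.local"
```

The certs/ directory then gets mounted into the Caddy container at /etc/caddy/certs.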
Result
Fewer containers (NPM's web UI, database, etc. are gone), one config file instead of a GUI, and the same functionality. The config lives in git, deploys with docker compose up -d, and new services just need a Caddyfile dropped into the sites/ directory.
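The sites/ directory pattern is wired up with Caddy's import directive in the main Caddyfile; the path below is an assumption, so adjust it to your container layout:

```caddyfile
# Main Caddyfile: pull in one file per service from sites/
import /etc/caddy/sites/*
```

Each file in sites/ holds one site block like the examples above, so adding a service is a new file plus a caddy reload (or a container restart), with no edits to the main config.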