You've just run your frontend build — npm run build, vite build, or next build — and now you need to actually serve it. Maybe you want to test production behavior locally before pushing, or you're wiring up a multi-service stack and need your static assets running alongside an API container. Either way, Docker Compose is a clean, repeatable way to do it.
This guide walks through the most practical patterns for how to serve build output locally using Docker Compose, with real configuration examples you can drop into your own project.
## Why serve a static build with Docker Compose?
Running npm start or vite preview works fine for quick checks. But there are real reasons to reach for Docker Compose instead:
- Parity with production — if your production environment runs containers, testing locally in a container surfaces port conflicts, missing env vars, and path issues before they become deployment bugs
- Multi-service setups — when your app needs to talk to a local API, database, or cache, Compose lets you define and start everything together with a single `docker compose up`
- Sharing reproducible environments — a `docker-compose.yml` commits alongside your code, so any contributor can serve the build the same way you do
- Testing reverse proxy or CDN behavior — you can put Nginx or Caddy in front of your build to replicate how a cloud edge serves your assets
## The simplest setup: Nginx to serve a static build
The go-to image for serving static files in a container is nginx:alpine. It's lightweight (~8MB), has sensible defaults for static file serving, and is battle-tested.
### Step 1: Build your frontend
```shell
npm run build
# Output lands in ./dist or ./build depending on your tool
```

### Step 2: Write a docker-compose.yml
```yaml
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./dist:/usr/share/nginx/html:ro
```

The `:ro` flag mounts the directory read-only — good practice for serving static assets.
### Step 3: Start it
```shell
docker compose up
```

Open http://localhost:8080 and you're serving your build.
**Heads up:** If your app uses client-side routing (React Router, Vue Router, etc.), Nginx will return a 404 on direct navigation to `/about` or `/settings`. You'll need a custom Nginx config to handle this — covered in the next section.
## Handling client-side routing
Single-page apps need the server to fall back to index.html for any unmatched path. Here's how to configure that.
Create a file called nginx.conf in your project root:
```nginx
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    # Optional: cache hashed assets aggressively
    location ~* \.(js|css|png|jpg|svg|woff2)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }
}
```

Then reference it in your docker-compose.yml:
```yaml
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./dist:/usr/share/nginx/html:ro
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
```

Now Nginx falls back to index.html for any route your SPA handles, and caches static assets with a one-year TTL — matching what most CDNs do in production.
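One thing this config doesn't do is compression. If you want to mimic CDN behavior even more closely, enabling gzip for text-based assets is a small addition — a sketch to drop inside the `server` block (the values here are reasonable defaults, not tuned recommendations):

```nginx
# Compress text-based responses larger than ~1 KB
gzip on;
gzip_types text/css application/javascript application/json image/svg+xml;
gzip_min_length 1024;
```

Nginx compresses `text/html` by default once `gzip on` is set; `gzip_types` extends that to your CSS, JS, and SVG assets.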
## Multi-stage Dockerfile pattern (build + serve in one image)
If you want the build step itself to happen inside Docker — useful for CI consistency or onboarding contributors who don't have Node installed — use a multi-stage Dockerfile:
```dockerfile
# Stage 1: build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: serve
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
```

Your docker-compose.yml then builds from this file:
```yaml
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:80"
```

Run `docker compose up --build` to trigger a fresh build inside the container before serving.
This pattern is great for local testing parity, but keep in mind that the first `docker compose up --build` is slower than a native `npm run build` — it downloads Node, installs packages, then compiles. Use layer caching wisely by copying `package*.json` before your source files, so `npm ci` is cached on subsequent builds when dependencies haven't changed.
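Related to caching: a `.dockerignore` keeps `COPY . .` from dragging `node_modules` and previous build output into the build context, which speeds up context transfer and avoids overwriting the dependencies `npm ci` just installed in the image. A minimal sketch:

```
node_modules
dist
.git
```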
## Serving a full stack: frontend + API + database
Where Docker Compose really earns its place is when your frontend build needs to talk to a local backend. Here's a realistic example for a React frontend + Node.js API + PostgreSQL:
```yaml
services:
  web:
    build:
      context: ./frontend
    ports:
      - "8080:80"
    depends_on:
      - api

  api:
    build:
      context: ./api
    ports:
      - "3001:3001"
    environment:
      DATABASE_URL: postgres://user:password@db:5432/myapp
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  pgdata:
```

A few things worth noting here:
- `depends_on` with `condition: service_healthy` ensures the API doesn't try to connect to Postgres before it's accepting connections — a common source of confusing startup failures
- The `pgdata` named volume persists your database between `docker compose down`/`up` cycles (use `docker compose down -v` to wipe it intentionally)
- The API container reaches the database at `db:5432` because Compose creates a shared network where containers resolve each other by service name
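If you'd rather not commit credentials directly in the Compose file, Compose automatically loads a `.env` file from the project directory for variable interpolation. A sketch of the `db` service rewritten this way (same values as above, just externalized):

```yaml
# .env (next to docker-compose.yml) would contain, e.g.:
#   POSTGRES_USER=user
#   POSTGRES_PASSWORD=password
#   POSTGRES_DB=myapp
services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
```

The `DATABASE_URL` in the `api` service can interpolate the same variables, so the credentials live in exactly one place (and `.env` goes in `.gitignore`).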
### Pointing the frontend at the API
If your React app makes requests to /api/..., you need Nginx to proxy those to the API container. Update nginx.conf:
```nginx
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    location /api/ {
        proxy_pass http://api:3001/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location / {
        try_files $uri $uri/ /index.html;
    }
}
```

Now your frontend build can make requests to /api/users locally and they'll be proxied to the Node API — exactly how a cloud load balancer or reverse proxy would behave in production.
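One caveat with that `/api/` block: plain `proxy_pass` won't upgrade WebSocket connections. If your API uses them (live updates, subscriptions), the proxy needs HTTP/1.1 and the upgrade headers — a sketch of the adjusted block:

```nginx
location /api/ {
    proxy_pass http://api:3001/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```

Plain HTTP requests pass through this block unchanged, so it's safe to use even if only some endpoints upgrade to WebSockets.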
## Using environment variables in your build
Static builds often need environment variables baked in at build time (API URLs, feature flags, etc.). Handle this with build args:
```yaml
services:
  web:
    build:
      context: ./frontend
      args:
        VITE_API_URL: http://localhost:3001
    ports:
      - "8080:80"
```

In your Dockerfile:
```dockerfile
FROM node:20-alpine AS builder
ARG VITE_API_URL
ENV VITE_API_URL=$VITE_API_URL
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
```

Vite (and Create React App with the `REACT_APP_` prefix) will embed these at build time. For Next.js, use `NEXT_PUBLIC_`-prefixed variables the same way.
**Watch out:** environment variables baked into a static build are visible in the browser bundle. Never put secrets, API keys, or anything sensitive in build-time env vars.
## Common issues and how to fix them
### Container starts but the page is blank
Check that your dist or build directory actually exists and isn't empty. Run ls dist/ before docker compose up. Also check that your volume mount path matches your build tool's output directory — Vite defaults to dist/, Create React App to build/.
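A quick pre-flight check like this catches the empty-directory case before the container ever starts (a sketch — adjust `BUILD_DIR` to whatever your tool emits):

```shell
# Fail fast if the build output is missing before starting the container.
# BUILD_DIR is your tool's output directory: dist for Vite, build for CRA.
BUILD_DIR=dist
if [ -f "$BUILD_DIR/index.html" ]; then
  echo "ok: $BUILD_DIR/index.html found"
else
  echo "missing: run your build tool first (e.g. npm run build)"
fi
```

Wiring this into a `package.json` script before `docker compose up` saves a round of confused log reading.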
### Changes to source files aren't reflected
If you're volume-mounting the build output (not rebuilding the image), you need to re-run your build tool on the host first. For development workflows where you want hot-reload, you're better off running the dev server directly (npm run dev) and reserving Docker Compose for testing the production build.
### Port already in use
```
Error response from daemon: Ports are not available: exposing port TCP 0.0.0.0:8080 -> 0.0.0.0:0: listen tcp4 0.0.0.0:8080: bind: address already in use
```

Change the host port mapping: `"8081:80"`. Or find what's using the port with `lsof -i :8080` on macOS/Linux.
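If the collision keeps recurring (shared dev machines, CI runners), Compose variable interpolation lets you override the host port without editing the file. A sketch, using an arbitrary variable name `WEB_PORT` with a default:

```yaml
services:
  web:
    image: nginx:alpine
    ports:
      - "${WEB_PORT:-8080}:80"
```

Then `WEB_PORT=8081 docker compose up` picks the alternate port, and everyone else keeps the 8080 default.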
### Nginx 403 on Linux
File permission issues can cause Nginx to return 403 even when the dist folder looks fine. Try adding `user nginx;` to your nginx.conf or explicitly setting permissions on the mounted directory. Alternatively, running Nginx as root (not recommended for production) bypasses the issue: override the user in docker-compose.yml with `user: root`.
## What about serving a build in production?
The Docker Compose patterns above are solid for local development and testing. When you're ready to deploy, you have a few options:
- Self-managed VPS — copy your Compose file to a server, run `docker compose up -d`. Works, but you're now managing OS updates, SSL certs, and uptime yourself.
- Managed container platforms — services like NoVPS let you push a Dockerized app and get a running deployment without provisioning servers. They handle the underlying infrastructure, so the same `Dockerfile` you tested locally goes straight to production without a DevOps detour. Worth considering if you're on a tight timeline and don't want to become an Nginx-on-EC2 expert.
- Static hosting — if your app is a fully static build with no SSR, platforms like Cloudflare Pages, Netlify, or Vercel will serve it faster (global CDN edge) with less configuration than a containerized Nginx setup.
The right choice depends on how dynamic your app is, what your backend looks like, and how much infrastructure you want to own.
## Quick reference
| Goal | Command |
|---|---|
| Serve existing `./dist` folder | `docker compose up` (volume-mount approach) |
| Rebuild and serve in one step | `docker compose up --build` |
| Run in background | `docker compose up -d` |
| Tail logs | `docker compose logs -f web` |
| Stop and remove containers | `docker compose down` |
| Stop and wipe volumes | `docker compose down -v` |
Serving a build with Docker Compose is one of those things that feels over-engineered until the first time it catches a production bug locally — a missing env var, a misconfigured proxy, an asset path that only breaks when served from a container. The setup is a one-time cost, and the reproducibility pays for itself.


