
How to serve a build locally with docker compose up

Mark Hayes

Tue, Mar 24, 2026


You've just run your frontend build — npm run build, vite build, or next build — and now you need to actually serve it. Maybe you want to test production behavior locally before pushing, or you're wiring up a multi-service stack and need your static assets running alongside an API container. Either way, Docker Compose is a clean, repeatable way to do it.

This guide walks through the most practical patterns for how to serve build output locally using Docker Compose, with real configuration examples you can drop into your own project.

Why serve a static build with Docker Compose?

Running npm start or vite preview works fine for quick checks. But there are real reasons to reach for Docker Compose instead:

  • Parity with production — if your production environment runs containers, testing locally in a container surfaces port conflicts, missing env vars, and path issues before they become deployment bugs
  • Multi-service setups — when your app needs to talk to a local API, database, or cache, Compose lets you define and start everything together with a single docker compose up
  • Sharing reproducible environments — a docker-compose.yml commits alongside your code, so any contributor can serve the build the same way you do
  • Testing reverse proxy or CDN behavior — you can put Nginx or Caddy in front of your build to replicate how a cloud edge serves your assets

The simplest setup: Nginx to serve a static build

The go-to image for serving static files in a container is nginx:alpine. It's lightweight, ships sensible defaults for static file serving, and is battle-tested.

Step 1: Build your frontend

npm run build
# Output lands in ./dist or ./build depending on your tool

Step 2: Write a docker-compose.yml

services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./dist:/usr/share/nginx/html:ro

The :ro flag mounts the directory read-only — good practice for serving static assets.

Step 3: Start it

docker compose up

Open http://localhost:8080 and you're serving your build.

Heads up: If your app uses client-side routing (React Router, Vue Router, etc.), Nginx will return a 404 on direct navigation to /about or /settings. You'll need a custom Nginx config to handle this — covered in the next section.

Handling client-side routing

Single-page apps need the server to fall back to index.html for any unmatched path. Here's how to configure that.

Create a file called nginx.conf in your project root:

server {
  listen 80;
  root /usr/share/nginx/html;
  index index.html;

  location / {
    try_files $uri $uri/ /index.html;
  }

  # Optional: cache hashed assets aggressively
  location ~* \.(js|css|png|jpg|svg|woff2)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";
  }
}

Then reference it in your docker-compose.yml:

services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./dist:/usr/share/nginx/html:ro
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro

Now Nginx falls back to index.html for any route your SPA handles, and caches static assets with a one-year TTL — matching what most CDNs do in production.
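The try_files $uri $uri/ /index.html directive checks each candidate in order and serves the first one that exists. Conceptually it behaves like the sketch below (illustrative JavaScript only, not Nginx internals — the real $uri/ case also resolves through the index directive):

```javascript
// Conceptual model of `try_files $uri $uri/ /index.html`:
// serve the first candidate that exists on disk, else fall back
// to index.html so the SPA router can handle the path.
function resolve(requestPath, existingFiles) {
  // Simplification: treat "$uri/" as "directory containing index.html".
  const candidates = [requestPath, requestPath + '/index.html'];
  for (const candidate of candidates) {
    if (existingFiles.has(candidate)) return candidate;
  }
  return '/index.html'; // SPA fallback
}

const files = new Set(['/index.html', '/assets/app.js']);
console.log(resolve('/assets/app.js', files)); // "/assets/app.js"
console.log(resolve('/about', files));         // "/index.html"
```

This is why a hard refresh on /about works: the file doesn't exist, so Nginx serves index.html and the client-side router takes over.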

Multi-stage Dockerfile pattern (build + serve in one image)

If you want the build step itself to happen inside Docker — useful for CI consistency or onboarding contributors who don't have Node installed — use a multi-stage Dockerfile:

# Stage 1: build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: serve
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80

Your docker-compose.yml then builds from this file:

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:80"

Run docker compose up --build to trigger a fresh build inside the container before serving.

This pattern is great for local testing parity, but keep in mind that the first docker compose up --build is slower than a native npm run build — it downloads Node, installs packages, then compiles. Use layer caching wisely by copying package*.json before your source files, so npm ci is cached on subsequent builds when dependencies haven't changed.
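A related win is keeping the build context small, since everything in it is sent to the Docker daemon and can invalidate cache layers. A minimal .dockerignore — a sketch, adjust the entries for your project — keeps node_modules and previous build output out of the context:

```
node_modules
dist
build
.git
*.log
```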

Serving a full stack: frontend + API + database

Where Docker Compose really earns its place is when your frontend build needs to talk to a local backend. Here's a realistic example for a React frontend + Node.js API + PostgreSQL:

services:
  web:
    build:
      context: ./frontend
    ports:
      - "8080:80"
    depends_on:
      - api

  api:
    build:
      context: ./api
    ports:
      - "3001:3001"
    environment:
      DATABASE_URL: postgres://user:password@db:5432/myapp
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  pgdata:

A few things worth noting here:

  • depends_on with condition: service_healthy ensures the API doesn't try to connect to Postgres before it's accepting connections — a common source of confusing startup failures
  • The pgdata named volume persists your database between docker compose down / up cycles (use docker compose down -v to wipe it intentionally)
  • The API container reaches the database at db:5432 because Compose creates a shared network where containers resolve each other by service name
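Because the hostname in DATABASE_URL is the Compose service name, the URL works unchanged inside the API container but won't resolve from your host machine. A small sketch (hypothetical helper, using Node's built-in URL parser) showing how the db hostname ends up in the connection settings:

```javascript
// parse-db-url.js — split a Compose-style DATABASE_URL into connection
// settings. The hostname is the Compose service name ("db"), which only
// resolves on the Compose network, not from the host.
function parseDatabaseUrl(raw) {
  const url = new URL(raw);
  return {
    host: url.hostname,               // "db" — the Compose service name
    port: Number(url.port),           // 5432
    user: decodeURIComponent(url.username),
    password: decodeURIComponent(url.password),
    database: url.pathname.slice(1),  // strip leading "/"
  };
}

const cfg = parseDatabaseUrl('postgres://user:password@db:5432/myapp');
console.log(cfg.host, cfg.port, cfg.database); // db 5432 myapp
```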

Pointing the frontend at the API

If your React app makes requests to /api/..., you need Nginx to proxy those to the API container. Update nginx.conf:

server {
  listen 80;
  root /usr/share/nginx/html;
  index index.html;

  location /api/ {
    proxy_pass http://api:3001/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
  }

  location / {
    try_files $uri $uri/ /index.html;
  }
}

Now your frontend build can make requests to /api/users locally and they'll be proxied to the Node API — exactly how a cloud load balancer or reverse proxy would behave in production.
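One subtlety worth knowing: with the trailing slash on proxy_pass, Nginx replaces the matched location prefix with the proxy_pass path, so a request for /api/users reaches the API as /users. The substitution rule can be sketched like this (illustrative JavaScript, not Nginx internals):

```javascript
// Illustrates Nginx's proxy_pass prefix substitution for
//   location /api/ { proxy_pass http://api:3001/; }
// The matched location prefix is swapped for the proxy_pass path,
// so the upstream never sees "/api".
function rewritePath(requestPath, locationPrefix, proxyPath) {
  if (!requestPath.startsWith(locationPrefix)) return requestPath;
  return proxyPath + requestPath.slice(locationPrefix.length);
}

console.log(rewritePath('/api/users', '/api/', '/')); // "/users"
```

If your API routes are mounted under /api on the backend too, drop the trailing slash (proxy_pass http://api:3001;) so the prefix is passed through unchanged.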

Using environment variables in your build

Static builds often need environment variables baked in at build time (API URLs, feature flags, etc.). Handle this with build args:

services:
  web:
    build:
      context: ./frontend
      args:
        VITE_API_URL: http://localhost:3001
    ports:
      - "8080:80"

In your Dockerfile:

FROM node:20-alpine AS builder
ARG VITE_API_URL
ENV VITE_API_URL=$VITE_API_URL
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

Vite (and Create React App with REACT_APP_ prefix) will embed these at build time. For Next.js, use NEXT_PUBLIC_ prefixed variables the same way.
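However the variable gets baked in, application code typically reads it with a local fallback. A framework-agnostic sketch — the helper name and fallback URL are illustrative; in real Vite code the env object would be import.meta.env:

```javascript
// Resolve the API base URL from a build-time env object.
// In Vite this would be import.meta.env; the env is passed in here
// so the helper stays framework-agnostic and easy to test.
function apiBase(env, fallback = 'http://localhost:3001') {
  return env.VITE_API_URL || fallback;
}

console.log(apiBase({ VITE_API_URL: 'https://api.example.com' }));
// "https://api.example.com"
console.log(apiBase({})); // "http://localhost:3001"
```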

Watch out: environment variables baked into a static build are visible in the browser bundle. Never put secrets, API keys, or anything sensitive in build-time env vars.

Common issues and how to fix them

Container starts but the page is blank

Check that your dist or build directory actually exists and isn't empty. Run ls dist/ before docker compose up. Also check that your volume mount path matches your build tool's output directory — Vite defaults to dist/, Create React App to build/.

Changes to source files aren't reflected

If you're volume-mounting the build output (not rebuilding the image), you need to re-run your build tool on the host first. For development workflows where you want hot-reload, you're better off running the dev server directly (npm run dev) and reserving Docker Compose for testing the production build.

Port already in use

Error response from daemon: Ports are not available: exposing port TCP 0.0.0.0:8080 -> 0.0.0.0:0: listen tcp4 0.0.0.0:8080: bind: address already in use

Change the host port mapping: "8081:80". Or find what's using the port with lsof -i :8080 on macOS/Linux.

Nginx 403 on Linux

File permission issues can cause Nginx to return 403 even when the dist folder looks fine. The Nginx worker needs read access to the mounted files, so check permissions on the host directory first (for example, chmod -R a+r dist/, plus execute bits on the directories themselves). Note that the user directive only belongs in the main nginx.conf, not in a server block mounted into conf.d. As a last resort, running the container as root with user: root in docker-compose.yml bypasses the issue — fine for local debugging, not recommended for production.

What about serving a build in production?

The Docker Compose patterns above are solid for local development and testing. When you're ready to deploy, you have a few options:

  • Self-managed VPS — copy your Compose file to a server, run docker compose up -d. Works, but you're now managing OS updates, SSL certs, and uptime yourself.
  • Managed container platforms — services like NoVPS let you push a Dockerized app and get a running deployment without provisioning servers. They handle the underlying infrastructure, so the same Dockerfile you tested locally goes straight to production without a DevOps detour. Worth considering if you're on a tight timeline and don't want to become an Nginx-on-EC2 expert.
  • Static hosting — if your app is a fully static build with no SSR, platforms like Cloudflare Pages, Netlify, or Vercel will serve it faster (global CDN edge) with less configuration than a containerized Nginx setup.

The right choice depends on how dynamic your app is, what your backend looks like, and how much infrastructure you want to own.

Quick reference

  • Serve existing ./dist folder: docker compose up (volume-mount approach)
  • Rebuild and serve in one step: docker compose up --build
  • Run in background: docker compose up -d
  • Tail logs: docker compose logs -f web
  • Stop and remove containers: docker compose down
  • Stop and wipe volumes: docker compose down -v

Serving a build with Docker Compose is one of those things that feels over-engineered until the first time it catches a production bug locally — a missing env var, a misconfigured proxy, an asset path that only breaks when served from a container. The setup is a one-time cost, and the reproducibility pays for itself.
