
Dockerfile basics for deploying your first app in 2026

Mark Hayes

Sun, Apr 5, 2026


You've built something. Now you need to ship it — fast, reliably, and without becoming a DevOps engineer overnight. That's exactly what a Dockerfile is for.

This guide walks you through the core concepts, real syntax, and practical patterns you need to containerize and deploy your first application. No fluff, no unnecessary theory.

What a Dockerfile actually does

A Dockerfile is a plain text file that describes how to build a Docker image — a portable, self-contained snapshot of your application and everything it needs to run: code, runtime, dependencies, config, and environment variables.

When you run docker build, Docker reads your Dockerfile line by line and produces an image. That image runs identically on your laptop, a teammate's machine, a CI/CD pipeline, or a cloud provider's infrastructure.

Key insight: The Dockerfile doesn't just bundle your code — it defines a reproducible environment. "Works on my machine" stops being an excuse.

Your first Dockerfile, line by line

Here's a minimal but production-aware Dockerfile for a Node.js app. We'll dissect every instruction.

# 1. Base image
FROM node:20-alpine

# 2. Set working directory inside the container
WORKDIR /app

# 3. Copy dependency files first (layer caching trick)
COPY package*.json ./

# 4. Install dependencies
RUN npm ci --only=production

# 5. Copy the rest of your application code
COPY . .

# 6. Expose the port your app listens on
EXPOSE 3000

# 7. Default command to start the app
CMD ["node", "server.js"]

Breaking it down

FROM node:20-alpine

Every Dockerfile starts with a base image. node:20-alpine pulls the official Node.js 20 image built on Alpine Linux — a minimal distro that keeps your final image small (around 50MB vs ~300MB for the full Debian-based image). Always pin a specific version, not latest, so your builds stay reproducible.

WORKDIR /app

Sets the working directory for all subsequent instructions. Without this, files scatter to the root filesystem. /app is a convention — it can be anything.

COPY package*.json ./ then RUN npm ci

This ordering is deliberate. Docker builds images in layers, and each instruction creates a new layer. If you copy your full codebase first, then install dependencies, Docker re-installs dependencies every time any file changes — even a one-line README edit.

By copying only the package files first, Docker can reuse the cached dependency layer until package.json actually changes. This makes rebuilds dramatically faster.

npm ci vs npm install

npm ci (clean install) is stricter and faster in automated environments. It respects package-lock.json exactly, fails if it's out of sync with package.json, and never modifies your lockfile. Use it in Dockerfiles.
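
In a Dockerfile that looks like this (note: `--omit=dev` is the current flag name on recent npm versions; older versions used `--only=production`):

```dockerfile
# npm ci fails fast if package-lock.json is missing or out of sync,
# and never rewrites it — exactly what you want in a build
RUN npm ci --omit=dev   # on older npm versions: --only=production
```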

EXPOSE 3000

This is documentation, not a firewall rule. It signals which port the app listens on, but doesn't publish it to the host. Publishing happens at runtime with -p 3000:3000.

CMD ["node", "server.js"]

The default command that runs when a container starts. Use the exec form (JSON array) rather than shell form (CMD node server.js) — it avoids spawning a shell process and handles signals correctly, which matters for graceful shutdown.
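
The two forms side by side:

```dockerfile
# Exec form (use this): node runs as PID 1 and receives SIGTERM directly
CMD ["node", "server.js"]

# Shell form (avoid): the process runs under /bin/sh -c, which doesn't forward signals
# CMD node server.js
```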

Python / Django example

If you're working in Python, the pattern is similar:

FROM python:3.12-slim

WORKDIR /app

# Install dependencies first for layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

EXPOSE 8000
CMD ["gunicorn", "myapp.wsgi:application", "--bind", "0.0.0.0:8000"]

Key differences:

  • python:3.12-slim plays the role of -alpine here — a slimmed-down Debian base, with better compatibility for C libraries than Alpine's musl
  • --no-cache-dir prevents pip from storing a download cache inside the image, saving space
  • Use gunicorn (or uvicorn for FastAPI) instead of the Django dev server in production
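
For a FastAPI app, the same Dockerfile works with only the CMD swapped — a sketch, assuming your app object lives in main.py:

```dockerfile
# uvicorn serves the ASGI app; "main:app" means the app object in main.py
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```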

The .dockerignore file — don't skip this

A .dockerignore file works like .gitignore. It tells Docker which files to exclude from the build context sent to the daemon.

node_modules/
.git/
.env
*.log
__pycache__/
.pytest_cache/
dist/

Without this, Docker sends your entire node_modules directory (often hundreds of MB) to the build context — even though the Dockerfile reinstalls them anyway. This slows every build.

Rule of thumb: If it's in your .gitignore, it probably belongs in .dockerignore too.

Multi-stage builds: keeping images lean

Once you start shipping compiled applications (TypeScript, Go, Java, bundled React frontends), multi-stage builds become essential. They let you use a fat build environment but ship a lean production image.

# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build # Compile TypeScript, bundle assets, etc.

FROM node:20-alpine AS production

WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production

# Only copy the compiled output from the build stage
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/server.js"]

The final image contains only the production dependencies and compiled output — no TypeScript compiler, no dev dependencies, no source maps unless you want them. Images can go from 600MB+ to under 100MB.
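
The same pattern works when the final artifact is pure static files — for example a bundled React frontend served by nginx. A sketch (the dist/ path depends on your bundler's output directory):

```dockerfile
# Stage 1: build the frontend with the full Node toolchain
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: the runtime image needs no Node at all — just a web server and the static output
FROM nginx:1.27-alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
```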

Environment variables and secrets

Never hardcode credentials in a Dockerfile. Images can be inspected, pushed to registries, and shared — anything baked into a layer (including ENV values) can be recovered with docker history or docker inspect.

The right approach:

# Declare variables the container expects
ENV NODE_ENV=production
ENV PORT=3000

# Don't do this:
# ENV DATABASE_URL=postgres://user:password@host/db ❌

Inject secrets at runtime:

# Via docker run
docker run -e DATABASE_URL="postgres://..." myapp

# Via an env file
docker run --env-file .env.production myapp

When deploying to a managed platform, environment variables are configured in the platform's dashboard — never in the image itself.

Common Dockerfile mistakes (and how to avoid them)

Running as root
By default, containers run as root. That's a security risk. Add a non-root user:

RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

Put this before CMD, after copying files.
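
The addgroup/adduser flags above are Alpine-specific. On Debian-based images (the -slim variants), the equivalent is:

```dockerfile
# Debian/slim equivalent: useradd/groupadd instead of Alpine's adduser/addgroup
RUN groupadd --system appgroup && useradd --system --gid appgroup appuser
USER appuser
```

If the app needs to write to disk, also make sure those files belong to the new user — for example COPY --chown=appuser:appgroup . . instead of a plain COPY.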

Not handling signals properly
If your CMD uses shell form (CMD node server.js), the Node/Python process runs as a child of /bin/sh, which doesn't forward signals like SIGTERM. Your container won't shut down gracefully. Use exec form: CMD ["node", "server.js"].

Bloated base images
Don't use ubuntu or node:20 (full Debian) when node:20-alpine or node:20-slim will do. Bigger images mean slower pulls, a larger attack surface, and higher storage costs.

Installing unnecessary tools
Don't add curl, vim, or build tools in production images unless your app genuinely needs them at runtime. If you need them for the build step, use a multi-stage build and leave them in the builder stage.

Ignoring layer order
Instructions that change frequently (like copying source code) should come after instructions that change rarely (like installing dependencies). Docker caches layers from top to bottom and invalidates everything below a changed layer.

Building and running locally

# Build an image and tag it
docker build -t myapp:latest .

# Run it locally
docker run -p 3000:3000 --env-file .env myapp:latest

# Check what's inside (useful for debugging)
docker run --rm -it myapp:latest sh

# View image size and layers
docker images myapp

docker history myapp:latest

From Dockerfile to deployment

Once your Dockerfile works locally, the deployment path is:

  1. Push to a container registry — Docker Hub, GitHub Container Registry, or a platform's built-in registry
  2. Deploy to your infrastructure — a VPS, Kubernetes cluster, or a managed platform

If you're an early-stage founder who wants to skip the infrastructure management entirely, platforms like NoVPS are built specifically for this. You push your Dockerized app, connect a managed database, and get a running deployment — no Nginx config, no SSH, no load balancer setup. That's hours saved at the exact point in your runway where hours matter most.

The broader ecosystem includes AWS ECS, Google Cloud Run, Railway, Render, and Fly.io — each with different trade-offs on pricing, control, and complexity.

A production-ready Dockerfile checklist

Before you deploy, run through this:

  • Pinned base image version — not latest
  • Non-root user — don't run as root in production
  • .dockerignore configured — no secrets, no node_modules, no .git
  • Dependencies installed before code copied — layer caching optimized
  • npm ci / pip install without dev dependencies
  • Multi-stage build if you have a compile step
  • No secrets in the Dockerfile — injected at runtime
  • CMD in exec form — proper signal handling
  • Image tested locally with docker run before pushing

What's next

A solid Dockerfile is the foundation, but it's only the first step. Once you're comfortable here, the natural progression is:

  • Docker Compose — define multi-container apps (app + database + Redis) in a single docker-compose.yml
  • Health checks — the HEALTHCHECK instruction lets orchestrators know when your container is actually ready
  • CI/CD pipelines — automatically build and push images on every git push using GitHub Actions or GitLab CI
  • Container orchestration — Kubernetes or managed equivalents if you outgrow single-host deployments
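
As a taste of the first item, a minimal docker-compose.yml for the Node app plus a Postgres database might look like this (service names, the Postgres version, and the password are illustrative placeholders):

```yaml
services:
  app:
    build: .                 # build from the Dockerfile in this directory
    ports:
      - "3000:3000"
    env_file: .env           # inject runtime config, never bake it in
    depends_on:
      - db
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: change-me
```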

But for getting your first app into production? The Dockerfile above is enough. Ship it.

© 2026 NoVPS Cloud LTD
