
Fix docker start container errors: causes and solutions 2026

Kate Baker

Mon, Apr 13, 2026


You run docker start on a container, and nothing happens. Or worse — it exits immediately with a cryptic error you've never seen before. The container worked yesterday. Now it doesn't.

This is one of the most common frustrations in day-to-day Docker usage. The good news is that the causes are predictable. Most docker start failures fall into a handful of categories, and once you know the diagnostic pattern, you can resolve them in minutes.

This guide covers every major reason a container fails to start and gives you concrete steps to fix each one.

How docker start actually works

Before jumping into fixes, it helps to understand what happens when you run docker start:

  1. Docker reads the container's stored configuration (image, command, ports, volumes, environment variables)
  2. It allocates resources (network, filesystem layers, cgroups)
  3. It executes the container's entrypoint/command
  4. If the process exits immediately, the container goes back to a stopped state

The critical point: docker start doesn't rebuild anything. It restarts an existing container with its original configuration. If the image was deleted, the port is now taken, or a volume path disappeared — the start fails.

docker start reuses the exact configuration from when the container was created with docker run. You cannot change ports, volumes, or the image by just restarting the container.
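A quick illustration, using the stock nginx image and a hypothetical container named web:

# The port mapping is fixed at creation time
docker run -d --name web -p 8080:80 nginx
docker stop web

# docker start reuses the original -p 8080:80 mapping; there is no flag to change it
docker start web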

Quick diagnostic workflow

When a container won't start, run these three commands before anything else:

# 1. Check the container's current state and exit code
docker inspect --format='{{.State.Status}} exit:{{.State.ExitCode}}' <container>

# 2. Read the logs from the last run attempt
docker logs --tail 50 <container>

# 3. Review the full container configuration
docker inspect <container> | head -100

The exit code tells you a lot:

Exit code   Meaning
0           Process exited normally (but maybe too quickly)
1           Application error — bad config, missing file, uncaught exception
125         Docker daemon error — couldn't start the container at all
126         Command found but not executable (permission issue)
127         Command not found (entrypoint/cmd doesn't exist)
137         Killed by SIGKILL — usually OOM killer or manual docker kill
139         Segfault (SIGSEGV)
143         Killed by SIGTERM — graceful shutdown
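To see exit codes for every stopped container at a glance, one convenient variant of docker ps (the output will reflect whatever containers you have locally; the Status column embeds the code):

# Status reads like "Exited (137) 5 minutes ago"
docker ps -a --filter status=exited --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'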

With the exit code and logs in hand, find your specific scenario below.

Port conflicts: "address already in use"

Symptoms:

Error response from daemon: driver failed programming external connectivity
on endpoint myapp: Bind for 0.0.0.0:8080 failed: port is already allocated

This happens when another process — or another Docker container — is already using the port your container wants to bind to.

Fix:

# Find what's using the port
lsof -i :8080

# Or check Docker containers specifically
docker ps --format '{{.Names}}\t{{.Ports}}' | grep 8080

Then either stop the conflicting service or recreate your container with a different port:

docker rm myapp
docker run -d --name myapp -p 8081:8080 myimage

You cannot change the port mapping of an existing container. If the port conflict is permanent (say, you installed a local service that now owns port 8080), you must remove and recreate the container.
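If the conflict comes from another container rather than a host process, docker ps can filter by published port directly; a sketch (stop the other container only if it's safe to do so):

# Find containers publishing port 8080
docker ps --filter publish=8080

# Stop them
docker stop $(docker ps -q --filter publish=8080)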

The container starts and immediately stops

This is the most common and most confusing scenario. You run docker start, Docker reports no error, but the container status immediately goes back to Exited.

Why it happens: the main process inside the container exits. Docker containers are not VMs — they run one foreground process, and when that process ends, the container stops.

Common causes:

The command finishes instantly

If the container was created with a command like echo hello or ls, it will always exit immediately because the command completes. Check what command the container is running:

docker inspect --format='{{.Config.Cmd}}' <container>
docker inspect --format='{{.Config.Entrypoint}}' <container>
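If you need the container alive anyway so you can poke around inside it, one common workaround is to recreate it with a command that never exits; a sketch, with myimage standing in for your image (assumes the image has no custom entrypoint that would swallow the command):

# tail -f /dev/null blocks forever, keeping PID 1 alive
docker rm <container>
docker run -d --name <container> myimage tail -f /dev/null

# Now you can get a shell inside it
docker exec -it <container> sh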

The application crashes on startup

The process starts but fails immediately due to a configuration error, missing dependency, or bad environment variable. This is where logs are essential:

docker logs <container>

Look for stack traces, "file not found" errors, connection failures to databases, or missing environment variables.

Fix: address the root cause in the application. If the container needs different environment variables or config, you'll need to recreate it:

docker rm <container>
docker run -d --name <container> -e DATABASE_URL=postgres://... myimage

The process is running in the background

Some applications daemonize themselves — they fork a child process and the parent exits. Since Docker tracks the PID 1 process, the container stops when the parent exits even though the child is doing work.

Common offenders: Nginx (without daemon off;), Apache, some Java apps.

Fix: configure the application to run in the foreground. For Nginx, this means:

CMD ["nginx", "-g", "daemon off;"]

Volume mount errors

Symptoms:

Error response from daemon: failed to create shim task: OCI runtime create failed:
runc create failed: unable to start container process: error during container init:
error mounting "/host/path" to rootfs at "/container/path": stat /host/path: no such file or directory

The container was created with a -v /host/path:/container/path mount, and that host path no longer exists — maybe the drive was unmounted, the directory was deleted, or you're on a different machine.

Fix:

# Check what volumes the container expects
docker inspect --format='{{json .Mounts}}' <container> | python3 -m json.tool

# Recreate the missing directory
mkdir -p /host/path

# Or recreate the container with the correct path
docker rm <container>
docker run -d --name <container> -v /new/path:/container/path myimage
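If the data doesn't have to live at a specific host path, a named volume sidesteps this failure mode entirely, since Docker manages its location; a sketch with a hypothetical volume name appdata:

docker volume create appdata
docker rm <container>
docker run -d --name <container> -v appdata:/container/path myimage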

"No such image" or missing layers

Symptoms:

Error response from daemon: No such image: myapp:v1.2.3

The image the container was built from has been removed. This can happen after a docker image prune, a docker system prune, or if someone manually deleted the image.

Fix:

# Pull or rebuild the image
docker pull myapp:v1.2.3
# or
docker build -t myapp:v1.2.3 .

# Then start the container
docker start <container>

If the image tag no longer exists in the registry, you're stuck — you'll need to find the Dockerfile and rebuild, or use a different tag.
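Before rebuilding, it's worth checking what you still have locally; the reference filter accepts wildcards:

# List any local tags for the image
docker images --filter reference='myapp:*'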

OOM kills: the container is killed by the system

Symptoms: exit code 137. To confirm the OOM killer was involved, check the kernel log:

dmesg | grep -i "oom\|killed process"

The container exceeded its memory limit (set with --memory) or the host ran out of memory and the kernel's OOM killer targeted your container.

Fix:

# Check the container's memory limit
docker inspect --format='{{.HostConfig.Memory}}' <container>

# Check actual memory usage of running containers
docker stats --no-stream

If the limit is too low, recreate with a higher limit:

docker rm <container>
docker run -d --name <container> --memory=512m myimage

If there's no limit and the host is running out of RAM, you need to either optimize the application's memory usage, add swap, or move to a machine with more resources.

Exit code 137 doesn't always mean OOM. It can also mean someone ran docker kill. Check dmesg to confirm whether the OOM killer was involved.
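Docker also records the kernel's verdict directly on the container, which is quicker than digging through dmesg (though on some kernel and cgroup combinations the flag is known to be unreliable, so cross-check):

# true if the kernel OOM-killed the container's process
docker inspect --format='{{.State.OOMKilled}}' <container>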

Permission denied errors

Symptoms:

/docker-entrypoint.sh: Permission denied
exec /app/start.sh: permission denied

The entrypoint script or command doesn't have execute permissions.

Fix option 1: fix the Dockerfile and rebuild:

RUN chmod +x /docker-entrypoint.sh

Fix option 2: override the entrypoint to debug:

docker rm <container>
docker run -it --name <container> --entrypoint /bin/sh myimage
# Now you're inside the container — inspect file permissions
ls -la /docker-entrypoint.sh

Another common permission issue involves mounted volumes. On Linux, if the container runs as a non-root user but the host directory is owned by root, the application inside the container can't write to the volume:

# Fix host directory ownership to match the container user's UID
chown -R 1000:1000 /host/data
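To find out which UID the container actually runs as, inspect the image or ask from inside a throwaway container; if .Config.User is empty, the image runs as root:

# The user baked into the image (empty means root)
docker inspect --format='{{.Config.User}}' myimage

# Or run id directly
docker run --rm --entrypoint id myimage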

Docker daemon or socket issues

Symptoms:

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

This isn't a container problem — Docker itself isn't running.

Fix:

# Check if Docker daemon is running
systemctl status docker

# Start it
sudo systemctl start docker

# If it won't start, check its logs
journalctl -u docker --no-pager --since "10 minutes ago"

On Docker Desktop (macOS/Windows), simply restarting the application usually resolves this.
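A related symptom on Linux is "permission denied" on the socket even though the daemon is running; that usually means your user isn't in the docker group:

ls -l /var/run/docker.sock

# Add yourself to the docker group, then log out and back in
sudo usermod -aG docker $USER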

Filesystem and disk space issues

Symptoms: vague errors about failing to create layers, or containers that start but crash immediately because they can't write logs or temp files.

# Check Docker's disk usage
docker system df

# Check host disk space
df -h

Fix:

# Remove unused containers, images, networks, and build cache
docker system prune

# For more aggressive cleanup, remove unused volumes too
docker system prune --volumes

Be careful with --volumes — it deletes data in anonymous (unnamed) volumes that aren't used by any container, running or stopped.
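If you're unsure whether a volume holds anything you need, archive it before pruning; a sketch using a throwaway alpine container, with myvolume standing in for the real name:

# Write myvolume.tar.gz into the current directory
docker run --rm -v myvolume:/data -v "$(pwd)":/backup alpine \
  tar czf /backup/myvolume.tar.gz -C /data .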

Networking issues that prevent startup

Some containers depend on specific Docker networks that may have been removed:

Error response from daemon: network mynetwork not found

Fix:

# Check what network the container expects
docker inspect --format='{{json .NetworkSettings.Networks}}' <container>

# Recreate the network
docker network create mynetwork

# Then start the container
docker start <container>
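If the network exists but the container was merely detached from it (for example after a manual docker network disconnect), reattaching may be all that's needed:

docker network connect mynetwork <container>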

The nuclear option: recreate the container

Sometimes the fastest path is to stop debugging the existing container and recreate it. This works when:

  • The container's configuration is wrong and can't be changed without recreation
  • The container state is corrupted
  • You've upgraded Docker and the container format is incompatible

Before destroying it, capture its configuration:

# Save the full container config
docker inspect <container> > container-backup.json

# Check what image it was using
docker inspect --format='{{.Config.Image}}' <container>

# Check environment variables
docker inspect --format='{{json .Config.Env}}' <container>

# Check port mappings
docker inspect --format='{{json .HostConfig.PortBindings}}' <container>

# Check volumes
docker inspect --format='{{json .Mounts}}' <container>

Then recreate it with the correct settings. If you're juggling complex docker run commands with dozens of flags, this is a good sign you should be using Docker Compose — it keeps your container configuration in a version-controlled YAML file, so recreation is just docker compose up -d.

When the problem isn't the container — it's the infrastructure

Sometimes you fix one docker start error only to find the next one. The port conflict is resolved, but now there's a disk space issue. That's fixed, and now the database container won't connect to the network. Managing containers in production means managing the entire stack around them — the host OS, networking, storage, monitoring, and restarts.

This is exactly the kind of operational burden that managed platforms eliminate. NoVPS, for instance, runs your dockerized applications on managed infrastructure with built-in container registry, databases, and storage. You push your image, and the platform handles port allocation, networking, restarts, and resource limits. If a container fails, you get logs and metrics without SSH-ing into a server and running dmesg. It doesn't replace knowing how Docker works — but it removes the class of problems where the container is fine and the infrastructure around it isn't.

Prevention: stop debugging the same issues repeatedly

Most docker start failures are preventable with a few habits:

  • Use Docker Compose. It codifies your container configuration so you never forget a flag or port mapping.
  • Set restart policies. Use --restart unless-stopped or restart: unless-stopped in Compose to automatically recover from transient failures.
  • Pin image tags. Don't use :latest in production. A docker pull that updates :latest can break a container that worked fine before.
  • Monitor disk space. Set up a cron job or alert for when Docker's storage exceeds 80%.
  • Use health checks. Add a HEALTHCHECK instruction to your Dockerfile so Docker can distinguish between "running" and "running correctly."

# docker-compose.yml example with good defaults
services:
  app:
    image: myapp:1.4.2
    restart: unless-stopped
    ports:
      - "8080:8080"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
    deploy:
      resources:
        limits:
          memory: 512M
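Bringing the stack up and confirming health is then one command each; note that container names under Compose follow the project-service-index pattern, so adjust the inspect target accordingly:

docker compose up -d
docker compose ps

# Health status as Docker sees it
docker inspect --format='{{.State.Health.Status}}' <container>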

Summary

When docker start fails, the fix almost always follows the same pattern: check the exit code, read the logs, inspect the configuration, and match the symptoms to one of the known causes — port conflicts, missing volumes, OOM kills, permission errors, disk space, or application crashes.

The most important thing to remember is that docker start reuses the original container configuration. If the configuration was wrong from the beginning, or if the environment has changed since the container was created, no amount of restarting will help. Recreate the container with the correct settings and move forward.
