Docker in Production: A Guide for Web Developers

Containerising your app is step one. Running containers reliably in production is a different challenge entirely. This guide covers the essentials for non-DevOps developers.

Docker has become the lingua franca of software deployment — and for good reason. Containers solve the "works on my machine" problem definitively and make deployments consistent and reproducible. But there's a canyon between "Docker works in development" and "Docker works reliably in production." This guide closes that gap.

Writing a Production-Ready Dockerfile

```dockerfile
# Dockerfile: multi-stage build keeps the production image lean

# deps: production dependencies only, for the final image
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

# builder: full install (the build step needs dev dependencies), then compile
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage: only what's needed at runtime
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
# Never run as root
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
COPY --from=builder --chown=appuser:appgroup /app/dist ./dist
COPY --from=deps --chown=appuser:appgroup /app/node_modules ./node_modules
USER appuser
EXPOSE 3000
CMD ["node", "dist/server.js"]
```
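The multi-stage build only stays fast and lean if the build context itself is small: `COPY . .` sends everything not excluded by `.dockerignore` to the daemon. A minimal sketch (the entries are illustrative; adjust them to your repository):

```
# .dockerignore (sketch)
node_modules
dist
.git
.env*
*.md
Dockerfile
docker-compose.yml
```

Excluding `.env*` also keeps local secrets from ever landing in an image layer, where they would survive in the build cache even if deleted later.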

The Non-Negotiable Production Checklist

  • Never run as root. Use a non-root user in your Dockerfile (USER appuser). Root containers are a critical security risk.
  • Use multi-stage builds. Your production image should contain only your compiled app and runtime dependencies, not dev tools, compilers, or intermediate build artifacts. Aim for under 200 MB.
  • Pin exact base image tags. Never FROM node:latest; pin to a specific tag like node:20.11.0-alpine3.19 to avoid supply chain surprises.
  • Set resource limits. Always set CPU and memory limits in docker-compose or Kubernetes. An out-of-memory container should crash, not starve the other services on the host.
  • Handle SIGTERM gracefully. Your app must catch SIGTERM and finish in-flight requests before exiting; this prevents dropped connections during deployments.

Docker Compose for Local Development

```yaml
# docker-compose.yml
# (the top-level "version" key is obsolete in the Compose Specification
# and can be omitted)
services:
  app:
    build: .
    ports: ['3000:3000']
    environment:
      DATABASE_URL: postgres://user:pass@db:5432/myapp
      REDIS_URL: redis://cache:6379
    depends_on:
      db: { condition: service_healthy }
      cache: { condition: service_started }
    volumes:
      - ./src:/app/src  # hot reload in dev

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: myapp
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U user']
      interval: 5s
      retries: 5
    volumes:
      - pgdata:/var/lib/postgresql/data

  cache:
    image: redis:7-alpine

volumes:
  pgdata:
```

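The checklist's resource-limit rule is absent from this dev-oriented file on purpose, but in production it matters. A sketch of what that could look like for the app service, using the Compose `deploy.resources` syntax (the values are illustrative; plain `docker run` takes the equivalent `--cpus` and `--memory` flags):

```yaml
# production override (sketch): cap the app's CPU and memory
services:
  app:
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 512M
        reservations:
          memory: 256M
```

With a hard memory limit, a leaking process gets OOM-killed and restarted instead of slowly starving the database and cache sharing the host.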
The Next Step: Kubernetes

Once you're comfortable with Docker in production, the natural progression is Kubernetes for container orchestration. K8s gives you auto-scaling, self-healing, rolling deployments, and service discovery. For most teams, managed K8s (EKS, GKE, AKS) is the right starting point — don't self-manage the control plane.
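
To make that progression concrete, here is a stripped-down Deployment manifest carrying the same concerns from the checklist: a pinned image tag, resource limits, and time to drain on SIGTERM. The image name, probe path, and values are illustrative placeholders, not a drop-in config:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels: { app: myapp }
  template:
    metadata:
      labels: { app: myapp }
    spec:
      terminationGracePeriodSeconds: 30   # window for SIGTERM draining
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.4.2   # pinned tag, never :latest
          ports:
            - containerPort: 3000
          resources:
            requests: { cpu: 250m, memory: 256Mi }
            limits: { cpu: '1', memory: 512Mi }
          readinessProbe:
            httpGet: { path: /healthz, port: 3000 }
            initialDelaySeconds: 5
```

On a rolling deployment, Kubernetes sends SIGTERM, stops routing traffic to pods failing the readiness probe, and force-kills anything still running after the grace period, which is exactly why the graceful-shutdown handler above matters.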
