Before Docker, deploying a web application meant wrestling with server configuration — installing the right Node version, setting up environment variables, managing process restarts, and hoping the production server matched your local setup. Docker solves all of this by packaging your application and everything it needs into a portable container that runs identically everywhere. This guide takes you from zero Docker knowledge to a live, HTTPS-secured application on a real server.
What Is Docker?
Docker is a containerization platform. A container is a lightweight, isolated process that includes the application code, runtime, libraries, and configuration. Unlike a virtual machine, it does not include a full OS — it shares the host kernel. This makes containers start in milliseconds and use far less memory than VMs.
The key concepts: a Dockerfile is a recipe for building an image. An image is a read-only snapshot (like a template). A container is a running instance of an image. Docker Compose orchestrates multiple containers (app + database + Nginx) with a single configuration file.
Installation
On Ubuntu (the most common server OS), install Docker Engine and Docker Compose v2 in one step:
# Install using the convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Add your user to the docker group (avoid using sudo every time)
sudo usermod -aG docker $USER
newgrp docker
# Verify
docker --version
docker compose version
On macOS or Windows, install Docker Desktop from the official Docker website — it includes everything you need, including a GUI for managing containers.
Dockerfile Basics
A Dockerfile is a text file with instructions. Each instruction creates a layer in the image. The most important instructions are:
- FROM — base image to build from (e.g., node:22-alpine)
- WORKDIR — set the working directory inside the container
- COPY — copy files from your machine into the image
- RUN — execute a shell command during the build
- ENV — set environment variables
- EXPOSE — declare which port the container listens on (documentation only; it does not open ports)
- CMD — the default command to run when the container starts
# A minimal Node.js Dockerfile
FROM node:22-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
Build the image with docker build -t my-app:latest . and run it with docker run -p 3000:3000 my-app:latest. The -p 3000:3000 flag maps port 3000 on your machine to port 3000 inside the container.
Building Images Efficiently
Docker builds images layer by layer, and layers are cached. If a layer has not changed since the last build, Docker reuses the cached layer. This means the order of instructions matters: put things that change rarely (installing dependencies) before things that change often (your source code).
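To see why ordering matters, compare the cache behavior of these two sketches. In the first, editing any source file invalidates the npm ci layer; in the second (the pattern used throughout this guide), dependencies are reinstalled only when package*.json actually changes:

```dockerfile
# Cache-unfriendly: COPY . . comes first, so changing any source file
# busts the cache for the npm ci layer below it.
FROM node:22-alpine
WORKDIR /app
COPY . .
RUN npm ci --omit=dev

# Cache-friendly: the dependency manifests are copied alone first, so
# the npm ci layer is reused until package*.json changes.
FROM node:22-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
```

On a typical edit-rebuild cycle, the second form turns a minutes-long npm ci into a cache hit.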
Always add a .dockerignore file to exclude node_modules, .env, and build artifacts from being copied into the image:
# .dockerignore
node_modules
.env
.env.local
dist
.git
*.log
coverage
Docker Compose — Multi-Service Setup
Most real applications need more than one container: the app, a database, maybe a Redis cache. Docker Compose defines all these services in a single docker-compose.yml file:
# docker-compose.yml
# (the top-level "version:" key is obsolete in Compose v2 and omitted here)
services:
  api:
    build: .
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=mongodb://mongo:27017/mydb
    depends_on:
      - mongo
    networks:
      - app-network
  mongo:
    image: mongo:7
    restart: unless-stopped
    volumes:
      - mongo-data:/data/db
    networks:
      - app-network

volumes:
  mongo-data:

networks:
  app-network:
Start everything with docker compose up -d (the -d flag runs in the background). Stop with docker compose down. View logs with docker compose logs -f api.
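One refinement worth knowing: depends_on as written above only controls start order, not readiness — the API can come up before MongoDB accepts connections. Compose supports health checks plus a condition so the dependent service actually waits. A sketch (the mongosh ping is a common check for the official mongo image):

```yaml
# Sketch: make the api service wait until MongoDB reports healthy
services:
  mongo:
    image: mongo:7
    healthcheck:
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
  api:
    build: .
    depends_on:
      mongo:
        condition: service_healthy
```

With this in place, docker compose up holds the api container back until the health check passes.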
Dockerizing a Node.js API
For a production Node.js API, use a multi-stage build to keep the final image small. The builder stage compiles TypeScript; the runner stage only includes the compiled JavaScript and production dependencies:
# Multi-stage Dockerfile for Node.js + TypeScript
FROM node:22-alpine AS builder
WORKDIR /app
COPY package*.json tsconfig.json ./
RUN npm ci
COPY src ./src
RUN npm run build
FROM node:22-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force
COPY --from=builder /app/dist ./dist
EXPOSE 3000
USER node
CMD ["node", "dist/server.js"]
Notice USER node — never run Node.js as root inside a container. The node user is built into the official Node.js images for this purpose.
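One subtlety with USER node: instructions before it still run as root, so files copied earlier are owned by root. That is fine for read-only code, but if the app must write somewhere (uploads, a cache directory), set ownership at copy time. A hedged sketch, adapting the COPY line from the Dockerfile above:

```dockerfile
# Give the node user ownership of files it needs to write to (sketch);
# without --chown, everything copied before USER node is root-owned.
COPY --chown=node:node --from=builder /app/dist ./dist
USER node
```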
Dockerizing a Next.js App
Next.js needs a slightly different approach. Enable its standalone output mode in next.config.js to create a self-contained build that does not need the full node_modules at runtime:
// next.config.js
module.exports = {
output: 'standalone',
};
# Dockerfile for Next.js
FROM node:22-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci
FROM node:22-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
FROM node:22-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production PORT=3000
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public
EXPOSE 3000
USER node
CMD ["node", "server.js"]
Environment Variables and Secrets
Never bake secrets into a Docker image. Instead, pass them at runtime. In production, use a .env file that lives only on the server (never in your git repository) and reference it in your Compose file:
# docker-compose.yml (production)
services:
  api:
    env_file: .env.production   # on the server only, never in git
    # OR pass individual vars:
    environment:
      - NODE_ENV=production
      - JWT_SECRET=${JWT_SECRET}   # from the host environment
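On the application side, it also helps to fail fast at startup when a required variable is missing, instead of crashing later with an undefined value deep in a request handler. A minimal sketch (the helper name requireEnv is illustrative, not a library API):

```javascript
// Fail fast at boot if a required environment variable is absent or empty.
// requireEnv is a hypothetical helper, not part of Node.js or any library.
function requireEnv(name) {
  const value = process.env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example usage at startup (variable names are illustrative):
// const jwtSecret = requireEnv('JWT_SECRET');
// const databaseUrl = requireEnv('DATABASE_URL');
```

Calling this for every required variable at the top of server.js means a misconfigured container exits immediately with a clear message, which restart: unless-stopped will then surface in the logs.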
Deploying to DigitalOcean
A $12/month DigitalOcean Droplet (2 vCPU, 2GB RAM) handles most small-to-medium web applications comfortably. Here is the deployment workflow:
- Create a Droplet with Ubuntu 24.04 LTS and enable SSH key authentication.
- SSH in and install Docker: curl -fsSL https://get.docker.com | sh
- Point your domain's A record to the Droplet's IP address.
- Copy your project files to the server using scp or clone from GitHub.
- Create your .env.production file with all secrets on the server.
- Run docker compose up -d to start the app.
- Set up Nginx as a reverse proxy (see next section).
Nginx Reverse Proxy
Nginx sits in front of your containers, handles HTTPS termination, and routes traffic. Add an Nginx service to your Compose file and point it at a site configuration like this:
# nginx/conf.d/app.conf
server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;

    location / {
        proxy_pass http://api:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}
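The Compose service that loads this configuration might look like the following sketch (the mount path matches the conf.d directory above; the network name assumes the app-network from the earlier Compose file):

```yaml
# Sketch: Nginx service in front of the api container
services:
  nginx:
    image: nginx:alpine
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro   # site configs, read-only
    depends_on:
      - api
    networks:
      - app-network
```

Because Nginx and the api container share a network, proxy_pass can reach the app by its service name (http://api:3000) with no published app port needed.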
SSL with Let's Encrypt
Free HTTPS comes from Let's Encrypt. You can run Certbot alongside Nginx and manage certificates yourself, but a simpler approach for beginners is the nginx-proxy + acme-companion pair — together they auto-provision and renew certificates for any container that sets the right environment variables:
# docker-compose.yml with auto-SSL
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    ports: ["80:80", "443:443"]
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
  acme-companion:
    image: nginxproxy/acme-companion
    volumes_from: [nginx-proxy]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - acme:/etc/acme.sh
  api:
    build: .
    environment:
      - VIRTUAL_HOST=yourdomain.com
      - LETSENCRYPT_HOST=yourdomain.com
      - LETSENCRYPT_EMAIL=you@example.com

volumes:
  certs:
  html:
  vhost:
  acme:
Monitoring and Logging
Check running containers with docker ps. View logs with docker compose logs -f --tail=100. For resource usage, docker stats shows CPU, memory, and network in real time. For persistent log aggregation, add a Loki + Promtail + Grafana stack (all available as Docker images) — this gives you a searchable, visualized log dashboard at no extra cost beyond the server resources.
Common Mistakes
The mistakes I see most often from developers new to Docker:
- Running containers as root. Always use a non-root user.
- Copying node_modules into the image from the host. Always run npm ci inside the Dockerfile.
- Exposing database ports to the public internet. Keep databases on an internal Docker network.
- Baking secrets into the image as environment variables. Use runtime env files or Docker Secrets.
- Not setting restart: unless-stopped. Without it, containers stay down after a server reboot.
Avoid these five and most deployments will go smoothly.