What is Compose?

Docker Compose is a tool for defining and running multi-container applications on a single host. Instead of running a series of docker run commands with flags you have to remember, you describe the entire application — all its services, networks, and volumes — in a single YAML file. Compose reads that file and translates it into Docker API calls.

Compose = declarative multi-container orchestration on one host. You describe what should be running. Compose figures out how to get there.

The key word is declarative. Rather than writing steps (“create this network, then run this container, then run that one”), you write the desired end state and let Compose handle the sequencing.
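
Because the file describes end state rather than steps, docker compose up is idempotent: run it again and Compose only changes what no longer matches the file. A short illustration:

# First run: creates the network, volumes, and containers
docker compose up -d

# Run it again with nothing changed: Compose finds reality already
# matches the file and leaves everything running
docker compose up -d

# Edit one service in the YAML and run up again:
# only the affected container is recreated
docker compose up -d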


Core Concepts

| Concept | What it is |
|---|---|
| Project | Everything in one Compose file — the unit of isolation. Two projects can have services with the same name without conflict. |
| Service | A container definition — the image, config, ports, volumes. One service can run as multiple containers (replicas). |
| Container | The running instance of a service. |
| Desired state | What the file declares should be running. docker compose up reconciles reality with this state. |
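
Project isolation is visible on the command line: the -p/--project-name flag, which defaults to the directory name, namespaces containers, networks, and volumes. A quick sketch:

# Two independent stacks from the same Compose file
docker compose -p alpha up -d
docker compose -p beta up -d

# Each project gets its own containers (alpha-web-1, beta-web-1, ...)
# and its own network (alpha_default, beta_default)
docker compose -p alpha ps
docker compose -p beta ps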

File Structure

A docker-compose.yml for a typical Django app with Postgres and Redis:

services:
  web:
    build: .                        # build from local Dockerfile (see image-building.md)
    image: mydjango:latest
    ports:
      - "8000:8000"                 # HOST:CONTAINER
    volumes:
      - ./:/app                     # bind mount source code
      - static-files:/app/staticfiles
    environment:
      - DJANGO_SETTINGS_MODULE=myproject.settings
      - DB_HOST=db
      - POSTGRES_PASSWORD=secret      # must match the db service below
    depends_on:
      - db
      - cache
    restart: unless-stopped
 
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myproject
    volumes:
      - db-data:/var/lib/postgresql/data
 
  cache:
    image: redis:7-alpine
 
networks:
  default:              # Compose creates this automatically
    driver: bridge
 
volumes:
  db-data:
  static-files:
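
For a sense of how much manual work this file replaces, here is roughly the docker equivalent of just the db service, assuming a project name of myproject (a sketch, not an exact reproduction of what Compose does):

docker network create myproject_default
docker volume create myproject_db-data
docker run -d \
  --name myproject-db-1 \
  --network myproject_default \
  --network-alias db \
  -e POSTGRES_PASSWORD=secret \
  -e POSTGRES_DB=myproject \
  -v myproject_db-data:/var/lib/postgresql/data \
  postgres:15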

What each field maps to under the hood

| Compose field | Docker equivalent | Notes |
|---|---|---|
| services | docker run | One entry per container type |
| build | docker build | Runs before docker run if no image exists |
| image | docker pull | Pulled if not present locally |
| ports | -p HOST:CONTAINER | Exposes container ports on the host |
| volumes | -v / --mount | Bind mounts or named volumes |
| environment | -e KEY=VALUE | Environment variables |
| networks | docker network create | Compose creates and wires these up |
| depends_on | (manual ordering) | Controls startup order, not readiness |
| restart | --restart policy | no, always, unless-stopped, on-failure |
| command | Overrides CMD | Replaces the Dockerfile's default command |
| entrypoint | Overrides ENTRYPOINT | Replaces the Dockerfile's entrypoint |

Networking — the Most Useful Part of Compose

Compose networking is where most of the manual docker work disappears.

When you run docker compose up, Compose automatically:

  1. Creates a bridge network named <project>_default
  2. Attaches every service to it
  3. Registers each service name as a DNS hostname on that network

This means your web container can reach db and cache simply by using those names as hostnames — no IP addresses, no manual network creation, no port mapping between containers.

# settings.py — just use the service name as the host
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "HOST": "db",        # resolves to the db container
        "NAME": "myproject",
        "USER": "postgres",
        "PASSWORD": os.environ["POSTGRES_PASSWORD"],
    }
}
 
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://cache:6379",   # resolves to the cache container
    }
}

Under the hood this is the same Linux bridge + DNS resolution described in the networking fundamentals — Compose just automates all of it. Networks are also isolated per project, so two different Compose projects won’t interfere with each other even if they have services with the same names.
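
You can verify this wiring yourself. A few hedged examples; the exact network name depends on your project name:

# List the networks Compose created
docker network ls

# See which containers are attached and their addresses
docker network inspect myproject_default

# Resolve the db hostname from inside the web container
docker compose exec web python -c "import socket; print(socket.gethostbyname('db'))"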


Lifecycle Commands

# Start everything (build if needed, create networks/volumes, start containers)
docker compose up
 
# Start in background
docker compose up -d
 
# Stop and remove containers and networks (volumes preserved by default)
docker compose down
 
# Stop and remove everything including volumes (wipes the database)
docker compose down -v
 
# Restart one or all services
docker compose restart web
 
# Show running containers in this project
docker compose ps
 
# Follow logs (all services or one)
docker compose logs -f web
 
# Run Django management commands inside the running container
docker compose exec web python manage.py migrate
docker compose exec web python manage.py createsuperuser
docker compose exec web python manage.py collectstatic
 
# Rebuild images without cache
docker compose build --no-cache
 
# Pull latest images
docker compose pull

What up actually does, in order

  1. Reads the Compose file and resolves the desired state
  2. Creates any declared networks that don’t exist
  3. Creates any declared volumes that don’t exist
  4. Builds images where build: is specified (if not already built)
  5. Pulls images where image: is specified (if not present locally)
  6. Creates and starts containers in dependency order

down does the reverse: stops containers, removes them, removes networks. Volumes are left intact unless you pass -v — this is intentional so your database survives a down.
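
You can confirm the volume behaviour directly; a small sketch (volume names are prefixed with the project name):

docker compose down
docker volume ls            # myproject_db-data is still listed

docker compose up -d        # database comes back with its data intact

docker compose down -v
docker volume ls            # myproject_db-data is gone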


depends_on — Startup Order vs Readiness

This is a common source of confusion with Django, because Django will crash on startup if it can’t connect to the database.

web:
  depends_on:
    - db

This tells Compose: start db before web. It does not mean: wait until Postgres is ready to accept connections. The database container starting and Postgres being ready to accept queries are two different things — Postgres takes a few seconds to initialise on first run.

For true readiness waiting, use health checks:

services:
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
      interval: 5s
      retries: 5
 
  web:
    depends_on:
      db:
        condition: service_healthy   # wait until pg_isready passes

This is the correct pattern for Django — it guarantees Postgres is accepting connections before manage.py or gunicorn starts.
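
If you can't add a health check (say, the db service is defined in a file you don't control), a common fallback is a wait loop in the web container's entrypoint. A minimal sketch, assuming the image has Python available, which a Django image does:

#!/bin/sh
# entrypoint.sh: poll until the db TCP port accepts connections,
# then exec the container's real command
until python -c "import socket; socket.create_connection(('db', 5432), timeout=1)" 2>/dev/null; do
  echo "waiting for postgres..."
  sleep 1
done
exec "$@"

An open TCP port is a weaker guarantee than pg_isready, so prefer the health-check pattern whenever you control both services.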


Environment Variables — Three Ways to Pass Them

There are three distinct mechanisms, and they are easy to confuse because they look similar. Understanding the difference matters for security.

1. .env file — variable substitution in the Compose file itself

A .env file in the same directory as your docker-compose.yml is read by Compose before the file is parsed. Its values are used to fill in ${VARIABLE} placeholders in the YAML.

# .env
POSTGRES_VERSION=15
DJANGO_PORT=8000
# docker-compose.yml
services:
  db:
    image: postgres:${POSTGRES_VERSION}    # becomes postgres:15
  web:
    ports:
      - "${DJANGO_PORT}:8000"              # becomes 8000:8000

Important: .env substitutes values into the Compose file structure. It does not automatically inject anything into the container’s environment. It is a templating tool for the YAML, not a secrets mechanism.

.env should be in .gitignore. It is a local override file, not something to commit.
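
To see what the YAML looks like after substitution, render it with docker compose config:

# Print the fully resolved file, with ${POSTGRES_VERSION} and friends replaced
docker compose config

# Spot-check a single substitution
docker compose config | grep "image:"       # image: postgres:15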


2. environment: keyword — variables injected into the container at runtime

The environment: key sets environment variables inside the running container. This is what Django reads via os.environ or django-environ.

services:
  web:
    environment:
      # Explicit value — hardcoded in the Compose file
      DJANGO_SETTINGS_MODULE: myproject.settings.production
 
      # Value pulled from the host shell's environment or .env file
      SECRET_KEY: ${SECRET_KEY}
 
      # No value — passes through whatever the host shell has set
      # If the host doesn't have it, the variable is unset in the container
      SENTRY_DSN:

The three forms behave differently:

| Form | Source | What happens if missing |
|---|---|---|
| KEY: value | Hardcoded in file | Always set |
| KEY: ${VAR} | Host shell or .env file | Substituted as an empty string with a warning; use ${VAR:?message} to make it a hard error |
| KEY: (no value) | Host shell only | Unset in the container |

In Django, these are typically read in settings.py:

import os
 
SECRET_KEY = os.environ["SECRET_KEY"]           # raises KeyError if missing — intentional
DEBUG = os.environ.get("DEBUG", "false") == "true"
ALLOWED_HOSTS = os.environ.get("ALLOWED_HOSTS", "localhost").split(",")
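
To check what actually landed in the container's environment (be aware this prints secrets to your terminal), something like:

# Inspect one variable in the running container
docker compose exec web printenv DJANGO_SETTINGS_MODULE

# Dump the whole environment of a one-off container
docker compose run --rm web env | sort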

3. env_file: — load variables from a file into the container

env_file: reads a file of KEY=VALUE pairs and injects them directly into the container’s environment. Unlike .env, this file is not used for YAML substitution — it goes straight into the container.

services:
  web:
    env_file:
      - .env.local        # loaded for all environments
      - .env.production   # can stack multiple files; later files win
# .env.production
DJANGO_SETTINGS_MODULE=myproject.settings.production
SECRET_KEY=your-secret-key-here
DATABASE_URL=postgres://user:password@db:5432/myproject
REDIS_URL=redis://cache:6379
ALLOWED_HOSTS=myapp.com,www.myapp.com

When variable injection actually happens

This is the critical point: environment variables are injected at container start, not at image build time.

docker compose up
      ↓
Compose reads docker-compose.yml (substituting .env values)
      ↓
Compose creates the container with those variables attached (the equivalent of -e KEY=VALUE)
      ↓
Kernel creates the gunicorn process with those variables in its environment
      ↓
Django reads them via os.environ at settings import time

The image itself contains none of these values. The same image can be run with DJANGO_SETTINGS_MODULE=myproject.settings.development locally and myproject.settings.production in prod. This is by design — the image is environment-agnostic; configuration is injected at runtime.
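
The same point demonstrated directly with plain docker run, using the settings modules from the examples above:

# One image, two configurations; nothing inside the image changes
docker run --rm -e DJANGO_SETTINGS_MODULE=myproject.settings.development \
  mydjango:latest python -c "import os; print(os.environ['DJANGO_SETTINGS_MODULE'])"

docker run --rm -e DJANGO_SETTINGS_MODULE=myproject.settings.production \
  mydjango:latest python -c "import os; print(os.environ['DJANGO_SETTINGS_MODULE'])"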

What this means for secrets: Never bake secrets into an image via ENV in a Dockerfile. They will be visible in docker inspect, image history, and any registry you push to. Always inject at runtime via environment: or env_file:.


The security hierarchy

From least to most secure:

Hardcoded in docker-compose.yml        ← worst: committed to git
Committed .env file                    ← bad: in version control
env_file: pointing to gitignored file  ← acceptable for local dev
environment: from host shell           ← better: set by CI/CD or operator
AWS Secrets Manager / Vault / SSM      ← best: never touches disk as plaintext

For local development, a gitignored .env file is the pragmatic choice. For production, secrets should come from a secrets manager and be injected by your CI/CD pipeline — never stored in files on disk.
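
As an illustration of the top of the hierarchy, a deploy step might fetch the secret and export it just before bringing the stack up. A sketch using AWS SSM Parameter Store; the parameter path is hypothetical:

# Fetch the secret into the shell environment (never written to disk)
export SECRET_KEY=$(aws ssm get-parameter \
  --name /myapp/prod/secret-key \
  --with-decryption \
  --query 'Parameter.Value' \
  --output text)

# Compose substitutes ${SECRET_KEY} from the shell at up time
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d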


Dev vs Production Compose Files

A common real-world pattern is to maintain a base Compose file and override it per environment. Compose has native support for this via file merging.

# Dev
docker compose -f docker-compose.yml -f docker-compose.dev.yml up
 
# Production
docker compose -f docker-compose.yml -f docker-compose.prod.yml up

When multiple files are specified, Compose deep-merges them — the override file adds to or replaces keys in the base, service by service.
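
You can preview the merged result before trusting it; docker compose config works here too:

# Print the effective config after merging base + dev override
docker compose -f docker-compose.yml -f docker-compose.dev.yml config

# Or set the file list once per shell (colon-separated on Linux/macOS)
export COMPOSE_FILE=docker-compose.yml:docker-compose.prod.yml
docker compose config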

Base file — docker-compose.yml

Shared config that is true in all environments:

services:
  web:
    image: mydjango:${IMAGE_TAG:-latest}
    environment:
      DJANGO_SETTINGS_MODULE: myproject.settings
      DB_HOST: db
      CACHE_URL: redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
 
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
      interval: 5s
      retries: 5
    volumes:
      - db-data:/var/lib/postgresql/data
 
  cache:
    image: redis:7-alpine
 
volumes:
  db-data:
  static-files:

Dev override — docker-compose.dev.yml

In development you want to build locally, mount source code so changes are reflected without rebuilding, and expose the database port for tools like TablePlus or pgAdmin:

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
      target: development            # build only up to the dev stage
    volumes:
      - .:/app                       # mount source code — changes reflect instantly
      - /app/.venv                   # anonymous volume keeps container's virtualenv intact
    environment:
      DJANGO_SETTINGS_MODULE: myproject.settings.development
      DEBUG: "true"
    command: python manage.py runserver 0.0.0.0:8000   # Django dev server with auto-reload
    ports:
      - "8000:8000"
 
  db:
    ports:
      - "5432:5432"                  # expose db for local GUI tools
    environment:
      POSTGRES_PASSWORD: devpassword
      POSTGRES_DB: myproject_dev
      POSTGRES_USER: postgres

Production override — docker-compose.prod.yml

In production you use a pre-built pinned image, run gunicorn instead of the dev server, collect static files, and inject secrets from the environment:

services:
  web:
    image: mydjango:${IMAGE_TAG}     # pinned tag from CI/CD — no 'build' key
    restart: unless-stopped
    command: >
      gunicorn myproject.wsgi:application
        --bind 0.0.0.0:8000
        --workers 4
        --timeout 120
    environment:
      DJANGO_SETTINGS_MODULE: myproject.settings.production
      SECRET_KEY: ${SECRET_KEY}                        # injected by CI/CD
      DATABASE_URL: ${DATABASE_URL}
      ALLOWED_HOSTS: ${ALLOWED_HOSTS}
    volumes:
      - static-files:/app/staticfiles                  # shared with nginx
    deploy:
      resources:
        limits:
          memory: 512m
          cpus: "0.5"
    ports:
      - "8000:8000"
 
  db:
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}          # never hardcoded
    deploy:
      resources:
        limits:
          memory: 1g
    # No ports: — db is not exposed to the host in production

The Dockerfile

The dev override uses build: { target: development } and the prod override uses a pre-built image: tag — no build: key at all. The Dockerfile itself uses a multi-stage build with separate development and production stages to support both patterns from one file.

See image-building.md for the full Dockerfile, multi-stage build patterns, and layer caching.

Summary: dev vs prod differences

| Concern | Development | Production |
|---|---|---|
| Image source | build: from local Dockerfile | Pre-built image with pinned tag |
| Source code | Bind-mounted (- .:/app) | Inside the pre-built image |
| Server | manage.py runserver (auto-reload) | gunicorn (multi-worker) |
| Ports | All exposed (db on 5432, app on 8000) | Only app port exposed |
| Secrets | Gitignored .env file | Injected by CI/CD / secrets manager |
| Restart policy | None (you restart manually) | unless-stopped |
| Resource limits | None | cgroup limits via deploy.resources |
| Static files | Served by Django dev server | Collected into volume, served by nginx |
| Rebuild trigger | File save (auto-reload) | CI/CD pipeline on git push |

What Compose Does Not Do

Compose is a single-host, single-operator tool. It does not:

  • Auto-scale — you can set replicas: 3, but Compose will never raise or lower that number in response to traffic
  • Self-heal across hosts — if the server dies, nothing restarts your containers elsewhere
  • Schedule across a cluster — all containers run on one machine
  • Perform traffic-based health routing — it won’t stop sending traffic to a failing container

These are the problems that Kubernetes (and similar orchestrators) solve. Compose and Kubernetes are not competing tools — they solve different problems at different scales.
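
What Compose does offer is manual scaling. A sketch; note that a fixed host port mapping like "8000:8000" must first be removed or turned into a range, or the replicas will collide on the port:

# Run three web containers, a human decision rather than a reaction to load
docker compose up -d --scale web=3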


When to Use What

| Situation | Tool |
|---|---|
| Running a single container | docker run |
| Running multiple cooperating containers on one machine | Compose |
| Local Django dev environment with Postgres, Redis, Celery | Compose |
| Distributed system across multiple hosts | Kubernetes / Swarm |
| Production at scale with auto-scaling and self-healing | Kubernetes |

A common pattern is to use Compose for local development (Django + Postgres + Redis + Celery worker) and Kubernetes for production. The same images are used in both — only the orchestration layer differs.


See Also