What is Compose?
Docker Compose is a tool for defining and running multi-container applications on a single host. Instead of running a series of docker run commands with flags you have to remember, you describe the entire application — all its services, networks, and volumes — in a single YAML file. Compose reads that file and translates it into Docker API calls.
Compose = declarative multi-container orchestration on one host. You describe what should be running. Compose figures out how to get there.
The key word is declarative. Rather than writing steps (“create this network, then run this container, then run that one”), you write the desired end state and let Compose handle the sequencing.
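The desired-state idea can be illustrated with a toy reconciler (hypothetical names, not Compose's actual code): given the set of services the file declares and the set currently running, compute which containers to start and which to remove.

```python
def reconcile(desired: set[str], running: set[str]) -> dict[str, set[str]]:
    """Toy version of what `docker compose up` does: compare the declared
    state with reality and return the actions needed to close the gap."""
    return {
        "start": desired - running,   # declared but not running
        "remove": running - desired,  # running but no longer declared
    }

actions = reconcile(desired={"web", "db", "cache"}, running={"db", "old-worker"})
print(sorted(actions["start"]))   # ['cache', 'web']
print(sorted(actions["remove"]))  # ['old-worker']
```

The point of the sketch: you never tell the tool *how* to get from `running` to `desired`; you only declare `desired`, and the sequencing falls out of the comparison.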
Core Concepts
| Concept | What it is |
|---|---|
| Project | Everything in one Compose file — the unit of isolation. Two projects can have services with the same name without conflict. |
| Service | A container definition — the image, config, ports, volumes. One service can run as multiple containers (replicas). |
| Container | The running instance of a service. |
| Desired state | What the file declares should be running. docker compose up reconciles reality with this state. |
File Structure
A docker-compose.yml for a typical Django app with Postgres and Redis:
```yaml
services:
  web:
    build: .                    # build from local Dockerfile (see image-building.md)
    image: mydjango:latest
    ports:
      - "8000:8000"             # HOST:CONTAINER
    volumes:
      - ./:/app                 # bind mount source code
      - static-files:/app/staticfiles
    environment:
      - DJANGO_SETTINGS_MODULE=myproject.settings
      - DB_HOST=db
    depends_on:
      - db
      - cache
    restart: unless-stopped

  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myproject
    volumes:
      - db-data:/var/lib/postgresql/data

  cache:
    image: redis:7-alpine

networks:
  default:                      # Compose creates this automatically
    driver: bridge

volumes:
  db-data:
  static-files:
```

What each field maps to under the hood
| Compose field | Docker equivalent | Notes |
|---|---|---|
| `services` | `docker run` | One entry per container type |
| `build` | `docker build` | Runs before `docker run` if no image exists |
| `image` | `docker pull` | Pulled if not present locally |
| `ports` | `-p HOST:CONTAINER` | Exposes container ports on the host |
| `volumes` | `-v` / `--mount` | Bind mounts or named volumes |
| `environment` | `-e KEY=VALUE` | Environment variables |
| `networks` | `docker network create` | Compose creates and wires these up |
| `depends_on` | (manual ordering) | Controls startup order, not readiness |
| `restart` | `--restart` policy | `no`, `always`, `unless-stopped`, `on-failure` |
| `command` | Overrides `CMD` | Replaces the Dockerfile's default command |
| `entrypoint` | Overrides `ENTRYPOINT` | Replaces the Dockerfile's entrypoint |
Networking — the Most Useful Part of Compose
Compose networking is where most of the manual docker work disappears.
When you run docker compose up, Compose automatically:
- Creates a bridge network named `<project>_default`
- Attaches every service to it
- Registers each service name as a DNS hostname on that network
This means your web container can reach db and cache simply by using those names as hostnames — no IP addresses, no manual network creation, no port mapping between containers.
```python
# settings.py — just use the service name as the host
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "HOST": "db",  # resolves to the db container
        "NAME": "myproject",
        "USER": "postgres",
        "PASSWORD": os.environ["POSTGRES_PASSWORD"],
    }
}

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://cache:6379",  # resolves to the cache container
    }
}
```

Under the hood this is the same Linux bridge + DNS resolution described in the networking fundamentals — Compose just automates all of it. Networks are also isolated per project, so two different Compose projects won't interfere with each other even if they have services with the same names.
Lifecycle Commands
```bash
# Start everything (build if needed, create networks/volumes, start containers)
docker compose up

# Start in background
docker compose up -d

# Stop and remove containers and networks (volumes preserved by default)
docker compose down

# Stop and remove everything including volumes (wipes the database)
docker compose down -v

# Restart one or all services
docker compose restart web

# Show running containers in this project
docker compose ps

# Follow logs (all services or one)
docker compose logs -f web

# Run Django management commands inside the running container
docker compose exec web python manage.py migrate
docker compose exec web python manage.py createsuperuser
docker compose exec web python manage.py collectstatic

# Rebuild images without cache
docker compose build --no-cache

# Pull latest images
docker compose pull
```

What up actually does, in order
- Reads the Compose file and resolves the desired state
- Creates any declared networks that don't exist
- Creates any declared volumes that don't exist
- Builds images where `build:` is specified (if not already built)
- Pulls images where `image:` is specified (if not present locally)
- Creates and starts containers in dependency order

`down` does the reverse: stops containers, removes them, removes networks. Volumes are left intact unless you pass `-v` — this is intentional so your database survives a `down`.
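The "dependency order" in the last step is a topological sort of the `depends_on` graph. A minimal sketch using Python's standard library, with the dependency graph from the example file:

```python
from graphlib import TopologicalSorter

# depends_on edges from the example file: web depends on db and cache
depends_on = {
    "web": {"db", "cache"},
    "db": set(),
    "cache": set(),
}

# TopologicalSorter yields dependencies before dependents,
# which is exactly the startup order Compose needs.
order = list(TopologicalSorter(depends_on).static_order())
print(order)  # db and cache (in some order), then web last
```

The same sort run in reverse gives a safe shutdown order for `docker compose down`.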
depends_on — Startup Order vs Readiness
This is a common source of confusion with Django, because Django will crash on startup if it can’t connect to the database.
```yaml
web:
  depends_on:
    - db
```

This tells Compose: start `db` before `web`. It does not mean: wait until Postgres is ready to accept connections. The database container starting and Postgres being ready to accept queries are two different things — Postgres takes a few seconds to initialise on first run.
For true readiness waiting, use health checks:
```yaml
services:
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
      interval: 5s
      retries: 5
  web:
    depends_on:
      db:
        condition: service_healthy   # wait until pg_isready passes
```

This is the correct pattern for Django — it guarantees Postgres is accepting connections before manage.py or gunicorn starts.
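A belt-and-braces complement on the application side is to retry the connection before starting. A minimal, framework-agnostic sketch, where the hypothetical `check` callable stands in for a real connection attempt:

```python
import time

def wait_until_ready(check, attempts: int = 5, delay: float = 0.0) -> bool:
    """Call check() until it stops raising, like running pg_isready in a loop."""
    for attempt in range(1, attempts + 1):
        try:
            check()                 # e.g. open and close a DB connection
            return True
        except Exception:
            if attempt == attempts:
                raise               # give up after the last attempt
            time.sleep(delay)       # back off before retrying

# Simulate a database that only accepts connections on the third try
state = {"calls": 0}
def fake_connect():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("database system is starting up")

print(wait_until_ready(fake_connect, attempts=5))  # True
print(state["calls"])                              # 3
```

In real projects this logic often lives in an entrypoint script or a custom `wait_for_db` management command, with `check` opening an actual database connection.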
Environment Variables — Three Ways to Pass Them
There are three distinct mechanisms, and they are easy to confuse because they look similar. Understanding the difference matters for security.
1. .env file — variable substitution in the Compose file itself
A .env file in the same directory as your docker-compose.yml is read by Compose before the file is parsed. Its values are used to fill in ${VARIABLE} placeholders in the YAML.
```bash
# .env
POSTGRES_VERSION=15
DJANGO_PORT=8000
```

```yaml
# docker-compose.yml
services:
  db:
    image: postgres:${POSTGRES_VERSION}   # becomes postgres:15
  web:
    ports:
      - "${DJANGO_PORT}:8000"             # becomes 8000:8000
```

Important: `.env` substitutes values into the Compose file structure. It does not automatically inject anything into the container's environment. It is a templating tool for the YAML, not a secrets mechanism.

`.env` should be in `.gitignore`. It is a local override file, not something to commit.
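Compose's `${VARIABLE}` substitution, including the `${VAR:-default}` fallback form used later in this document, behaves roughly like this sketch (simplified; the real interpolation supports more operators, such as `${VAR:?error}`):

```python
import re

def substitute(template: str, env: dict[str, str]) -> str:
    """Replace ${VAR} and ${VAR:-default} the way Compose fills in its YAML."""
    def repl(match: re.Match) -> str:
        name, default = match.group("name"), match.group("default")
        value = env.get(name)
        if value is None or value == "":
            # ${VAR:-default} falls back when VAR is unset or empty
            return default if default is not None else ""
        return value

    pattern = r"\$\{(?P<name>[A-Za-z_][A-Za-z0-9_]*)(?::-(?P<default>[^}]*))?\}"
    return re.sub(pattern, repl, template)

print(substitute("postgres:${POSTGRES_VERSION}", {"POSTGRES_VERSION": "15"}))  # postgres:15
print(substitute("mydjango:${IMAGE_TAG:-latest}", {}))                         # mydjango:latest
```

Note that the lookup happens against the host shell and `.env`, before the YAML is parsed — the container's own environment is never consulted here.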
2. environment: keyword — variables injected into the container at runtime
The environment: key sets environment variables inside the running container. This is what Django reads via os.environ or django-environ.
```yaml
services:
  web:
    environment:
      # Explicit value — hardcoded in the Compose file
      DJANGO_SETTINGS_MODULE: myproject.settings.production

      # Value pulled from the host shell's environment or .env file
      SECRET_KEY: ${SECRET_KEY}

      # No value — passes through whatever the host shell has set.
      # If the host doesn't have it, the variable is unset in the container.
      SENTRY_DSN:
```

The three forms behave differently:

| Form | Source | What happens if missing |
|---|---|---|
| `KEY: value` | Hardcoded in file | Always set |
| `KEY: ${VAR}` | Host shell or `.env` file | Empty string or error |
| `KEY:` (no value) | Host shell only | Unset in container |
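The table's three behaviours can be made concrete with a sketch of how an `environment:` mapping resolves against the host environment (hypothetical helper, not Compose's code; assume `${VAR}` substitution has already happened for the hardcoded values):

```python
def resolve_environment(spec, host_env):
    """Resolve an environment: mapping into the container's env vars.
    A value of None models the `KEY:` (no value) pass-through form."""
    container_env = {}
    for key, value in spec.items():
        if value is None:
            if key in host_env:
                container_env[key] = host_env[key]  # pass through from host
            # absent on the host: stays unset in the container
        else:
            container_env[key] = value              # hardcoded in the file
    return container_env

host = {"SENTRY_DSN": "https://example@sentry.io/1"}
spec = {
    "DJANGO_SETTINGS_MODULE": "myproject.settings.production",  # hardcoded
    "SENTRY_DSN": None,   # pass-through, set on the host
    "UNSET_VAR": None,    # pass-through, missing on the host
}
env = resolve_environment(spec, host)
print(env["DJANGO_SETTINGS_MODULE"])  # myproject.settings.production
print(env["SENTRY_DSN"])              # https://example@sentry.io/1
print("UNSET_VAR" in env)             # False
```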
In Django, these are typically read in settings.py:
```python
import os

SECRET_KEY = os.environ["SECRET_KEY"]  # raises KeyError if missing — intentional
DEBUG = os.environ.get("DEBUG", "false") == "true"
ALLOWED_HOSTS = os.environ.get("ALLOWED_HOSTS", "localhost").split(",")
```

3. env_file: — load variables from a file into the container
env_file: reads a file of KEY=VALUE pairs and injects them directly into the container’s environment. Unlike .env, this file is not used for YAML substitution — it goes straight into the container.
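The KEY=VALUE format and the "later files win" stacking can be sketched as follows (simplified; the real parser also handles quoting and other edge cases):

```python
def parse_env_file(text: str) -> dict[str, str]:
    """Parse KEY=VALUE lines, skipping blank lines and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def load_env_files(*contents: str) -> dict[str, str]:
    """Stack multiple files; later files override earlier ones."""
    merged = {}
    for text in contents:
        merged.update(parse_env_file(text))
    return merged

local = "DEBUG=true\nSECRET_KEY=dev-key"
production = "# production overrides\nDEBUG=false\nALLOWED_HOSTS=myapp.com"
print(load_env_files(local, production))
# {'DEBUG': 'false', 'SECRET_KEY': 'dev-key', 'ALLOWED_HOSTS': 'myapp.com'}
```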
```yaml
services:
  web:
    env_file:
      - .env.local        # loaded for all environments
      - .env.production   # can stack multiple files; later files win
```

```bash
# .env.production
DJANGO_SETTINGS_MODULE=myproject.settings.production
SECRET_KEY=your-secret-key-here
DATABASE_URL=postgres://user:password@db:5432/myproject
REDIS_URL=redis://cache:6379
ALLOWED_HOSTS=myapp.com,www.myapp.com
```

When variable injection actually happens
This is the critical point: environment variables are injected at container start, not at image build time.
```text
docker compose up
        ↓
Compose reads docker-compose.yml (substituting .env values)
        ↓
Compose calls docker run with -e KEY=VALUE flags
        ↓
Kernel creates the gunicorn process with those variables in its environment
        ↓
Django reads them via os.environ at settings import time
```
The image itself contains none of these values. The same image can be run with DJANGO_SETTINGS_MODULE=myproject.settings.development locally and myproject.settings.production in prod. This is by design — the image is environment-agnostic; configuration is injected at runtime.
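The same mechanism can be observed with any process: the environment is attached at process creation, with `subprocess` standing in for `docker run -e` in this sketch. Identical code, different injected environment, different behaviour:

```python
import os
import subprocess
import sys

# The "image" (the code) is identical; only the injected environment differs,
# just as the same Docker image runs with different -e flags per environment.
code = "import os; print(os.environ.get('DJANGO_SETTINGS_MODULE', 'unset'))"

for settings in ("myproject.settings.development", "myproject.settings.production"):
    result = subprocess.run(
        [sys.executable, "-c", code],
        env={**os.environ, "DJANGO_SETTINGS_MODULE": settings},
        capture_output=True,
        text=True,
    )
    print(result.stdout.strip())  # prints each injected value in turn
```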
What this means for secrets: Never bake secrets into an image via ENV in a Dockerfile. They will be visible in docker inspect, image history, and any registry you push to. Always inject at runtime via environment: or env_file:.
The security hierarchy
From least to most secure:
```text
Hardcoded in docker-compose.yml        ← worst: committed to git
Committed .env file                    ← bad: in version control
env_file: pointing to gitignored file  ← acceptable for local dev
environment: from host shell           ← better: set by CI/CD or operator
AWS Secrets Manager / Vault / SSM      ← best: never touches disk as plaintext
```
For local development, a gitignored .env file is the pragmatic choice. For production, secrets should come from a secrets manager and be injected by your CI/CD pipeline — never stored in files on disk.
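In application code, a pattern that works across these tiers is a lookup chain: try the environment first, then a mounted secrets file, then fail loudly. A minimal sketch (hypothetical helper, not a real library; `/run/secrets` is the conventional mount point for Docker secrets):

```python
import os
from pathlib import Path

def get_secret(name: str, secrets_dir: str = "/run/secrets") -> str:
    """Prefer the environment, fall back to a mounted secrets file, else fail."""
    value = os.environ.get(name)
    if value:
        return value
    path = Path(secrets_dir) / name.lower()
    if path.exists():                 # e.g. a file mounted by a secrets manager
        return path.read_text().strip()
    raise RuntimeError(f"secret {name!r} is not configured")

os.environ["SECRET_KEY"] = "from-environment"
print(get_secret("SECRET_KEY"))  # from-environment
```

Failing loudly at startup is deliberate: a missing secret should stop the container immediately rather than surface later as a confusing runtime error.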
Dev vs Production Compose Files
A common real-world pattern is to maintain a base Compose file and override it per environment. Compose has native support for this via file merging.
```bash
# Dev
docker compose -f docker-compose.yml -f docker-compose.dev.yml up

# Production
docker compose -f docker-compose.yml -f docker-compose.prod.yml up
```

When multiple files are specified, Compose deep-merges them — the override file adds to or replaces keys in the base, service by service.
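The merge rule can be sketched as a recursive dictionary merge: mappings merge key by key, while scalars in the override replace the base. This is a simplification; Compose has special handling for some keys (for example, certain list-valued options like `ports` are concatenated rather than replaced):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Merge override into base: nested dicts merge, everything else replaces."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)  # recurse into mappings
        else:
            merged[key] = value                           # override wins
    return merged

base = {"services": {"web": {"image": "mydjango:latest",
                             "environment": {"DEBUG": "false"}}}}
dev = {"services": {"web": {"environment": {"DEBUG": "true"},
                            "command": "python manage.py runserver"}}}
merged = deep_merge(base, dev)
print(merged["services"]["web"]["environment"]["DEBUG"])  # true
print(merged["services"]["web"]["image"])                 # mydjango:latest
```

Note how `image` survives from the base while `environment.DEBUG` is replaced and `command` is added — exactly the behaviour the dev override below relies on.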
Base file — docker-compose.yml
Shared config that is true in all environments:
```yaml
services:
  web:
    image: mydjango:${IMAGE_TAG:-latest}
    environment:
      DJANGO_SETTINGS_MODULE: myproject.settings
      DB_HOST: db
      CACHE_URL: redis://cache:6379
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:15
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
      interval: 5s
      retries: 5
    volumes:
      - db-data:/var/lib/postgresql/data

  cache:
    image: redis:7-alpine

volumes:
  db-data:
  static-files:
```

Dev override — docker-compose.dev.yml
In development you want to build locally, mount source code so changes are reflected without rebuilding, and expose the database port for tools like TablePlus or pgAdmin:
```yaml
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
      target: development       # build only up to the dev stage
    volumes:
      - .:/app                  # mount source code — changes reflect instantly
      - /app/.venv              # anonymous volume keeps container's virtualenv intact
    environment:
      DJANGO_SETTINGS_MODULE: myproject.settings.development
      DEBUG: "true"
    command: python manage.py runserver 0.0.0.0:8000   # Django dev server with auto-reload
    ports:
      - "8000:8000"

  db:
    ports:
      - "5432:5432"             # expose db for local GUI tools
    environment:
      POSTGRES_PASSWORD: devpassword
      POSTGRES_DB: myproject_dev
      POSTGRES_USER: postgres
```

Production override — docker-compose.prod.yml
In production you use a pre-built pinned image, run gunicorn instead of the dev server, collect static files, and inject secrets from the environment:
```yaml
services:
  web:
    image: mydjango:${IMAGE_TAG}        # pinned tag from CI/CD — no 'build' key
    restart: unless-stopped
    command: >
      gunicorn myproject.wsgi:application
      --bind 0.0.0.0:8000
      --workers 4
      --timeout 120
    environment:
      DJANGO_SETTINGS_MODULE: myproject.settings.production
      SECRET_KEY: ${SECRET_KEY}         # injected by CI/CD
      DATABASE_URL: ${DATABASE_URL}
      ALLOWED_HOSTS: ${ALLOWED_HOSTS}
    volumes:
      - static-files:/app/staticfiles   # shared with nginx
    deploy:
      resources:
        limits:
          memory: 512m
          cpus: "0.5"
    ports:
      - "8000:8000"

  db:
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}   # never hardcoded
    deploy:
      resources:
        limits:
          memory: 1g
    # No ports: — db is not exposed to the host in production
```

The Dockerfile
The dev override uses build: { target: development } and the prod override uses a pre-built image: tag — no build: key at all. The Dockerfile itself uses a multi-stage build with separate development and production stages to support both patterns from one file.
See image-building.md for the full Dockerfile, multi-stage build patterns, and layer caching.
Summary: dev vs prod differences
| Concern | Development | Production |
|---|---|---|
| Image source | build: from local Dockerfile | Pre-built image with pinned tag |
| Source code | Bind-mounted (- .:/app) | Inside the pre-built image |
| Server | manage.py runserver (auto-reload) | gunicorn (multi-worker) |
| Ports | All exposed (db on 5432, app on 8000) | Only app port exposed |
| Secrets | Gitignored .env file | Injected by CI/CD / secrets manager |
| Restart policy | None (you restart manually) | unless-stopped |
| Resource limits | None | cgroup limits via deploy.resources |
| Static files | Served by Django dev server | Collected into volume, served by nginx |
| Rebuild trigger | File save (auto-reload) | CI/CD pipeline on git push |
What Compose Does Not Do
Compose is a single-host, single-operator tool. It does not:
- Auto-scale — you can set `replicas: 3`, but Compose won't raise that number when traffic increases
- Self-heal across hosts — if the server dies, nothing restarts your containers elsewhere
- Schedule across a cluster — all containers run on one machine
- Perform traffic-based health routing — it won't stop sending traffic to a failing container
These are the problems that Kubernetes (and similar orchestrators) solve. Compose and Kubernetes are not competing tools — they solve different problems at different scales.
When to Use What
| Situation | Tool |
|---|---|
| Running a single container | docker run |
| Running multiple cooperating containers on one machine | Compose |
| Local Django dev environment with Postgres, Redis, Celery | Compose |
| Distributed system across multiple hosts | Kubernetes / Swarm |
| Production at scale with auto-scaling and self-healing | Kubernetes |
A common pattern is to use Compose for local development (Django + Postgres + Redis + Celery worker) and Kubernetes for production. The same images are used in both — only the orchestration layer differs.