Docker is a platform for building, shipping, and running applications inside containers — lightweight, isolated environments that package your code together with everything it needs to run (libraries, config, runtime). This means your app behaves the same whether it’s running on your laptop, a colleague’s machine, or a production server.
The core problem Docker solves: “It works on my machine.” By bundling the application and its environment together, Docker eliminates dependency conflicts and environment drift.
How Docker Works — The Big Picture
Docker has three main components:
You
│
▼
[docker CLI] ← you type commands here
│
▼
[dockerd daemon] ← does the actual work (runs as a background service)
│
▼
[Registry] ← remote store for images (e.g. Docker Hub)
- CLI (docker) — the interface you interact with
- Daemon (dockerd) — the engine that manages containers, images, networking, and storage
- Registry — a repository of pre-built images you can pull down and run
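A minimal end-to-end example of this pipeline, assuming Docker is already installed and the daemon is running:

```
# The CLI sends this request to dockerd, which pulls the hello-world
# image from Docker Hub (the default registry) if it isn't cached
# locally, then starts a container from it.
docker run hello-world
```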
Core Concepts
Docker separates concerns into two phases: build time (images) and run time (containers).
Images — Build Time
An image is an immutable, read-only package containing your application’s filesystem and configuration. Think of it as a snapshot or template.
| Concept | What it is |
|---|---|
| Image | The static package — code, runtime, dependencies |
| Dockerfile | The recipe that defines how to build the image |
| Layer | Each Dockerfile instruction creates a cached filesystem diff; layers stack to form the image |
| Base image | The starting point (FROM ubuntu:22.04) — usually an OS or runtime |
| Tag | A human-readable label (nginx:1.25) — mutable, can be reassigned |
| Digest | The true identity — a content hash (sha256:abc123) that never changes |
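The concepts in the table map directly onto a Dockerfile. A minimal sketch for a hypothetical Node.js app (the file names, app structure, and version tags here are illustrative):

```
# Base image: the starting point (an OS or runtime)
FROM node:20-slim

# Each instruction below produces one cached layer
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .

# Default command when a container starts from this image
CMD ["node", "server.js"]
```

Running docker build -t app:1.0 . against this file produces an image tagged app:1.0; the digest is derived from the image's content hash, so it changes only when the content does.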
Key commands:
docker build -t app . # build an image from a Dockerfile
docker pull <image> # download an image from a registry
docker push <image> # upload an image to a registry
docker images # list locally available images
Containers — Run Time
A container is a live, running instance of an image. The image is the recipe; the container is the meal. You can run many containers from the same image simultaneously.
| Concept | What it is |
|---|---|
| Container | An isolated, running process with its own filesystem, network, and PID namespace |
| PID 1 | The main process inside the container — if it exits, the container stops |
| Lifecycle | create → start → stop → rm |
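For example, the lifecycle above, run twice from the same image (container names here are illustrative):

```
docker run -d --name web1 nginx   # create + start, detached
docker run -d --name web2 nginx   # a second, independent container from the same image
docker stop web1                  # stop: the process exits, but the container still exists
docker rm web1                    # rm: now it is gone (web2 keeps running, unaffected)
```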
Key commands:
docker run <image> # create and start a container
docker ps # list running containers
docker stop <container> # stop a running container
docker exec <container> <command> # run a command inside a running container
docker logs <container> # view container output
docker rm <container> # delete a stopped container
Storage
Containers are ephemeral by default — any data written inside is lost when the container is removed. Docker provides three ways to persist or share data:
| Type | Description | Use when |
|---|---|---|
| Volume | Docker-managed storage (/var/lib/docker/volumes) | Default choice — portable and easy to back up |
| Bind mount | Maps a host path directly into the container | Local development — see live file changes |
| tmpfs | RAM-only, disappears on stop | Sensitive or temporary data you don’t want on disk |
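The three storage types correspond to different flags on docker run. A sketch, where the image name app and the paths are illustrative:

```
docker run -v mydata:/data app      # named volume: Docker-managed, survives container removal
docker run -v "$(pwd)":/app app     # bind mount: host directory mapped into the container
docker run --tmpfs /scratch app     # tmpfs: RAM-only, gone when the container stops
```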
docker volume create <name> # create a named volume
Networking
Docker provides several network drivers depending on how containers need to communicate:
| Driver | Description |
|---|---|
| Bridge (default) | Containers on the same bridge can reach each other by name; isolated from host |
| Host | Removes network isolation — container shares the host’s network stack directly |
| Overlay | Spans multiple Docker hosts; used in Swarm/cluster deployments |
Port mapping exposes a container’s port on the host:
docker run -p 8080:80 nginx # HOST:CONTAINER — access via localhost:8080
DNS: Docker automatically resolves container names to IPs within the same network.
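A sketch of name-based resolution on a user-defined bridge network (the network and container names are illustrative):

```
docker network create mynet
docker run -d --name web --network mynet nginx
docker run --rm --network mynet alpine ping -c 1 web   # "web" resolves to the nginx container's IP
```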
docker network create <name> # create a custom network
Inspection & Operations
Useful tools for monitoring and housekeeping:
| Command | What it does |
|---|---|
docker inspect <target> | Dumps full JSON config for a container, image, or network |
docker stats | Live view of CPU, memory, and network usage per container |
docker system df | Shows disk usage broken down by images, containers, and volumes |
docker system prune | Cleans up unused objects (stopped containers, dangling images, etc.) |
Installing Docker (Ubuntu)
Docker publishes its own apt repository to ensure you get the latest version rather than Ubuntu’s older packaged version.
Step 1 — Remove any conflicting packages:
sudo apt remove docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc
Ubuntu’s default repos ship older, unofficial Docker packages — this clears them out first.
Step 2 — Add Docker’s apt repository:
# Install prerequisites
sudo apt update
sudo apt install ca-certificates curl
# Add Docker's official GPG key (verifies package authenticity)
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository
sudo tee /etc/apt/sources.list.d/docker.sources <<EOF
Types: deb
URIs: https://download.docker.com/linux/ubuntu
Suites: $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}")
Components: stable
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/docker.asc
EOF
sudo apt update
Step 3 — Install Docker packages:
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
The five packages installed:
| Package | Role |
|---|---|
docker-ce | The Docker engine |
docker-ce-cli | The docker CLI |
containerd.io | The low-level container runtime |
docker-buildx-plugin | Extended build capabilities |
docker-compose-plugin | docker compose support |
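After installation, a common verification step is to run the hello-world image; adding yourself to the docker group is optional and worth noting that it grants root-equivalent access to the host:

```
sudo docker run hello-world        # verifies the daemon is running and can reach the registry
sudo usermod -aG docker "$USER"    # optional: run docker without sudo (log out and back in to apply)
```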