At the fundamental OS level, a Docker image is just a directory tree — a collection of files and folders that looks like a Linux root filesystem (/bin, /lib, /etc, /usr, etc.). Nothing magic about it.
What makes it a Docker image is how that directory tree is structured and stored:
Layers = tarballs stacked on top of each other
Each layer in an image is a tar archive (typically gzip-compressed) capturing the filesystem changes made by one build step. When Docker builds or pulls an image, it:
- Unpacks each layer tarball in order
- Applies them as overlays — later layers can add, modify, or delete files from earlier ones (deletions are recorded as special "whiteout" entries)
- Presents the result as a single merged view of the filesystem
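The stacking logic can be sketched in a few lines of Python. This is illustrative, not Docker's actual code: each layer is modeled as a dict of path → content, and a value of `None` plays the role of a whiteout entry marking a deleted file.

```python
# Illustrative sketch: image layers as dicts of path -> content,
# applied bottom-up. None stands in for a "whiteout" (deletion) entry.

def merge_layers(layers):
    """Apply layers in order; later layers add, modify, or delete files."""
    merged = {}
    for layer in layers:
        for path, content in layer.items():
            if content is None:        # whiteout: remove from the merged view
                merged.pop(path, None)
            else:                      # add or overwrite
                merged[path] = content
    return merged

base   = {"/bin/sh": "shell v1", "/etc/motd": "hello"}
update = {"/bin/sh": "shell v2", "/etc/motd": None, "/app/run.py": "print('hi')"}

view = merge_layers([base, update])
print(view)
# {'/bin/sh': 'shell v2', '/app/run.py': "print('hi')"}
```

The second layer overwrote `/bin/sh`, deleted `/etc/motd`, and added a new file — exactly the three operations a real layer can perform.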
The technology doing this merging is OverlayFS (overlay filesystem), a union filesystem built into the Linux kernel — Docker's default storage driver, overlay2, sits on top of it. It presents the stacked layers as one unified directory without actually copying files.
When a container runs
The kernel sees:
- Lower dirs (read-only) → the image layers
- Upper dir (read-write) → a fresh, empty layer just for this container
- Merged dir → what the container process actually sees
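You can build this exact lower/upper/merged arrangement by hand. A minimal sketch, assuming a Linux kernel with unprivileged user namespaces enabled (roughly 5.11+); on older setups, drop the `unshare` wrapper and run the mount as root instead:

```shell
# Sketch: a manual OverlayFS mount, mirroring what Docker sets up.
# lower/ plays the image layer, upper/ the container's writable layer.
mkdir -p lower upper work merged
echo "from the image layer" > lower/base.txt

unshare --map-root-user --mount sh -euc '
  mount -t overlay overlay \
    -o lowerdir=lower,upperdir=upper,workdir=work merged
  cat merged/base.txt                  # lower-dir file shows through
  echo "container write" > merged/new.txt
  ls merged                            # base.txt and new.txt, one unified view
'
ls upper                               # the write landed in upper/ only
```

Note that `lower/` is untouched after the write — the new file exists only in `upper/`, which is precisely how a container writes without modifying the image.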
Writes go to the upper layer only — the image beneath is never modified. This is called copy-on-write (CoW): a file is only copied up to the upper layer the moment it’s written to. That’s why containers start instantly and why ten containers from the same image share the image’s layer files on disk rather than duplicating them.
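The sharing property falls out of the same mechanism. A simplified sketch (real copy-up copies the whole file into the upper layer before modifying it; here a write just replaces the entry):

```python
# Illustrative sketch: copy-on-write as a tiny class. The shared "lower"
# dict stands in for the read-only image; each container gets its own upper.

class CowFS:
    def __init__(self, lower):
        self.lower = lower   # shared image layer, never modified
        self.upper = {}      # per-container writable layer

    def read(self, path):
        # reads check the upper layer first, then fall through to the image
        return self.upper.get(path, self.lower.get(path))

    def write(self, path, data):
        # writes go to the upper layer only (simplified "copy-up")
        self.upper[path] = data

image = {"/etc/hostname": "base"}
c1, c2 = CowFS(image), CowFS(image)

c1.write("/etc/hostname", "container-1")
print(c1.read("/etc/hostname"))   # container-1
print(c2.read("/etc/hostname"))   # base — c2 still sees the shared image
print(image)                      # {'/etc/hostname': 'base'} — unmodified
```

Both containers reference the same `image` dict; only the one that wrote pays any storage cost for the change.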
What the “OS” actually is
When you do FROM ubuntu:22.04, you’re not getting a kernel — you’re just getting Ubuntu’s userspace files (apt, bash, libc, etc.). The container shares the host’s kernel. This is the fundamental difference from a VM: there’s no guest OS, no bootloader, no hardware emulation — just a process running in isolated namespaces, pointed at a different root filesystem.
So in one sentence: a Docker image is a layered, read-only root filesystem, and a container is that filesystem brought to life as an isolated process on the host kernel.