Docker networking is built on a handful of standard Linux kernel features stitched together. Nothing Docker-specific in the kernel — it’s all primitives that existed before Docker.

The core building blocks

1. Virtual Ethernet pairs (veth)

When a container starts, Docker asks the kernel to create a veth pair — two virtual network interfaces connected like opposite ends of a pipe. Whatever goes in one end comes out the other:

container end: eth0  ↔  host end: vethXXXXXX

The container end (eth0) is moved into the container’s net namespace, so the container sees it as its only NIC. The host end stays in the host namespace.
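
You can build the same plumbing by hand with iproute2, which is roughly what Docker does via netlink when it starts a container. The namespace name, interface names, and the 10.200.0.0/24 range below are made up for illustration; Docker's default bridge uses 172.17.0.0/16:

    # create a network namespace standing in for a container
    ip netns add demo

    # create the veth pair; one end will stay on the host, the other goes into "demo"
    ip link add veth-host type veth peer name veth-cont
    ip link set veth-cont netns demo

    # inside the namespace, rename the moved end to eth0, address it, bring it up
    ip netns exec demo ip link set veth-cont name eth0
    ip netns exec demo ip addr add 10.200.0.2/24 dev eth0
    ip netns exec demo ip link set eth0 up
    ip link set veth-host up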

2. Bridge (docker0)

The host ends of all veth pairs are plugged into a Linux bridge (docker0 by default). A bridge is a virtual Layer 2 switch, implemented entirely in the kernel. It forwards Ethernet frames between attached interfaces based on MAC addresses, just like a physical switch.

            docker0 (bridge, 172.17.0.1)
           /            |            \
      vethAAA        vethBBB        vethCCC
         |              |              |
    container1      container2     container3
   (172.17.0.2)    (172.17.0.3)    (172.17.0.4)

Containers on the same bridge can reach each other through it. The bridge itself has an IP (172.17.0.1), which becomes each container’s default gateway.
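
On a host running Docker you can see this wiring directly, and the hand-rolled sketch above can be finished the same way (br-demo and 10.200.0.1 are again made-up names and addresses, not Docker's):

    # list the veth ends attached to docker0
    ip link show master docker0
    bridge link show

    # continuing the manual sketch: create a bridge, attach the host-side veth end
    ip link add name br-demo type bridge
    ip addr add 10.200.0.1/24 dev br-demo
    ip link set br-demo up
    ip link set veth-host master br-demo

    # inside the namespace, point the default route at the bridge's address
    ip netns exec demo ip route add default via 10.200.0.1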

3. iptables / netfilter

The kernel’s netfilter framework (managed via iptables) handles two critical jobs:

  • NAT (masquerade) — when a container sends traffic to the internet, iptables rewrites the source IP from the container’s private IP to the host’s public IP. Replies are translated back. Standard SNAT, no different from a home router.
  • Port mapping — -p 8080:80 is just a DNAT rule: packets arriving at the host on port 8080 get their destination rewritten to 172.17.0.x:80 before hitting the bridge.

Docker writes these iptables rules automatically when containers start.
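
You can inspect what it wrote with iptables -t nat -S. The two rules below are approximations of the shape of those entries, not verbatim output, and assume the default 172.17.0.0/16 bridge subnet; they are only here to show the idea, not to be added by hand on a real Docker host:

    # masquerade everything leaving the bridge subnet that is not headed back to docker0
    iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE

    # a -p 8080:80 mapping to a container at 172.17.0.2 boils down to a DNAT rule like this
    iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80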

4. Network namespaces

Each container gets its own net namespace — its own private view of the network stack: interfaces, routing table, iptables rules, ports. Port 80 inside one container doesn’t conflict with port 80 inside another because they’re in completely separate namespaces.
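
A quick way to see that isolation, using throwaway namespace names and python3's built-in http.server purely as a demo listener:

    # two fresh namespaces, each with its own loopback
    ip netns add ns1 && ip netns exec ns1 ip link set lo up
    ip netns add ns2 && ip netns exec ns2 ip link set lo up

    # both can bind port 80 at the same time; neither sees the other's sockets
    ip netns exec ns1 python3 -m http.server 80 --bind 127.0.0.1 &
    ip netns exec ns2 python3 -m http.server 80 --bind 127.0.0.1 &

    # each namespace has its own interface list and routing table
    ip netns exec ns1 ip addr
    ip netns exec ns1 ip route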


How it fits together — a packet’s journey

A request from container1 to the internet:

container1 eth0
    → vethAAA (veth pair)
        → docker0 (bridge)
            → host routing table
                → iptables MASQUERADE (SNAT: rewrite src IP)
                    → host's physical NIC
                        → internet
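
You can watch the rewrite happen with tcpdump on both sides of it. Here busybox and its ping are just a convenient traffic source, and eth0 stands in for whatever your host's uplink interface actually is:

    # terminal 1: on the bridge, packets still carry the container's private source IP
    tcpdump -ni docker0 icmp

    # terminal 2: on the uplink, the same packets leave with the host's source IP
    tcpdump -ni eth0 icmp

    # terminal 3: generate some traffic from a throwaway container
    docker run --rm busybox ping -c 3 8.8.8.8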

A request arriving at the host on mapped port 8080, bound for container1:

host:8080
    → iptables DNAT (rewrite dst to 172.17.0.2:80)
        → docker0 (bridge)
            → vethAAA (veth pair)
                → container1 eth0:80
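
End to end, the mapped-port path looks like this in practice (nginx is just an example workload; any image listening on port 80 would do):

    # publish container port 80 on host port 8080
    docker run -d --rm --name web -p 8080:80 nginx

    # the request takes the path above and lands on nginx inside the container
    curl http://localhost:8080/

    # cleanup
    docker stop web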

The other network modes at the OS level

Mode                OS mechanism
Bridge (default)    veth pair + Linux bridge + iptables NAT
Host                No net namespace created — container process shares host’s network stack directly
Overlay             veth + bridge + VXLAN tunnel (encapsulates L2 frames in UDP packets across hosts)
None                Net namespace created but nothing attached — loopback only
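
The mode is chosen per container with --network. A quick way to see the difference, using busybox only because it is small and ships an ip applet:

    # host mode: the container lists the host's real interfaces
    docker run --rm --network host busybox ip addr

    # none mode: only loopback exists inside the namespace
    docker run --rm --network none busybox ip addr

    # default bridge mode: a single eth0 with an address on the docker0 subnet
    docker run --rm busybox ip addr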

Host mode is the simplest possible thing: the container just doesn’t get its own net namespace. Its processes bind directly to the host’s interfaces. Zero network isolation, zero overhead.

Overlay adds one more layer: VXLAN, a kernel feature that wraps Ethernet frames in UDP and ships them between hosts, making containers on different machines appear to be on the same L2 network.
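
The VXLAN device itself is an ordinary kernel interface you can create by hand. The sketch below is a bare two-host, point-to-point illustration of the idea (made-up host addresses, an arbitrary VNI of 42, and the br-demo bridge from earlier); it is not what docker network create -d overlay literally does, since a real overlay also needs a control plane such as Swarm to distribute MAC/IP mappings between hosts:

    # on host A (10.0.0.1): a tunnel endpoint toward host B (10.0.0.2)
    ip link add vxlan0 type vxlan id 42 dstport 4789 local 10.0.0.1 remote 10.0.0.2 dev eth0
    ip link set vxlan0 up

    # attach it to the local container bridge, so the bridge's L2 segment
    # now extends across both hosts
    ip link set vxlan0 master br-demo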


One sentence: a Docker network is a Linux bridge with veth pairs connecting container net namespaces to it, and iptables rules handling NAT and port mapping — all standard kernel primitives, no Docker-specific kernel code involved.