
Kubernetes - Networking

Updated: 2022-08-14
  • "IP-per-pod" model: each Pod gets its own unique IP, making a Pod analogous to a VM or a physical host (which also have unique IPs). Pods can be load-balanced.
  • Containers within a Pod share the Pod's network namespace and communicate over loopback: container to container, use localhost.
  • Pods can communicate with all other Pods on any other node without NAT.
  • Agents on a node (e.g. system daemons, kubelet) can communicate with all Pods on that node.
  • Isolation (restricting what each Pod can communicate with) is defined using NetworkPolicy. Network policies are implemented by the network (CNI) plugin.
  • Upper networking: the higher-level networking abstractions defined by Kubernetes itself (Services, Ingress) on top of the flat Pod network.
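The NetworkPolicy point above can be illustrated with a minimal sketch (all names and labels are hypothetical) that allows ingress to database Pods only from API Pods in the same namespace; enforcement requires a CNI plugin that implements NetworkPolicy (e.g. Cilium, Calico):

```yaml
# Sketch only: once this policy selects the app: db Pods,
# all ingress to them is denied except from app: api Pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db        # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db                  # hypothetical label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api         # hypothetical label
      ports:
        - protocol: TCP
          port: 5432
```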

Kubernetes Networking

Cluster networking has two main components.

  • Kubernetes cluster networking, i.e. pod-to-pod connectivity: can be provided by bundling Cilium, an eBPF-based programmable dataplane. Run Cilium in overlay (encapsulation) mode if you do not have L2 connectivity between all nodes or an SDN.
  • L4 load balancing: can be provided by bundling MetalLB.


Since Kubernetes 1.24, management of the CNI is no longer in scope for kubelet. CNI plugins are managed by a Container Runtime (e.g. containerd).

CNI is used by container runtimes: a newly created container/Pod initially has no network interface. The runtime invokes CNI to wire one up so that Pods can accept traffic directly, which keeps network latency as low as possible.

CNI flow:

  • When the container runtime needs to perform network operations on a container, it calls the CNI plugin with the desired command (e.g. ADD, DEL).
  • The container runtime also provides related network configuration and container-specific data to the plugin.
  • The CNI plugin performs the required operations and reports the result.
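The "related network configuration" in the second step is a CNI network config file, which the runtime (e.g. containerd) reads from /etc/cni/net.d/. A minimal sketch with illustrative values:

```json
{
  "cniVersion": "1.0.0",
  "name": "examplenet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/24"
  }
}
```

Per the CNI spec, the runtime executes the plugin binary named by `type`, passing this config on stdin and the container-specific data via environment variables such as CNI_COMMAND, CNI_CONTAINERID, CNI_NETNS, and CNI_IFNAME; the plugin reports the result (e.g. the assigned IP) as JSON on stdout.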



CNI Plugins

  • Flannel and Weave Net: easy setup and configuration.
  • Calico: better for performance, since it uses an underlay network through BGP.
  • Cilium: utilizes a completely different application-layer filtering model through eBPF and is more geared towards enterprise security. Can replace kube-proxy (and its iptables rules); eBPF avoids the linear rule evaluation of iptables, so it scales better with many Services.
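Cilium's kube-proxy replacement is enabled through its Helm values; a sketch (the API server address is a placeholder, and older Cilium releases used string values like `strict` instead of a boolean):

```yaml
# Helm values sketch for Cilium's kube-proxy replacement.
# k8sServiceHost/k8sServicePort must point at the API server,
# since kube-proxy is no longer there to route to it.
kubeProxyReplacement: true
k8sServiceHost: 10.0.0.1   # placeholder: API server address
k8sServicePort: 6443
```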


DNS

Every Service and Pod defined in the cluster (including the DNS server itself) is assigned a DNS name. You can contact Services with consistent DNS names instead of IP addresses.

Since kubeadm v1.24, the only supported cluster DNS application is CoreDNS. (Support for kube-dns was removed.)
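As a sketch (names hypothetical), a Service like the following would be reachable inside the cluster at my-svc.my-namespace.svc.cluster.local, assuming the default cluster.local domain:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc             # hypothetical
  namespace: my-namespace  # hypothetical
spec:
  selector:
    app: my-app            # hypothetical label
  ports:
    - port: 80
      targetPort: 8080
# CoreDNS publishes this Service as:
#   my-svc.my-namespace.svc.cluster.local
```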

Load Balancing

  • L4 Load Balancer: e.g. via MetalLB for workloads and keepalived + haproxy for control-plane nodes. Should support TCP/UDP load balancing and high availability, and should work even when nodes are in different L2 subnets.
  • L7 Load Balancer: e.g. via Istio ingress gateway
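With MetalLB bundled, exposing a workload via the L4 load balancer is a Service of type LoadBalancer; MetalLB assigns the external IP from a configured address pool. A sketch (names and the address range are placeholders):

```yaml
# MetalLB address pool (metallb.io/v1beta1 CRD); range is a placeholder.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
# A Service of type LoadBalancer then gets an external IP from the pool.
apiVersion: v1
kind: Service
metadata:
  name: web                # hypothetical
spec:
  type: LoadBalancer
  selector:
    app: web               # hypothetical label
  ports:
    - port: 80
      targetPort: 8080
```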