Kubernetes - Networking
- "IP-per-pod" model: each
Pod
has a unique IP; aPod
is equivalent to a VM or a host (which have unique IPs). Pods can be load-balanced. - Containers within a Pod use networking to communicate via loopback. Container to container: use
localhost
. - Pods can communicate with all other pods on any other node without NAT.
- Agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node.
- Isolation (restricting what each pod can communicate with) is defined using `NetworkPolicy` (see the sketch after this list). Network policies are implemented by the network (CNI) plugins.
- Upper networking: Kubernetes-defined networking.
- Pod-to-internet traffic needs to go through a NAT gateway.
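A minimal sketch of a `NetworkPolicy`, with made-up namespace and labels: it allows ingress to `app: web` pods only from `app: api` pods on TCP/8080.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-web        # hypothetical name
  namespace: demo               # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: web                  # pods this policy applies to
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api          # only these pods may connect
      ports:
        - protocol: TCP
          port: 8080
```

The policy only takes effect if the CNI plugin (e.g. Cilium, Calico) implements `NetworkPolicy`.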
Kubernetes Networking
There are two components to networking.
- Kubernetes cluster networking, i.e. pod-to-pod connectivity, or the "data plane". Can be provided by bundling Cilium (eBPF-based programmable dataplane); run Cilium in overlay mode if you do not have L2 connectivity between all nodes or an SDN.
- L4 load balancing: can be provided by bundling MetalLB.
Google Kubernetes Engine (GKE) clusters are provisioned with an IP range that is automatically determined during cluster creation, within an existing subnet of the project.
CNI
Since Kubernetes 1.24, management of the CNI is no longer in scope for `kubelet`. CNI plugins are managed by the container runtime (e.g. `containerd`).
CNI is used by container runtimes: the container/pod initially has no network interface; CNI is used to set one up so the pod can accept traffic directly, which keeps network latency as low as possible.
CNI flow:
- When the container runtime expects to perform network operations on a container, it calls the CNI plugin with the desired command (e.g. `ADD`, `DEL`).
- The container runtime also provides the related network configuration and container-specific data to the plugin (see the sketch below).
- The CNI plugin performs the required operations and reports the result.
https://github.com/containernetworking/cni
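The "related network configuration" comes from a CNI config, typically a `.conflist` file under `/etc/cni/net.d/` (JSON on disk). A minimal sketch of its structure, rendered here as YAML for readability; the network name and subnet are made up.

```yaml
# Real conflist files are JSON; this shows the same structure in YAML form.
cniVersion: "1.0.0"
name: mynet                 # hypothetical network name
plugins:
  - type: bridge            # main plugin: veth pair attached to a Linux bridge
    bridge: cni0
    isGateway: true
    ipMasq: true
    ipam:
      type: host-local      # IPAM plugin: allocate pod IPs from this subnet
      subnet: 10.244.0.0/24
  - type: portmap           # chained plugin: hostPort support
    capabilities:
      portMappings: true
```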
Implementations
- Flannel and Weavenet: easy setup and configuration.
- Calico: better for performance since it uses an underlay network through BGP.
- Cilium: utilizes a completely different application-layer filtering model through BPF and is more geared towards enterprise security. Can replace `kube-proxy` (which is iptables-based). eBPF is superior to iptables.
GKE Dataplane V2 migrated from Calico (Calico CNI and Calico network policies rely heavily on iptables functionality in the Linux kernel; iptables provides a flexible, but not programmable, datapath for k8s networking functions) to a programmable datapath based on eBPF/Cilium.
`anetd` is the networking controller, which replaces the `calico-*` and `kube-proxy-*` pods. `anetd`/Cilium holds metadata (network policies, configured k8s Services and their endpoints) and accounts metrics (conntrack entries, dropped and forwarded traffic) related to all networking on the nodes.
Load Balancing
- L4 Load Balancer: should support TCP/UDP load balancing and high availability. It should work in a world where nodes are in different L2 subnets. E.g.:
  - MetalLB for workloads.
  - keepalived + haproxy for control plane nodes.
- L7 Load Balancer: e.g. via Istio ingress gateway.
Note about Load Balancers in k8s:
- Kubernetes ships glue code that calls out to various IaaS platforms (GCP, AWS, Azure...)
- Kubernetes does NOT offer an implementation of network load balancers for bare-metal clusters. Use MetalLB for bare-metal clusters.
MetalLB
MetalLB receives requests from outside the cluster and balances them across the load-balanced Services in the cluster.
Without MetalLB, the nginx ingress Service on bare metal stays in the Pending state because it has no external IP assigned to it. MetalLB does the job of assigning nginx an external IP.
A `LoadBalancer` `Service` specifies no address; it requires MetalLB, with an `IPAddressPool` to allocate from and an `L2Advertisement` to advertise the assigned address (see the sketch below).
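A minimal sketch of MetalLB L2 configuration plus a `LoadBalancer` Service that gets an IP from it; the pool range and names are assumptions.

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool            # hypothetical name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # assumed free range on the node LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2              # hypothetical name
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool              # announce IPs from this pool via ARP/NDP
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress           # without MetalLB this would stay <pending>
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress
  ports:
    - port: 80
      targetPort: 8080
```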
SNMP
Get metrics from the network switches: use `snmp-exporter` to expose the SNMP data to Prometheus (see the sketch below).
SNMP is a known resource hog.
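A minimal sketch of the Prometheus scrape config for `snmp-exporter`, assuming the exporter is reachable at `snmp-exporter:9116`; the switch address and module are placeholders.

```yaml
scrape_configs:
  - job_name: snmp
    scrape_interval: 120s          # keep polling modest; SNMP is a resource hog
    static_configs:
      - targets:
          - 192.0.2.10             # hypothetical switch to poll via SNMP
    metrics_path: /snmp
    params:
      module: [if_mib]             # interface metrics module
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target     # the switch becomes the ?target= parameter
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: snmp-exporter:9116  # scrape the exporter, not the switch
```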
Gateway
https://gateway-api.sigs.k8s.io/
(Ingress API version 2)
The Gateway API is a SIG-Network project being built to improve and standardize service networking in Kubernetes.
https://istio.io/latest/blog/2022/gateway-api-beta/
`Gateway` => `VirtualService` => `Service`
```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
```
Gateway = proxy = load balancer.
A Gateway describes a load balancer operating at the edge of the mesh, receiving incoming or outgoing HTTP/TCP connections. It defines the exposed ports and protocols.
Every `Gateway` is backed by a `Service` of type `LoadBalancer` (see the sketch below).
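A minimal sketch of an Istio `Gateway`, assuming the default `istio: ingressgateway` selector and a made-up host:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web-gateway               # hypothetical name
spec:
  selector:
    istio: ingressgateway         # binds to the istio-ingressgateway Service (type LoadBalancer)
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP            # exposed port and protocol
      hosts:
        - "example.com"           # hypothetical host
```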
`VirtualService` configures the L7 load balancing or reverse proxying and splits traffic, e.g. if the URI prefix is `/api/`, go to service 1; if `/`, go to service 2 (see the sketch below).
```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
```
A `VirtualService` can be bound to a gateway to control the forwarding of traffic arriving at a particular host or gateway port.
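A minimal sketch of a `VirtualService` bound to the gateway above, splitting traffic by URI prefix as described (`/api/` to service 1, everything else to service 2); the service names are assumptions.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web-routes                # hypothetical name
spec:
  hosts:
    - "example.com"
  gateways:
    - web-gateway                 # bind to the Gateway defined above
  http:
    - match:
        - uri:
            prefix: /api/
      route:
        - destination:
            host: service1        # hypothetical backend Service
    - route:                      # default route for everything else (/)
        - destination:
            host: service2        # hypothetical backend Service
```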
Dualstack
k8s dual-stack: both IPv4 and IPv6: https://kubernetes.io/docs/concepts/services-networking/dual-stack/
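A minimal sketch of a dual-stack `Service`, assuming the cluster was brought up with both IPv4 and IPv6 pod/service CIDRs:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-dualstack-svc          # hypothetical name
spec:
  ipFamilyPolicy: PreferDualStack # use RequireDualStack to fail if only one family is available
  ipFamilies:
    - IPv4
    - IPv6                        # order decides the primary family (clusterIP)
  selector:
    app: my-app
  ports:
    - port: 80
```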
Cilium
Cilium is a network addon.
MetalLB is the LoadBalancer implementation for bare-metal clusters (in contrast to AWS/GCP, etc.) and works with Cilium.
eBPF is used to safely and efficiently extend the capabilities of the kernel without requiring changes to kernel source code or loading kernel modules.
- eBPF programs run in the kernel.
- Cilium and the pods run in user space.
Cilium installs eBPF programs; traffic between pods goes through them. The eBPF programs installed in the kernel decide how to route each packet. Unlike iptables, eBPF programs have access to Kubernetes-specific metadata, including network policy information (see the CiliumNetworkPolicy sketch after the API list below).
APIs:
- `ciliumclusterwidenetworkpolicies.cilium.io`
- `ciliumegressgatewaypolicies.cilium.io`
- `ciliumegressnatpolicies.cilium.io`
- `ciliumendpoints.cilium.io`
- `ciliumexternalworkloads.cilium.io`
- `ciliumidentities.cilium.io`
- `ciliumnetworkpolicies.cilium.io`
- `ciliumnodes.cilium.io`
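A minimal sketch of a `CiliumNetworkPolicy` (the `ciliumnetworkpolicies.cilium.io` API above), with made-up labels: it allows ingress to `app: web` endpoints only from `app: api` endpoints on TCP/8080.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-to-web                # hypothetical name
spec:
  endpointSelector:
    matchLabels:
      app: web                    # endpoints the policy applies to
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: api              # identity-based (not IP-based) selection
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
```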