# Kubernetes - Components

## How are components running?
Components (e.g. binaries) can run in different ways:

- Not containerized: run as `systemd` services.
  - e.g. `kubelet`, `containerd`, `docker`.
  - config: `systemd` unit files.
  - start: `systemctl start`.
  - check status: `systemctl status`.
  - check logs: `journalctl`.
- Containerized, as static pods: managed directly by `kubelet`.
  - config: `/etc/kubernetes/manifests`.
  - start / update: by `kubelet` (watching `/etc/kubernetes/manifests`).
  - check status: `kubectl get` (mirror pods are automatically created, so they are visible on `kube-apiserver`, but not modifiable through it).
  - check logs: `kubectl logs`.
- Containerized, as (normal) pods: managed by the apiserver.
  - config: any yaml files.
  - start / update: watched by `kube-apiserver`; `kube-apiserver` talks to `kubelet` to manage pods.
  - check status: `kubectl get`.
  - check logs: `kubectl logs`.
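A quick sketch of inspecting a component in each of the three modes (pod and file names follow kubeadm defaults; placeholders in angle brackets):

```sh
# 1. systemd service (e.g. kubelet):
systemctl status kubelet
journalctl -u kubelet --since "10 min ago"

# 2. Static pod (e.g. kube-apiserver on a kubeadm control plane node):
ls /etc/kubernetes/manifests              # the manifests the kubelet watches
kubectl -n kube-system get pods           # mirror pods show up here, read-only
kubectl -n kube-system logs kube-apiserver-<node-name>

# 3. Normal pod:
kubectl apply -f my-pod.yaml              # any yaml file
kubectl get pods
kubectl logs <pod-name>
```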
## Control Plane Components

Must-have Kubernetes components.
| Name | Using static pods | kubernetes-the-hard-way |
|---|---|---|
| `kubelet` | systemd service | (no kubelet) |
| `kube-apiserver` | static pod | systemd service |
| `kube-controller-manager` | static pod | systemd service |
| `kube-scheduler` | static pod | systemd service |
| `etcd` | static pod | systemd service |
| `kube-proxy` | daemonset | |
| container runtime (e.g. `containerd`) | systemd service | systemd service |
Details:

- `kubelet`
  - if `kube-apiserver` and the other binaries are running as static pods, `kubelet` runs the static pods according to `/etc/kubernetes/manifests`. `kubelet` writes its logs to `journald`; check with `journalctl -u kubelet`.
  - if `kube-apiserver` and the other binaries are deployed as systemd services, `kubelet` is not running on control plane nodes (as illustrated in kubernetes-the-hard-way).
- `kube-apiserver`: the API Server.
- `etcd`: most Kubernetes components are stateless; the state of each component comes from the `etcd` db files.
- `kube-controller-manager`: runs the built-in controllers.
- `cloud-controller-manager`: embeds cloud-specific control logic.
- `kube-scheduler`: the Scheduler.
- `containerd`: the kubelet always directs your container runtime to write logs into directories within `/var/log/pods`.
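Following the last point, you can read those log files directly on a node; a sketch assuming the standard layout `/var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container>/<restart>.log` (the pod name below is illustrative):

```sh
# Container logs live under /var/log/pods, organized per pod and per container.
sudo ls /var/log/pods
# e.g. tail the kube-apiserver static pod's log on a control plane node:
sudo sh -c 'tail /var/log/pods/kube-system_kube-apiserver-*/kube-apiserver/0.log'
```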
## Worker Node Components

Worker Nodes: virtual or physical machines, managed by the control plane, that contain the services necessary to run Pods.

- `kubelet`: talks to the API Server.
- `kube-proxy`: responsible for implementing a virtual IP mechanism for `Service`s of type other than `ExternalName`.
- Container Runtime: e.g. `containerd`, a daemon on worker nodes. Manages the container lifecycle.
- monitoring / logging: `supervisord`, `fluentd`.
The Pod Lifecycle Event Generator (PLEG) is a component of the kubelet on each node: it periodically relists container states from the container runtime and generates pod lifecycle events, which the kubelet uses to reconcile the actual state of a pod's containers with the desired state (e.g. restarting containers that have died). It's possible for PLEG to encounter issues, surfacing as the "PLEG is not healthy" error that marks a node NotReady.
The kubelet monitors resources like memory, disk space, and filesystem inodes on your cluster's nodes.
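Those kubelet resource checks surface as node conditions, which you can inspect (node name is a placeholder):

```sh
# MemoryPressure / DiskPressure / PIDPressure come from the kubelet's monitoring.
kubectl describe node <node-name>        # see the Conditions section
kubectl get node <node-name> \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
```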
## Communications

- `kubelet` talks to CSI to have the storage ready.
- `kubelet` talks to the container runtime over gRPC (the CRI); `kubelet` instructs the Container Runtime to spin up a container.
- the container runtime talks to the CNI plugin to get the network set up.
- (if using static pods) the Control Plane node's `kubelet` runs the API Server.
- the API Server talks to worker node `kubelet`s.
- API Server clients within the Control Plane: controllers and the scheduler; the API Server is in turn a client of `etcd`.
- between the API Server and human users: `kubectl`, `kubeadm`, the REST API, client libraries.
- between the API Server and Nodes: `kubelet`.
- other API Server clients: CI/CD (Jenkins), Dashboard / UI.
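To see the CRI (gRPC) channel in action, `crictl` speaks the same interface the kubelet uses; a sketch assuming the default containerd socket path:

```sh
# Talk to the container runtime directly over its CRI gRPC socket.
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps
```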
Access management:

authentication -> authorization -> admission control ("mutating" / "validating" admission controllers)
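For the authorization step specifically, you can ask the API server what it would decide without performing the action (the subjects and resources below are illustrative):

```sh
# Ask the authorization layer for a yes/no decision.
kubectl auth can-i create deployments --namespace default
kubectl auth can-i list pods --as system:serviceaccount:default:default
```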
The API server implements a push-based notification stream of state changes (events), also known as Watch.

One of the reasons watches are so efficient is that the API server's own watch on `etcd` is implemented via etcd's gRPC streaming APIs; clients then consume watches from the API server over long-lived HTTP streaming connections.
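A minimal way to observe a watch stream yourself (assumes `kubectl proxy` listening on its default port 8001):

```sh
# Stream Pod changes; the connection stays open and events arrive as they happen.
kubectl get pods --watch

# The same thing against the raw REST API:
kubectl proxy &                           # proxies to the apiserver on :8001
curl -N 'http://127.0.0.1:8001/api/v1/namespaces/default/pods?watch=true'
```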
### apiserver <=> (worker node) kubelet

Communications between the apiserver and kubelet are bi-directional: for some functions the kubelet calls the apiserver, and for others the apiserver calls the kubelet.

The kubelet has a REST API that exposes info about the pods running on a node, the logs from those pods, and execution of commands in every container running on the node. It is typically exposed on TCP port 10250, which the API server calls for some functions; these connections terminate at the kubelet's HTTPS endpoint.
apiserver => (worker node) kubelet for:

- fetching logs for pods.
- attaching (usually through `kubectl`) to running pods.
- providing the `kubelet`'s port-forwarding functionality.
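You can call the kubelet API directly to see what the apiserver sees; a sketch assuming you have a ServiceAccount authorized for `nodes/proxy` (the ServiceAccount name and node IP are placeholders):

```sh
# List the pods a kubelet knows about via its HTTPS endpoint on :10250.
TOKEN=$(kubectl create token <serviceaccount-with-nodes-proxy>)
curl -sk -H "Authorization: Bearer $TOKEN" https://<node-ip>:10250/pods
```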
kubelet watches the apiserver: the apiserver's "watch" mode is served over a long-lived HTTP streaming connection (WebSockets are also supported). In this way the kubelet is notified of any change to Pods whose `spec.nodeName` matches its own node.

- For new pods, `spec.nodeName` is set by the scheduler (via a Binding). When the kubelet sees a pod bound to the node it's running on, it coordinates with the container runtime on the node (usually over a UNIX socket) to start the appropriate containers.
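You can approximate the kubelet's filtered watch with a field selector (node name is a placeholder):

```sh
# Watch only the pods bound to a particular node, like that node's kubelet does.
kubectl get pods --all-namespaces --field-selector spec.nodeName=<node-name> --watch
```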
## Certs

- apiserver server cert: `/etc/kubernetes/pki/apiserver.crt`
- apiserver client cert: `/etc/kubernetes/pki/apiserver-kubelet-client.crt`
- kubelet server cert: `/var/lib/kubelet/pki/kubelet-server-current.pem`
- kubelet client cert: `/var/lib/kubelet/pki/kubelet-client-current.pem`
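To check who a cert identifies and when it expires (paths are the kubeadm defaults listed above; run on the relevant node):

```sh
# Inspect subject, issuer, and validity of the apiserver's serving cert.
sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -subject -issuer -dates
# The kubelet's client cert carries its node identity (system:node:<name>).
sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -subject -dates
```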
## Life of a deployment (Put everything together)

- User submits a `deployment.yaml` to the API Server.
- A `Deployment` object is stored in `etcd`; only the API Server can access `etcd`.
- `kube-controller-manager` sees the `Deployment` from the API Server and creates the corresponding `Pods` (via a `ReplicaSet`).
- `kube-scheduler` assigns each `Pod` to a `Node`.
- `kubelet` talks to the API Server, reads the schedule, and runs the `Pods`.
- End-users call the running pods through `kube-proxy` (`kube-proxy` calls the API Server to get `Services`).
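A minimal end-to-end sketch of those steps (the Deployment name and image are illustrative):

```sh
# Step 1: submit a Deployment to the API Server.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: web
        image: nginx:1.25
EOF

# Steps 2-5: controller-manager creates the Pods, the scheduler binds them to
# Nodes, and each node's kubelet starts the containers; -o wide shows the Node.
kubectl get pods -l app=demo -o wide
```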