Kubernetes - Components
How are components running?
Components (e.g. binaries) can run in different ways:
- Not containerized: run as `systemd` services.
  - e.g. `kubelet`, `containerd`, `docker`.
  - config: `systemd` unit files.
  - start: `systemctl start`.
  - check status: `systemctl status`.
  - check logs: `journalctl`.
- Containerized: static pods: managed directly by `kubelet`.
  - config: `/etc/kubernetes/manifests` (see the sketch after this list).
  - start / update: by `kubelet` (watching `/etc/kubernetes/manifests`).
  - check status: `kubectl get` (mirror pods are automatically created, so they are visible on `kube-apiserver`, but not modifiable).
  - check logs: `kubectl logs`.
- Containerized: (normal) pods: managed by the apiserver.
  - config: any yaml files.
  - start / update: watched by `kube-apiserver`; `kube-apiserver` talks to `kubelet` to manage pods.
  - check status: `kubectl get`.
  - check logs: `kubectl logs`.
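As a concrete illustration of the static-pod flavor, a minimal manifest dropped into `/etc/kubernetes/manifests` might look like this (file name, pod name, and image are hypothetical):

```yaml
# /etc/kubernetes/manifests/my-static-pod.yaml (hypothetical file name)
# kubelet watches this directory and starts/updates the pod itself;
# a read-only mirror pod appears on the kube-apiserver.
apiVersion: v1
kind: Pod
metadata:
  name: my-static-pod
spec:
  containers:
    - name: app
      image: nginx:1.25   # placeholder image
      ports:
        - containerPort: 80
```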
Control Plane Components
Must-have Kubernetes components.
| Name | Using static pods | kubernetes-the-hard-way |
|---|---|---|
| `kubelet` | systemd service | (no kubelet) |
| `kube-apiserver` | static pod | systemd service |
| `kube-controller-manager` | static pod | systemd service |
| `kube-scheduler` | static pod | systemd service |
| `etcd` | static pod | systemd service |
| `kube-proxy` | daemonset | |
| container runtime (e.g. `containerd`) | systemd service | systemd service |
Details:
- `kubelet`
  - If `kube-apiserver` and the other binaries are running as static pods, `kubelet` runs the static pods according to `/etc/kubernetes/manifests`. `kubelet` writes logs to `journald`; check with `journalctl -u kubelet`.
  - If `kube-apiserver` and the other binaries are deployed as systemd services, `kubelet` is not running on control plane nodes (as illustrated in kubernetes-the-hard-way).
- `kube-apiserver`: the API Server.
- `etcd`: most Kubernetes components are stateless; the state of each component comes from the `etcd` db files.
- `kube-controller-manager`: Controller Manager of the built-in controllers.
- `cloud-controller-manager`: embeds cloud-specific control logic.
- `kube-scheduler`: Scheduler.
- `containerd`: the kubelet always directs your container runtime to write logs into directories within `/var/log/pods`.
Worker Node Components
Worker Node: a virtual or physical machine, managed by the control plane, containing the services necessary to run Pods.
- `kubelet`: talks to the API Server.
- `kube-proxy`: responsible for implementing a virtual IP mechanism for `Service`s of type other than `ExternalName`.
- Container Runtime: e.g. `containerd`, a daemon on worker nodes. Manages the container lifecycle.
- monitoring / logging: `supervisord`, `fluentd`.
The Pod Lifecycle Event Generator (PLEG) is a kubelet component on each node: it watches the container runtime for container state changes and generates pod lifecycle events, letting the kubelet reconcile the actual state of a pod's containers with its spec (e.g. restarting failed containers). It is possible for the PLEG to encounter issues (e.g. the kubelet reporting "PLEG is not healthy").
The `kubelet` monitors resources like memory, disk space, and filesystem inodes on your cluster's nodes.
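Those signals feed the kubelet's eviction logic; a sketch of the relevant `KubeletConfiguration` fields (the threshold values here are illustrative, not necessarily the defaults):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Evict pods when a node-level resource signal crosses a threshold.
evictionHard:
  memory.available: "100Mi"   # low free memory
  nodefs.available: "10%"     # low free disk space on the node filesystem
  nodefs.inodesFree: "5%"     # low free filesystem inodes
```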
Component Details
kube-apiserver
The K8s API Server provides a REST API.

The `kubernetes` service (in the `default` namespace) is configured with a virtual IP address that is redirected (via `kube-proxy`) to the HTTPS endpoint on the API server.
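The object itself looks roughly like this (the `clusterIP` and `targetPort` vary per cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes
  namespace: default
spec:
  clusterIP: 10.96.0.1   # example virtual IP; cluster-specific
  ports:
    - name: https
      port: 443
      targetPort: 6443   # the API server's HTTPS port (a common default)
```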
kube-scheduler
The Kubernetes scheduler ensures that there are enough resources for all the Pods on a Node. The `Node` object tracks the Node's resource capacity.

The scheduler is a kind of controller. Why is it separate from the controller manager? It is big enough to stand on its own, and keeping it separate makes it easy to use an alternative scheduler.
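Opting into an alternative scheduler is just a field in the pod spec; a sketch (the scheduler name is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-custom-scheduler
spec:
  schedulerName: my-scheduler   # hypothetical; defaults to "default-scheduler"
  containers:
    - name: app
      image: nginx:1.25         # placeholder image
```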
kubelet
kubelet files:

- `/var/lib/kubelet/pki/`
- `/etc/kubernetes/kubelet.conf`
`kubelet.conf` has this:

```yaml
users:
  - name: default-auth
    user:
      client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
      client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
```
`/var/lib/kubelet/pki/kubelet-client-current.pem` is used when talking to the API server. The cert has `Subject: O = system:nodes, CN = system:node:<node_name>`.
`kubelet` is deployed as a `systemd` service; check status: `$ systemctl status kubelet`.
`kubelet` needs a kubeconfig to authenticate itself to the API server.
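A minimal sketch of such a kubeconfig (the server address is an example; the cert paths match the `kubelet.conf` shown above):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: default-cluster
    cluster:
      certificate-authority: /etc/kubernetes/pki/ca.crt  # cluster CA
      server: https://192.168.0.10:6443                  # example API server address
users:
  - name: default-auth
    user:
      client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
      client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
contexts:
  - name: default-context
    context:
      cluster: default-cluster
      user: default-auth
current-context: default-context
```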
If you start the kubelet with `--register-node=false`, you need to manually create the `Node` object; if `true`, the kubelet creates the `Node` object on the API server itself.
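With `--register-node=false`, the hand-created `Node` object can be as small as this sketch (the node name is hypothetical and must match what the kubelet reports):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-1   # hypothetical; must match the kubelet's node name
  labels:
    kubernetes.io/hostname: worker-1
```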
If the kubelet's `--authorization-mode` flag is set to `Webhook`, the kubelet uses the `SubjectAccessReview` API to determine authorization.
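In Webhook mode the kubelet effectively asks the API server questions of roughly this shape; a hand-written `SubjectAccessReview` for illustration (the caller identity and node name are hypothetical):

```yaml
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: kube-apiserver-kubelet-client   # hypothetical identity of the kubelet API caller
  resourceAttributes:
    verb: get
    resource: nodes
    subresource: proxy                  # kubelet API requests map to nodes/<subresource>
    name: worker-1                      # hypothetical node name
# The API server answers in status.allowed, which tells the kubelet
# whether to permit the request.
```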
kube-proxy
Handles load balancing and service discovery: when you expose pods using a Service (ClusterIP), kube-proxy creates network rules to send traffic to the backend pods (endpoints) grouped under the Service object.
Deployed as a `DaemonSet`, NOT as a static pod.

Configs: the `kube-proxy` `ConfigMap`.
`kube-proxy` modes: `iptables` or `ipvs`. Query the kube-proxy mode:

```
$ curl http://localhost:10249/proxyMode
```
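The mode comes from the `KubeProxyConfiguration` embedded in that `ConfigMap`; a sketch showing only the relevant field:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "iptables"   # or "ipvs"; an empty string means the platform default
```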
iptables
`kube-proxy` watches the API server for `Service` and `EndpointSlice` objects, captures traffic to the `Service`'s `clusterIP` and `port`, and redirects that traffic to one of the `Service`'s backend sets.

- modify rules: `kube-apiserver` -> create/update `Service` -> `kube-proxy` (iptables mode) installs iptables rules; or (ipvs mode) calls the netlink interface to create IPVS rules.
- redirect according to the rules: incoming traffic -> Service's ip:port -> kube-proxy-installed iptables rules -> backend Pod. (See the `Service` sketch after this list.)
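Concretely, creating a `Service` like this sketch (names and ports are placeholders) causes kube-proxy on every node to install rules rewriting traffic aimed at `clusterIP:port` to one of the ready backend pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web             # hypothetical service
spec:
  selector:
    app: web            # pods matching this label become the backend set
  ports:
    - port: 80          # clusterIP port that the iptables/IPVS rules capture
      targetPort: 8080  # container port the traffic is redirected to
```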
Communications
- `kubelet` talks to the container runtime by gRPC.
- Control Plane node `kubelet` runs the API Server; the API Server talks to worker node `kubelet`s.
- API Server clients within the Control Plane: controllers, scheduler, etcd.
- between API Server and human users: `kubectl`, `kubeadm`, REST API, client libraries.
- between API Server and Nodes: `kubelet`.
- Other API Server clients: CI/CD (Jenkins), Dashboard / UI.
Access management: authentication -> authorization -> admission control ("mutating" / "validating" admission controllers).
The API server implements a push-based notification stream of state changes (events), also known as Watch. One of the reasons watches are so efficient is that they are implemented via gRPC streaming APIs.
apiserver <=> (worker node) kubelet
Communications between the API server and Kubelet are bi-directional. For some functions the Kubelet calls the API server and for others the API server calls the Kubelet.
apiserver => (worker node) kubelet, for: (the kubelet has a REST API that exposes info about the pods running on a node, the logs from those pods, and execution of commands in every container running on the node. It is typically exposed on TCP port 10250; the API server calls it for the functions below, and these connections terminate at the kubelet's HTTPS endpoint.)
- Fetching logs for pods.
- Attaching (usually through kubectl) to running pods.
- Providing the kubelet's port-forwarding functionality.
Kubelet watches the API server:

- new pods assigned to its node (the assignment is written by the scheduler). When the kubelet sees a pod bound to the node it is running on, it coordinates with the container runtime on the node (usually over a UNIX socket) to start the appropriate containers. (See the trimmed pod sketch below.)
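After scheduling, the pod object carries its node assignment, which is what the kubelet's watch matches on; a trimmed sketch (pod and node names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-abc123        # hypothetical pod
spec:
  nodeName: worker-1      # set by the scheduler; the kubelet on worker-1 sees
                          # this pod in its watch and starts its containers
  containers:
    - name: app
      image: nginx:1.25   # placeholder image
```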
Certs
- apiserver server: `/etc/kubernetes/pki/apiserver.crt`
- apiserver client: `/etc/kubernetes/pki/apiserver-kubelet-client.crt`
- kubelet server: `/var/lib/kubelet/pki/kubelet-server-current.pem`
- kubelet client: `/var/lib/kubelet/pki/kubelet-client-current.pem`
Life of a deployment (Put everything together)
- user submits a `deployment.yaml` (see the sketch after this list) to the API Server.
- `deployment.yaml` is stored in etcd; only the API Server can access etcd.
- `controller-manager` sees the `deployment.yaml` from the API Server and creates the corresponding pods.
- `scheduler`: assigns a pod to a node.
- `kubelet` talks to the API Server, reads the schedule, and runs the pods.
- end-users call the running pods through `kube-proxy` (`kube-proxy` calls the API Server to get services).
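A minimal `deployment.yaml` for tracing the steps above (all names are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: app
          image: nginx:1.25   # placeholder image
```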