
Kubernetes - kube-apiserver

When we talk about the "Control Plane", we are often really referring to the API server, as it is the only externally visible part of the control plane.

The K8s API server exposes a REST API.

The kubernetes service (in the default namespace) is configured with a virtual IP address that kube-proxy redirects to the HTTPS endpoint of the API server.
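
You can see this mapping in any cluster; the output varies per cluster, so it is omitted here:

$ kubectl get svc kubernetes -n default
$ kubectl get endpoints kubernetes -n default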

Check audit logs (when audit logging is enabled): /var/log/apiserver/*
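
Audit logging must be enabled on the API server itself. A minimal sketch of the relevant kube-apiserver flags, assuming a hypothetical policy file location:

# set on the kube-apiserver command line / static pod manifest
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-path=/var/log/apiserver/audit.log
--audit-log-maxage=7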

Explore Kubernetes APIs

Use kubectl proxy to open a local proxy to the API server:

$ kubectl proxy --port=8081 &

From another terminal:

$ curl http://localhost:8081/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "172.19.0.2:6443"
    }
  ]
}
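
With the proxy still running you can also hit concrete resource endpoints; the namespace and group/version below are chosen only for illustration:

$ curl http://localhost:8081/api/v1/namespaces/default/pods
$ curl http://localhost:8081/apis/apps/v1/deployments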

Get the full list of API groups:

$ curl http://localhost:8081/apis

# OpenAPI:
$ curl http://localhost:8081/openapi/v2 | jq | less
$ curl http://localhost:8081/openapi/v3 | jq | less
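
The /openapi/v3 response is an index mapping each group/version to a per-group spec URL (the hash query parameter is used for caching and can be left out when fetching by hand), for example:

$ curl http://localhost:8081/openapi/v3 | jq '.paths | keys'
$ curl http://localhost:8081/openapi/v3/api/v1 | jq '.info'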

OpenAPI support lives in a separate repository, k8s.io/kube-openapi; the API server library (k8s.io/apiserver) depends on kube-openapi.

How to access API Server / Control Plane

  • REST API.
    • discovery endpoints:
      • /openapi/v2, /openapi/v3
      • /api, /apis
    • OpenAPI v3 spec: /openapi/v3/apis/<group>/<version>?hash=<hash>
    • APIs:
      • /api/v1
      • /apis/rbac.authorization.k8s.io/v1alpha1
  • CLI: kubectl, kubeadm, etc. (kubectl can also issue raw REST calls; see the sketch after this list).
  • Client Libraries: e.g. client-go (CLIs such as kubectl and kubeadm are built on client-go).
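
kubectl can also issue raw REST calls against these endpoints using the credentials in your kubeconfig, which avoids running a separate proxy; a few illustrative calls:

$ kubectl get --raw /api
$ kubectl get --raw /apis
$ kubectl get --raw /openapi/v2 | jq | less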

How to load balance multiple API Servers

If you have 3 control plane nodes, there will be 3 kube-apiserver instances running. How do clients decide which kube-apiserver to call?

On Cloud

On a public cloud (AWS, Google Cloud) you can use the cloud provider's load balancing service, e.g. a Google Cloud network load balancer, to distribute traffic across the 3 API servers.
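
Clients (kubectl, kubelets) are then pointed at the load balancer address rather than at any single node. With kubeadm this is typically done via --control-plane-endpoint; a sketch, where the DNS name is an assumption:

$ kubeadm init --control-plane-endpoint "k8s-api.example.com:6443" --upload-certs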

On Prem

On premises, a common pattern is haproxy + keepalived: haproxy load-balances TCP traffic across the API servers, and keepalived keeps a virtual IP highly available in front of the haproxy instances. Read more on haproxy+keepalived.
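
A minimal haproxy sketch of the idea (node addresses, names, and ports are hypothetical; keepalived then floats a virtual IP across the haproxy instances):

frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend control-plane

backend control-plane
    mode tcp
    balance roundrobin
    option tcp-check
    server cp1 10.0.0.11:6443 check
    server cp2 10.0.0.12:6443 check
    server cp3 10.0.0.13:6443 check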