Tutorials - Kubernetes Basics
Setup
Tools you need for this tutorial:
Golang
Check your GOPATH. If it is not explicitly set, Go uses the default value, $HOME/go (e.g. /Users/<user>/go on macOS):
go env GOPATH
Add this to your ~/.bashrc or ~/.zshrc:
export PATH=$PATH:~/go/bin
Activate the setting: . ~/.bashrc or . ~/.zshrc
Docker
If you are running on a Linux box, you can install either Docker Engine or Docker Desktop; if you are on macOS or Windows, you have to install Docker Desktop.
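Once Docker is installed, you can do a quick sanity check (output varies by version and platform):
$ docker version
$ docker run --rm hello-world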
kind and minikube
Install kind:
$ go install sigs.k8s.io/kind@latest
The module source is cached under ~/go/pkg/mod/sigs.k8s.io; the kind binary itself is installed to ~/go/bin, which is why we added that directory to PATH above.
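To confirm the kind binary is reachable (assuming ~/go/bin is on your PATH, as configured above):
$ kind version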
Install minikube (https://minikube.sigs.k8s.io/docs/start/):
$ brew install minikube
kubectl
https://kubernetes.io/docs/tasks/tools/
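After installing kubectl, you can verify the client; the --client flag skips contacting any cluster:
$ kubectl version --client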
k9s
https://k9scli.io/topics/install/
Optional: yq
$ brew install yq
https://github.com/mikefarah/yq
Create a cluster
Creating a cluster is simple with kind:
$ kind create cluster
If you see the error below, Docker is not set up correctly or not running.
ERROR: failed to create cluster: failed to list nodes: command "docker ps -a --filter label=io.x-k8s.kind.cluster=kind --format '{{.Names}}'" failed with error: exit status 1
Command Output: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Explore the kind cluster
Nodes
From your local machine, run docker ps to check the running containers; you should find one named kind-control-plane:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b7a8a0d4d817 kindest/node:v1.25.3 "/usr/local/bin/entr…" 2 minutes ago Up About a minute 127.0.0.1:55958->6443/tcp kind-control-plane
Note that this is NOT a container running on your Kubernetes cluster; rather, it is a node of your Kubernetes cluster. In a production environment, a node is more likely to be a virtual machine (VM) or a bare-metal machine (i.e. a physical server). For learning, we use kind, which uses Docker containers as nodes.
Use kubectl cluster-info to check basic info about your cluster:
$ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:55958
CoreDNS is running at https://127.0.0.1:55958/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
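You can also list the nodes as Kubernetes sees them. For a single-node kind cluster you should see just one node, kind-control-plane, in Ready state (the exact age and version depend on your setup):
$ kubectl get nodes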
kubeconfig
How does kubectl know how to talk to the cluster? The answer is kubeconfig. Use kubectl config view to check the kubeconfig:
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://127.0.0.1:55958
name: kind-kind
contexts:
- context:
cluster: kind-kind
user: kind-kind
name: kind-kind
current-context: kind-kind
kind: Config
preferences: {}
users:
- name: kind-kind
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
If you wonder which kubeconfig file it is using, use a higher verbosity level:
$ kubectl config view -v6
I0218 10:37:52.696306 44057 loader.go:374] Config loaded from file: ~/.kube/config
...
Here kubectl is using the default kubeconfig in ~/.kube/config. If you need to use a different kubeconfig (e.g. to talk to a different Kubernetes cluster), you can set it explicitly when calling kubectl:
$ kubectl --kubeconfig /path/to/kubeconfig ...
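Alternatively, you can point kubectl at another kubeconfig via the KUBECONFIG environment variable; kubectl will also merge multiple colon-separated paths listed there:
$ export KUBECONFIG=/path/to/kubeconfig
$ kubectl get pod -A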
Pods
Pod is the smallest deployable unit in the Kubernetes world. For example, here are some higher-level abstractions that manage a set of Pods (see the ownerReferences example after the list):
Service -> Deployment -> ReplicaSet -> Pod
DaemonSet -> Pod
StatefulSet -> Pod
Job -> Pod
CronJob -> Pod
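You can observe one of these ownership chains in the cluster you already have: each coredns Pod is created by a ReplicaSet, which in turn is created by a Deployment, and the ownerReferences field records each link. The hash suffixes in your Pod and ReplicaSet names will differ from the ones shown here:
$ kubectl get pod coredns-565d847f94-qwtzx -n kube-system -o jsonpath='{.metadata.ownerReferences[*].kind}'
ReplicaSet
$ kubectl get replicaset coredns-565d847f94 -n kube-system -o jsonpath='{.metadata.ownerReferences[*].kind}'
Deployment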
Each Pod has one or more containers. docker knows nothing about Pods, but crictl (run on the node itself; we will see how to get onto the node in a later section) tells you which Pod each container belongs to.
root@kind-control-plane:/# crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
de9711ea222ec b19406328e70d 2 hours ago Running coredns 0 9dc4f5f91a396 coredns-565d847f94-qwtzx
1fd40faa0dc81 7902f9a1c54fa 2 hours ago Running local-path-provisioner 0 5d1c64918e293 local-path-provisioner-684f458cdd-djxnq
5949012ca4046 b19406328e70d 2 hours ago Running coredns 0 eab2a7cebd475 coredns-565d847f94-jhlfc
31ebb66024927 7ba9b35cf55e6 2 hours ago Running kindnet-cni 0 2c5bece7db8b7 kindnet-xhb9p
e5db9c26f932c aa31a9b19ccdf 2 hours ago Running kube-proxy 0 239a534173d3c kube-proxy-8rb7j
b48afb2969db7 8e041a3b0ba8b 2 hours ago Running etcd 0 e6920fc1d6989 etcd-kind-control-plane
9c9c33353ccfb feafd6a91eb52 2 hours ago Running kube-apiserver 0 a3cb165124a26 kube-apiserver-kind-control-plane
0226a18b2b288 05b17bba8656e 2 hours ago Running kube-controller-manager 0 cf039abcca2c6 kube-controller-manager-kind-control-plane
dea8cfb00522f 253d0aeea8c69 2 hours ago Running kube-scheduler 0 94edbd4b90daa kube-scheduler-kind-control-plane
To get a full list of pods:
root@kind-control-plane:/# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-565d847f94-jhlfc 1/1 Running 0 132m
kube-system coredns-565d847f94-qwtzx 1/1 Running 0 132m
kube-system etcd-kind-control-plane 1/1 Running 0 132m
kube-system kindnet-xhb9p 1/1 Running 0 132m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 132m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 132m
kube-system kube-proxy-8rb7j 1/1 Running 0 132m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 132m
local-path-storage local-path-provisioner-684f458cdd-djxnq 1/1 Running 0 132m
Notice that this list is effectively the same as the list returned by crictl ps.
Explore other objects
To get a list of built-in objects:
$ kubectl api-resources
NAME SHORTNAMES APIVERSION NAMESPACED KIND
bindings v1 true Binding
componentstatuses cs v1 false ComponentStatus
configmaps cm v1 true ConfigMap
endpoints ep v1 true Endpoints
events ev v1 true Event
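Another handy way to explore any of these objects is kubectl explain, which prints the documented schema for a resource or a specific field:
$ kubectl explain pod
$ kubectl explain pod.spec.containers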
To get a list of custom resource types, check crd, i.e. CustomResourceDefinition:
$ kubectl get crd
We do not have any CRDs at this point, though.
Let's find a few objects to look into.
root@kind-control-plane:/# kubectl get service -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 84m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 84m
Get the details of a service:
$ kubectl get service kubernetes -o yaml
apiVersion: v1
kind: Service
metadata:
labels:
component: apiserver
provider: kubernetes
name: kubernetes
namespace: default
spec:
clusterIP: 10.96.0.1
clusterIPs:
- 10.96.0.1
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- name: https
port: 443
protocol: TCP
targetPort: 6443
type: ClusterIP
status:
loadBalancer: {}
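To see where this Service actually sends traffic, check its Endpoints object: for the kubernetes Service it lists the API server on the control plane node's internal IP and port 6443, matching the targetPort above.
$ kubectl get endpoints kubernetes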
Explore the control plane node
Jump to the control plane node. Find the container ID (the hash) in the docker ps output:
$ docker exec -it b7a8a0d4d817 bash
The control plane node is running Linux, inside the kindest/node image we saw earlier.
root@kind-control-plane:/# uname -a
Linux kind-control-plane 5.15.49-linuxkit #1 SMP PREEMPT Tue Sep 13 07:51:32 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
You will find some YAML files under a special folder, /etc/kubernetes/manifests/. These are static Pod manifests.
root@kind-control-plane:/# ls /etc/kubernetes/manifests/
etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
Static Pods are managed directly by the kubelet daemon on a specific node, without the API server observing them.
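The manifest directory is not hard-coded; the kubelet reads it from its config file. Assuming the kubeadm-style config location that kind uses (/var/lib/kubelet/config.yaml), you can check the staticPodPath field:
root@kind-control-plane:/# grep staticPodPath /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests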
Check kubelet:
root@kind-control-plane:/# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Sat 20XX-XX-XX 17:41:44 UTC; 1h 6min ago
To check kubelet logs: journalctl -u kubelet
Working with multiple clusters
Create another cluster using minikube.
$ minikube start
Now if you check your ~/.kube/config file, you will find 2 contexts:
$ yq ".contexts" ~/.kube/config
- context:
cluster: kind-kind
user: kind-kind
name: kind-kind
- context:
cluster: minikube
namespace: default
user: minikube
name: minikube
Basically, context = cluster + user. By choosing the context, you can talk to different clusters as different users.
The default context is now minikube:
$ yq ".current-context" ~/.kube/config
minikube
To talk to the kind cluster, add --context kind-kind:
$ kubectl get pod -A --context kind-kind
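If you would rather switch the default context than pass --context every time:
$ kubectl config get-contexts
$ kubectl config use-context kind-kind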
If you use k9s, type :ctx, then choose between the kind cluster and the minikube cluster.
To get onto the minikube cluster node:
$ minikube ssh
Your turn: choose a different driver to start a minikube cluster, and compare it with the kind cluster. See https://minikube.sigs.k8s.io/docs/drivers/
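For example, assuming you have VirtualBox installed (the profile name minikube-vbox here is just an illustration), you could start a second minikube cluster under its own profile with a different driver:
$ minikube start --driver=virtualbox -p minikube-vbox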
Next Step
Now that you are familiar with the out-of-the-box components of Kubernetes, the next step is to learn how to deploy your own workloads on it.