Kubernetes - KubeVirt
What is KubeVirt?
KubeVirt manages VMs in k8s: it enables KVM-based virtual machine workloads to be managed as pods in Kubernetes.
KubeVirt reached 1.0 in 2023. https://kubevirt.io/
Why KubeVirt?
- For teams that want to adopt k8s but still have legacy VM-based workloads.
- Cost savings, from eliminating hypervisor licenses and from more efficient resource utilization across containers and VMs.
What does KubeVirt offer?
- Auto-provisioning of new volumes via k8s CSI.
- Containerized Data Importer (CDI) copies data from a source into the boot disk (supported sources: HTTP endpoint, container registry, clone of another PVC, upload from a client, etc.).
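For example, a hedged sketch of a DataVolume that imports a boot disk from an HTTP source (the URL, name, and size are illustrative):
$ cat <<EOF | kubectl apply -f -
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: fedora-dv
spec:
  source:
    http:
      url: "https://example.com/images/fedora.qcow2"  # hypothetical image URL
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
EOF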
Components
- Control plane: virt-controller, virt-api.
- Worker node: virt-handler, a DaemonSet.
- Per VMI: qemu -> libvirtd -> virt-launcher.
- CLI: virtctl.
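To see these components on a running cluster (assuming the default kubevirt install namespace):
$ kubectl get pods -n kubevirt
# Typical output includes virt-api, virt-controller, and virt-operator pods,
# plus one virt-handler pod per schedulable node (the DaemonSet)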
APIs (Group: kubevirt.io/v1)
- KubeVirt: the top-level CR; holds configuration and can be used to check whether all components are ready.
- VirtualMachine:
  - Contains the template to create the VirtualMachineInstance.
  - In contrast to VirtualMachineInstances, it has a running state.
- VirtualMachineInstance:
  - Every VirtualMachineInstance represents a single virtual machine instance.
  - No running state: every VMI that is defined in the cluster is expected to be running; deleting a VMI is equivalent to shutting it down.
- DataVolume: monitors and orchestrates the import/upload/clone of data into a PVC.
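A minimal VirtualMachine, as a sketch close to the upstream quickstart example (the cirros demo container disk and the name testvm are illustrative):
$ cat <<EOF | kubectl apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: false                # the VirtualMachine's running state; VMIs have none
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 64Mi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
EOF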
How does KubeVirt work?
Under the hood, KubeVirt uses KVM + QEMU + libvirt, with QEMU confined to a cgroup (i.e. a container is used as a sandbox for running the virtual machine / QEMU process).
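One way to see this sandboxing, assuming a VM named testvm is running (the launcher pod suffix is illustrative, and ps must be present in the image):
# Find the launcher pod, then look for the QEMU process in its compute container
$ kubectl get pods | grep virt-launcher
$ kubectl exec virt-launcher-testvm-abcde -c compute -- ps aux
# Expect a qemu process whose arguments describe the VM's devices and disks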
TL;DR: VirtualMachine -(owns)-> VirtualMachineInstance -(owns)-> virt-launcher pod -(creates)-> the VM.
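This ownership chain can be checked on a live cluster; KubeVirt registers the vm and vmi short names (the VMI name testvm is illustrative):
$ kubectl get vm,vmi,pods
$ kubectl get vmi testvm -o jsonpath='{.metadata.ownerReferences[0].kind}'
# -> VirtualMachine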
Details:
- Users create VirtualMachines as the input. virt-controller creates a corresponding VirtualMachineInstance once a VirtualMachine is set to start.
- For each VMI, a pod is created; KubeVirt launches QEMU in a container using virt-launcher. QEMU is the actual process that gives you a VM.
- The flow: virt-controller -> API server -> virt-handler DaemonSet -> virt-launcher Pod.
- How to keep a VM alive if it lives in a pod? Live migration to another pod, as sketched below.
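A minimal sketch of triggering such a live migration (the VM name is illustrative, and the VM must be migratable, e.g. not backed by non-shared local storage):
$ virtctl migrate testvm
# During the migration, a second virt-launcher pod for the same VMI appears on the target node:
$ kubectl get pods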
Where
KubeVirt can run on a bare-metal cluster, or on public cloud instances (VMs on VMs, i.e. nested virtualization, with a negative performance impact).
User Interfaces
No UI, only CLI or API.
VM vs Pod
- A VM needs a static IP address and a static MAC address, and needs L2 connectivity to the external network.
- KubeVirt deploys the VM inside a pod; the VM uses a macvtap interface to get a direct connection to the physical network, along with a static MAC address and static IP address (see the sketch after this list).
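A hedged sketch of what this looks like in a VM template spec. It assumes a Multus NetworkAttachmentDefinition named macvtapnet exists, and that the macvtap binding is available (it was a built-in binding on older KubeVirt releases and is a network binding plugin on newer ones):
domain:
  devices:
    interfaces:
      - name: hostnetwork
        macvtap: {}                      # assumption: macvtap binding enabled in this cluster
        macAddress: "02:00:00:00:00:01"  # pin a static MAC for the VM
networks:
  - name: hostnetwork
    multus:
      networkName: macvtapnet            # hypothetical NetworkAttachmentDefinition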
Extra
- Snapshot Controller to provide snapshot capabilities to the VMs and referenced volumes
- Containerized Data Importer (CDI) to facilitate enabling persistent volume claims (PVCs) to be used as disks for VMs (as DataVolumes).
- Multus to provide virtual local area network (VLAN) network access to virtual machines
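For the snapshot piece, a minimal sketch (it assumes the snapshot CRDs are deployed; the API version has moved between v1alpha1 and v1beta1 across releases):
$ cat <<EOF | kubectl apply -f -
apiVersion: snapshot.kubevirt.io/v1beta1  # may be v1alpha1 on older installs
kind: VirtualMachineSnapshot
metadata:
  name: snap-testvm
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: testvm
EOF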
Containerized-Data-Importer (CDI)
Containerized-Data-Importer (CDI) is a persistent storage management add-on for Kubernetes. Its primary goal is to provide a declarative way to build virtual machine disks on PVCs for KubeVirt VMs.
CDI provides the ability to populate PVCs with VM images or other data upon creation.
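The "upload from a client" source maps to virtctl image-upload; a hedged example (the DataVolume name, size, and image path are illustrative, and --insecure may be needed against a self-signed upload proxy):
$ virtctl image-upload dv uploaded-dv --size=10Gi --image-path=/tmp/disk.qcow2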
CLI
virsh is the CLI for libvirt (already installed in the compute container of the virt-launcher pod); virtctl is the CLI for KubeVirt.
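Since virsh ships in the compute container, you can inspect the libvirt domain directly (the launcher pod name is illustrative):
$ kubectl exec -it virt-launcher-testvm-abcde -c compute -- virsh list --all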
Install KubeVirt's virtctl via Krew:
$ kubectl krew install virt
virtctl can be used to manipulate the VirtualMachine CR (e.g. to start and stop a VirtualMachine):
# Start the virtual machine:
$ virtctl start vm
# Stop the virtual machine:
$ virtctl stop vm
It will make sure that a VirtualMachineInstance is removed from the cluster if spec.running is set to false.
The serial console of a virtual machine can be accessed by using the console command:
$ virtctl console testvm
To access the graphical console of a virtual machine, the VNC protocol is typically used. This requires remote-viewer to be installed. Once the tool is installed, you can access the graphical console using:
$ virtctl vnc testvm
If you only want to open a VNC proxy without executing the remote-viewer command, it can be accomplished with:
$ virtctl vnc --proxy-only testvm