Your resources: compute, storage, networking.
HCI (Hyper-Converged Infrastructure): uses software to abstract, pool, and manage those resources (i.e. essentially another term for virtualized / software-defined compute, storage, and networking).
- virtualized computing: hypervisor
- virtualized storage: software defined storage (SDS)
- virtualized networking: software defined networking (SDN)
- Full Virtualization (HVM): uses a hypervisor, which directly communicates with a physical server's disk space and CPU. Each virtual server is independent and unaware of the other virtual servers.
- Para-Virtualization (PV): also uses a hypervisor, but each guest OS is modified to be aware that it is virtualized and that other virtual servers run on the same machine.
- OS-Level Virtualization (Containers): does not use a hypervisor. The virtualization capability is part of the physical server's OS (e.g. Linux namespaces/cgroups, as used by LXC and Docker).
- VMware: vSphere
Hypervisor = VMM (Virtual Machine Monitor; sometimes expanded as Virtual Machine Manager).
- Type-1: native or bare metal hypervisor
- hypervisor runs directly on the host's hardware.
  - e.g. Xen, Hyper-V (Windows; used by WSL 2), VMware ESXi (renamed from ESX; the "i" stands for "integrated")
- Type-2: hosted hypervisor
  - hypervisor runs on a conventional operating system; each guest operating system runs as a process on the host.
- e.g. VMware Workstation, VirtualBox, QEMU
KVM can be considered both Type-1 and Type-2: it turns the running Linux kernel itself into a bare-metal hypervisor, yet it is managed from within a full host OS.
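A quick way to relate this to a running Linux box: /proc/cpuinfo advertises vmx (Intel VT-x) or svm (AMD-V) when the CPU can host a hardware-assisted hypervisor, and a synthetic "hypervisor" flag when the kernel is itself running as a guest. A minimal sketch (pure text parsing, so it also works on a saved copy of the file):

```python
# Sketch, assuming the Linux /proc/cpuinfo format ("flags : ..." line).
def cpu_virt_info(cpuinfo_text: str) -> dict:
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break
    return {
        "hw_virt": bool({"vmx", "svm"} & flags),  # CPU can host a hypervisor
        "is_guest": "hypervisor" in flags,        # we are inside some VM
    }

# On a real host you would pass open("/proc/cpuinfo").read() instead.
sample = "processor : 0\nflags\t\t: fpu vmx hypervisor\n"
print(cpu_virt_info(sample))  # {'hw_virt': True, 'is_guest': True}
```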
Some of the most popular hypervisors:
- Xen: an external hypervisor; it assumes control of the machine and divides resources among guests.
- KVM: part of Linux and uses the regular Linux scheduler and memory management. This means that KVM is much smaller and simpler to use; it also provides some features not available in Xen. For example, KVM can swap guests to disk in order to free RAM.
  - It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko.
- QEMU is the default VMM (Virtual Machine Manager) of KVM, but can be replaced. QEMU is a generic and open source machine emulator and virtualizer. The Android emulator is built on QEMU.
- KVM: kernel side; QEMU: userspace.
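The two-module layout can be checked on a Linux host by looking for kvm plus kvm_intel/kvm_amd in /proc/modules (the file lsmod reads). A small sketch that parses that format:

```python
# Sketch, assuming the /proc/modules format: "name size refcount deps state addr".
def kvm_modules_loaded(proc_modules_text: str) -> dict:
    loaded = {line.split()[0] for line in proc_modules_text.splitlines() if line.strip()}
    return {
        "core": "kvm" in loaded,                        # kvm.ko
        "cpu": bool({"kvm_intel", "kvm_amd"} & loaded), # processor-specific module
    }

# On a real host you would pass open("/proc/modules").read() instead.
sample = ("kvm_intel 380928 0 - Live 0x0000000000000000\n"
          "kvm 1036288 1 kvm_intel, Live 0x0000000000000000\n")
print(kvm_modules_loaded(sample))  # {'core': True, 'cpu': True}
```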
- Cloud Hypervisor: a special-purpose VMM; it doesn’t aim to be an all-purpose emulator (like QEMU), but targets only the cloud-workload use case.
- "Cloud Hypervisor is an open source Virtual Machine Monitor (VMM) implemented in Rust that focuses on running modern, cloud workloads, with minimal hardware emulation."
- Website: https://www.cloudhypervisor.org
- Source code: https://github.com/cloud-hypervisor/cloud-hypervisor
Used in clouds:
- Nitro Hypervisor: a lightweight hypervisor built on core KVM technology; used by newer EC2 instance types.
- GCE: KVM
- Cloud Run: gVisor
- Azure: Windows Hyper-V
- VMware: ESXi
- Oracle VM: Xen
- Red Hat: Red Hat Virtualization (RHV), based on KVM.
KVM has included LM (Live Migration) for over a decade.
KubeVirt has supported Live Migration out of the box since 2020 (see https://kubevirt.io/2020/Live-migration.html).
- Paravirtualization: guest OS knows that it is running on a hypervisor instead of base hardware, recognizes that other virtual machines are running on the same machine
- Hardware Virtual Machine (HVM): guest OS thinks that it is running directly on the hardware
Xen supports both virtualization types above (PV and HVM); Amazon EC2 historically ran on Xen and likewise offered both types.
Xen architecture:
- An OS or kernel called the hypervisor is installed directly on the hardware.
- Dom0 is the "privileged domain", which can issue commands to the hypervisor.
PV (Para-Virtualization):
- Stability/performance is close to that of real servers and hardware virtualization.
- Overhead is very low.
- Implementation is tough.
- Both the host & guest kernels have to be patched.
- Supports Linux only.
- Can’t change the OS options during install.
- Can’t compile and install a custom kernel.
HVM (Hardware-assisted Virtual Machine):
- Provides complete hardware isolation; the hardware provides support to run each OS independently.
- Can run Linux and Windows.
- Complete, secure hardware isolation.
- Closely resembles a physical server.
- Greater stability.
- Lower performance, because of the emulation overhead at the hardware level.
OpenFlow:
- a communications protocol that allows a (controller) server to tell network switches where to send packets
- used between the switch and the controller over a secure channel
- programs the data plane, allowing the control plane to scale separately from the data plane
- an enabler of SDN
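The control-plane/data-plane split can be illustrated with a toy flow table: the controller installs match→action rules, the switch forwards packets by table lookup, and a table miss is punted back to the controller. This is only a conceptual sketch; the class and field names are made up and it is not the real OpenFlow wire protocol:

```python
# Toy sketch of the match->action model that OpenFlow-style SDN uses.
class FlowTable:
    def __init__(self):
        self.rules = []  # (match_dict, action), in installation order

    def install(self, match, action):
        # What the controller (control plane) does: push a rule to the switch.
        self.rules.append((match, action))

    def forward(self, packet):
        # What the switch (data plane) does: pure table lookup, no routing logic.
        for match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"  # table miss -> ask the controller

table = FlowTable()
table.install({"dst": "10.0.0.2"}, "out_port_1")
table.install({"dst": "10.0.0.3"}, "out_port_2")
print(table.forward({"src": "10.0.0.9", "dst": "10.0.0.2"}))  # out_port_1
print(table.forward({"dst": "10.0.0.99"}))                    # send_to_controller
```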
Software-defined host networking stack: essentially an alternative to the kernel TCP/IP stack and the BSD stream sockets interface.
Redfish: a DMTF specification for RESTful hardware-management APIs (out-of-band server management; a modern successor to IPMI).
E.g. NetApp ONTAP: compute nodes are all Redfish-API-compatible.
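A Redfish client typically starts at the service root (/redfish/v1) and follows "@odata.id" links to collections such as Systems and Chassis. A sketch against a hand-written (not from a real BMC) service-root document:

```python
import json

# Trimmed, hand-written example of a Redfish service-root JSON document.
service_root = json.loads("""
{
  "@odata.id": "/redfish/v1",
  "Name": "Root Service",
  "Systems": {"@odata.id": "/redfish/v1/Systems"},
  "Chassis": {"@odata.id": "/redfish/v1/Chassis"}
}
""")

def link(resource: dict, member: str) -> str:
    # Follow a Redfish navigation property to the URL of its collection.
    return resource[member]["@odata.id"]

print(link(service_root, "Systems"))  # /redfish/v1/Systems
```

Against real hardware, the same traversal would be done over HTTPS (GET each "@odata.id" URL), which is omitted here to keep the sketch self-contained.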