
Virtualization

Last Updated: 2023-02-14

Hyperconverged Infrastructure (HCI)

Your resources: compute, storage, networking.

HCI: uses software to abstract, pool, and manage those resources (i.e. just another term for virtualized / software-defined compute, storage, and networking).

  • virtualized computing: hypervisor
  • virtualized storage: software defined storage (SDS)
  • virtualized networking: software defined networking (SDN)

Computation / Server Virtualization

3 kinds:

  • Full Virtualization (HVM): uses a hypervisor that communicates directly with the physical server's disk and CPU. Each virtual server is independent and unaware of the other virtual servers.
  • Para-Virtualization (PV): uses a hypervisor, but each guest OS is modified to know it is virtualized, and the guests are aware of one another.
  • OS-Level Virtualization (Containers): does not use a hypervisor. The virtualization capability is part of the host OS kernel (e.g. cgroups and namespaces on Linux); see the sketch below.
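
A minimal sketch of the kernel primitives containers build on, assuming a Linux host with util-linux and cgroup v2 (paths and limits below are illustrative):

$ sudo unshare --fork --pid --mount-proc bash    # new PID + mount namespaces
$ ps aux                                         # inside: this bash is PID 1
$ sudo mkdir /sys/fs/cgroup/demo                 # cgroup v2: create a group
$ echo "50000 100000" | sudo tee /sys/fs/cgroup/demo/cpu.max   # cap at 50% of one CPU
# (the cpu controller may first need enabling via the parent's cgroup.subtree_control)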

Examples:

  • VMware: vSphere

Hypervisor

Hypervisor = Virtual Machine Monitor (VMM), also called a Virtual Machine Manager.

2 Types:

  • Type-1: native or bare metal hypervisor
    • hypervisor runs directly on the host's hardware.
    • e.g. Xen, Hyper-V (Windows; used by WSL 2), VMware ESXi (renamed from ESX; the "i" stands for "integrated")
  • Type-2: hosted hypervisor
    • hypervisor runs on a conventional operating system; each guest operating system runs as a process on the host.
    • e.g. VMware Workstation, VirtualBox, QEMU

KVM can be considered both a Type-1 and a Type-2 hypervisor: the kernel module runs on bare metal, while the userspace VMM runs as an ordinary host process.
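
A quick way to see both halves on a Linux host (illustrative; the vendor module depends on the CPU):

$ grep -Ec 'vmx|svm' /proc/cpuinfo   # >0 means Intel VT-x / AMD-V is present
$ lsmod | grep kvm                   # kernel side: kvm plus kvm_intel/kvm_amd
$ ls -l /dev/kvm                     # device node opened by userspace VMMs (QEMU etc.)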

Some of the most popular hypervisors:

  • Xen: an external hypervisor; it assumes control of the machine and divides resources among guests.
  • KVM: part of Linux and uses the regular Linux scheduler and memory management. This means that KVM is much smaller and simpler to use; it also provides some features not available in Xen. For example, KVM can swap guests to disk in order to free RAM.
    • It consists of a loadable kernel module, kvm.ko, which provides the core virtualization infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko.
    • QEMU is the default VMM (Virtual Machine Manager) of KVM, but can be replaced. QEMU is a generic and open source machine emulator and virtualizer. The Android emulator is built on QEMU.
    • KVM: kernel side; QEMU: userspace. QEMU can use KVM and the host CPU's virtualization extensions for acceleration: $ qemu-system-x86_64 -accel kvm ... (fuller example after this list)
    • The kernel component of KVM is included in mainline Linux, as of 2.6.20. The userspace component of KVM is included in mainline QEMU, as of 1.3.
  • Cloud Hypervisor: a special-purpose VMM; it doesn't aim to be a full-featured machine emulator (like QEMU), but focuses only on the cloud-workload use case.
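
For example, a minimal KVM-accelerated QEMU invocation might look like this (disk.qcow2 is a placeholder image; use -enable-kvm on older QEMU versions):

$ qemu-system-x86_64 -accel kvm -cpu host -smp 2 -m 2048 \
    -drive file=disk.qcow2,format=qcow2 -nic user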

Used in clouds:

  • AWS:
    • Xen
    • Nitro Hypervisor: a lightweight hypervisor based on KVM; used for newer EC2 instance types.
  • Google:
    • GCE: KVM
    • Cloud Run: gVisor
  • Azure: Windows Hyper-V
  • VMware: ESXi
  • Oracle VM: Xen
  • Red Hat: Red Hat Virtualization (RHV), based on KVM.

Live Migration

KVM has included Live Migration (LM) for over a decade.
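
With libvirt-managed KVM guests, a live migration is a one-liner (the guest and destination host names are placeholders):

$ virsh migrate --live --verbose guest1 qemu+ssh://dest-host/system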

KubeVirt has supported Live Migration out of the box since CY2020 (see https://kubevirt.io/2020/Live-migration.html).
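
Roughly, per the linked KubeVirt docs, a migration is triggered by posting a VirtualMachineInstanceMigration object ("my-vmi" is a placeholder VMI name):

$ kubectl apply -f - <<EOF
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-my-vmi
spec:
  vmiName: my-vmi
EOF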

Azure: https://docs.microsoft.com/en-us/azure/virtual-machines/maintenance-and-updates

Paravirtualization (PV) vs Hardware Virtual Machine (HVM)

  • Paravirtualization: the guest OS knows that it is running on a hypervisor instead of on bare hardware, and recognizes that other virtual machines are running on the same machine
  • Hardware Virtual Machine (HVM): the guest OS thinks that it is running directly on the hardware

Xen supports both virtualization types; Amazon EC2 offers both as well, since it (historically) ran on Xen.
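
On AWS, an AMI's virtualization type can be checked with the CLI (the AMI ID below is a placeholder; the result is either "hvm" or "paravirtual"):

$ aws ec2 describe-images --image-ids ami-0123456789abcdef0 \
    --query 'Images[].VirtualizationType' --output text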

PV

  • A thin OS/kernel, the hypervisor, is installed directly on the hardware.
  • Dom0 is the "privileged domain", which can issue commands to the hypervisor; guest domains are called DomU.
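
On Dom0, the xl toolstack is the usual way to talk to the Xen hypervisor (commands are illustrative):

$ sudo xl info   # hypervisor version, total memory, capabilities
$ sudo xl list   # Domain-0 plus any running guest domains (DomU)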

Pros

  • Stability and performance are close to those of real servers and of hardware virtualization.
  • Overhead is very low

Cons

  • Implementation is difficult.
  • Both the host and guest kernels have to be patched.
  • Supports Linux guests only (the guest kernel must be modified).
  • Can't change the OS options during install.
  • Can't compile and install a custom kernel.

HVM

  • Stands for Hardware-assisted virtual machine.
  • Provides complete hardware isolation; the hardware (e.g. Intel VT-x / AMD-V) provides support for each guest OS to run independently and unmodified.

Pros

  • Can run Linux and Windows
  • Complete secure hardware isolation
  • Closely resembles a physical server.
  • Greater stability

Cons

  • Lower performance, because of the virtualization overhead at the hardware level.

Software Defined Networking (SDN)

OpenFlow

  • a communications protocol that allows a server (controller) to tell network switches where to send packets
  • used between the switch and the controller over a secure channel
  • programs the data plane, allowing the control plane to scale separately from the data plane
  • an enabler of SDN (example below)
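
A small Open vSwitch sketch (br0 and the port numbers are placeholders); ovs-ofctl speaks OpenFlow to the switch:

$ sudo ovs-ofctl add-flow br0 "in_port=1,actions=output:2"   # forward port 1 -> port 2
$ sudo ovs-ofctl dump-flows br0                              # inspect installed flows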

Software-defined host networking stack: essentially an alternative to the kernel TCP/IP stack and the BSD stream sockets interface.

Software Defined Data Center (SDDC)

Redfish: a DMTF specification for RESTful APIs that manage servers and data-center infrastructure.

E.g. NetApp ONTAP: compute nodes are all Redfish-API compatible.
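
A Redfish service is an HTTPS tree rooted at /redfish/v1; a quick sketch with placeholder BMC host and credentials:

$ curl -k -u admin:password https://bmc.example.com/redfish/v1/          # service root
$ curl -k -u admin:password https://bmc.example.com/redfish/v1/Systems   # compute systems collection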

Virtual Firewall

Palo Alto Networks vsys: each virtual system (vsys) is an independent, separately managed firewall whose traffic is kept separate from the traffic of other virtual systems.