Kubernetes - Container Runtimes
Think of "container" as just another packaging format.
.iso files for disk images,
.rpm for Linux packages, or
.tgz for binaries or other arbitrary files.
The ecosystem is more than just a format; it also includes the runtimes, specs, and tooling described below.
Unlike traditional virtualization, containerization takes place at the kernel level. Most modern operating system kernels now support the primitives necessary for containerization, including Linux with Linux-VServer and, more recently, cgroups and namespaces.
A container image is a tar file containing tar files; each inner tar file is a layer.
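The tar-of-tars idea can be sketched by hand with plain `tar` (illustrative only; a real image also carries JSON manifests and a config, which are omitted here):

```shell
set -eu
work=$(mktemp -d)
mkdir -p "$work/layer1/etc" "$work/layer2/app"
echo "hello" > "$work/layer1/etc/motd"
echo "bin"   > "$work/layer2/app/server"
# Each layer is itself a tar file of a filesystem slice...
tar -C "$work/layer1" -cf "$work/layer1.tar" .
tar -C "$work/layer2" -cf "$work/layer2.tar" .
# ...and the "image" is a tar file containing those layer tars.
tar -C "$work" -cf "$work/image.tar" layer1.tar layer2.tar
tar -tf "$work/image.tar"
```

Listing the outer tar shows only the layer tars, which is essentially what `docker save` output looks like (plus the metadata files).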
Read more: Containers vs VMs
TL;DR: OCI vs CRI
- OCI for the low-level specs (think of container images and runtimes, e.g. runc)
- CRI for the high-level interface (think of k8s, i.e. the kubelet talking to the runtime)
OCI defines the important specs, so images can be packed/unpacked by different tools and run by different runtimes:
- the Runtime Specification (runtime-spec)
- the Image Specification (image-spec)
- the Distribution Specification (distribution-spec)
runc (https://github.com/opencontainers/runc) is a CLI tool for spawning and running containers according to the OCI specification.
CRI defines an API between Kubernetes and the container runtime (which itself follows the OCI specs).
- Docker: an open-source Linux containerization technology; a packaging, distribution, and runtime solution.
- containerd: the container daemon. Docker spun out its container runtime and donated it to the CNCF; containerd is now a graduated CNCF project. It uses runc as its low-level runtime and is used by Docker, Kubernetes, AWS ECS, etc.
- cgroup: limits and isolates resources (CPU, memory, disk I/O, network, etc.)
- gVisor: a user-space kernel for containers. It limits the host kernel surface accessible to the application while still giving the application access to all the features it expects. It leverages existing host kernel functionality and runs as a normal user-space process. For running untrusted workloads. Lower memory and startup overhead compared to a full VM.
In 2020, Kubernetes deprecated Docker as a container runtime (after version 1.20), in favor of runtimes that use the Container Runtime Interface (CRI), such as containerd and CRI-O. (Note that Docker is still a useful tool for building containers, and the images produced by docker build still run in your Kubernetes cluster.)
- runc: the low-level container runtime (the thing that actually creates and runs containers). It includes libcontainer, a native Go-based implementation for creating containers. Docker donated runc to the OCI.
- containerd: CNCF graduated project; contributors include Google, Microsoft, Alibaba, etc. It came from Docker and was made CRI-compliant.
- CRI-O: CNCF incubating project; contributors include Red Hat, IBM, Intel, etc. It was created from the ground up for Kubernetes.
Docker's default runtime:
$ docker run --runtime=runc ...
gVisor can be integrated with Docker by changing the runtime to runsc ("run sandboxed container"):
$ docker run --runtime=runsc ...
gVisor runs slower than Docker's default runtime due to the sandboxing overhead: https://github.com/google/gvisor/issues/102
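For reference, registering runsc with Docker is typically done by adding a runtime entry to /etc/docker/daemon.json and restarting the Docker daemon (the binary path below is an assumption; use wherever runsc is actually installed):

```json
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc"
    }
  }
}
```

After that, `docker run --runtime=runsc ...` works as shown above.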
- Linux Containers (LXC): built on top of cgroups; operating system-level virtualization technology for running multiple isolated Linux systems (containers) on a single control host.
- cgroups: limit, account for, and isolate resource usage (CPU, memory, disk I/O, etc.) of process groups; the identifier isolation itself comes from kernel namespaces.
- LXD: similar to LXC, but adds a REST API on top of liblxc.
- Docker: application containers; LXC/LXD: system containers. Docker initially used LXC but later changed to its own libcontainer.
Well, containerization is gaining momentum and popularity, and many companies are adopting it.
Two notable exceptions are Google and Facebook.
Google has its own packaging format, MPM. MPM on Borg is roughly what a container is on Kubernetes, and Kubernetes is the open-source version of Borg.
Facebook uses Tupperware. Why not Docker? Docker didn't exist when Tupperware was created.
- defines the main gRPC protocol for the communication between the kubelet and the container runtime
- implemented by container runtimes (e.g. containerd and CRI-O)
- CRI = RuntimeService + ImageService: https://github.com/kubernetes/cri-api/blob/master/pkg/apis/runtime/v1/api.proto
- The kubelet acts as a client when connecting to the container runtime via gRPC. The runtime and image service endpoints have to be available in the container runtime.
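The RuntimeService/ImageService split can be sketched as an abridged protobuf IDL. The RPC names below do exist in the real v1 api.proto linked above, but messages, options, and most RPCs are elided here:

```protobuf
// Abridged sketch of the two CRI services (not the full api.proto).
service RuntimeService {
  rpc Version(VersionRequest) returns (VersionResponse) {}
  rpc RunPodSandbox(RunPodSandboxRequest) returns (RunPodSandboxResponse) {}
  rpc CreateContainer(CreateContainerRequest) returns (CreateContainerResponse) {}
  rpc StartContainer(StartContainerRequest) returns (StartContainerResponse) {}
}

service ImageService {
  rpc ListImages(ListImagesRequest) returns (ListImagesResponse) {}
  rpc PullImage(PullImageRequest) returns (PullImageResponse) {}
}
```

The kubelet is the gRPC client of both services; the runtime is the server.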
View containerd logs: journalctl -u containerd
crictl images is equivalent to:
ctr -n=k8s.io images ls
kind load invokes: ctr --namespace=k8s.io images import --digests --snapshotter=<snapshotter> -
ctr: containerd's own CLI; not related to k8s.
crictl: CLI for CRI-compatible container runtimes; related to k8s.
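crictl finds the runtime through its config file; pointing it at containerd's CRI socket typically looks like the fragment below (the socket path is a common default and may differ per distro):

```yaml
# /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
```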
An OCI bundle can be loaded into Harbor without further processing.
- Deduplication of image layers across releases, saving space.
- File verification through SHA digests stored in the index and manifests, guarding against file corruption.
- Listing the image manifest of the bundle (before storing) gives full transparency to the customer.
- OCI image bundle can be nested.
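The digest-verification idea above can be sketched with plain `sha256sum` (illustrative only; registries do the equivalent when serving blobs):

```shell
set -eu
store=$(mktemp -d)
printf 'layer bytes' > "$store/blob"
digest=$(sha256sum "$store/blob" | cut -d' ' -f1)
mv "$store/blob" "$store/$digest"   # store the blob under its own digest
# Verification: recompute the digest and compare it against the name.
actual=$(sha256sum "$store/$digest" | cut -d' ' -f1)
[ "$actual" = "$digest" ] && echo "blob OK"
# Flip a byte and the recomputed digest no longer matches the name.
printf 'X' >> "$store/$digest"
corrupt=$(sha256sum "$store/$digest" | cut -d' ' -f1)
[ "$corrupt" != "$digest" ] && echo "corruption detected"
```

Because the blob's name *is* its digest, any corruption is detectable without extra metadata.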
- Root level must have an oci-layout file specifying the layout version, plus an index.json (the entry point listing the manifests)
- Artifacts are stored in the blobs/ directory, keyed by digest (blobs/<alg>/<digest>); MediaType is flexible (some blobs are JSON, some are binary)
- References are all done by digest
- "file whose digest is ..." instead of "file whose name is ..."
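The on-disk shape of such a layout can be sketched with coreutils (illustrative only; the index.json here is drastically simplified compared to the real image-spec schema):

```shell
set -eu
img=$(mktemp -d)
# Root level: the oci-layout version file.
printf '{"imageLayoutVersion":"1.0.0"}' > "$img/oci-layout"
mkdir -p "$img/blobs/sha256"
# Store a blob under blobs/sha256/<digest-of-its-content>.
printf 'pretend manifest bytes' > "$img/m"
d=$(sha256sum "$img/m" | cut -d' ' -f1)
mv "$img/m" "$img/blobs/sha256/$d"
# index.json refers to the blob by digest, not by name.
printf '{"schemaVersion":2,"manifests":[{"digest":"sha256:%s"}]}' "$d" > "$img/index.json"
find "$img" -type f | sort
```

Every edge in the resulting Merkle DAG is a digest, which is what makes the dedup, immutability, and acyclicity properties below fall out for free.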
- Merkle DAG (Directed Acyclic Graph)
- content dedup by digest
- immutable, tamper-proof
- no circular dependency
OCI Image -> OCI Runtime Spec bundle
- Start with an OCI image
- Apply the filesystem layers in the order specified in the manifest
- The OCI Runtime Spec bundle is formed; runc now has enough information to run the container
- runc applies cgroups, namespaces, etc. on the Linux host
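The layer-application step can be sketched with plain `tar` (illustrative only; a real bundle also needs a config.json next to rootfs/ for runc, and real runtimes use overlay snapshots rather than extracting in place):

```shell
set -eu
w=$(mktemp -d)
mkdir -p "$w/l1" "$w/l2" "$w/bundle/rootfs"
echo "v1" > "$w/l1/config.txt"
echo "v2" > "$w/l2/config.txt"        # same path in a later layer
tar -C "$w/l1" -cf "$w/layer1.tar" .
tar -C "$w/l2" -cf "$w/layer2.tar" .
# Apply layers in manifest order; later layers overwrite earlier ones.
tar -C "$w/bundle/rootfs" -xf "$w/layer1.tar"
tar -C "$w/bundle/rootfs" -xf "$w/layer2.tar"
cat "$w/bundle/rootfs/config.txt"     # prints "v2": the later layer wins
```

The resulting bundle/rootfs/ directory is the root filesystem the runtime pivots the container into.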