Ceph Cheatsheet

Last Updated: 2024-01-28


# Check status
ceph -s
ceph status

# Help
ceph -h

If Ceph is deployed outside the Kubernetes cluster (running directly on the host):

# Check status (packaged Ceph groups all daemons under ceph.target)
$ systemctl status ceph.target

# Restart all Ceph daemons (or a single one, e.g. ceph-osd@0)
$ systemctl restart ceph.target

# Check logs for all Ceph units
$ journalctl -u 'ceph*'

Related Objects:

apiVersion: storage.k8s.io/v1
kind: CSIDriver

apiVersion: ceph.rook.io/v1
kind: CephCluster

The CephCluster resource's status shows the same info as ceph -s.
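To inspect those objects from kubectl (assuming Rook's default rook-ceph namespace):

```shell
# List the Rook CephCluster and its reported health
kubectl -n rook-ceph get cephcluster

# Full status (mirrors what ceph -s reports)
kubectl -n rook-ceph get cephcluster -o yaml

# CSI drivers registered in the cluster (cluster-scoped resource)
kubectl get csidriver
```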

Ceph vs Rook

  • Ceph is the storage system.
  • Rook is the storage operator: it automates the tasks of a storage administrator: deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management.

Block, FS, Object

RADOS: Reliable Autonomous Distributed Object Store. A low-level data store that provides a common backend for multiple user-consumable services. Ceph's foundation.

  • RBD: RADOS Block Device. Block storage that can be attached to pods, usually RWO (ReadWriteOnce).
  • OSD: Object Storage Daemon. The ceph-osd daemons that provide the actual RADOS storage.
  • CephFS: shared filesystem volumes, usually RWX (ReadWriteMany).


  • Object Storage Devices (OSDs). Historically (FileStore), an OSD was a directory within an existing filesystem, with a flat hierarchy: files with UUID-style names and no subfolders. Modern OSDs (BlueStore) write directly to raw block devices. All OSDs together form the object store proper, holding the binary objects RADOS generates from the data to be stored. ceph-osd is the object storage daemon.
  • Ceph Monitor (MON): MONs maintain the authoritative cluster map and form a quorum (via Paxos) so the cluster survives monitor failures. Clients fetch the map from a MON and then talk to OSDs directly.
  • Ceph Manager (MGR): metrics, the dashboard, and orchestration modules; runs alongside the MONs.
  • Ceph Metadata Server (MDS): stores the filesystem metadata for CephFS.
  • Ceph Object Gateway (RGW): HTTP gateway exposing S3- and Swift-compatible object storage.
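Each daemon type can be inspected with the ceph CLI; a quick tour:

```shell
# MONs: quorum membership and status
ceph mon stat

# MGR: active and standby managers
ceph mgr stat

# OSDs: tree of hosts and OSDs with up/down, in/out state
ceph osd tree

# MDS / CephFS: filesystem and metadata server status
ceph fs status
```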

Deployment in Kubernetes

2 ways to use Ceph

  • (in-cluster) as a containerized workload: 3x replication, HA.
  • (external) non-containerized: runs directly on the machines, managed by systemd.

Either way

  • rook-ceph-operator will manage daemonsets like csi-rbdplugin.
  • default namespace: rook-ceph.

In cluster

(Ceph requires at least 3 nodes to run in containerized mode, so the MONs can form a quorum.)

  • rook-ceph-operator
  • rook-ceph-mgr
  • rook-ceph-mon
  • rook-ceph-osd
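To see these components in a running in-cluster deployment (assuming the default rook-ceph namespace and Rook's usual `app` labels):

```shell
# All Rook/Ceph pods
kubectl -n rook-ceph get pods

# Filter by component, e.g. only the OSD pods
kubectl -n rook-ceph get pods -l app=rook-ceph-osd
```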


An external cluster is a Ceph configuration that is managed outside of the local K8s cluster. The external cluster could be managed by cephadm.

How to check if Ceph is external? Check if CephCluster has .spec.external.enable: true.
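A one-liner for that check (assuming the CephCluster lives in the default rook-ceph namespace):

```shell
# Prints "true" for an external cluster; empty/false otherwise
kubectl -n rook-ceph get cephcluster \
  -o jsonpath='{.items[*].spec.external.enable}'
```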

If external, you will only find this in the cluster:

  • rook-ceph-operator


If you see issues provisioning a PVC, check the network connectivity from the provisioner pods:

  • For CephFS PVCs, check network connectivity from the csi-cephfsplugin container of the csi-cephfsplugin-provisioner pods
  • For Block PVCs, check network connectivity from the csi-rbdplugin container of the csi-rbdplugin-provisioner pods
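A sketch of that connectivity check against a MON endpoint, assuming the standard Rook provisioner deployment names and that a shell plus nc are available in the container images (replace <mon-ip> with a real monitor address):

```shell
# For Block PVCs: from the csi-rbdplugin container of the RBD provisioner
kubectl -n rook-ceph exec deploy/csi-rbdplugin-provisioner -c csi-rbdplugin -- \
  sh -c 'nc -zvw5 <mon-ip> 3300'   # 3300 = msgr2; 6789 = legacy msgr1

# For CephFS PVCs: same check from the CephFS provisioner
kubectl -n rook-ceph exec deploy/csi-cephfsplugin-provisioner -c csi-cephfsplugin -- \
  sh -c 'nc -zvw5 <mon-ip> 3300'
```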

Placement Group

PG = “placement group”. When placing data in the cluster, objects are mapped into PGs, and those PGs are mapped onto OSDs.

Why the indirection? "so that we can group objects, which reduces the amount of per-object metadata we need to keep track of and processes we need to run".
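The object → PG → OSD mapping can be observed directly; `<pool>` and `<object-name>` below are placeholders for your own pool and object:

```shell
# PG summary: counts and states across the cluster
ceph pg stat

# Pools with their pg_num settings
ceph osd pool ls detail

# Show which PG a given object maps to, and which OSDs hold that PG
ceph osd map <pool> <object-name>
```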

Source Code