diff --git a/images/screenshot/gui-ceph-config.png b/images/screenshot/gui-ceph-config.png
new file mode 100644
index 0000000..4e246ca
Binary files /dev/null and b/images/screenshot/gui-ceph-config.png differ
diff --git a/images/screenshot/gui-ceph-disks.png b/images/screenshot/gui-ceph-disks.png
new file mode 100644
index 0000000..d12d301
Binary files /dev/null and b/images/screenshot/gui-ceph-disks.png differ
diff --git a/images/screenshot/gui-ceph-log.png b/images/screenshot/gui-ceph-log.png
new file mode 100644
index 0000000..ee7a268
Binary files /dev/null and b/images/screenshot/gui-ceph-log.png differ
diff --git a/images/screenshot/gui-ceph-monitor.png b/images/screenshot/gui-ceph-monitor.png
new file mode 100644
index 0000000..c8031d9
Binary files /dev/null and b/images/screenshot/gui-ceph-monitor.png differ
diff --git a/images/screenshot/gui-ceph-osd-status.png b/images/screenshot/gui-ceph-osd-status.png
new file mode 100644
index 0000000..a4b8578
Binary files /dev/null and b/images/screenshot/gui-ceph-osd-status.png differ
diff --git a/images/screenshot/gui-ceph-pools.png b/images/screenshot/gui-ceph-pools.png
new file mode 100644
index 0000000..0b0b8c3
Binary files /dev/null and b/images/screenshot/gui-ceph-pools.png differ
diff --git a/images/screenshot/gui-ceph-status.png b/images/screenshot/gui-ceph-status.png
new file mode 100644
index 0000000..cf3eb0f
Binary files /dev/null and b/images/screenshot/gui-ceph-status.png differ
diff --git a/pveceph.adoc b/pveceph.adoc
index 4d270d4..41e28e5 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -23,6 +23,8 @@ Manage Ceph Services on Proxmox VE Nodes
 :pve-toplevel:
 endif::manvolnum[]
 
+[thumbnail="gui-ceph-status.png"]
+
 {pve} unifies your compute and storage systems, i.e. you can use the
 same physical nodes within a cluster for both computing (processing
 VMs and containers) and replicated storage. The traditional silos of
@@ -76,6 +78,8 @@ This sets up an `apt` package repository in
 Creating initial Ceph configuration
 -----------------------------------
 
+[thumbnail="gui-ceph-config.png"]
+
 After installation of packages, you need to create an initial Ceph
 configuration on just one node, based on your network (`10.10.10.0/24`
 in the following example) dedicated for Ceph:
@@ -95,6 +99,8 @@ Ceph commands without the need to specify a configuration file.
 Creating Ceph Monitors
 ----------------------
 
+[thumbnail="gui-ceph-monitor.png"]
+
 On each node where a monitor is requested (three monitors are
 recommended) create it by using the "Ceph" item in the GUI or run.
 
@@ -108,6 +114,8 @@ pveceph createmon
 Creating Ceph OSDs
 ------------------
 
+[thumbnail="gui-ceph-osd-status.png"]
+
 via GUI or via CLI as follows:
 
 [source,bash]
@@ -158,6 +166,8 @@ highly recommended if you expect good performance.
 Ceph Pools
 ----------
 
+[thumbnail="gui-ceph-pools.png"]
+
 The standard installation creates per default the pool 'rbd',
 additional pools can be created via GUI.
 
@@ -165,6 +175,8 @@ additional pools can be created via GUI.
 Ceph Client
 -----------
 
+[thumbnail="gui-ceph-log.png"]
+
 You can then configure {pve} to use such pools to store VM or
 Container images. Simply use the GUI too add a new `RBD` storage (see
 section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).
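
The sections of pveceph.adoc touched by this patch describe a CLI workflow alongside the new GUI screenshots. As a rough sketch of that sequence: `pveceph init` and `pveceph createmon` appear in the patched documentation itself; the `createosd` invocation and the disk name `/dev/sdb` are assumptions for illustration, not part of this patch.

```shell
# Create the initial Ceph configuration on just ONE node, using a
# dedicated Ceph network (10.10.10.0/24 is the docs' example network).
pveceph init --network 10.10.10.0/24

# On each node that should run a monitor (three are recommended):
pveceph createmon

# Create an OSD per data disk, per node; /dev/sdb is a placeholder --
# substitute the actual device, and note this wipes the disk.
pveceph createosd /dev/sdb
```

These commands must run as root on a Proxmox VE node; the GUI screens added by this patch (Config, Monitor, OSD, Pools) cover the same steps interactively.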