add ceph screenshots
copied from wiki
 New binary files:

  images/screenshot/gui-ceph-config.png      (88 KiB)
  images/screenshot/gui-ceph-disks.png       (73 KiB)
  images/screenshot/gui-ceph-log.png         (158 KiB)
  images/screenshot/gui-ceph-monitor.png     (58 KiB)
  images/screenshot/gui-ceph-osd-status.png  (88 KiB)
  images/screenshot/gui-ceph-pools.png       (59 KiB)
  images/screenshot/gui-ceph-status.png      (86 KiB)
 pveceph.adoc | 12 ++++++++++++
@@ -23,6 +23,8 @@ Manage Ceph Services on Proxmox VE Nodes
 :pve-toplevel:
 endif::manvolnum[]
 
+[thumbnail="gui-ceph-status.png"]
+
 {pve} unifies your compute and storage systems, i.e. you can use the
 same physical nodes within a cluster for both computing (processing
 VMs and containers) and replicated storage. The traditional silos of
@@ -76,6 +78,8 @@ This sets up an `apt` package repository in
 Creating initial Ceph configuration
 -----------------------------------
 
+[thumbnail="gui-ceph-config.png"]
+
 After installation of packages, you need to create an initial Ceph
 configuration on just one node, based on your network (`10.10.10.0/24`
 in the following example) dedicated for Ceph:
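The configuration step in this hunk is performed with the `pveceph` tool. A minimal sketch, assuming the `pveceph init` subcommand of this tooling generation and the dedicated network from the example text:

```bash
# Run once, on a single node of the cluster.
# 10.10.10.0/24 is the dedicated Ceph network from the example above.
pveceph init --network 10.10.10.0/24
```

This writes the initial Ceph configuration, which, as the next hunk's context line notes, lets you run Ceph commands without specifying a configuration file explicitly.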
@@ -95,6 +99,8 @@ Ceph commands without the need to specify a configuration file.
 Creating Ceph Monitors
 ----------------------
 
+[thumbnail="gui-ceph-monitor.png"]
+
 On each node where a monitor is requested (three monitors are recommended)
 create it by using the "Ceph" item in the GUI or run.
 
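The CLI path for monitor creation is visible in the next hunk's header (`pveceph createmon`); a sketch of the step described above:

```bash
# Run on each node that should host a monitor
# (three monitors are recommended for a stable quorum).
pveceph createmon
```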
@@ -108,6 +114,8 @@ pveceph createmon
 Creating Ceph OSDs
 ------------------
 
+[thumbnail="gui-ceph-osd-status.png"]
+
 via GUI or via CLI as follows:
 
 [source,bash]
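For the OSD-creation step, a hedged CLI sketch; the `createosd` subcommand is an assumption about this tooling generation, and `/dev/sdX` stands in for an unused disk:

```bash
# Assumed subcommand; this destroys all data on the given
# disk, so double-check the device path before running it.
pveceph createosd /dev/sdX
```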
@@ -158,6 +166,8 @@ highly recommended if you expect good performance.
 Ceph Pools
 ----------
 
+[thumbnail="gui-ceph-pools.png"]
+
 The standard installation creates per default the pool 'rbd',
 additional pools can be created via GUI.
 
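Additional pools can also be created on the command line; a sketch, assuming a `pveceph createpool` subcommand (the pool name `vm-disks` and the replication values are illustrative):

```bash
# 3 replicas per object; I/O continues while at least 2 are available
# (illustrative values; adjust to the cluster's failure domains).
pveceph createpool vm-disks -size 3 -min_size 2
```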
@@ -165,6 +175,8 @@ additional pools can be created via GUI.
 Ceph Client
 -----------
 
+[thumbnail="gui-ceph-log.png"]
+
 You can then configure {pve} to use such pools to store VM or
 Container images. Simply use the GUI to add a new `RBD` storage (see
 section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).
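The `RBD` storage added via the GUI in the last hunk ends up as an entry in `/etc/pve/storage.cfg`; an illustrative fragment, in which the storage ID, monitor addresses, and authentication user are placeholders:

```
rbd: ceph-vm
        monhost 10.10.10.1 10.10.10.2 10.10.10.3
        pool rbd
        content images
        username admin
```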