reflect new PVE features for ceph pool configuration

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Alwin Antreich 2017-10-23 09:21:34 +02:00 committed by Fabian Grünbichler
parent b5ccaf2d21
commit 78f02fed81


@@ -19,7 +19,7 @@ storage, and you get the following advantages:
* full snapshot and clone capabilities
* self healing
* no single point of failure
* scalable to the exabyte level
* kernel and user space implementation available

NOTE: For smaller deployments, it is also possible to run Ceph
@@ -36,7 +36,8 @@ This backend supports the common storage properties `nodes`,
monhost::

List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
PVE cluster.

pool::
@@ -44,18 +45,18 @@ Ceph pool name.
username::

RBD user ID. Optional, only needed if Ceph is not running on the PVE cluster.

krbd::

Access rbd through the krbd kernel module. This is required if you want to
use the storage for containers.
.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images
        username admin
----
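The same storage entry can also be created from the command line with `pvesm` instead of editing `/etc/pve/storage.cfg` by hand. A sketch, assuming the example values above; adjust the monitor addresses, pool name, and user to your cluster:

```shell
# Run as root on a PVE node: adds the external RBD storage "ceph-external"
pvesm add rbd ceph-external \
    --monhost "10.1.1.20 10.1.1.21 10.1.1.22" \
    --pool ceph-external \
    --content images \
    --username admin
```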
@@ -65,8 +66,8 @@ TIP: You can use the `rbd` utility to do low-level management tasks.
Authentication
~~~~~~~~~~~~~~

If you use `cephx` authentication, you need to copy the keyfile from your
external Ceph cluster to a Proxmox VE host.

Create the directory `/etc/pve/priv/ceph` with
@@ -79,6 +80,9 @@ Then copy the keyring
The keyring must be named to match your `<STORAGE_ID>`. Copying the
keyring generally requires root privileges.

If Ceph is installed locally on the PVE cluster, this is done automatically by
'pveceph' or in the GUI.
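For the `ceph-external` storage from the example above, the two steps look roughly like this (a sketch; `cephserver` is a placeholder for a host of your external cluster, and the source keyring path assumes the default `client.admin` user):

```shell
# On a PVE node: create the directory that holds keyrings
# for external Ceph clusters
mkdir /etc/pve/priv/ceph

# Copy the keyring from the external cluster; the file name
# must match the storage ID ("ceph-external" here)
scp cephserver:/etc/ceph/ceph.client.admin.keyring \
    /etc/pve/priv/ceph/ceph-external.keyring
```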
Storage Features
~~~~~~~~~~~~~~~~