cephfs: reduce duplication of RBD docs and reword

No point in having half of the RBD section duplicated here.
Reference the pveceph chapter, note that upstream still considers
snapshots not completely stable, and remind users that they need to
set up a Ceph user secret...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht 2018-07-05 17:09:55 +02:00
parent 669bce8b0e
commit 6a8897ca46


@@ -8,22 +8,15 @@ endif::wiki[]
 Storage pool type: `cephfs`

-http://ceph.com[Ceph] is a distributed object store and file system designed to
-provide excellent performance, reliability and scalability. CephFS implements a
-POSIX-compliant filesystem storage, with the following advantages:
-
-* thin provisioning
-* distributed and redundant (striped over multiple OSDs)
-* snapshot capabilities
-* self healing
-* no single point of failure
-* scalable to the exabyte level
-* kernel and user space implementation available
-
-NOTE: For smaller deployments, it is also possible to run Ceph
-services directly on your {pve} nodes. Recent hardware has plenty
-of CPU power and RAM, so running storage services and VMs on same node
-is possible.
+CephFS implements a POSIX-compliant filesystem using a http://ceph.com[Ceph]
+storage cluster to store its data. As CephFS builds on Ceph, it shares most of
+its properties; this includes redundancy, scalability, self-healing and high
+availability.
+
+TIP: {pve} can xref:chapter_pveceph[manage Ceph setups], which makes
+configuring a CephFS storage easier. As recent hardware has plenty of CPU
+power and RAM, running storage services and VMs on the same node is possible
+without a big performance impact.

 [[storage_cephfs_config]]
 Configuration
@@ -34,8 +27,8 @@ This backend supports the common storage properties `nodes`,
 monhost::

-List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
-PVE cluster.
+List of monitor daemon addresses. Optional, only needed if Ceph is not running
+on the PVE cluster.

 path::
@@ -43,7 +36,8 @@ The local mount point. Optional, defaults to `/mnt/pve/<STORAGE_ID>/`.
 username::

-Ceph user Id. Optional, only needed if Ceph is not running on the PVE cluster.
+Ceph user id. Optional, only needed if Ceph is not running on the PVE cluster,
+where it defaults to `admin`.

 subdir::
@@ -62,12 +56,14 @@ cephfs: cephfs-external
 	content backup
 	username admin
 ----
+NOTE: Don't forget to set up the client secret key file if cephx was not
+turned off.

 Authentication
 ~~~~~~~~~~~~~~

-If you use `cephx` authentication, you need to copy the secret from your
-external Ceph cluster to a Proxmox VE host.
+If you use `cephx` authentication, which is enabled by default, you need to
+copy the secret from your external Ceph cluster to a Proxmox VE host.

 Create the directory `/etc/pve/priv/ceph` with
@@ -79,9 +75,11 @@ Then copy the secret

 The secret must be named to match your `<STORAGE_ID>`. Copying the
 secret generally requires root privileges. The file must only contain the
-secret itself, opposed to the `rbd` backend.
+secret key itself, as opposed to the `rbd` backend, which also contains a
+`[client.userid]` section.

-If Ceph is installed locally on the PVE cluster, this is done automatically.
+If Ceph is installed locally on the PVE cluster, i.e., set up with `pveceph`,
+this is done automatically.
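
As a concrete sketch of the steps around this hunk for the example storage
`cephfs-external`, assuming the external cluster's `admin` user and a
hypothetical admin host name:

----
# on the PVE node; the target file name must match the <STORAGE_ID>
mkdir /etc/pve/priv/ceph
ssh root@ceph-admin-host "ceph auth get-key client.admin" \
    > /etc/pve/priv/ceph/cephfs-external.secret
----

`ceph auth get-key` prints only the bare base64 key, so the resulting file has
exactly the format described above, without a surrounding `[client.admin]`
section.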
 Storage Features
 ~~~~~~~~~~~~~~~~
@@ -92,8 +90,10 @@ The `cephfs` backend is a POSIX-compliant filesystem on top of a Ceph cluster.
 [width="100%",cols="m,m,3*d",options="header"]
 |==============================================================================
 |Content types |Image formats |Shared |Snapshots |Clones
-|vztmpl iso backup |none |yes |yes |no
+|vztmpl iso backup |none |yes |yes^[1]^ |no
 |==============================================================================
+^[1]^ Snapshots, while they have no known bugs, cannot yet be guaranteed to be
+stable, as they lack testing.

 ifdef::wiki[]
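
To verify what a configured CephFS storage supports and contains, the `pvesm`
command line tool can be queried; a brief usage sketch with the storage name
from the example above:

----
pvesm status                  # lists all storages and whether they are active
pvesm list cephfs-external    # lists the content (e.g. backups) of the storage
----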