consistently capitalize Ceph
Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
parent 58141566d7
commit f226da0ef4
@@ -48,9 +48,9 @@ Hyper-Converged Infrastructure: Storage
 infrastructure. You can, for example, deploy and manage the following two
 storage technologies by using the web interface only:
 
-- *ceph*: a both self-healing and self-managing shared, reliable and highly
+- *Ceph*: a both self-healing and self-managing shared, reliable and highly
   scalable storage system. Checkout
-  xref:chapter_pveceph[how to manage ceph services on {pve} nodes]
+  xref:chapter_pveceph[how to manage Ceph services on {pve} nodes]
 
 - *ZFS*: a combined file system and logical volume manager with extensive
   protection against data corruption, various RAID modes, fast and cheap
@@ -109,9 +109,9 @@ management, see the Ceph docs.footnoteref:[cephusermgmt,{cephdocs-url}/rados/ope
 Ceph client configuration (optional)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Connecting to an external ceph storage doesn't always allow setting
+Connecting to an external Ceph storage doesn't always allow setting
 client-specific options in the config DB on the external cluster. You can add a
-`ceph.conf` beside the ceph keyring to change the ceph client configuration for
+`ceph.conf` beside the Ceph keyring to change the Ceph client configuration for
 the storage.
 
 The ceph.conf needs to have the same name as the storage.
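For context on the passage above: on a {pve} node, the per-storage keyring and the optional client `ceph.conf` sit beside each other under `/etc/pve/priv/ceph/`, both named after the storage ID. A minimal sketch, assuming an external RBD storage with the made-up ID `ext-rbd` (paths and file names are illustrative, not part of this commit):

----
# Sketch only ("ext-rbd" is a placeholder storage ID): the keyring and the
# client config are named after the storage, so this ceph.conf applies only
# to clients of that one storage.
cp /root/ext-cluster.client.admin.keyring /etc/pve/priv/ceph/ext-rbd.keyring
cp /root/ext-cluster-client.conf          /etc/pve/priv/ceph/ext-rbd.conf
----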
@@ -636,7 +636,7 @@ pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool>
 ----
 
 TIP: Do not forget to add the `keyring` and `monhost` option for any external
-ceph clusters, not managed by the local {pve} cluster.
+Ceph clusters, not managed by the local {pve} cluster.
 
 Destroy Pools
 ~~~~~~~~~~~~~
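As a hedged illustration of the TIP above (monitor addresses, storage name and keyring path are placeholders, not from this commit), adding the same erasure-coded RBD storage from an external cluster would also pass the monitors and a keyring file:

----
# Placeholder values throughout; --keyring points at a file containing the
# client key for the external cluster.
pvesm add rbd ext-ec-rbd --pool <replicated-pool> --data-pool <ec-pool> \
    --monhost "192.0.2.1 192.0.2.2 192.0.2.3" \
    --keyring /root/ext-cluster.keyring
----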
@@ -761,7 +761,7 @@ ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class
 [frame="none",grid="none", align="left", cols="30%,70%"]
 |===
 |<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
-|<root>|which crush root it should belong to (default ceph root "default")
+|<root>|which crush root it should belong to (default Ceph root "default")
 |<failure-domain>|at which failure-domain the objects should be distributed (usually host)
 |<class>|what type of OSD backing store to use (e.g., nvme, ssd, hdd)
 |===
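To make the parameter table above concrete, a small example (the rule and pool names are invented): create a replicated rule restricted to SSD-class OSDs and assign it to an existing pool:

----
# "ssd-only" is an arbitrary rule name; replicas are distributed across
# hosts, using only OSDs whose device class is "ssd".
ceph osd crush rule create-replicated ssd-only default host ssd
# Point an existing pool (placeholder name) at the new rule.
ceph osd pool set <pool-name> crush_rule ssd-only
----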
@@ -943,7 +943,7 @@ servers.
 pveceph fs destroy NAME --remove-storages --remove-pools
 ----
 +
-This will automatically destroy the underlying ceph pools as well as remove
+This will automatically destroy the underlying Ceph pools as well as remove
 the storages from pve config.
 
 After these steps, the CephFS should be completely removed and if you have
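A short usage sketch for the command above (the file system name `cephfs` is an example, not part of this commit), with follow-up checks that the file system and its pools are gone:

----
pveceph fs destroy cephfs --remove-storages --remove-pools
# Afterwards, neither the file system nor its data/metadata pools should
# show up any more:
ceph fs ls
pveceph pool ls
----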