pveceph: add section about erasure code pools
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
@@ -547,6 +547,71 @@ operation footnote:[Ceph pool operation
manual.

[[pve_ceph_ec_pools]]
Erasure Coded (EC) Pools
~~~~~~~~~~~~~~~~~~~~~~~~

Erasure coded (EC) pools can offer more usable space at the price of
performance. In replicated pools, multiple replicas of the data are stored
('size'). In an erasure coded pool, data is split into 'k' data chunks with 'm'
additional coding chunks. The coding chunks can be used to recreate data
should data chunks be missing. The number of coding chunks, 'm', defines how
many OSDs can be lost without losing any data. Each object is thus stored as
'k + m' chunks in total.
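
For example, the usable fraction of the raw capacity is 'k / (k + m)': with
'k = 2' and 'm = 1', two thirds of the raw space hold data, compared to one
third for a replicated pool with 'size = 3'.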

The default 'min_size' of an EC pool depends on the 'm' parameter. If 'm = 1',
the 'min_size' of the EC pool will be 'k'. The 'min_size' will be 'k + 1' if
'm > 1'. The Ceph documentation recommends a conservative 'min_size' of 'k + 2'
footnote:[Ceph Erasure Coded Pool Recovery
{cephdocs-url}/rados/operations/erasure-code/#erasure-coded-pool-recovery].

If there are fewer than 'min_size' OSDs available, any IO to the pool will be
blocked until enough OSDs are available again.

NOTE: When planning an erasure coded pool, keep an eye on the 'min_size', as it
defines how many OSDs need to be available. Otherwise, IO will be blocked.

For example, an EC pool with 'k = 2' and 'm = 1' will have 'size = 3',
'min_size = 2' and will stay operational if one OSD fails. If the pool is
configured with 'k = 2', 'm = 2', it will have a 'size = 4' and 'min_size = 3'
and stay operational if one OSD is lost.
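
You can verify the 'size' and 'min_size' of an existing pool by querying Ceph
directly, for example:

[source,bash]
----
ceph osd pool get <pool name> size
ceph osd pool get <pool name> min_size
----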

To create a new EC pool, run the following command:

[source,bash]
----
pveceph pool create <pool name> --erasure-coding k=2,m=1
----

Optional parameters are 'failure-domain' and 'device-class'. If you
need to change any EC profile settings used by the pool, you will have to
create a new pool with a new profile.
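
As a sketch, assuming you want an OSD-level failure domain and want to restrict
the pool to OSDs of a specific device class, the optional parameters can be
passed in the same property string:

[source,bash]
----
pveceph pool create <pool name> --erasure-coding k=2,m=1,failure-domain=osd,device-class=nvme
----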

This will create a new EC pool plus the needed replicated pool to store the RBD
omap and other metadata. In the end, there will be a '<pool name>-data' and a
'<pool name>-metadata' pool. The default behavior is to create a matching
storage configuration as well. If that behavior is not wanted, you can disable
it by providing the '--add_storages 0' parameter. When configuring the storage
manually, keep in mind that the 'data-pool' parameter needs to be set. Only
then will the EC pool be used to store the data objects. For example:

[source,bash]
----
pvesm add rbd <storage name> --pool <replicated pool> --data-pool <ec pool>
----
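
To verify that both pools exist afterwards, you can list all pools with their
details, for example:

[source,bash]
----
ceph osd pool ls detail
----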

If there is a need to further customize the EC profile, you can do so by
creating it with the Ceph tools directly footnote:[Ceph Erasure Code Profile
{cephdocs-url}/rados/operations/erasure-code/#erasure-code-profiles], and
specifying the profile to use with the 'profile' parameter.
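
A minimal sketch of creating such a profile with the Ceph tools (the profile
name and parameter values are only placeholders):

[source,bash]
----
ceph osd erasure-code-profile set <profile name> k=4 m=2 crush-failure-domain=host
----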

The pool can then be created using this profile:

[source,bash]
----
pveceph pool create <pool name> --erasure-coding profile=<profile name>
----

Destroy Pools
~~~~~~~~~~~~~