ceph: ec: fix quoting of params and special values

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht 2022-05-04 07:32:57 +02:00
parent 1273007159
commit e9d331c5d8


@@ -553,27 +553,27 @@ Erasure Coded (EC) Pools
Erasure coded (EC) pools can offer more usable space for the price of
performance. In replicated pools, multiple replicas of the data are stored
(`size`). In erasure coded pools, data is split into `k` data chunks with
additional `m` coding chunks. The coding chunks can be used to recreate data
should data chunks be missing. The number of coding chunks, `m`, defines how
many OSDs can be lost without losing any data. The total number of chunks
stored is `k + m`.

The default `min_size` of an EC pool depends on the `m` parameter. If `m = 1`,
the `min_size` of the EC pool will be `k`. The `min_size` will be `k + 1` if
`m > 1`. The Ceph documentation recommends a conservative `min_size` of `k + 2`
footnote:[Ceph Erasure Coded Pool Recovery
{cephdocs-url}/rados/operations/erasure-code/#erasure-coded-pool-recovery].
If there are fewer than `min_size` OSDs available, any IO to the pool will be
blocked until there are enough OSDs available again.

NOTE: When planning an erasure coded pool, keep an eye on the `min_size` as it
defines how many OSDs need to be available. Otherwise, IO will be blocked.

For example, an EC pool with `k = 2` and `m = 1` will have `size = 3`,
`min_size = 2` and will stay operational if one OSD fails. If the pool is
configured with `k = 2`, `m = 2`, it will have a `size = 4` and `min_size = 3`
and stay operational if one OSD is lost.

To create a new EC pool, run the following command:
@@ -583,22 +583,22 @@ To create a new EC pool, run the following command:
pveceph pool create <pool name> --erasure-coding k=2,m=1
----

Optional parameters are `failure-domain` and `device-class`. If you
need to change any EC profile settings used by the pool, you will have to
create a new pool with a new profile.
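As a sketch, both optional parameters are assumed here to be passed as sub-parameters of `--erasure-coding`; the values are placeholders:

[source,bash]
----
# hypothetical values; failure-domain and device-class are assumed to be
# sub-parameters of --erasure-coding
pveceph pool create <pool name> --erasure-coding k=2,m=1,failure-domain=host,device-class=ssd
----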

This will create a new EC pool plus the needed replicated pool to store the RBD
omap and other metadata. In the end, there will be a `<pool name>-data` and
`<pool name>-metadata` pool. The default behavior is to create a matching storage
configuration as well. If that behavior is not wanted, you can disable it by
providing the `--add_storages 0` parameter. When configuring the storage
configuration manually, keep in mind that the `data-pool` parameter needs to be
set. Only then will the EC pool be used to store the data objects. For example:

NOTE: The optional parameters `--size`, `--min_size` and `--crush_rule` will be
used for the replicated metadata pool, but not for the erasure coded data pool.
If you need to change the `min_size` on the data pool, you can do it later.
The `size` and `crush_rule` parameters cannot be changed on erasure coded
pools.

[source,bash]
@@ -609,7 +609,7 @@ pvesm add rbd <storage name> --pool <replicated pool> --data-pool <ec pool>
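If the `min_size` of the data pool does need to be adjusted later, as mentioned in the note above, this can be done with the plain Ceph tooling; a sketch with a placeholder pool name:

[source,bash]
----
# adjust min_size on the EC data pool after creation (placeholder pool name)
ceph osd pool set <pool name>-data min_size 3
----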

If there is a need to further customize the EC profile, you can do so by
creating it with the Ceph tools directly footnote:[Ceph Erasure Code Profile
{cephdocs-url}/rados/operations/erasure-code/#erasure-code-profiles], and
specifying the profile to use with the `profile` parameter.

For example:

[source,bash]
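----
# sketch: create a custom EC profile with the plain Ceph tools
# (profile name and values are placeholders)
ceph osd erasure-code-profile set my-ec-profile k=4 m=2 crush-failure-domain=host
# assumed: reference it via the profile key of --erasure-coding
pveceph pool create <pool name> --erasure-coding profile=my-ec-profile
----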