To use erasure-coded (EC) pools for RBD storages, we need two pools: one
regular replicated pool that will hold the RBD omap and other metadata,
and the EC pool which will hold the image data.
The coupling happens when an RBD image is created by adding the
--data-pool parameter. This is why we have the 'data-pool' parameter in
the storage configuration.
To follow already established semantics, we will create an 'X-metadata'
and an 'X-data' pool. The storage configuration is always added as it is
the only thing that links the two together (besides naming schemes).
Different pg_num defaults are chosen for the replicated metadata pool as
it will not hold a lot of data.
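Roughly, what gets set up corresponds to the following Ceph CLI steps;
the pool names, pg_num values and image name are only illustrative:

    ceph osd pool create X-metadata 32 32 replicated
    ceph osd pool create X-data 128 128 erasure
    ceph osd pool set X-data allow_ec_overwrites true   # needed to put RBD data on an EC pool
    ceph osd pool application enable X-metadata rbd
    ceph osd pool application enable X-data rbd
    # omap and metadata stay in the replicated pool, the image data goes to the EC pool
    rbd create --size 4G --data-pool X-data X-metadata/vm-100-disk-0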
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
avoid line bloat, use the same capitalization style in warnings as (most
of) the rest of the code, some style nits
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adding a flag file during the Ceph installation helps to cover the time
span in which the binary is already present but the installation is not
yet done.
The most noticeable effect is that the 'Next' button in the GUI will
only become active once the installation is actually finished and not
earlier.
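A minimal sketch of the mechanism, with a purely hypothetical flag file
path:

    touch /run/pve-ceph-install-in-progress   # set right before the packages get installed
    apt-get install -y ceph                   # binaries show up here, but setup is not done yet
    rm -f /run/pve-ceph-install-in-progress   # only after this the GUI treats the install as finished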
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
which is mostly a copy of the wipe_disks helper with the difference
that it also uses wipefs on the device and its partitions.
Remove the wipe_disks helper as no users remain.
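A rough CLI equivalent of what the new helper does; device and partition
names as well as the amount zeroed are only examples:

    wipefs --all --force /dev/sdX1 /dev/sdX2 /dev/sdX             # clear signatures on the partitions, then the device
    dd if=/dev/zero of=/dev/sdX bs=1M count=200 conv=fdatasync    # additionally zero the start of the device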
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Since Ceph Nautilus 14.2.10 and Octopus 15.2.2, the min_size of a pool is
calculated from its size (round(size / 2)). When the size is applied to
the pool after min_size, the manually specified min_size will be
overwritten.
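For example (pool name and values are only illustrative):

    ceph osd pool set testpool min_size 1   # manually chosen value
    ceph osd pool set testpool size 3       # recalculates min_size to round(3 / 2) = 2, overriding the manual choice
    # applying size before min_size avoids the override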
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Since I had to look up the cause of the error message in our source to
see why exactly Ceph operations fail when /etc/ceph/ceph.conf already
exists.
reported via our community forum:
https://forum.proxmox.com/threads/osd-ersetzen-neu-erstellen.80793/
quickly tested on a virtual ceph cluster
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
to reduce code duplication and make it easier to add more options for
pool commands.
Use a new rados object for each 'osd pool set', as each command can set
an option independent of the previous command's success/failure.
Otherwise, on a failure, a new rados object would need to be created, and
that would confuse the task tracking of the REST environment.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Luminous, Nautilus and Octopus. In Octopus, the mon_status command was
dropped. Also, the ceph status output was cleaned up and doesn't provide
the mgrmap and monmap anymore.
The rados queries used in the ceph status API endpoints (cluster / node)
were factored out and merged into one place.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
to clean service directories as well as disable and stop Ceph services.
Additionally, provide the option to remove crash and log information.
This patch is also in addition to #2607, as the current cleanup doesn't
allow re-configuring Ceph without manual steps during purge.
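A hedged usage example, assuming the new switches end up as boolean
options of the purge command:

    pveceph purge --crash --logs    # also remove crash dumps and log files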
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
add and change the return signature for the wantarray case, which can
safely be done as this is only used once (statd), and there only the
first element, the full version string, is used - so no breakage
potential there
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
A fixed path for the ceph.client.admin.keyring was used in the chown
command. This patch uses the ckeyring_path variable instead, to minimize
changes should the path change.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
pveceph init failed, as the copy used a wrong path with a typo as
target, and the chown then tried to operate on the correct keyring
path, which did not exist.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
normally this gets created on package installation, but it could be
deleted, e.g., by a debug purge. As it costs nothing to create, just
do a mkdir on it, which does not fail if it already exists.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
it makes no sense to have the mon key inside the client.admin.keyring.
Also, the order and operations did not make much sense.
Additionally, create the client admin keyring when creating the config.
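For reference, creating a standalone client admin keyring looks roughly
like this; the target path is just an example:

    ceph-authtool --create-keyring /etc/pve/priv/ceph.client.admin.keyring \
        --gen-key -n client.admin \
        --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'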
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this parameter changed sometime between luminous and nautilus.
Note that with this change, it is not possible to delete pools in
luminous anymore.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
reads the sizes from the ceph config db first, then from the
ceph config; there, first from the osd section, then from the global one
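Expressed with the Ceph CLI, the lookup order for e.g. the default size
is roughly:

    ceph config get osd osd_pool_default_size    # 1) config database
    # 2) fallback: osd_pool_default_size in ceph.conf, [osd] section first, then [global]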
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The new luminous release 12.2.11 cherry-picked a commit[0] which
removes some legacy fallback when parsing a pool set command.
As of now, only strings (which themselves can then represent a string,
int or float) may be passed as a value, while older releases also
accepted an int directly.
So ensure that all of our "osd pool set" commands pass strings, which
means that 'min_size' and 'size' must be converted to strings before
executing the command.
Without this, one cannot create a CephFS over the WebUI anymore, as the
pool creation fails. Interestingly, the normal pool creation over the
WebUI still worked, as it has those two parameters exposed in the
creation form and thus sends them as strings to the backend, while the
CephFS API does not expose those two values at all but sets them
directly to integers. Funny stuff. CLI invocations also had the same
issue, depending on whether size and min_size could be and got passed
(then it worked) or were omitted (then it did not work).
[0]: c838a0096d
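On the monitor command level the difference looks roughly like this
(pool name and value are examples):

    rejected by 12.2.11+:  {"prefix": "osd pool set", "pool": "foo", "var": "size", "val": 3}
    accepted:              {"prefix": "osd pool set", "pool": "foo", "var": "size", "val": "3"}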
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When destroying an OSD over API or CLI, e.g. by executing:
'pveceph osd destroy <num> --cleanup'
all disks associated with the OSD got wiped with dd, which included
any shared ones still in use by others, e.g., separate disks with
DB/WAL.
The patch changes 'wipe_disks' to wipe the partition instead of the
whole disk.
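A hedged sketch of the difference; device and partition names as well as
the amount zeroed are only illustrative:

    # before: the whole shared device was zeroed
    dd if=/dev/zero of=/dev/sdX  bs=1M count=200 conv=fdatasync
    # after: only the partition belonging to the destroyed OSD is zeroed
    dd if=/dev/zero of=/dev/sdX1 bs=1M count=200 conv=fdatasync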
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It makes more sense to have it there, especially since we want to
split out the service parts into a separate file.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>