Erasure coded pools cannot change certain settings after creation.
Trying to set them will cause errors on Ceph's side.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
To use erasure coded (EC) pools for RBD storages, we need two pools: one
regular replicated pool that will hold the RBD omap and other metadata,
and the EC pool which will hold the image data.
The coupling happens when an RBD image is created by adding the
--data-pool parameter. This is why we have the 'data-pool' parameter in
the storage configuration.
To follow already established semantics, we will create a 'X-metadata'
and 'X-data' pool. The storage configuration is always added as it is
the only thing that links the two together (besides naming schemes).
Different pg_num defaults are chosen for the replicated metadata pool as
it will not hold a lot of data.
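As a minimal sketch (pool names are made up, pool creation itself is
elided): the only thing tying the two pools together is the --data-pool
argument at image creation time, e.g. via PVE::Tools::run_command:

    use PVE::Tools qw(run_command);

    my $base = 'foo';
    # "$base-metadata": regular replicated pool, "$base-data": EC pool
    run_command([
        'rbd', 'create', 'vm-100-disk-0', '--size', '4096',
        '--pool', "$base-metadata",    # holds omap and other metadata
        '--data-pool', "$base-data",   # holds the actual image data
    ]);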
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
to avoid a "malformed JSON string" warning when there is no old
version present (e.g. after starting a cluster).
get_node_kv will always return something that evaluates to true, so
instead, test if the result has an entry for the current node. Also,
it's enough to request the kv for the current node only.
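A sketch of the fixed check (the kv key name and the surrounding code
are assumptions):

    use JSON;
    use PVE::Cluster;
    use PVE::INotify;

    my $nodename = PVE::INotify::nodename();
    # get_node_kv() always returns a (truthy) hash ref, so test for our
    # node's entry instead; requesting only our own node is sufficient
    my $kv = PVE::Cluster::get_node_kv('ceph-versions', $nodename);
    if (defined($kv->{$nodename})) {
        my $version = eval { decode_json($kv->{$nodename}) };
    }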
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
by iterating over all of them and saving the names of the active ones

this fixes the issue that an mds assigned to a fs other than the first
one in the list was wrongly shown as offline
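A sketch of the iteration, with the fsmap structure assumed from the
'ceph fs dump' output:

    # collect the names of all active mds daemons across every fs
    my $active_mds = {};
    for my $fs (@{$fsmap->{filesystems}}) {
        for my $info (values %{$fs->{mdsmap}->{info}}) {
            $active_mds->{$info->{name}} = 1
                if $info->{state} eq 'up:active';
        }
    }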
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
with 'get_node_kv', we get a hash which contains the value for
all nodes in the cluster (with the nodename as key), so we have to use
the value from the hash corresponding to our nodename.
also the 'str' property is inside the 'version' hash
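In other words (sketch, kv key name assumed, $nodename as above):

    my $kv = PVE::Cluster::get_node_kv('ceph-versions');
    # pick this node's entry out of the per-node hash ...
    my $data = eval { decode_json($kv->{$nodename} // '{}') };
    # ... and 'str' lives inside the 'version' hash
    my $version_string = $data->{version}->{str};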
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
avoid line bloat, use same capitalization style in warnings as (most)
of the rest of the code, some style nits
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adding a flag file during the Ceph installation helps to cover the time
span in which the binary is already present but the installation is not
yet done.
The most noticeable effect is that the 'Next' button in the GUI will
only become active once the installation is actually finished and not
earlier.
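A sketch of the mechanism (the flag path is hypothetical):

    use PVE::Tools;

    my $flag = '/run/pve-ceph-install-flag';    # hypothetical path
    PVE::Tools::file_set_contents($flag, '');   # set before apt runs
    # ... the actual package installation happens here ...
    unlink $flag;   # installation fully done, GUI may enable 'Next'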
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
which is mostly a copy of the wipe_disks helper with the difference
that it also uses wipefs on the device and its partitions.
Remove the wipe_disks helper as no users remain.
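A sketch of the new helper (its name and the exact zeroing command are
assumptions, carried over from the old wipe_disks behavior):

    use PVE::Tools qw(run_command);

    sub wipe_blockdev {    # hypothetical name
        my ($dev, @partitions) = @_;
        # clear filesystem/partition signatures, partitions first
        run_command(['wipefs', '--all', $_]) for (@partitions, $dev);
        # additionally zero out the start of the device
        run_command(['dd', 'if=/dev/zero', "of=$dev",
            'bs=1M', 'count=200', 'conv=fdatasync']);
    }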
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Since Ceph Nautilus 14.2.10 and Octopus 15.2.2, the min_size of a pool
is calculated from the size (round(size / 2)). When size is applied
after min_size to the pool, the manually specified min_size will be
overwritten. For example, with size=3 Ceph recalculates
min_size = round(3 / 2) = 2, clobbering a manually set min_size of 1.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Since I had to look up the cause of the error message in our source,
make the message itself explain why exactly ceph operations fail when
/etc/ceph/ceph.conf already exists.
reported via our community forum:
https://forum.proxmox.com/threads/osd-ersetzen-neu-erstellen.80793/
quickly tested on a virtual ceph cluster
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
to reduce code duplication and make it easier to add more options for
pool commands.

Use a new rados object for each 'osd pool set', as each command can set
an option independently of the previous command's success or failure. On
failure, a new rados object would need to be created anyway, and that
would confuse the task tracking of the REST environment.
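A sketch of the per-option loop ($pool and %$opts plumbing simplified):

    use PVE::RADOS;

    for my $setting (sort keys %$opts) {
        my $rados = PVE::RADOS->new();   # fresh connection per command
        eval {
            $rados->mon_command({
                prefix => 'osd pool set',
                pool => "$pool",
                var => $setting,
                val => "$opts->{$setting}",
            });
        };
        warn $@ if $@;   # one failing option must not block the rest
    }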
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
This covers Luminous, Nautilus and Octopus. In Octopus, the mon_status
command was dropped. Also, the ceph status output was cleaned up and
doesn't provide the mgrmap and monmap anymore.
The rados queries used in the ceph status API endpoints (cluster / node)
were factored out and merged to one place.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
to clean service directories as well as disable and stop Ceph services.
Additionally, provide the option to remove crash and log information.

This patch is also in addition to #2607, as the current cleanup doesn't
allow re-configuring Ceph without manual steps during purge.
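A sketch of the cleanup steps (unit and path lists are illustrative and
incomplete):

    use File::Path qw(remove_tree);
    use PVE::Tools qw(run_command);

    # disable and stop the Ceph services
    for my $unit (qw(ceph-mon.target ceph-mgr.target ceph-osd.target)) {
        run_command(['systemctl', 'disable', '--now', $unit]);
    }
    # clean the service directories ...
    remove_tree('/var/lib/ceph/mon', '/var/lib/ceph/mgr');
    # ... and optionally crash and log information
    my $remove_logs = 1;    # hypothetical option
    remove_tree('/var/lib/ceph/crash', '/var/log/ceph') if $remove_logs;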
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
instead of having multiple regexes in various places for the name,
define a 'SERVICE_REGEX' in PVE::Ceph::Services, and use that
everywhere in the api where we need it
additionally limit new services to 200 characters, since
systemd units have a limit of 256 characters[0] (including suffix), and
200 seems to be enough.
users can now create ceph services on machines with hostnames
longer than 32 characters
0: https://www.freedesktop.org/software/systemd/man/systemd.unit.html
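Sketch of the idea (the exact pattern is an assumption):

    # in PVE::Ceph::Services
    our $SERVICE_REGEX = '[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?';

    # API endpoints then reference it instead of inlining a regex, e.g.
    #   name => {
    #       type => 'string',
    #       pattern => $SERVICE_REGEX,
    #       maxLength => 200,
    #   },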
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
add and change the return signature for the wantarray case, which can
safely be done as this is only used once (statd), and there only the
first element, the full version string, is used - so no breakage
potential there
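The pattern in question, as a sketch (helper name and the extra return
values are illustrative):

    sub get_version {    # hypothetical helper
        my ($verstr, @parts) = parse_version_output();    # assumed
        # list context gets everything, scalar context the full string
        return wantarray ? ($verstr, @parts) : $verstr;
    }

    my ($version_string) = get_version();   # all that statd uses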
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
A fixed path for the ceph.client.admin.keyring was used in the chown
command. This patch uses the ckeyring_path variable instead, to minimize
changes should the path change.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
pveceph init failed, as the copy used a wrong path (containing a typo)
as target, and the chown then tried to operate on the correct keyring
path, which did not exist.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
we map '$type addr' to '$type_addr' anyway in the ceph.conf parser,
so this is not necessary
also use 'public_addr' if it is set
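For illustration (structure simplified), a '[mon.a] public addr' entry
is already parsed into the underscore form:

    # ceph.conf:
    #   [mon.a]
    #        public addr = 10.0.0.1
    # after parsing:
    my $addr = $cfg->{'mon.a'}->{public_addr};   # '10.0.0.1'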
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
normally this gets created on package installation, but it could be
deleted, e.g., by a debug purge. As it costs nothing to create, just
do a mkdir on it, which does not fail if it already exists.
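In Perl that is safe as a plain one-liner, since mkdir() merely returns
false (it does not die) when the directory is already there:

    mkdir '/etc/ceph';   # no-op (returns false) if it already exists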
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
it makes no sense to have the mon key inside the client.admin.keyring
also the order and operations did not make much sense
also create the client admin keyring when creating the config
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we always gave one, and the only reason why it could be undef
is that we could not connect, so it makes no sense to try again
and add unnecessary time to the api call
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this parameter changed sometime between luminous and nautilus
note that with this change, it is not possible to delete pools in
luminous anymore
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
ceph luminous does not use the 'name' property in the metadata
everywhere, so fall back to 'id'
this makes the ceph dashboard usable while still having luminous
(relevant for upgrading)
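The fallback is essentially (metadata keys as assumed from this commit
message):

    my $name = $metadata->{name} // $metadata->{id};   # luminous: no 'name'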
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
reads the sizes from the ceph config db first, then from the ceph
config file, first from the osd section and then from the global one
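The lookup order as a sketch (variable names hypothetical):

    my $size = $config_db->{osd_pool_default_size}
        // $ceph_conf->{osd}->{osd_pool_default_size}
        // $ceph_conf->{global}->{osd_pool_default_size};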
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>