The API::Ceph::Pools module is deprecated. Use the new API::Ceph::Pool
(singular) module.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
when installing non-quincy versions, 'ceph-volume' is not contained in
the respective repositories and, thus, the install process would fail.
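A minimal sketch of the idea, assuming a hypothetical $cephver variable
and an illustrative package list (not the actual pveceph install code):

    use strict;
    use warnings;

    my $cephver = shift // 'quincy';

    my @packages = qw(ceph ceph-common ceph-fuse ceph-mds gdisk);
    # only the quincy repos ship 'ceph-volume' as its own package, so only
    # request it there instead of failing the whole install on older releases
    push @packages, 'ceph-volume' if $cephver eq 'quincy';

    system('apt-get', 'install', '--no-install-recommends', @packages) == 0
        or die "apt-get install failed: $?\n";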
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
[ T: reworded commit subject ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
avoid line bloat, use the same capitalization style in warnings as
(most of) the rest of the code, some style nits
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adding a flag file during the Ceph installation helps to cover the time
span in which the binary is already present but the installation is not
yet done.
The most noticeable effect is that the 'Next' button in the GUI will
only become active once the installation is actually finished and not
earlier.
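Roughly, the flow looks like the following sketch; the flag path and
the plain apt-get call are placeholders, not the actual implementation:

    use strict;
    use warnings;

    my $flagfile = '/run/ceph-install-in-progress'; # placeholder path

    # create the flag before the installation starts
    open(my $fh, '>', $flagfile) or die "cannot create $flagfile: $!\n";
    close($fh);

    eval {
        system('apt-get', 'install', 'ceph') == 0
            or die "ceph installation failed: $?\n";
    };
    my $err = $@;

    unlink($flagfile); # only now should the installation count as finished
    die $err if $err;

A readiness check, like the one behind the GUI's 'Next' button, then
only reports Ceph as installed once the binary exists and the flag file
is gone again.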
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
The nvme-cli package is recommended by (our) Ceph packages, but here
--no-install-recommends is used to avoid pulling in too much.
The issue with not installing nvme-cli is that a "security
information" mail notification is triggered by sudo each time Ceph
tries to get the device health metrics. While there is a sudoers
rule for /usr/sbin/nvme, Ceph uses 'sudo nvme ...', so it does not
apply when the package is not installed.
This didn't seem to happen with sudo in buster.
It's about 1 MiB of additional packages (nvme-cli + uuid-runtime).
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
To make them load the updated librados2, as otherwise they may not be
able to communicate with the potentially newer Ceph monitors, since
Debian 10 ships Luminous (12.2) by default...
While we could do some fancier signaling to the workers to reload the
lib, that is rather a PITA and a complex solution for something that
happens once in a blue moon.
We may want to add a trigger in ceph for this on updates though, that
would effectively fix this too - but it needs to be thought out better.
So for now let's go with the simplest solution.
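As a sketch, assuming the affected daemons are pvedaemon and pveproxy
and that a plain systemctl restart is acceptable at that point:

    use strict;
    use warnings;

    # restart the long-running API daemons so they load the new librados2
    for my $service (qw(pvedaemon pveproxy)) {
        system('systemctl', 'restart', "$service.service") == 0
            or warn "failed to restart $service\n";
    }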
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the properties target_size_ratio, target_size_bytes and pg_num_min are
used to fine-tune the pg_autoscaler and are set on a pool. The updated
pool list now shows the autoscale settings & status, including the new
(optimal) target PG count, to make it easier for new users to get/set
the correct number of PGs.
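For illustration, setting these through the plain ceph CLI that the API
wraps; pool name and values are just examples:

    use strict;
    use warnings;

    my $pool = 'vm-storage'; # example pool name
    my %opts = (
        target_size_ratio => 0.3, # expected share of total cluster capacity
        pg_num_min        => 32,  # lower bound for the autoscaler
    );

    for my $prop (sort keys %opts) {
        system('ceph', 'osd', 'pool', 'set', $pool, $prop, $opts{$prop}) == 0
            or warn "setting $prop on pool '$pool' failed\n";
    }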
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Printing a lot of very detailed JSON output on the CLI is not very
useful.
Printing the `ceph -s` overview is much better suited to give a quick
picture of the Ceph cluster status.
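A minimal sketch of the approach, simply passing the human-readable
output through (error handling kept trivial):

    use strict;
    use warnings;

    # print the overview instead of dumping the detailed JSON structure
    my $status = `ceph -s`;
    die "getting ceph status failed\n" if $?;
    print $status;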
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
after creation, so that users don't need to go the ceph tooling route.
Separate common pool options to reuse them in other places.
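A rough sketch of that refactoring; the option set and helper name are
illustrative, not the actual API schema:

    use strict;
    use warnings;

    # shared option definitions that both the 'create' and the new 'set'
    # call can pull in instead of duplicating them
    sub get_common_pool_options {
        return {
            size       => { type => 'integer', optional => 1 },
            min_size   => { type => 'integer', optional => 1 },
            pg_num     => { type => 'integer', optional => 1 },
            crush_rule => { type => 'string',  optional => 1 },
        };
    }

    my $create_properties = {
        name => { type => 'string' },
        %{ get_common_pool_options() },
    };
    my $set_properties = {
        name => { type => 'string' },
        %{ get_common_pool_options() },
    };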
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
to clean service directories as well as disable and stop Ceph services.
Additionally, provide the option to remove crash and log information.
This patch is also in addition to #2607, as the current cleanup doesn't
allow re-configuring Ceph without manual steps during purge.
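A rough sketch of the purge steps; paths, unit names and the flag
handling are simplified and not the exact pveceph implementation:

    use strict;
    use warnings;
    use File::Path qw(remove_tree);

    my $remove_logs = grep { $_ eq '--logs' } @ARGV; # optional extra cleanup

    # stop and disable the ceph services so nothing recreates state while purging
    system('systemctl', 'stop', 'ceph.target');
    system('systemctl', 'disable', '--now',
        'ceph-mon.target', 'ceph-mgr.target', 'ceph-osd.target', 'ceph-mds.target');

    # clean the per-service data directories
    remove_tree($_) for glob('/var/lib/ceph/{mon,mgr,osd,mds}/*');

    # optionally also drop crash dumps and logs
    remove_tree('/var/lib/ceph/crash', '/var/log/ceph') if $remove_logs;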
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
The default value for "pveceph start" and "pveceph stop" is "ceph.target".
However, omitting the parameter in order to use that default was not
allowed.
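An illustrative schema fragment (not the exact pveceph definition)
showing why both 'default' and 'optional' are needed:

    use strict;
    use warnings;

    my $service_param = {
        description => "Ceph service name.",
        type        => 'string',
        default     => 'ceph.target',
        optional    => 1, # without this, omitting the parameter is an error
    };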
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
in nautilus there is no ceph-disk anymore and OSD activation no longer
uses udev, so this service is not needed. Remove it and do not copy it
when installing a new ceph cluster. In pve-storage.target, we replace
ceph.service with ceph.target.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We still allow 'luminous' for testing purposes. It could also be useful
if one already upgraded their cluster to PVE 6.0 / Buster but not yet
Ceph, and due to an incident needs to set up a new Luminous node on
Buster to get healthy again. This is fabricated but not unthinkable;
as it costs nothing and isn't available to WebUI users, just keep it
for now. Remove it with a future point release though.
Use the non-public repo for now, it will be updated to testing soon.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>