To make them load the updated librados2; otherwise they may not be
able to communicate with the potentially newer Ceph monitors, as
Debian 10 ships Luminous (12.2) by default...
While we could do some fancier signaling to the workers to reload the
lib, that is a rather painful and complex solution for something that
happens once in a blue moon.
We may want to add a trigger in ceph for this on updates though, that
would effectively fix this too, but it needs to be thought out better.
So for now let's go with the simplest solution.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The properties target_size_ratio, target_size_bytes and pg_num_min
are used to fine-tune the pg_autoscaler and are set on a pool. The
updated pool list now shows the autoscale settings & status,
including the new (optimal) target PG count, to make it easier for
new users to get/set the correct amount of PGs.
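These are plain pool properties in upstream Ceph, so what the API sets
corresponds roughly to the following sketch (the pool name is just a
placeholder):

    # hedged sketch using the upstream tooling
    ceph osd pool set <pool> target_size_ratio 0.3
    ceph osd pool set <pool> pg_num_min 16
    ceph osd pool autoscale-status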
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Printing a lot of very detailed JSON output on the CLI is not very
useful.
Printing the `ceph -s` overview is much better suited to give an
overview of the Ceph cluster status.
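Roughly, the difference corresponds to the two upstream commands
below (a sketch; the old output was the detailed JSON status):

    # old behaviour, detailed JSON
    ceph status --format json-pretty
    # new behaviour, compact human-readable overview
    ceph -s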
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
after creation, so that users don't need to go the ceph tooling route.
Separate common pool options to reuse them in other places.
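A hedged sketch of how this could then look on the CLI, assuming the
entry point is `pveceph pool set` and that size/min_size are among the
common options:

    # hypothetical invocation, option names assumed
    pveceph pool set <poolname> --size 3 --min_size 2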
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
to clean service directories as well as disable and stop Ceph
services. Additionally, provide the option to remove crash and log
information.
This patch is also in addition to #2607, as the current cleanup
doesn't allow re-configuring Ceph without manual steps after a purge.
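A hedged example, assuming the new options are exposed as flags on
`pveceph purge` named after what they remove:

    # hypothetical flag names, derived from the description above
    pveceph purge --crash --logs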
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
The default value for "pveceph start" and "pveceph stop" is "ceph.target".
However, omitting the parameter to use this default was not allowed.
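A hedged sketch of the intended usage, assuming the parameter is
exposed as --service on the CLI:

    # both forms should now work; previously the service had to be given
    pveceph stop --service ceph.target
    pveceph stop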
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
In Nautilus there is no ceph-disk anymore and OSD activation no
longer uses udev, so this service is not needed any more.
Remove it and do not copy it when installing a new Ceph cluster.
In pve-storage.target we replace ceph.service with ceph.target.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We still allow 'luminous' for testing purposes. It could also be
useful if one has already upgraded their cluster to PVE 6.0 / Buster
but not yet Ceph, and due to an incident needs to set up a new
Luminous node on Buster to get healthy again. This is contrived but
not unthinkable; as it costs nothing and isn't available to WebUI
users, just keep it for now. Remove it with a future point release
though.
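For reference, selecting the older release would then look roughly
like this (assuming the release is picked via the install command's
--version parameter):

    # hedged example
    pveceph install --version luminous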
Use the non-public repo for now; it will be updated to testing soon.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This patch adds a success message on successful ceph.service
installation, and adds a newline to make a successful Ceph package
installation more visible.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
It makes more sense to have it there, especially since we want to
split out the service parts into a separate file.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
As we depend on ceph-fuse elsewhere (pve-storage), it gets installed
from Debian's repositories with the Ceph 10 version.
So ensure that an up-to-date version from our currently supported
Ceph release gets installed when doing `pveceph install`, else you
may run into issues which have already been resolved in a newer
version.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add aliases for the existing ones; ignore the ones for MDS and
CephFS, as they never hit any repo.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allow creating a new CephFS instance and allow listing them.
As deletion requires coordination between the active MDS and all
standby MDS next in line, this needs a bit more work. One could mark
the MDS cluster down and stop the active one; that should work, but
as destroying is quite a sensitive operation that is not often needed
in production, I deemed it better to only document it and leave API
endpoints for this to the future.
For index/list I slightly transform the result of a RADOS `fs ls`
monitor command; this allows relatively easy display of a CephFS and
its backing metadata and data pools in a GUI.
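For reference, the raw monitor command output being transformed looks
roughly like the following (abridged; pool names are placeholders):

    # ceph fs ls --format json  (abridged)
    [{"name": "cephfs", "metadata_pool": "cephfs_metadata",
      "data_pools": ["cephfs_data"]}]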
While for now it's not enabled by default and marked as experimental,
this API is designed to host multiple CephFS instances. We may not
need this at all, but I did not want to limit us early, and anybody
who likes to experiment can use it after setting the respective
ceph.conf options.
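Note that upstream Ceph additionally gates running more than one
filesystem behind a monitor flag; a hedged pointer for experimenters:

    # upstream flag required for multiple filesystems, not set by this series
    ceph fs flag set enable_multiple true --yes-i-really-mean-it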
When encountering errors, try to roll back. As we verified at the
beginning that we did not reuse existing pools, destroy the ones we
created.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Co-authored-by: Alwin Antreich <a.antreich@proxmox.com>
Allow creating, listing and destroying a Ceph Metadata Server (MDS)
over the API and the CLI `pveceph` tool.
Besides setting up the local systemd service template and the MDS
data directory, we also add a reference to the MDS in the ceph.conf.
We note the backing host (node) of the respective MDS and set
'mds standby for name' = 'pve', so that the PVE-created ones form a
single group. If we decide to add integration for rank/path-specific
MDS (possibly useful for CephFS with quite a bit of load) then this
may help as a starting point.
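The resulting ceph.conf section would look roughly like the sketch
below (MDS id and host name are placeholders):

    [mds.<node>]
         host = <node>
         mds standby for name = pve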
On create, check early if a reference already exists in ceph.conf and
abort in that case. If we only see existing data directories later
on, we abort but do not remove them; they could well be from an older
manual create, where it is possibly dangerous to just remove them.
Let the user handle it themselves in that case.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Co-authored-by: Alwin Antreich <a.antreich@proxmox.com>
This patch adds the create-/destroymgr commands to the API and
pveceph, so that advanced users can split monitor and manager
daemons.
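A hedged usage sketch; the destroy variant presumably takes the
manager id as an argument:

    pveceph createmgr
    pveceph destroymgr <id>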
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>