If calls aren't proxied to the selected node, which can seem legitimate in
some cases, this leads to misleading errors when Ceph is not installed on
the node that handles the request. Therefore, the calls should now always
be proxied.
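
Purely as an illustration of that behaviour (not the actual API code), a
minimal Python sketch with hypothetical run_locally/forward_to helpers:
handle the call locally only if the request really targets this node,
otherwise forward it.

```python
import socket

def handle_ceph_call(target_node, run_locally, forward_to):
    # Always forward calls addressed to another node; Ceph may not even be
    # installed on the node that happened to receive the request.
    if target_node == socket.gethostname():
        return run_locally()
    return forward_to(target_node)
```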
Signed-off-by: Tim Marx <t.marx@proxmox.com>
It makes more sense to have it there, especially since we want to
split out the service parts into a separate file.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
An MDS only becomes active once a filesystem exists, and we need an
active MDS to be able to add a storage, as the CephFS plugin does an
immediate mount check. Since an MDS needs some time to become active,
we had a problematic time window in which this mount could fail.
Wait for an MDS to reach the active state.
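
As an illustration only (not the actual implementation), a small Python
sketch of such a wait loop: poll `ceph mds stat` until a daemon reports
up:active or a timeout is hit. The timeout and interval values are
assumptions.

```python
import subprocess
import time

def wait_for_active_mds(timeout=60, interval=2):
    # Poll the MDS map; `ceph mds stat` lists daemons as e.g. "up:active".
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        out = subprocess.run(["ceph", "mds", "stat"],
                             capture_output=True, text=True).stdout
        if "up:active" in out:
            return True
        time.sleep(interval)
    raise TimeoutError("no MDS reached the active state in time")
```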
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allow creating a new CephFS instance and listing existing ones.
As deletion requires coordination between the active MDS and all
standby MDSes next in line, this needs a bit more work. One could mark
the MDS cluster down and stop the active one; that should work, but as
destroying is quite a sensitive operation and not often needed in
production, I deemed it better to only document it and leave API
endpoints for this to the future.
For index/list I slightly transform the result of a RADOS `fs ls`
monitor command; this allows relatively easy display of a CephFS
and its backing metadata and data pools in a GUI.
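
For illustration, a hedged Python sketch of that kind of transformation,
assuming the usual JSON shape of `ceph fs ls --format json` (name,
metadata_pool, data_pools); treat the field names as assumptions rather
than a stable contract.

```python
import json
import subprocess

def list_cephfs():
    # Ask the monitors for the filesystem list and flatten it into
    # GUI-friendly entries: one row per CephFS with its pools.
    raw = subprocess.run(["ceph", "fs", "ls", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    return [
        {
            "name": fs.get("name"),
            "metadata_pool": fs.get("metadata_pool"),
            "data_pool": (fs.get("data_pools") or [None])[0],
        }
        for fs in json.loads(raw)
    ]
```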
While for now it is not enabled by default and marked as experimental,
this API is designed to host multiple CephFS instances; we may not
need this at all, but I did not want to limit us early. Anybody who
wants to experiment can use it after adjusting the respective
ceph.conf settings.
When encountering errors, try to roll back. As we verified at the
beginning that we did not reuse existing pools, destroy the ones which
we created.
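
A hedged Python sketch of that rollback pattern; the helper names and the
fixed pg_num are illustrative, not the actual code.

```python
import subprocess

def create_cephfs_with_rollback(fs_name, pools_to_create, create_fs):
    created = []
    try:
        for pool in pools_to_create:
            # Only pools created in this run are remembered for cleanup.
            subprocess.run(["ceph", "osd", "pool", "create", pool, "32"],
                           check=True)
            created.append(pool)
        create_fs(fs_name)  # e.g. via `ceph fs new`
    except Exception:
        # Best-effort rollback: destroy only what we created ourselves;
        # pre-existing pools are never touched (deletion also requires
        # mon_allow_pool_delete to be enabled).
        for pool in created:
            subprocess.run(["ceph", "osd", "pool", "delete", pool, pool,
                            "--yes-i-really-really-mean-it"], check=False)
        raise
```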
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Co-authored-by: Alwin Antreich <a.antreich@proxmox.com>