An MDS only becomes active once a filesystem exists, and we need an
active MDS to be able to add a storage, as the CephFS plugin does an
immediate mount check. As an MDS needs some time to become active, we
had a problematic time window in which this mount could fail.
Wait for an MDS to reach the active state before continuing.
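A minimal sketch of such a wait, assuming we simply poll `ceph mds stat`
until an MDS reports up:active (the helper name and timeout are made up;
the real code would rather go through librados):

    # illustrative sketch only: poll 'ceph mds stat' for an active MDS
    use PVE::Tools;

    sub wait_for_active_mds {
        my ($timeout) = @_;
        $timeout //= 60;

        for (my $i = 0; $i < $timeout; $i++) {
            my $out = '';
            PVE::Tools::run_command(
                ['ceph', 'mds', 'stat'],
                outfunc => sub { $out .= shift; },
            );
            return 1 if $out =~ m/up:active/;
            sleep(1);
        }
        die "no MDS became active within ${timeout}s\n";
    }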
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allow creating a new CephFS instance and listing existing ones.
As deletion requires coordination between the active MDS and all
standby MDS next in line, this needs a bit more work. One could mark
the MDS cluster down and stop the active one; that should work, but as
destroying is quite a sensitive operation that is not often needed in
production, I deemed it better to only document it for now and leave
API endpoints for this to the future.
For index/list I slightly transform the result of a RADOS `fs ls`
monitor command; this allows relatively easy display of a CephFS and
its backing metadata and data pools in a GUI.
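For illustration, a sketch of that transformation (it assumes
mon_command returns the JSON-decoded `fs ls` result, as elsewhere in
our Ceph code; the exact output fields should be double-checked):

    # illustrative sketch only: flatten the 'fs ls' result for the API/GUI
    my $cephfs_list = sub {
        my ($rados) = @_;
        my $res = $rados->mon_command({ prefix => 'fs ls' });
        return [ map { +{
            name => $_->{name},
            metadata_pool => $_->{metadata_pool},
            data_pool => $_->{data_pools}->[0],
        } } @$res ];
    };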
While for now it is not enabled by default and marked as experimental,
this API is designed to host multiple CephFS instances. We may not
need this at all, but I did not want to limit us early, and anybody
who wants to experiment can use it after setting the respective
ceph.conf options.
When encountering errors, try to roll back. As we verified at the
beginning that we did not reuse existing pools, destroy the ones we
created.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Co-authored-by: Alwin Antreich <a.antreich@proxmox.com>
Allow creating, listing and destroying a Ceph Metadata Server (MDS)
over the API and the `pveceph` CLI tool.
Besides setting up the local systemd service template and the MDS
data directory, we also add a reference to the MDS in ceph.conf.
We note the backing host (node) of the respective MDS and set
'mds standby for name' = 'pve', so that the PVE-created MDS form a
single group. If we decide to add integration for rank/path-specific
MDS (possibly useful for a CephFS under quite a bit of load), this
may help as a starting point; see the snippet below.
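The resulting ceph.conf section would look roughly like this (the
section ID and host name are illustrative):

    [mds.mynode]
         host = mynode
         mds standby for name = pve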
On create, check early if a reference already exists in ceph.conf and
abort in that case. If we only see existing data directories later on,
we abort but do not remove them; they could well be from an older
manual setup, where it is possibly dangerous to just remove them. Let
the user handle that case themselves.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Co-authored-by: Alwin Antreich <a.antreich@proxmox.com>
We will reuse this in the future, e.g., when creating a data and
metadata pool for CephFS.
Allow passing a $rados object (to reuse it, as initializing one is not
that cheap), but also create one if it is undefined, for convenience.
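As a minimal sketch of that calling convention (the helper name and
parameters are illustrative):

    # illustrative sketch only: reuse a passed-in connection, else create one
    sub create_pool {
        my ($pool, $param, $rados) = @_;

        $rados //= PVE::RADOS->new();  # connecting is not cheap, so allow reuse

        # ... create the pool via $rados->mon_command(...), then keep using
        # the same connection for follow-up commands
    }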
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
most of this was imported by just copying, without verifying whether
all of it is actually required. Some of it lost its purpose as we
reused more from our existing module code base (e.g., pve-common) but
was not actually removed.
As this file contains two Perl modules, take a bit of care when
looking at this: some things are used in one module but not the other,
so simple grep'ing may give false positives.
Also add the missing PVE::API2::Storage use statement.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
most of this was imported by just copying, without verifying whether
all of it is actually required. Some of it lost its purpose as we
reused more from our existing module code base (e.g., pve-common) but
was not actually removed.
As this file contains two Perl modules, take a bit of care when
looking at this: some things are used in one module but not the other,
so simple grep'ing may give false positives.
Also add the missing IO::File use statement.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this will be used for the API endpoints in the future as
PVE::API2::Scan instead of PVE::API2::Storage::Scan, since it will
also contain endpoints for other modules (like qemu-server for PCI/USB
scanning)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This patch removes the separate storage entries for CT & VM pointing
to the same Ceph pool. Instead, only one entry is made, as we can now
actively map/unmap volumes in pve-container.
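For illustration, the single resulting storage.cfg entry could look
like this (names and values are examples only):

    rbd: ceph-pool
         pool rbd
         content images,rootdir
         krbd 0

The combined 'images,rootdir' content types are what previously needed
two separate entries.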
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
workaround to keep the subscription popup on login even without
'Sys.Audit' permissions, but remove the subscription details in the
GUI for unauthorized users.
this allows the disk to be reused as a Ceph disk by zeroing the first
200M of the destroyed disk. Disks are iterated separately from
partitions to prevent duplicate wipes.
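A minimal sketch of such a wipe (the device path is a placeholder):

    # illustrative sketch only: zero the first 200M so the device can be
    # reused as a Ceph disk; conv=fdatasync makes sure the zeroes hit the disk
    use PVE::Tools;

    my $dev = '/dev/sdX';  # placeholder
    PVE::Tools::run_command(
        ['dd', 'if=/dev/zero', "of=$dev", 'bs=1M', 'count=200', 'conv=fdatasync'],
        errmsg => "wiping '$dev' failed",
    );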
Signed-off-by: David Limbeck <d.limbeck@proxmox.com>
This adds a new API to reload the network configuration online with
ifupdown2.
This works with the native ifupdown2 modules, as ifupdown2 knows the
interface dependency relationships.
Some specific interface options cannot be reloaded online (because the
kernel does not implement it); in this case we ifdown/ifup these
interfaces (mainly VXLAN interface options).
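At its core, the reload would shell out to ifupdown2's ifreload,
roughly like this (the error message is illustrative):

    # illustrative sketch only: apply pending /etc/network/interfaces changes
    use PVE::Tools;

    PVE::Tools::run_command(
        ['ifreload', '-a'],
        errmsg => 'could not reload network configuration',
    );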
btrfs is deprecated since Luminous and will no longer be tested.
If btrfs is used, you have to add an extra parameter to ceph.conf
to allow ceph-disk to activate btrfs OSDs.
In our default config this is not the case.
From Luminous release note [1]:
"We no longer test the FileStore ceph-osd backend in combination with
btrfs. We recommend against using btrfs. If you are using
btrfs-based OSDs and want to upgrade to luminous you will need to
add the following to your ceph.conf:
enable experimental unrecoverable data corrupting features = btrfs
The code is mature and unlikely to change, but we are only
continuing to test the Jewel stable branch against btrfs. We
recommend moving these OSDs to FileStore with XFS or BlueStore."
[1] https://ceph.com/releases/v12-2-0-luminous-released/
openvz is deprecated but can still be a return value
maxcpu can be a real number (e.g., for a CT if cpulimit is 1.5 and
cores is not set), so it may not be an integer; see the schema sketch
below.
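In the return schema this means declaring maxcpu as a plain number,
along these lines (the description text is illustrative):

    # illustrative sketch only: the relevant part of the return schema
    my $return_props = {
        maxcpu => {
            description => "Number of available CPUs.",
            type => 'number',   # not 'integer': a CT with cpulimit 1.5 reports 1.5
            optional => 1,
        },
    };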
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we have the defaults documented here, so set them here too; otherwise,
if a default changes in PVE::Tools, we would probably forget to update
the API description
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since letsencrypt is updating their implementation to the ACMEv2 spec
[1], we should correctly parse the order status
1: https://community.letsencrypt.org/t/acmev2-order-ready-status/62866
note that we (for now) try to stay compatible with both versions, with
and without the 'ready' state; this can be changed once all
letsencrypt APIs have switched over
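A rough sketch of the intended handling (status values as per the ACME
spec; the surrounding variable names are made up):

    # illustrative sketch only: accept both pre-'ready' and ACMEv2 'ready' orders
    my $status = $order->{status} // '';
    if ($status eq 'ready' || $status eq 'pending') {
        # authorizations are done (or the server predates 'ready') -> finalize
    } elsif ($status eq 'valid') {
        # the certificate was issued and can be downloaded
    } else {
        die "unexpected order status '$status'\n";
    }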
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we inherited the import from PVE::RESTHandler but may want to get rid
of it there. So explicitly import it here.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
otherwise all non-root users get an empty dropdown box for the
directories and no feedback as to why
with this, they can select a directory, but ultimately get an API
error if their permissions are not sufficient
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We defined 'default' as the fallback value for the optional
pve-acme-account-name standard option, but did not honor that.
Thus we got a Perl error ($account_name not defined) if no name was
passed. Fix that by actually falling back to 'default' in this case.
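The fix boils down to something like the following (parameter handling
sketched only):

    # illustrative sketch only: honor the documented fallback account name
    my $account_name = $param->{name} // 'default';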
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to allow retrieval of certificate information, and uploading or
removing custom certificate files.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
for creating/ordering a new certificate, and for renewing or revoking
an existing one.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
for registering, updating, refreshing and deactivating a PVE-managed ACME
account, as well as for retrieving the (optional, but required if
available) terms of service of the ACME API provider / CA.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
this currently only contains a description and the node-specific ACME
configuration, but I am sure we can find other goodies to put there.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>