on a given node (and storage).
There is no datacenter/storage fallback for the bandwidth limit, so the default
can just be returned as is. While the bandwidth limit is a root-only option when
executing the backup, it still makes sense to return it for all users, so they
can see what's going to be used.
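For illustration, such a query could look like the following (a
sketch, assuming the value is exposed via the vzdump 'defaults' API
endpoint and that a storage named 'local' exists):

    # query the backup defaults, including the bandwidth limit, as any user:
    pvesh get /nodes/$(hostname)/vzdump/defaults --storage local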
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
To make them load the updated librados2; otherwise they may not be
able to communicate with potentially newer ceph monitors, as Debian 10
ships Luminous (12.2) by default...
While we could do some fancier signaling to the workers to make them
reload the lib, that would be rather a PITA and a complex solution for
something that happens once in a blue moon.
We may want to add a trigger in ceph for this on updates though, as
that would effectively fix this too - but it needs to be thought out
better.
So, for now, let's go with the simplest solution.
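In practice, the simplest solution is just restarting (or reloading,
where that re-spawns the workers) the affected daemons; a sketch,
assuming pvedaemon and pveproxy are the daemons in question:

    # make the daemons' workers pick up the updated librados2
    systemctl try-reload-or-restart pvedaemon.service pveproxy.service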
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Nautilus 14.2.20 and Octopus 15.2.11 fixed a security issue with
reclaiming the global ID auth (CVE-2021-20288). As fixing this issue
means that older clients won't be able to connect anymore, the fix was
put behind a switch, with a HEALTH warning if it is not active (i.e.,
if connections from older clients are still allowed).
New installations also have this switch at the insecure level, for
compatibility reasons, so let's deactivate it ourselves after monitor
creation to avoid the health warning and the slightly insecure setup
(in a default PVE ceph setup the whole issue was of rather low
impact/risk anyway). But only do so when creating the first monitor of
a ceph cluster, to avoid breaking existing setups by accident.
An admin can always switch it back again, e.g., if they're recovering
from some failure and need to set up fresh monitors but still have old
clients.
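For reference, the switch in question is the
auth_allow_insecure_global_id_reclaim option; deactivating and
re-activating it works like this:

    # disallow insecure global ID reclaim (done after creating the first monitor)
    ceph config set mon auth_allow_insecure_global_id_reclaim false
    # switch it back, e.g., while older clients still need to connect
    ceph config set mon auth_allow_insecure_global_id_reclaim true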
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Makes it possible to configure the RBD namespace via the GUI.
RBD namespaces must be configured manually. The most likely use case
is connecting to an external Ceph cluster, as this makes it possible
to separate client PVE clusters by namespace instead of by pool. A
sketch of the manual setup follows below.
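A sketch with hypothetical pool, namespace, and storage names:

    # on the (external) ceph cluster: create the namespace within the pool
    rbd namespace create mypool/mynamespace
    # on the PVE side: point an existing RBD storage at that namespace
    pvesm set my-rbd-storage --namespace mynamespace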
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
It already looks OK at that size, and one gets a better overview.
We have a slightly complex layout here (two columns which should sit
above each other), so we cannot just use the generic helper, but
that's OK here - it *is* a special view.
Note that not all people use full-sized windows all the time, so the
widths here must not be considered only in terms of display
resolutions...
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Default to keeping the state of the archive, as that has the highest
chance of fully working, but allow enforcing either level.
It'd be good to add some more feedback about the to-be-restored guest,
i.e., the whole config...
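For example, assuming 'level' refers to the container privilege level,
enforcing it on restore could look like this (hypothetical VMID and
archive path):

    # default: keep the privilege level recorded in the archive
    pct restore 123 /var/lib/vz/dump/backup.tar.zst
    # enforce restoring as an unprivileged container instead
    pct restore 123 /var/lib/vz/dump/backup.tar.zst --unprivileged 1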
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We are only allowed to auto-select the first record after load on
creation; otherwise we may change the value by mistake, which, if the
admin does not notice it when changing some other setting, can be
quite fatal, as it can trigger a huge rebalance where the cause may
not even be obvious, leaving the admin quite baffled.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The $host variable is set to "::0" by default to listen on the
wildcard address (with 'Domain' => PF_INET6).
If 'LISTEN_IP' is defined in /etc/default/pveproxy, that IP will be used
instead.
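For example, to make pveproxy listen on one specific address only
(192.0.2.7 is just a placeholder):

    # /etc/default/pveproxy
    LISTEN_IP="192.0.2.7"

followed by a restart of pveproxy to apply the change.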
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
and group them better: avoid alternating destructive and restoring
buttons (prune, file restore, remove), and place file restore and
restore next to each other.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>