after creation, so that users don't need to go the ceph tooling route.
Separate common pool options to reuse them in other places.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
to reduce code duplication and make it easier to add more options for
pool commands.
Use a new rados object for each 'osd pool set', as each command can set
an option independently of the previous command's success/failure. On
failure, a new rados object would need to be created, and that would
confuse the task tracking of the REST environment.
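A minimal sketch of that per-command loop (variable names are
hypothetical, not the actual pve-manager code):

    use PVE::RADOS;

    for my $opt (sort keys %$pool_opts) {
        # fresh connection, so one failed command cannot taint the next
        my $rados = PVE::RADOS->new();
        eval {
            $rados->mon_command({
                prefix => 'osd pool set',
                pool => $pool,
                var => $opt,
                val => "$pool_opts->{$opt}",
            });
        };
        warn "could not set pool option '$opt' - $@" if $@;
    }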
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
This way, there will be a backup task (within the big worker task)
for such IDs, which will then fail visibly (i.e. also in the
notification mail) with, e.g.:
unable to find VM '123'
In get_included_guests, the key '' was chosen for the orphaned IDs,
because it cannot possibly denote a nodename.
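For illustration, the returned structure could then look like this
(IDs made up):

    my $included = {
        node1 => [100, 101],   # guests assigned to node1
        node2 => [102],
        ''    => [123],        # orphaned ID, cannot be a nodename
    };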
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
and make the two options mutually exclusive as long
as they are specified on the same level (e.g. both
from the storage configuration). Otherwise prefer
option > storage config > default (only maxfiles has a default
currently).
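A sketch of that resolution, with hypothetical hash names ($opts for
the job options, $scfg for the storage config); it only illustrates
the precedence, not the actual code:

    # the same exclusivity check applies per level, here the job options
    die "cannot use both 'maxfiles' and 'prune-backups'\n"
        if defined($opts->{maxfiles}) && defined($opts->{'prune-backups'});

    my $keep = $opts->{'prune-backups'} // $opts->{maxfiles}
        // $scfg->{'prune-backups'} // $scfg->{maxfiles}
        // $default_maxfiles;    # only maxfiles has a default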
Defines the backup limit for prune-backups as the sum of all
keep-values.
There is no perfect way to determine whether a
new backup would trigger a removal with prune later:
1. we would need a way to include the not yet existing backup
in a 'prune --dry-run' check.
2. even if we had that check, if it's executed right before
a full hour, and the actual backup happens after the full
hour, the information from the check is not correct.
So in some cases, we allow backup jobs with remove=0, that
will lead to a removal when the next prune is executed.
Still, the job with remove=0 does not execute a prune, so:
1. There is a well-defined limit.
2. A job with remove=0 never removes an old backup.
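The limit itself then boils down to (sketch, assuming the usual
keep-* option names):

    # backup limit = sum of all configured keep-values
    my $limit = 0;
    $limit += $prune_opts->{$_} // 0 for qw(
        keep-last keep-hourly keep-daily
        keep-weekly keep-monthly keep-yearly
    );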
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
and regular users to read all their own tasks as well as those of their
associated tokens.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Else, a user would need to renew it first before being able to revoke
it, which does not make much sense.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this normally just means that the old cert is already expired; we do
not care about that - after all, we got a new (renewed) valid cert
successfully.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If source is missing, pvesr will set it via job_status
on the next run. But the info is already present here,
so it doesn't hurt to use it.
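I.e., roughly (sketch, names hypothetical):

    # use the already-known source instead of leaving it for job_status
    $jobcfg->{source} //= $local_node;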
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
reload is actually preferred, and even if, most of the time, this
won't even reach the API, allowing to start them is still definitely
fine!
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In some situations Ceph's auto-detection doesn't recognize the device
class correctly. The option allows setting it directly on OSD creation
instead of altering it afterwards. This way the cluster doesn't need
to shift data back and forth unnecessarily.
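Usage could then look like this (device and class chosen arbitrarily,
assuming the option is exposed as 'crush-device-class'):

    pveceph osd create /dev/sdf --crush-device-class nvme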
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Adds a new API endpoint at cluster/backupinfo for cluster-wide backup
information. This is necessary because cluster/backup expects a backup
job ID at the next level and thus other endpoints are hard or
impossible to implement under that hierarchy.
The only API endpoint available for now is `not_backed_up`, which
returns a list of all guests which are not covered by any backup job.
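E.g., queried via pvesh:

    pvesh get /cluster/backupinfo/not_backed_up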
The top level index endpoint is left unused for now, to be available
for a more generic summary endpoint in the future.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
This patch adds a new API endpoint that returns a list of included
guests, their volumes, and whether they are included in a backup.
The output is formatted to be used with the ExtJS tree panel.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
The `guest include` logic handling the `all` and `exclude` parameters
was in the `PVE::VZDump->exec_backup()` method. Moving this logic into
the `get_included_guests` method allows us to simplify and generalize
it. This helps to make the overall logic easier to test and to develop
other features around vzdump backup jobs.
The method now returns a hash with node names as keys, mapped to
arrays of VMIDs on these nodes that are included in the vzdump job.
The VZDump API call to create a new backup is adapted to use the new
method to create the list of local VMIDs and the skiplist (see the
sketch below).
Permission checks are kept where they are, to be able to handle
missing permissions according to the current context. The old behavior
of dying on a backup job when the user is missing the permission to a
guest, and the job is not an 'all' or 'exclude' job, is kept.
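A rough sketch of that split (names and invocation hypothetical):

    # split the returned hash into local VMIDs and the skiplist
    my $included = get_included_guests($job);
    my $local_vmids = delete($included->{$nodename}) // [];
    my @skiplist = map { @$_ } values %$included;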
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
otherwise the ACME endpoint might return the ordered domain in lower
case and we fail to find our plugin config.
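In other words, something like this when looking up the plugin
(sketch; the identifier field is from RFC 8555, the config hash is
hypothetical):

    # normalize the domain before the plugin config lookup
    my $domain = lc($authorization->{identifier}->{value});
    my $plugin_id = $domains_config->{$domain}->{plugin}
        or die "no plugin configured for domain '$domain'\n";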
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
allow users with Sys.Modify to modify custom or ACME certificates.
Those users can already hose the system in plenty of ways, no reason
to restrict this in particular to being root@pam only.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
As a first step to make the whole guest include logic more testable,
the part from the API endpoint has been moved to its own method with
as few changes as possible.
Everything concerning `all` and `exclude` logic is still in the
PVE::VZDump->exec_backup() method.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
to add the pg_autoscale_mode, since it's activated by default in Ceph
Octopus and emits a warning (ceph status) if a pool has too many PGs.
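For existing pools the mode can also be set directly, e.g.:

    # valid modes: on, off, warn
    ceph osd pool set <pool> pg_autoscale_mode on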
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Luminous, Nautilus and Octopus. In Octopus, the mon_status was
dropped. Also, the ceph status was cleaned up and doesn't provide the
mgrmap and monmap anymore.
The rados queries used in the ceph status API endpoints (cluster /
node) were factored out and merged into one place.
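Conceptually, something like this (hypothetical helper, not the actual
code):

    sub get_ceph_status {
        my ($rados) = @_;
        my $status = $rados->mon_command({ prefix => 'status' });
        # Octopus' slimmed-down status no longer ships these maps,
        # so query them explicitly where still needed
        $status->{monmap} //= $rados->mon_command({ prefix => 'mon dump' });
        $status->{mgrmap} //= $rados->mon_command({ prefix => 'mgr dump' });
        return $status;
    }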
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
and add a Breaks on the older network package, as we do not depend on
it, due to it being an optional/experimental feature; so reverse the
Depends into a Breaks.
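In debian/control terms, the relation flips roughly like this
(placeholders instead of the actual package and version):

    # instead of: Depends: <network-package> (>= <fixed-version>)
    Breaks: <network-package> (<< <fixed-version>)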
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
since this API endpoint is used for the node selector in the GUI, which
causes quite widespread breakage.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
we need to parse the config even if it does not exist - it will return
the 'standalone' entry that's needed to be backwards compatible with
existing setups.
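A sketch of that fallback (helper and hash layout are assumptions):

    sub load_config {
        my ($filename) = @_;
        my $raw = eval { PVE::Tools::file_get_contents($filename) } // '';
        my $cfg = parse_config($filename, $raw);
        # backwards compatibility for setups without a config file
        $cfg->{ids}->{standalone} //= { type => 'standalone' };
        return $cfg;
    }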
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
which returns a list of challenge API types with the schema of their
required data (if it exists).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ Thomas: adapt to my changes from proxmox-acme schema def and change
path from challengeschema to challenge-schema ]
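E.g.:

    pvesh get /cluster/acme/challenge-schema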
As proxmox-acme now has a default delay for DNS challenge plugins,
which are the important case, the remaining delay is just there to
avoid overloading the ACME servers with a lot of requests. Once the
challenge was propagated, they have it verified pretty quickly, so
reduce the delay for checking the validation after first requesting it
down to 10 seconds.
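The validation polling then boils down to (sketch, method and field
names hypothetical):

    my $delay = 10;    # seconds, down from the previous longer delay
    while (1) {
        my $auth = $acme->get_authorization($auth_url);
        last if $auth->{status} eq 'valid';
        die "challenge validation failed\n" if $auth->{status} eq 'invalid';
        sleep($delay);
    }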
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>