This was partially copied over from PBS, but there we already pull out
the task starttime when building the store.
As eslint mentions, 'task' was unused and 'verify_time' was not
defined; fix that.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The verification column only shows in the backup grid and for PBS
storages.
(The renderer is mostly copied from proxmox-backup.)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
If neither the 'remove' option of vzdump nor the 'maxfiles' option in
the storage config are set, assume a value of 0, i.e. do not delete
anything and allow unlimited backups.
Restores previous behaviour that was broken in e6946086e3.
Also fixes a warning about using '== 0' on a non-number type.
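A minimal sketch of that defaulting, assuming a vzdump option hash
$opts and the parsed storage section config $scfg (names are
illustrative, not necessarily those used in the patch):

    # if neither vzdump's option nor the storage's 'maxfiles' is set,
    # assume 0: delete nothing, allow unlimited backups
    my $maxfiles = $opts->{maxfiles} // $scfg->{maxfiles} // 0;

    # $maxfiles is now always a defined number, so the '== 0'
    # comparison no longer warns about a non-number type
    $opts->{remove} = 0 if $maxfiles == 0;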
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
If a guest's QEMU process is 'running', but QMP says 'shutdown' or
'prelaunch', the VM is ready to be booted anew, so we can show the
button.
The 'shutdown' button is intentionally not touched, as we always want to
give the user the ability to 'stop' a VM (and thus kill any potentially
leftover processes).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Fixes commit f354fe47fc, which changed
the logic to the newer storage prune helpers, but those are designed
in the spirit of PBS, where a keep-option that is not set means keep
none.
This does not respect the storage's special handling of maxfiles:
while in the API/CLI that option must be > 0, it can be zero when set
in a storage configuration entry, and then it means keep all. So, set
the internal remove option to false if that special condition is met.
That would have been a clearer, and less change-prone, implementation
to begin with.
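A hedged sketch of that special case, with illustrative variable names
rather than the exact ones from the change:

    # maxfiles == 0 in a storage configuration entry traditionally
    # means "keep all", so disable pruning entirely instead of passing
    # empty keep-options to the newer prune helpers
    if (defined($scfg->{maxfiles}) && $scfg->{maxfiles} == 0) {
        $opts->{remove} = 0;
    }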
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
since some users don't even have a full 1500 MTU (and some systems
might have links with a bigger MTU and not require as much
fragmentation).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The `me` variable should only be assigned from `this`, else it gets
confusing and wrong fast.
Here, the managed listener tried to do a `this.reload`, but `this` is
ambiguous in that context and is not the controller.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
they cannot really be sorted alphabetically, as otherwise some elements
would be accessed before they are defined, which makes ExtJS do an HTTP
request for them.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
after creation, so that users don't need to go down the Ceph tooling
route.
Separate out the common pool options to reuse them in other places.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
to reduce code duplication and make it easier to add more options for
pool commands.
Use a new rados object for each 'osd pool set', as each command can set
an option independent of the previous command's success or failure. On
failure, a new rados object would need to be created, and that would
confuse the task tracking of the REST environment.
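Roughly, the per-command loop could look like the following sketch
(parameter names are illustrative, not copied from the patch):

    use PVE::RADOS;

    foreach my $setting (sort keys %$pool_opts) {
        # a fresh connection per 'osd pool set', so a failed command
        # does not force re-creating the rados object mid-task, which
        # would confuse the task tracking of the REST environment
        my $rados = PVE::RADOS->new();
        eval {
            $rados->mon_command({
                prefix => 'osd pool set',
                pool   => $pool,
                var    => $setting,
                val    => "$pool_opts->{$setting}",
            });
        };
        warn $@ if $@; # continue with the remaining options
    }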
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
The assumption that they already are sorted is no longer valid,
because of the IDs for non-existent guests.
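For illustration, an explicit numeric sort over the combined list
restores the expected order (a sketch, not the exact code):

    # IDs of non-existent guests get appended later on, so the list is
    # no longer pre-sorted; sort it numerically before use
    my @vmids = sort { $a <=> $b } @all_vmids;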
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Like this, there will be a backup task (within the big worker task)
for such IDs, which will then visibly (i.e. also visible in the
notification mail) fail with, e.g.:
unable to find VM '123'
In get_included_guests, the key '' was chosen for the orphaned IDs,
because it cannot possibly denote a nodename.
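For illustration, the resulting mapping looks roughly like this (node
names are hypothetical):

    my $vmids_per_node = {
        node1 => [100, 101],
        node2 => [105],
        ''    => [123],  # orphaned ID; '' can never be a valid nodename
    };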
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Commit 1a87db9e56 introduced
a usage of the plugin before the truthiness check for it.
At the moment it might not be possible to trigger this anymore,
because of the guest inclusion rework that happened later on.
But to make tasks for non-existent guest IDs visibly fail again,
this check will be necessary. Also, to get the error message in
the mail, it needs to fail inside the eval block.
Thus, keep the check in the eval block and move the block of code
using the plugin to below the check.
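A hedged sketch of the resulting ordering inside the per-guest eval
(error text and names are illustrative):

    eval {
        # check first, so a missing plugin for a non-existent guest ID
        # dies here and the error shows up in the notification mail
        die "unable to find VM '$vmid'\n" if !$plugin;

        # only afterwards run the code that actually uses $plugin
        ...;
    };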
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
this fixes an issue where a rogue running backup would upload the VM
config of a later backup in a backup job.
Instead, that directory now gets deleted and the config is not
available anymore.
We cannot really keep those directories around until the end of the
backup job, since we temporarily save CT contents there, which could
get large very fast.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
as otherwise the window is not centered if it only grows in size after
ExtJS has rendered it completely once.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>