Either we're done in a few seconds anyway, or, if the VM dirties lots
of pages, we need quite a bit of time, and then it does not help to
output roughly the same status 10 times a second...
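A minimal sketch of the idea, with made-up names and a one-second
threshold (the actual change may simply poll and log less often):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # only print a status line if it differs from the last one or if at
    # least a second passed, instead of repeating it many times per second
    my ($last_line, $last_time) = ('', 0);

    sub maybe_log_status {
        my ($line) = @_;
        my $now = time();
        return if $line eq $last_line && ($now - $last_time) < 1;
        print "$line\n";
        ($last_line, $last_time) = ($line, $now);
    }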
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
* use render_bytes where possible, so the printed units are quick to
read and grasp
* xbzrle is only interesting if pages/bytes are actually sent using it,
so only log it in that case
* log if VM dirties more than we send
* log current speed we get from QEMU
In general, fewer lines are logged and huge integers are avoided.
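A rough sketch of these decisions (field names follow QEMU's
query-migrate output, but the exact wording and helpers in the real
code differ):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # summarize only the interesting parts of a migration status hash
    sub summarize_migration_stats {
        my ($stat) = @_;
        my @lines;

        # xbzrle is only mentioned if it actually sent pages
        my $xbzrle = $stat->{'xbzrle-cache'} // {};
        push @lines, "xbzrle: sent $xbzrle->{pages} pages"
            if ($xbzrle->{pages} // 0) > 0;

        # note when the VM dirties memory faster than it gets transferred
        my $ram = $stat->{ram} // {};
        push @lines, "warning: VM dirties memory faster than it gets transferred"
            if ($ram->{'dirty-pages-rate'} // 0) > ($ram->{'pages-per-second'} // 0);

        return @lines;
    }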
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the claim that QEMU limits this to 32M otherwise is bogus, at least
with any current QEMU version.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Use an early die so that the rest can lose an indentation level for
the actual migration status reporting code.
Extract commonly used members of the stat hash for shorter code.
Use `git show -w --word-diff=color --word-diff-regex='\w+'` to get a
better view of the actual changes.
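For illustration, the pattern boils down to something like this (names,
condition and message are made up):

    # bail out early instead of nesting the whole reporting block, and
    # pull commonly used members of the stat hash into locals
    sub report_migration_status {
        my ($status, $stat) = @_;
        die "unexpected migration status '$status' - aborting\n"
            if $status ne 'active';

        my $ram = $stat->{ram};
        my ($trans, $rem, $total) = @{$ram}{qw(transferred remaining total)};
        return "transferred $trans of $total bytes, $rem remaining";
    }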
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This avoids the possibility of dying during phase3_cleanup and, instead of
needing to duplicate the cleanup ourselves, benefits from phase2_cleanup
doing so.
The duplicate cleanup was also very incomplete: it didn't stop the remote kvm
process (leading to 'VM already running' when trying to migrate again
afterwards), yet it removed its disks, and it didn't unlock the config, didn't
close the tunnel and didn't cancel the block-dirty bitmaps.
Since migrate_cancel should do nothing after the (non-storage) migrate process
has completed, even that cleanup step is fine here.
Since phase3 is empty at the moment, the order of operations is still the same.
Also add a test that would complain about finish_tunnel not being called before
this patch. That test also checks that local disks are not already removed
before finishing the block jobs.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Namely, those migrated with storage_migrate by using the information from
volume_map. Call cleanup_remotedisks in phase1_cleanup as well, because that's
where we end up if sync_offline_local_volumes fails, and some disks might
already have been transferred successfully. Note that the local disks are
still present, so this is fine.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
This also changes the behavior to remove the local copies of offline migrated
volumes only after the migration has finished successfully (this is relevant
for mixed settings, e.g. online migration with unused/vmstate disks).
local_volumes contains both the volumes previously in $self->{volumes}
and the volumes in $self->{online_local_volumes}, and hence is the place
to look for which volumes we need to remove. Of course, replicated
volumes still need to be skipped.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The case with:
1. no generic 'migration' limit from the storage plugin
2. a migrate_speed limit in the VM config
was broken. It would assign 0 to migrate_speed when picking the minimum value
and then fall back to the default value. Fix it by checking whether bwlimit is
0 before picking the minimum.
Also, make it a bit more readable by avoiding the trick of //-assigning bwlimit
before the units match up and relying on getting back the original bwlimit
value as the minimum. Instead, only ||-assign after the units match up, without
relying on anything else.
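A simplified sketch of the fixed logic (parameter names and units are
illustrative, not the actual code):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # 0/undef means "no limit" and must not win the minimum; the default is
    # only ||-assigned once both values use the same unit
    sub effective_migrate_speed {
        my ($storage_bwlimit_kibps, $conf_migrate_speed_mibps, $default_mibps) = @_;

        my $migrate_speed = ($conf_migrate_speed_mibps // 0) * 1024; # MiB/s -> KiB/s

        if (my $bwlimit = $storage_bwlimit_kibps) {
            $migrate_speed = $bwlimit if !$migrate_speed || $bwlimit < $migrate_speed;
        }

        $migrate_speed ||= $default_mibps * 1024; # fall back to the default

        return $migrate_speed; # KiB/s
    }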
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
by using the information obtained in the first scan. This
also makes sure we only scan local storages.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
by making local_volumes class-accessible. One function is for scanning all
local volumes and one is for actually syncing offline volumes via
storage_migrate. The exception is replicated volumes; for now, this still
happens during the scan.
Also introduce a filter_local_volumes helper to make life easier.
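A hypothetical shape for such a helper (the attribute name used for
filtering is an assumption, not the actual one):

    # return the names of all local volumes whose (assumed) 'migration_mode'
    # attribute matches one of the wanted values, in a stable order
    sub filter_local_volumes {
        my ($local_volumes, @wanted_modes) = @_;
        my %wanted = map { $_ => 1 } @wanted_modes;
        return grep { $wanted{ $local_volumes->{$_}->{migration_mode} // '' } }
            sort keys %$local_volumes;
    }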
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
except for migration, where it would be subtly backwards-incompatible. Since
there is a scan_volids call for migration, we can't default to filtering in
scan_volids just yet.
This also allows getting rid of the existing filtering hack in rescan().
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Pinned machine versions like "pc-i440fx-4.2+pve2.pxe" would otherwise
get a second "+pve0" suffix, which is incorrect.
Also deal with non-pve pinned versions correctly, i.e.
"pc-i440fx-5.2.pxe" becomes "pc-i440fx-5.2+pve0.pxe".
Handle .pxe suffixes in Machine.pm as well, and add two test cases.
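A sketch of the intended normalization (the function name is
illustrative and the real code in Machine.pm handles more cases):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # strip an optional ".pxe" suffix first, pin "+pve0" only if no "+pve<N>"
    # revision is present yet, then re-append the ".pxe" part
    sub pin_machine_version {
        my ($machine) = @_;
        my $is_pxe = $machine =~ s/\.pxe$//;
        $machine .= '+pve0' if $machine !~ m/\+pve\d+$/;
        $machine .= '.pxe' if $is_pxe;
        return $machine;
    }

    print pin_machine_version('pc-i440fx-4.2+pve2.pxe'), "\n"; # unchanged
    print pin_machine_version('pc-i440fx-5.2.pxe'), "\n"; # pc-i440fx-5.2+pve0.pxe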
Co-developed-by: Luca Berneking <luca@berneking.net>
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
When checking whether a volume is still referenced by a snapshot, the volid
itself is first checked. When the volid is different, we fall back to comparing
the path.
As the first value to be compared is a volume's path, the second value had
better be a volume's path too, and not a snapshot's path.
See also 77019edfe0 for historical context.
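The fixed check, roughly (a $get_path callback stands in for the storage
layer's path lookup; the point is that no snapshot path enters the
comparison):

    # a volume counts as referenced if the volids match or, failing that,
    # if both *volume* paths (never a snapshot path) are identical
    sub volume_is_referenced {
        my ($get_path, $volid, $referenced_volid) = @_;
        return 1 if $volid eq $referenced_volid;
        return $get_path->($volid) eq $get_path->($referenced_volid);
    }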
The error that led me here:
* had a VM with ZFS over iSCSI storage and an existing snapshot
* added a new unused drive
* tried to remove the unused drive
* it failed, because the ZFS (not the Pool!) plugin does not support snapshot paths.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
If, for whatever reason, we get "not-ready" again, we'd log it again the
next round.
This improves the behavior for multiple disks, especially on migration,
where we mirror the local disks one by one but kept reporting on
previous ones.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Orient on the backup output, which got reworked for PVE 6.2/6.3.
Avoid overwhelming the user with redundant information, and use
human-readable units.
before:
> restore-drive-scsi5: transferred: 167772160 bytes remaining: 8422162432 bytes total: 8589934592 bytes progression: 1.95 % busy: 1 ready: 0
after:
> restore-drive-scsi0: transferred 720.0 MiB of 32.0 GiB (2.20%) in 12s
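A self-contained sketch of the new style of progress line (render_bytes
here is a stand-in for the shared helper, not its real implementation):

    #!/usr/bin/perl
    use strict;
    use warnings;

    sub render_bytes { # stand-in: scale to a human readable unit
        my ($bytes) = @_;
        my @units = ('B', 'KiB', 'MiB', 'GiB', 'TiB');
        my $i = 0;
        while ($bytes >= 1024 && $i < $#units) {
            $bytes /= 1024;
            $i++;
        }
        return sprintf("%.1f %s", $bytes, $units[$i]);
    }

    sub render_progress {
        my ($drive, $transferred, $total, $elapsed) = @_;
        my $percent = $total ? 100 * $transferred / $total : 100;
        return sprintf("%s: transferred %s of %s (%.2f%%) in %ds", $drive,
            render_bytes($transferred), render_bytes($total), $percent, $elapsed);
    }

    print render_progress('restore-drive-scsi0', 720 * 1024**2, 32 * 1024**3, 12), "\n";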
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Similar to backups, prevent QEMU from being killed by qmeventd during
the live-restore, so a guest can shut itself down without aborting the
restore operation.
Note that the 'close' is only there to be explicit; the handle will also be
closed when an operation errors (i.e. when the 'eval' is left).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Enables live-restore functionality using the 'alloc-track' QEMU driver.
This allows starting a VM immediately when restoring from a PBS
snapshot. The snapshot is mounted into the VM, so it can boot from that,
while guest reads and a 'block-stream' job handle the restore in the
background.
If an error occurs, the VM is deleted and all data written during the
restore is lost.
The VM remains locked during the restore, which automatically prohibits
any modifications to the config while restoring. Some modifications
might potentially be safe; however, this is experimental enough that I
believe it would cause more bad stuff(tm) than actually satisfy any
use cases.
Pool handling is slightly adjusted so the VM can be added to the pool
before the restore starts.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Uses the custom 'alloc-track' filter node to redirect writes to the
original drive's target, while unwritten blocks will be read from the
specified PBS snapshot.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
...so it works with other block jobs as well. The intended use case is
block-stream, which also requires a new "auto" (wait only) completion
mode, since it finishes automatically anyway.
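Roughly, the completion handling gains a case like the following
($mon_cmd stands in for the QMP helper; this is a sketch, not the actual
monitor loop):

    # 'complete' and 'cancel' still issue the corresponding QMP command,
    # while 'auto' only waits, since e.g. block-stream finishes on its own
    sub finish_block_job {
        my ($mon_cmd, $vmid, $job_id, $completion) = @_;
        if ($completion eq 'complete') {
            $mon_cmd->($vmid, 'block-job-complete', device => $job_id);
        } elsif ($completion eq 'cancel') {
            $mon_cmd->($vmid, 'block-job-cancel', device => $job_id);
        } elsif ($completion ne 'auto') {
            die "invalid completion value: $completion\n";
        }
    }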
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
cloud-init's SLAAC option was disabled in 2018 because there was no
support for it. Now that cloud-init 19.4 or newer versions are more
widespread, we can finally reenable it.
Also include the minimum required cloud-init version for SLAAC support
in the format description.
Tested on Ubuntu 20.04 (ci 20.4), CentOS 8 (ci 19.4), Debian 10 (ci
20.2).
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>