This should not be needed, since we already call 'block-job-complete'
beforehand in qemu_drive_mirror_monitor(), and according to
benchmarking it is not needed and does not provide a measurable
improvement when shutting down the source.
Only transfer the replication state and switch the replication
direction if there actually are any replicated volumes.
Once we add support for live-migration with replicated volumes,
adding a set-replication-state command to the tunnel and using that
probably makes sense.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
This is because we want commands to return meaningful errors, which
are then printed on the client/source side.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
qm mtunnel was deemed deprecated but was still in use here.
Switch over to pvecm's mtunnel to allow removing the qm variant in
PVE 5.1.
Also use an absolute path so we do not depend on the target's
environment variables.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This reverts commit 63d02c7074.
The commit changes the configuration before the VM is actually
migrated, so it is possible to end up with a wrong configuration when
migration fails for some reason. Also, I am quite unsure whether this
automatic target change is really wanted. The patch also contains
wrong references to $self->{opts}->{node}.
Handle the case where we define a different target storeid for the
remote node and that storage is not available on the source node.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Since Qemu 2.9, block device write access is limited to one writer
unless share-rw is set to true. There is an exception for
live-migrating local disks via NBD as long as the VM is suspended.
Accordingly, stop the NBD server before resuming the VM to unbreak
local disk live-migration.
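The resulting ordering on the target side boils down to something
like the following sketch (a minimal illustration assuming the
vm_mon_cmd()/vm_resume() helpers and placeholder values, not the
exact qemu-server code):

    use PVE::QemuServer;

    my ($vmid, $skiplock, $nocheck) = (100, undef, 1);  # placeholders

    # stop the embedded NBD server first, so its client connection no
    # longer counts as a writer on the local disks ...
    PVE::QemuServer::vm_mon_cmd($vmid, 'nbd-server-stop');

    # ... and only then resume the target VM as the sole writer
    PVE::QemuServer::vm_resume($vmid, $skiplock, $nocheck);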
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
As Fabian requested, add an extra flag "with-local-disks" to enable
live storage migration with local disks.
The default target storage is the same storage ID as the source; this
can be overridden with the "targetstorage" option.
I will try to improve this later with an optional disk-by-disk
mapping.
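A possible invocation then looks roughly like this (placeholders in
angle brackets; an illustration, not verified syntax):

    qm migrate <vmid> <targetnode> --online --with-local-disks \
        --targetstorage <storeid>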
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
This allows migrating disks on local storage to a storage on the
remote node.
When the VM is started on the target node, new volumes are created
and exposed through qemu's embedded NBD server.
qemu drive-mirror is launched on the source VM for each disk, with
the NBD server as target.
When drive-mirror reaches 100% for one disk, we do not complete its
block job yet and begin mirroring the next disk.
(The mirror jobs run in parallel, but we start them one by one to
avoid storage and network overload.)
Then we live-migrate the VM to the destination node (drive-mirror is
still running at the same time).
Once the VM is live-migrated (source VM paused, target VM paused), we
complete the mirror block jobs.
When that is done, we stop the source VM and resume the target VM.
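Per disk, the mirror step boils down to a QMP drive-mirror call
against the corresponding NBD export on the target, roughly like the
sketch below (drive name, address and port are made-up placeholders,
not the exact qemu_drive_mirror() code):

    use PVE::QemuServer;

    my $vmid    = 100;                # placeholder source VM id
    my $device  = 'drive-scsi0';      # placeholder drive to mirror
    # placeholder NBD export created by the target VM
    my $nbd_uri = 'nbd:192.168.0.2:60000:exportname=drive-scsi0';

    # start a full mirror into the pre-created target volume; the
    # block job is intentionally left uncompleted until all disks
    # have converged
    PVE::QemuServer::vm_mon_cmd($vmid, 'drive-mirror',
        device => $device,
        target => $nbd_uri,
        sync   => 'full',
        format => 'raw',
        mode   => 'existing');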
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
We can migrate local snapshots when on zfs, or on dir storage with
qcow2, but the check was incorrect:
we checked for (zfs && !qcow2) instead of (zfs || qcow2).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Without this patch we use the network where the cluster traffic runs
for sending migration traffic. This is not ideal as it may hinder
cluster traffic. Further, some users have a powerful network which
would be perfect for migrations; with this patch they can run the
migration traffic over such a network without having the corosync
traffic on the same network.
The network is configurable through /etc/pve/datacenter.cfg, which
got a new property, namely migration. migration has two
subproperties: type (replaces the old migration_unsecure property)
and network.
For the case of a network failure, or if a VM has to be moved over
another network for arbitrary other reasons, I added the
migration_type and migration_network parameters to qm migrate (and
respectively to vm_start, as this gets used on migration).
They allow overriding the datacenter.cfg settings.
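For example (the CIDR and values below are only illustrations of the
expected property-string syntax):

    # /etc/pve/datacenter.cfg
    migration: secure,network=10.1.2.0/24

    # one-off override for a single migration
    qm migrate <vmid> <targetnode> --migration_type insecure \
        --migration_network 10.1.2.0/24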
Fixes bug #1177
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
foreach_volid recurses over snapshots as well, resulting in lots of
repeated checks (especially for VMs with many snapshots and disks).
A potential vmstate volume must be checked explicitly, because
foreach_drive does not care about those.
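A minimal sketch of the explicit check, with $check_volid standing in
for whatever actually records the volume (assumed helper name, not
the real sync_disks() code):

    # $conf is the VM configuration hash; walk all snapshots and pick
    # up their vmstate volumes by hand, since foreach_drive() only
    # visits regular drive entries
    foreach my $snapname (keys %{$conf->{snapshots} // {}}) {
        my $snap = $conf->{snapshots}->{$snapname};
        $check_volid->($snap->{vmstate}) if $snap->{vmstate};
    }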