as the nbd server could have been stopped by something else.
Further, it makes no sense to die and thus mark the migration as
failed just because of an NBD server stop issue.
At this point the migration handoff to the target has already
happened, so normally we're good; if it fails we have other
(followup) problems anyway.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
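
A minimal sketch of the resulting behavior, assuming mon_cmd as the
usual QMP helper (the exact error handling is illustrative):

    # stop the NBD server, but never let a failure here kill the
    # migration -- the handoff to the target has already happened
    eval { mon_cmd($vmid, 'nbd-server-stop') };
    if (my $err = $@) {
        warn "NBD server stop failed - $err";
    }
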
and refactor the test_volid closure. This way, get_replicatable_volumes
no longer needs a separate loop for unused volumes. get_vm_volumes, which
is used for activation/deactivation of volumes at migration and for
deactivation in vm_stop_cleanup, now includes those volumes as well.
For migration this is an improvement, because those volumes might need to
be migrated; for vm_stop_cleanup it shouldn't hurt. The last user of
foreach_volid is check_vm_disks_local, used by migrate_vm_precondition,
where information about the additional volumes doesn't hurt either.
Note that replicate is (still) set by default, so the behavior for
get_replicatable_volumes for unused volumes should not change.
Hibernation vmstate files are now also included and recognized as 'is_vmstate'.
The 'size' attribute will not be overwritten by subsequent iterations for the
same volid anymore (a volid may appear both in the config and in snapshots),
so the size from the current config is now preferred.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
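
A rough sketch of the closure-based collection described above; the
attribute names follow the commit message, the rest is assumed:

    my $volhash = {};
    my $test_volid = sub {
        my ($volid, $attr) = @_;
        $volhash->{$volid} //= {};
        # a volid may appear both in the config and in snapshots: keep
        # the size from the current config instead of overwriting it
        $volhash->{$volid}->{size} //= $attr->{size};
        $volhash->{$volid}->{is_vmstate} = 1 if $attr->{is_vmstate};
        # replicate defaults to 1, so unused volumes stay replicatable
        $volhash->{$volid}->{replicate} = $attr->{replicate} // 1;
    };
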
when a backup includes a cloudinit disk on a non-existent storage,
the restore fails with an error that the storage does not exist.
this happens because we want to get the format of the disk by
checking the source storage.
we fix this by using the target storage first and the source only as
a fallback.
this will still fail if neither storage exists
(which is ok, since we cannot restore then anyway)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
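
The fix boils down to reordering the lookup; get_disk_format() here is
a hypothetical stand-in for the actual format query:

    # prefer the target storage, fall back to the source storage;
    # dies only if neither storage exists
    my $format = eval { get_disk_format($storecfg, $target_storeid, $volname) };
    $format = get_disk_format($storecfg, $source_storeid, $volname)
        if !defined($format);
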
Some OVF files do not declare 'rasd' as a default namespace (in the
top level Envelope element), but inline in each element (e.g.
<rasd:HostResource xmlns:rasd="foo">...</rasd:HostResource>)
This trips up our relative findvalue with
> XPath error : Undefined namespace prefix
To avoid this, search in the global XPathContext (where we register
those namespaces ourselves) and pass the item_node as context
parameter.
This then works for both cases.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
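
The pattern looks roughly like this with XML::LibXML ($ovf_file and
$item_node are assumed to come from the surrounding parser code):

    use XML::LibXML;

    my $dom = XML::LibXML->load_xml(location => $ovf_file);
    my $xpc = XML::LibXML::XPathContext->new($dom);
    # we register the prefix ourselves, so it no longer matters whether
    # the file declares 'rasd' on the Envelope or inline on each element
    $xpc->registerNs(rasd =>
        'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData');
    # pass the item node as context instead of searching relative to it
    my $value = $xpc->findvalue('rasd:HostResource', $item_node);
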
sometimes vendors do not put the 'rasd' namespace in the top level
Envelope, but in every 'rasd' element. this adds a test for that case.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
When one of the ovf tests fails to parse at all, we just get the
'die' message of the failing component, but not which file actually
failed to parse.
To get better output, convert the parsing to a test as well, using ok()
and fail() respectively, and print the error.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
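
Roughly, the per-file test then changes to something like this
(parse_ovf stands in for the parser under test):

    my $parsed = eval { parse_ovf($ovf_file) };
    if (my $err = $@) {
        # a failing parse is now a failing test with the file name...
        fail("parse $ovf_file");
        # ...and the actual error still ends up in the output
        diag($err);
    } else {
        ok(1, "parse $ovf_file");
    }
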
this is only used for migration via 'qm mtunnel'; regular users should
never need to resume a VM that does not logically belong to the node it
is running on.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
by counting only local volumes that will be live-migrated via qemu_drive_mirror,
i.e. those listed in $self->{online_local_volumes}.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
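
In other words, the count is taken from that list alone; a sketch:

    # only these volumes go through qemu_drive_mirror
    my $mirror_count = scalar(@{ $self->{online_local_volumes} // [] });
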
With Qemu 4.2 a new `audiodev` property was introduced [0] to explicitly
specify the backend to be used for the audio device. This is accompanied
by a warning that the fallback to the default audio backend is
deprecated.
[0] https://wiki.qemu.org/ChangeLog/4.2#Audio
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
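
As an illustration of the explicit wiring (driver choice and IDs are
made up here, not necessarily what qemu-server generates):

    my $backend_id = 'spice-backend0';
    push @$cmd, '-audiodev', "spice,id=$backend_id";
    push @$cmd, '-device', 'ich9-intel-hda,id=sound0';
    push @$cmd, '-device', "hda-duplex,id=sound0-codec0,audiodev=$backend_id";
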
If storage_migrate dies, the error message might not include the
volume ID or the target storage ID, but those might be good to know.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
This makes it possible to migrate a VM with volumes store1:vm-123-disk-0
and store2:vm-123-disk-0 to some targetstorage. It also prevents migration
failure when there is an orphaned disk with the same volid on the target.
To avoid confusion, the name should not change for 'vmstate'-volumes.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
It was necessary to move foreach_volid back to QemuServer.pm
In VZDump/QemuServer.pm and QemuMigrate.pm the dependency on
QemuConfig.pm was already there, just the explicit "use" was missing.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Upstream marks these as having a micro-version of >=90. Unfortunately,
the machine versions are bumped earlier, so testing them is made
unnecessarily difficult, since the version checking code would abort on
migrations etc.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
[ Thomas: do so refactor ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
so that pve-container and qemu-server use the same one, in preparation
for moving it to JSONSchema and having a bridgepair format.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Requires a mock CPU-model config, which is given as a raw string to also
test parsing capabilities. Also tests defaulting behaviour.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Ebner <f.ebner@proxmox.com>
Tested-By: Fabian Ebner <f.ebner@proxmox.com>
Can be specified for a particular VM or via a custom CPU model (VM takes
precedence).
QEMU's default limit only allows up to 1TB of RAM per VM. Increasing the
physical address bits available to a VM can fix this.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
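
A sketch of how this could end up on the command line ('phys-bits' and
'host-phys-bits' are the underlying QEMU CPU properties; the config
handling around them is assumed):

    if (defined(my $bits = $cpu_conf->{'phys-bits'})) {
        if ($bits eq 'host') {
            # let the guest see the host's physical address bits
            $cpu_str .= ',host-phys-bits=true';
        } else {
            $cpu_str .= ",phys-bits=$bits";
        }
    }
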
If a cputype is custom (check via prefix), try to load options from the
custom CPU model config, and set values accordingly.
While at it, extract the currently hardcoded values into a separate sub
and add the reasoning behind them.
Since the new flag resolving outputs flags in sorted order for
consistency, adapt the test cases to not break. Only the order is
changed, not which flags are present.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Ebner <f.ebner@proxmox.com>
Tested-By: Fabian Ebner <f.ebner@proxmox.com>
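
The prefix check itself is simple; load_custom_model() below is a
hypothetical stand-in for reading cpu-models.conf:

    if ($cputype =~ m/^custom-(.+)$/) {
        my $custom = load_custom_model($1);
        # fall back to defaults for values the model does not set
        $cputype = $custom->{'reported-model'} // 'kvm64';
        $flags //= $custom->{flags};
    }
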
To avoid hardcoding even more CPU-flag related things for custom CPU
models, introduce a dynamic approach to resolving flags.
resolve_cpu_flags takes a list of hashes (as documented in the
comment) and resolves them to a valid "-cpu" argument without
duplicates. This also helps by providing a reason why specific CPU flags
have been added, and thus allows for useful warning messages should a
flag be overwritten by another.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Ebner <f.ebner@proxmox.com>
Tested-By: Fabian Ebner <f.ebner@proxmox.com>
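
A condensed sketch of the idea (the real input format is documented in
the code; here each source maps flag => { op, reason }):

    sub resolve_cpu_flags {
        my (@sources) = @_;
        my $flags = {};
        for my $source (@sources) {
            for my $flag (sort keys %$source) {
                if (defined($flags->{$flag})
                    && $flags->{$flag}->{op} ne $source->{$flag}->{op}) {
                    warn "warning: flag '$flag' ($flags->{$flag}->{reason})"
                        . " overwritten by: $source->{$flag}->{reason}\n";
                }
                $flags->{$flag} = $source->{$flag};
            }
        }
        # sorted, so the resulting -cpu argument is stable
        return join(',', map { $flags->{$_}->{op} . $_ } sort keys %$flags);
    }
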
Just like with live-migration, custom CPU models might change after a
snapshot has been taken (or a VM suspended), which would lead to a
different QEMU invocation on rollback/resume.
Save the "-cpu" argument as a new "runningcpu" option into the VM conf
akin to "runningmachine" and use as override during rollback/resume.
No functional change with non-custom CPU types intended.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
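
Conceptually, the VM config then carries both pinned values, e.g.
(values purely illustrative):

    runningmachine: pc-i440fx-4.2+pve0
    runningcpu: kvm64,enforce,+lahf_lm,+sep,+kvm_pv_unhalt
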
This is required to support custom CPU models, since the
"cpu-models.conf" file is not versioned, and can be changed while a VM
using a custom model is running. Changing the file in such a state can
lead to a different "-cpu" argument on the receiving side.
This patch fixes this by passing the entire "-cpu" option (extracted
from /proc/.../cmdline) as a "qm start" parameter. Note that this is
only done if the VM to migrate is using a custom model (which we can
check just fine, since the <vmid>.conf *is* versioned with pending
changes), thus not breaking any live-migration directionality.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
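
Extracting the argument is straightforward, since /proc/<pid>/cmdline
is NUL-separated; a self-contained sketch:

    sub get_running_cpu_arg {
        my ($pid) = @_;
        open(my $fh, '<', "/proc/$pid/cmdline") or return undef;
        my $cmdline = do { local $/; <$fh> };
        close($fh);
        my @args = split(/\0/, $cmdline);
        for my $i (0 .. $#args - 1) {
            return $args[$i + 1] if $args[$i] eq '-cpu';
        }
        return undef;
    }
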
in addition to printing it. preparation for remote cluster migration,
where we want to return this in a structured fashion over the migration
tunnel instead of parsing stdout via SSH.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
into one sub that retrieves the local disks, and the actual NBD
allocation. that way, remote incoming migration can just call the NBD
allocation with a custom list of volume names/storages/..
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
both were previously missing. the existing 'check_storage_access'
helper is not applicable here since it operates on a full set of VM
config options, not just storage IDs.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the syntax is backwards compatible: providing a single storage ID or '1'
works like before. the new helper ensures consistent behaviour at all
call sites.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
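
One possible shape of such a helper (not the actual implementation; the
pair syntax is assumed to be 'source:target'):

    sub parse_storage_map {
        my ($map) = @_;
        my $res = { default => undef, entries => {} };
        for my $entry (split(/,/, $map)) {
            if ($entry =~ m/^([^:]+):(.+)$/) {
                $res->{entries}->{$1} = $2;   # explicit source:target pair
            } elsif ($entry ne '1') {
                $res->{default} = $entry;     # single storage ID, as before
            }
            # a plain '1' keeps the old "same storage ID" behaviour
        }
        return $res;
    }
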