vIOMMU enables the option to pass through PCI devices to L2 VMs inside
L1 VMs via nested virtualization and adds an extra layer of isolation.
This uses the new property-string from the "config: define machine
schema as property-string" commit to add the viommu option to the
machine parameter.
Currently there are two vIOMMU implementations in QEMU to choose from:
intel or virtio.
Virtio-iommu is more recent but less used in production than intel-iommu.
The assert_valid_machine_property function prevents using intel-iommu with
i440fx.
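For illustration, the kind of check this refers to could look roughly
like this (a minimal standalone sketch; the function name, message and
exact condition are assumptions, not the actual qemu-server code):

    use strict;
    use warnings;

    # Sketch: only the intel vIOMMU requires q35, mirroring the check
    # described above for assert_valid_machine_property().
    sub check_viommu {
        my ($machine_type, $viommu) = @_;
        return if !defined($viommu);
        die "intel vIOMMU needs the q35 machine type\n"
            if $viommu eq 'intel' && $machine_type !~ m/q35/;
    }

    check_viommu('q35', 'intel');            # ok
    eval { check_viommu('pc', 'intel') };    # fails: not allowed on i440fx
    print $@ if $@;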
Signed-off-by: Markus Frank <m.frank@proxmox.com>
[ TL: tiny coding style fix to extract variable inside if expr ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
add one test case for a spice display and one for std
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
by remembering the 'forcemachine' parameter that's passed along when
starting the target instance.
In preparation to introduce a call to get_current_qemu_machine after
starting a VM to check for machine version deprecation.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In preparation to turn the 'machine' parameter into a property string.
parse_property_string checks the value against the machine type regex,
so the test cases with 'somemachine' and 'someothermachine' would fail.
To avoid that, replace 'somemachine' and 'someothermachine' with 'q35'
and 'pc' with sed:
sed -i 's/somemachine/q35/g'
sed -i 's/someothermachine/pc/g'
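To illustrate why the placeholder names no longer parse (the format
definition below is a stand-in, not the real machine schema):

    use strict;
    use warnings;
    use PVE::JSONSchema;

    # Stand-in property-string format whose default key carries a
    # machine-type pattern, similar in spirit to the real schema.
    my $machine_fmt = {
        type => {
            type => 'string',
            pattern => '(pc|q35|pc-(i440fx|q35)-\d+(\.\d+)+)',
            default_key => 1,
        },
    };

    # 'q35' parses fine, 'somemachine' does not match the pattern and dies
    PVE::JSONSchema::parse_property_string($machine_fmt, 'q35');
    eval { PVE::JSONSchema::parse_property_string($machine_fmt, 'somemachine') };
    print "somemachine was rejected\n" if $@;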
Signed-off-by: Markus Frank <m.frank@proxmox.com>
[FE: improve wording in commit message]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Quoting from QEMU commit 4271f40383 ("virtio-net: correctly report
maximum tx_queue_size value"):
> Maximum value for tx_queue_size depends on the backend type.
> 1024 for vDPA/vhost-user, 256 for all the others.
> So the parameter is silently ignored and ethtool reports a different
> value than the one provided by the user.
Indeed, for a non-vDPA/vhost-user netdev, the guest will see TX: 256
instead of the specified 1024 here. With the mentioned QEMU commit (in
master, to become part of 8.1), using 1024 will be a hard error:
> Invalid tx_queue_size (= 1024), must be a power of 2 between 256 and 256
Since neither vhost-user, nor vhost-vdpa netdev types are exposed by
Proxmox VE, just changing the limit to the correct 256 should be fine.
No obvious issue during live-migration found.
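In the generated device string, the change is simply the default
dropping to the supported maximum (a sketch with made-up MAC/netdev
IDs; the surrounding code is simplified):

    use strict;
    use warnings;

    # tap/vhost-net (what PVE exposes) only supports tx_queue_size up to 256;
    # rx_queue_size can stay at 1024.
    my $rx_queue_size = 1024;
    my $tx_queue_size = 256;    # was 1024, which QEMU >= 8.1 rejects

    print "-device virtio-net-pci,mac=AA:BB:CC:DD:EE:FF,netdev=net0"
        . ",rx_queue_size=$rx_queue_size,tx_queue_size=$tx_queue_size\n";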
Fixes: 620d6b32 ("virtio-net: increase defaults rx|tx-queue-size to 1024")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Only unit 0 for IDE is supported with machine type q35. Currently,
QEMU will fail startup with machine type q35 with an error like
> Can't create IDE unit 1, bus supports only 1 units
when ide1 or ide3 is configured.
Make sure to keep backwards compat for migration by leaving ide0 and
ide2 fixed. Since starting with ide1 or ide3 never worked, they can be
moved to a controller with a higher ID without issue.
Reported in the community forum:
https://forum.proxmox.com/threads/124615/post-543127
https://forum.proxmox.com/threads/130815/
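A standalone sketch of the resulting index-to-bus mapping (the formula
here is illustrative, not the exact drive device generation code):

    use strict;
    use warnings;

    # ide0/ide2 keep their old controller (bus) for migration compatibility,
    # ide1/ide3 move to unit 0 of a higher controller, since q35 IDE buses
    # only support a single unit.
    sub q35_ide_bus {
        my ($index) = @_;
        my $controller = int($index / 2);
        my $unit = $index % 2;
        if ($unit) {            # was unit 1 -> never worked on q35
            $controller += 2;   # move to a free, higher controller id
            $unit = 0;
        }
        return "ide.$controller,unit=$unit";
    }

    print "ide$_ -> ", q35_ide_bus($_), "\n" for 0 .. 3;
    # ide0 -> ide.0,unit=0   ide1 -> ide.2,unit=0
    # ide2 -> ide.1,unit=0   ide3 -> ide.3,unit=0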
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
None of the configured test storages support the content type iso
right now, just add it to cifs-store.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
When scanning all configured storages for disk images belonging to the
VM, the migration could easily fail if a storage is enabled but not
available. That storage might not even be used by the VM at all.
By not scanning all storages and only looking at the disk images
referenced in the VM config, we can avoid unnecessary failures.
Some information that used to be provided by the storage scanning needs
to be fetched explicitly (size, format).
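The explicit lookup is roughly of this shape (a sketch; the volume ID
and timeout are made up, the real code lives in the migration module):

    use strict;
    use warnings;
    use PVE::Storage;

    # Ask the storage layer about the referenced volume directly instead of
    # scanning every enabled storage for images.
    my $storecfg = PVE::Storage::config();
    my $volid = 'local-lvm:vm-100-disk-0';    # example volume ID

    my ($size, $format) = PVE::Storage::volume_size_info($storecfg, $volid, 5);
    print "$volid: size=$size format=$format\n";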
Behaviorally the biggest change is that unreferenced disk images will
not be migrated anymore. Only images referenced in the config will be
migrated.
The tests have been adapted accordingly.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
by adding them to their own list, saving the nodes where they are not
allowed, and returning those on 'wantarray' so we don't break existing
callers that don't expect it.
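In Perl terms the pattern is simply this (simplified, with illustrative
names; not the actual function):

    use strict;
    use warnings;

    # Return the extra hash of "not allowed" nodes only to callers that ask
    # for it, so existing scalar-context callers keep working unchanged.
    sub get_allowed_nodes {
        my ($checks) = @_;
        my $allowed = [];
        my $not_allowed_nodes = {};
        for my $node (keys %$checks) {
            if (my $blockers = $checks->{$node}) {
                $not_allowed_nodes->{$node} = $blockers;
            } else {
                push @$allowed, $node;
            }
        }
        return wantarray ? ($allowed, $not_allowed_nodes) : $allowed;
    }

    my $checks = { node1 => undef, node2 => ['missing PCI mapping'] };
    my $nodes = get_allowed_nodes($checks);                 # existing callers
    my ($ok, $not_allowed) = get_allowed_nodes($checks);    # new callers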
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Markus Frank <m.frank@proxmox.com>
this patch allows configuring pci devices that are mapped via cluster
resource mapping when the user has 'Resource.Use' on the ACL path
'/mapping/pci/{ID}' (in addition to the usual required vm config
privileges)
When given multiple mappings in the config, we use them as alternatives
for the passthrough, and will select the first free one on startup.
It uses our regular PCI reservation mechanism for regular devices, and
we introduce a selection mechanism for mediated devices.
A few changes to the inner workings were required to make this work well:
* parse_hostpci now returns a different structure where we have a list
of lists (first level is for the different alternatives and second
level is for the different devices that should be passed through
together; see the sketch after this list)
* factor out the 'parse_hostpci_devices' which parses each device from
the config and does some precondition checks
* reserve_pci_usage now behaves slightly differently when trying to
reserve a device that is already reserved for the same VMID, since to
check which alternative we can use, we already must reserve one (this
means that qm showcmd can actually reserve devices, albeit only for up
to 10 seconds)
* configuring a mediated device on a multifunction device is not
supported anymore, and results in failure to start (previously, it
just chose the first device to do it). This is a breaking change
* configuring a single pci device twice on different hostpci slots now
fails during commandline generation instead of on QEMU start, so we had
to adapt one test where this occurred (it could never have worked
anyway)
* we now check permissions during clone/restore, meaning raw/real
devices can only be cloned/restored by root@pam from now on.
This is a breaking change.
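For illustration, the new shape of a parsed hostpci entry (key names
and PCI addresses are made up here, not taken from the real code):

    use strict;
    use warnings;

    # Outer list: the configured alternatives; inner list: the devices that
    # get passed through together (e.g. all functions of one card).
    my $parsed_hostpci = {
        ids => [
            [ { id => '0000:01:00.0' }, { id => '0000:01:00.1' } ],   # alternative 1
            [ { id => '0000:81:00.0' }, { id => '0000:81:00.1' } ],   # alternative 2
        ],
        # other hostpci options (pcie, rombar, mdev, ...) stay top-level
    };

    # startup picks the first alternative whose devices can all be reserved
    for my $alternative (@{ $parsed_hostpci->{ids} }) {
        print "candidate: ", join(' + ', map { $_->{id} } @$alternative), "\n";
    }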
Fixes #3574: Improve SR-IOV usability
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Markus Frank <m.frank@proxmox.com>
The commands snapshot-drive and delete-drive-snapshot have been unused
by qemu-server since commit eba2b721 ("use qemu's blockdev-snapshot
functions") and are now going to be dropped in our QEMU builds too, so
get rid of these left-overs.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
as the deprecation message printed by QEMU suggests.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Even if between single quotes, the dollar sign needs to be escaped
here, since make expands '$' itself before the shell sees the command.
Otherwise, there will be an error
> Search pattern not terminated at -e line 1.
and no migration tests would be run. The error did not lead to
aborting though, making it harder to notice.
Fixes: aac89f6c ("tests: avoid calling test script to get target names")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
As otherwise we couple *all* Makefile targets to the dependencies of
the test script, even for a simple make call (e.g., done on building
the source), so use a much simpler heuristic that just depends on
perl, which is essential in Debian.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Previously, cloning a stopped VM didn't respect bwlimit. Passing the -r
(ratelimit) parameter to qemu-img convert fixes this issue.
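In the offline clone path, this boils down to appending the rate limit
to the qemu-img invocation, roughly like this (paths and the unit
handling shown are illustrative, not the exact code):

    use strict;
    use warnings;

    my @cmd = ('/usr/bin/qemu-img', 'convert', '-p', '-f', 'raw', '-O', 'raw');

    # PVE bwlimits are configured in KiB/s; qemu-img's -r takes bytes per second
    my $bwlimit = 10240;    # KiB/s
    push @cmd, '-r', $bwlimit * 1024 if defined($bwlimit);

    push @cmd, '/path/to/source-image', '/path/to/target-image';
    print "@cmd\n";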
Signed-off-by: Leo Nunner <l.nunner@proxmox.com>
[ T: reword subject line slightly ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since commit 9b6efe43 ("migrate: add live-migration of replicated
disks") live-migration with replicated volumes is possible. When
handling the replication, it is checked that all local volumes
previously detected as replicatable are actually replicated. So the
check if migration with snapshots is possible can just allow volumes
that are detected as replicatable.
Note that VM state files are also replicated.
If there is an invalid configuration with a non-replicatable volume or
state file and replication is enabled, then replication will fail, and
thus migration will fail early.
Trying to live-migrate to a non-replication target (needs --force)
will still fail if there are snapshots, because they are (correctly)
detected as non-replicated.
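Schematically, the relaxed check looks like this (names and the exact
error are illustrative):

    use strict;
    use warnings;

    # Volumes with snapshots normally block live migration, but volumes that
    # were detected as replicatable are handled by the replication code.
    sub check_snapshot_blocker {
        my ($volid, $has_snapshots, $replicatable_volumes) = @_;
        return if !$has_snapshots;
        return if $replicatable_volumes->{$volid};    # replication covers it
        die "can't migrate '$volid' with snapshots\n";
    }

    my $replicatable = { 'local-zfs:vm-100-disk-0' => 1 };
    check_snapshot_blocker('local-zfs:vm-100-disk-0', 1, $replicatable);   # ok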
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The 'nbd-server-add' QMP command has been deprecated since QEMU 5.2 in
favor of a more general 'block-export-add'.
When using 'nbd-server-add', QEMU internally converts the parameters
and calls blk_exp_add() which is also used by 'block-export-add'. It
does one more thing, namely calling nbd_export_set_on_eject_blk() to
auto-remove the export from the server when the backing drive goes
away. But that behavior is not needed in our case; stopping the NBD
server removes the exports anyway.
It was checked with a debugger that the parameters to blk_exp_add()
are still the same after this change. Well, the block node names are
autogenerated and not consistent across invocations.
The alternative to using 'query-block' would be specifying a
predictable 'node-name' for our '-drive' commandline. It's not that
difficult for this use case, but in general one needs to be careful
(e.g. it can't be specified for an empty CD drive, but would need to
be set when inserting a CD later). Querying the actual 'node-name'
seemed a bit more future-proof.
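Roughly, the new flow looks like this (a sketch; the export ID, device
name and parameters are illustrative, not the exact qemu-server code):

    use strict;
    use warnings;
    use JSON;
    use PVE::QemuServer::Monitor qw(mon_cmd);

    my ($vmid, $deviceid) = (100, 'drive-scsi0');   # example values

    # look up the auto-generated block node name for our -drive
    my $blockdevs = mon_cmd($vmid, 'query-block');
    my ($dev) = grep { $_->{device} eq $deviceid } @$blockdevs;
    my $node_name = $dev->{inserted}->{'node-name'};

    # export it via the generic interface, as 'nbd-server-add' did internally
    mon_cmd(
        $vmid,
        'block-export-add',
        id => $deviceid,
        type => 'nbd',
        'node-name' => $node_name,
        writable => JSON::true,
    );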
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
in preparation for reworking the new separate method for OVMF cmd
assembly, do this in a separate, very targeted commit to make it
clearer that the next reworking commit doesn't mess with our tests at
all.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This reduces packet drop under high pps and is also needed for DPDK.
Red Hat has already used it by default in RHV and its OpenStack
platform since 2019.
I have been using it in production for 6 months and have not seen any
performance regression.
fix (these ask for a custom option, but setting it by default seems fine to me):
https://bugzilla.proxmox.com/show_bug.cgi?id=1546
https://bugzilla.proxmox.com/show_bug.cgi?id=2349
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
at the end of a live migration, we need to remove the old MAC entries
on the source host (the VM is not yet stopped) before resuming the VM
on the target host
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[T: resolve conflicts and rework on apply ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
q35 + usb passthrough
q35 + usb3 passthrough
q35 + usb3 passthrough with new xhci controller
old machine type + new usb config error
old machine type + q35 + new usb config error
old ostype (w2k) + new usb config error
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
going by reports in the forum (e.g. [0]) and semi-official qemu
information [1], we should prefer qemu-xhci over nec-usb-xhci.
For compatibility purposes, we guard that behind the machine version,
so that guests with a fixed version don't suddenly have a different usb
controller after a reboot (which could potentially break some hardcoded
guest configs).
0: https://forum.proxmox.com/threads/proxmox-usb-connect-disconnect-loop.117063/
1: https://www.kraxel.org/blog/2018/08/qemu-usb-tips/
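Conceptually, the guard looks like this (a standalone sketch; the
version threshold and helper shown here are assumptions, standing in
for qemu-server's machine version handling):

    use strict;
    use warnings;

    # naive machine-version comparison, standing in for the real helper
    sub min_version {
        my ($verstr, $major, $minor) = @_;
        my ($cur_major, $cur_minor) = $verstr =~ m/^(\d+)\.(\d+)/;
        return $cur_major > $major || ($cur_major == $major && $cur_minor >= $minor);
    }

    # newer machine versions get qemu-xhci, pinned older ones keep nec-usb-xhci
    sub usb3_controller_model {
        my ($machine_version) = @_;
        return min_version($machine_version, 7, 1) ? 'qemu-xhci' : 'nec-usb-xhci';
    }

    print usb3_controller_model('7.2'), "\n";   # qemu-xhci
    print usb3_controller_model('6.2'), "\n";   # nec-usb-xhci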
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the former CPU type never existed on the market and will be dropped
by QEMU 7.1, so map it to the server variant as they're pretty much
identical anyway, from what I can tell.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
besides the log calls these don't need any parts of the migration state,
so let's make them generic and re-use them for container migration and
replication in the future.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
This allows mobile GPUs and vGPUs to be presented to the guest as if they
were the original desktop variants of the card. It also allows
device-ID variants that guests don't know about to be renamed to
match compatible sibling devices the guest does have drivers for
(e.g. to remove manufacturer-specific vendor ID variants that prevent
the use of a device which would otherwise have a supported chipset).
e.g. hostpci0: 03:00,vendor-id=0x8086,device-id=0x10f6
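On the QEMU command line this maps to vfio-pci's override properties,
roughly like so (a sketch; the surrounding options are omitted):

    use strict;
    use warnings;

    # as in the example config line above
    my $hostpci = {
        host        => '0000:03:00.0',
        'vendor-id' => '0x8086',
        'device-id' => '0x10f6',
    };

    my $devicestr = "vfio-pci,host=$hostpci->{host},id=hostpci0";
    $devicestr .= ",x-pci-vendor-id=$hostpci->{'vendor-id'}" if $hostpci->{'vendor-id'};
    $devicestr .= ",x-pci-device-id=$hostpci->{'device-id'}" if $hostpci->{'device-id'};

    print "-device $devicestr\n";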
Signed-off-by: Nicholas Sherlock <n.sherlock@gmail.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
See commit 17858a1695 (hw/acpi/ich9: Set ACPI PCI hot-plug as default
on Q35)[0] in upstream QEMU repository for details about why the change
was made.
As that change affects systemd's predictable interface naming [1],
e.g., by going from a previous `ens18` name to `enp6s18`, it may
have rather bad effects for users that did not set up .link files
to enforce a specific naming based on more stable information like the
NIC's MAC address.
The alternative would be making the preferred mode of hotplug an
option like `hotplug-mode=<acpi|pcie>`, but it does not seem like
one would want to change that much in the first place...
Note the changes to the tests and especially the tests with q35
machines that did not change.
[0]: https://gitlab.com/qemu-project/qemu/-/commit/17858a1695
[1]: https://www.freedesktop.org/software/systemd/man/systemd.net-naming-scheme.html#Naming
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
ovmf with SMM enabled will not boot on i440fx (hangs on graphics
initialization), so load the non-SMM variant.
This should be no issue regarding live-migration since it never worked
with this anyway.
This also adapts the test and adds one with q35.
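A simplified sketch of the selection (the firmware paths shown are
illustrative assumptions; the real ones come from the OVMF file lookup):

    use strict;
    use warnings;

    # SMM-enabled OVMF hangs during graphics initialization on i440fx, so
    # only q35 machines get the SMM-enabled build.
    sub select_ovmf_code {
        my ($q35) = @_;
        return $q35
            ? '/usr/share/pve-edk2-firmware/OVMF_CODE.secboot.fd'   # SMM enabled
            : '/usr/share/pve-edk2-firmware/OVMF_CODE.fd';          # no SMM
    }

    print select_ovmf_code(1), "\n";   # q35
    print select_ovmf_code(0), "\n";   # i440fx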
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>