'$entry->{host}' can be empty, so we have to check for that before
doing a regex check, otherwise we get ugly errors in the log
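A minimal sketch of the guard, with an illustrative pattern:

    if (defined($entry->{host}) && $entry->{host} =~ m/\S/) {
        # only use the host value if it is actually set
    }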
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of a "pbs-backing" parameter we now have a
"live-restore-backing" parameter containing the `-blockdev` arg and
its name, which also means we print the blockdev earlier
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
A network device of a VM does not necessarily have to be connected to
an actual bridge, so when a new pending value is set we need to use
the undef-safe compare helpers when checking if there was a change
between old and new value, as otherwise one gets ugly "use of
uninitialized value in string ne" warnings.
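A minimal sketch of such an undef-safe comparison (the helper name is
illustrative; qemu-server ships similar safe_*_ne helpers):

    sub safe_string_ne_sketch {
        my ($old, $new) = @_;
        return 0 if !defined($old) && !defined($new); # both unset -> no change
        return 1 if !defined($old) || !defined($new); # only one unset -> changed
        return $old ne $new ? 1 : 0;
    }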
Link: https://forum.proxmox.com/threads/143072/
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
could be a better fit in PVE::Tools, like proposed by Filip, but OTOH,
Tools is already crowded as is, so wait until we need it in more places
outside of qemu-server.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Make the default value for 'kvm' consistent, taking into account
whether the VM will run on the same CPU architecture as the host.
This would be a breaking change to CPU hotplug for VMs with a
different CPU architecture running on an x86_64 host, as in this case
the default CPU type for CPU hotplug changes from 'kvm64' to 'qemu64'.
However, CPU hotplug of non-x86_64 architectures is not supported
anyway, so this is not a breaking change after all.
It should be noted that this change does alter the CPU hotplug
behaviour when emulating an x86_64 CPU on a non-x86_64 host. This is
however not officially supported in Proxmox VE.
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Move is_native from PVE::QemuServer to PVE::Tools and rename it to
is_native_arch to be more descriptive.
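A minimal sketch of the moved helper, assuming PVE::Tools::get_host_arch()
from pve-common:

    sub is_native_arch {
        my ($arch) = @_;
        # true if the requested guest arch matches the host architecture
        return PVE::Tools::get_host_arch() eq $arch;
    }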
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Instead of starting a VM with a 32-bit CPU type and a 64-bit OVMF image,
throw an error before starting the VM telling the user that OVMF is not
supported on 32-bit CPU types.
To obtain a list of 32-bit CPU types, refer to the builtin_x86_defs in
target/i386/cpu.c of QEMU. Exclude any entries that have the long mode
feature (CPUID_EXT2_LM).
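A minimal sketch of such a check; the list is abbreviated and $cputype
(the configured CPU type) is illustrative:

    my $cpu_32bit = { map { $_ => 1 } qw(486 pentium pentium2 pentium3 coreduo qemu32 kvm32) };
    die "OVMF is not supported on 32-bit CPU type '$cputype'\n"
        if $cpu_32bit->{$cputype};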
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
When rebooting a VM from PVE (via CLI/API), the reboot code is called
under a guest lock, which creates a reboot request, shuts down the VM
and then calls the regular cleanup code, which includes the mdev
cleanup.
In parallel, qmeventd observes that the VM process has gone, and
starts 'qm cleanup', which (among other tasks) also starts the VM
again if a reboot from the PVE side is pending.
The qmeventd synchronizes this through a lock on the guest, with a
default timeout of 10 seconds.
Since we currently also always wait 10 seconds for the NVIDIA driver
to clean up the mdev, this creates a race condition for the cleanup
lock. IOW, when the call to `qm cleanup` starts before we started to
sleep for 10 seconds, it will not be able to acquire its lock and will
not start the VM again.
To avoid the race condition in practice, do two things:
* increase the timeout in `qm cleanup` to 60 seconds.
Technically this still might run into a timeout, as we can configure
up to 16 mediated devices with each delaying 10 seconds in the worst
case, but realistically most users won't configure more than two or
three of them, if even that.
* change the hard-coded `sleep 10` to a loop sleeping for 1 second
each before checking the state again. This shortens the timeout when
the NVIDIA driver did not require the full 10s to finish the
clean-up.
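A minimal sketch of the loop; $mdev_path stands for the mdev's sysfs path:

    print "waiting for NVIDIA vGPU cleanup\n" if -e $mdev_path;
    for (my $i = 0; $i < 10; $i++) {
        last if !-e $mdev_path; # driver finished its cleanup early
        sleep 1;
    }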
Further, add a bit of logging, so one can properly see in the task log
what is happening at which point in time.
Fixes: 49c51a60 (pci: workaround nvidia driver issue on mdev cleanup)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Mira Limbeck <m.limbeck@proxmox.com>
[ TL: change warn to print, reword commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
adds vendor and product information for SCSI devices to the JSON schema
and checks in the VM create/update API call whether it is possible to add
these to QEMU as a device option
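An illustrative sketch of how these could end up on the device command
line; $device is the '-device' argument string being built and the drive
field names are assumptions:

    $device .= ",vendor=$drive->{vendor}" if $drive->{vendor};
    $device .= ",product=$drive->{product}" if $drive->{product};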
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
[FE: add missing space to exception message
use config option for exception e.g. scsi0 rather than 'product'
style fixes]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
since we always determine the deviceid, passing in a possibly wrong value makes
no sense and could actually re-introduce bugs.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
The QMP command needs to be issued for the device where the disk is
currently attached, not for the device where the disk was attached at
the time the snapshot was taken.
Fixes the following scenario with a disk image for which
do_snapshots_with_qemu() is true (i.e. qcow2 or RBD+krbd=0):
1. Take snapshot while disk image is attached to a given bus+ID.
2. Detach disk image.
3. Attach disk image to a different bus+ID.
4. Remove snapshot.
Previously, this would result in an error like:
> blockdev-snapshot-delete-internal-sync' failed - Cannot find device=drive-scsi1 nor node_name=drive-scsi1
While the $running parameter for volume_snapshot_delete() is planned
to be removed on the next storage plugin APIAGE reset, it currently
causes an immediate return in Storage/Plugin.pm. So passing a truthy
value would prevent removing a snapshot from an unused qcow2 disk that
was still used at the time the snapshot was taken. Thus, and because
some exotic third party plugin might be using it for whatever reason,
it's necessary to keep passing the same value as before.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Encapsulate the functionality for determining the SCSI device type
in a new function, so it can be reused in QemuServer/Drive.pm
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
Currently, volume activation, PCI reservation and resetting the systemd
scope happen in between, so the 5 second expiretime used for port
reservation is not always enough.
It's possible to defer telling QEMU where it should listen for
migration and do so after it has been started via QMP. Therefore, the
port reservation can be moved very close to the actual usage.
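A minimal sketch of the deferred setup, assuming mon_cmd() as the QMP
wrapper; address and port are placeholders. QEMU is started with
'-incoming defer' and only told where to listen once it is running:

    # reserve the port right before telling QEMU where to listen
    mon_cmd($vmid, 'migrate-incoming', uri => "tcp:${migration_ip}:${port}");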
Mentioned here for completeness and can still be done as an additional
change later if desired: next_migrate_port could be modified to
optionally return the open socket and it should be possible to pass
the file descriptor directly to QEMU, but that would require accepting
the connection before on the Perl side (otherwise leads to ENOTCONN
107). While it would avoid any races, it's not the most elegant
and the change at hand should be enough in all practical situations.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Acked-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Tested-by: Hannes Duerr <h.duerr@proxmox.com>
We want to notify the guest of the change, so that it can resubmit a
DHCP request, send a gratuitous ARP, etc.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
add an option to use the QEMU vdagent implementation to enable the VNC
clipboard. When enabled with SPICE, the spice-vdagent gets replaced
with the QEMU implementation.
This patch does not solve #1406, but does allow copy and paste with a
running X-session, when spice-vdagent is installed on the guest.
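A minimal sketch of the relevant QEMU arguments (untested, for
illustration; assumes a virtio-serial controller is present):

    push @$cmd, '-chardev', 'qemu-vdagent,id=vdagent,clipboard=on';
    push @$cmd, '-device', 'virtserialport,chardev=vdagent,name=com.redhat.spice.0';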
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
While there already is a warning from QEMU proper, that one is not
visible as a task warning and it's not straightforward to make it be
one, because QEMU is started inside a run_fork(). It's also more
future-proof to have the detection explicit on our side and the
documentation can be referenced.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This can seemingly take a bit longer than expected, and waiting a bit
longer is better than erroring out during migration.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
The vCPUs are passed as devices with a specific ID only when CPU
hot-plug is enabled at cold start.
So, we can't enable/disable allow-hotplug online, as the vCPU hotplug
API will then throw errors about not finding the core ID.
Not enforcing this could also lead to migration failure, as the QEMU
command line for the target VM could be made different from the one it
was actually running with, causing a crash of the target, as Fiona
observed [0].
[0]: https://lists.proxmox.com/pipermail/pve-devel/2023-October/059434.html
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[ TL: Reflowed & expanded commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fix races with ACPI-suspended VMs which could wake up during migration
or during a suspend-mode backup.
Revert the prevention of ACPI-suspended VMs automatically resuming
after migration, introduced by 7ba974a682. That commit introduced a
potential problem causing a suspended VM that wakes up during
migration to remain paused after the migration finishes.
This can be fixed once QEMU preserves the 'suspended' runstate during
migration (current patch on the qemu-devel list [0]) by checking for
the 'suspended' runstate on the target after migration.
Furthermore, the commit increased the race window during the
preparation of a suspend-mode backup, when a suspended VM wakes up
between the vm_is_paused check in PVE::VZDump::QemuServer::prepare and
PVE::VZDump::QemuServer::qga_fs_freeze. This causes the code to skip
fs-freeze even if the VM has woken up, potentially leaving the file
system in an inconsistent state.
To prevent this, do not treat the suspended runstate as paused when
migrating or archiving a VM.
[0]: https://lists.nongnu.org/archive/html/qemu-devel/2023-08/msg05260.html
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
[ TL: massage in Fiona's extra info into commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add checks for "suspended" and "prelaunch" runstates when checking
whether a VM is paused.
This fixes the following issues:
* ACPI-suspended VMs automatically resuming after migration
* Shutdown and reboot commands timing out instead of failing
immediately on suspended VMs
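A minimal sketch of the extended check, assuming mon_cmd() and the
'query-status' QMP command:

    sub vm_is_paused_sketch {
        my ($vmid) = @_;
        my $qmpstatus = eval { mon_cmd($vmid, 'query-status') };
        return $qmpstatus && $qmpstatus->{status} =~ /^(paused|suspended|prelaunch)$/;
    }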
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
In preparation to add more properties to the memory configuration like
maximum hotpluggable memory and whether virtio-mem devices should be
used.
This also allows to get rid of the cyclic include of PVE::QemuServer
in the memory module.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[FE: also convert new usage in get_derived_property
remove cyclic include of PVE::QemuServer
add commit message]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
moving qemu_{device,object}{add,del} helpers there for now.
In preparation to remove the cyclic include of PVE::QemuServer in the
memory module and generally for better modularity in the future.
No functional change intended.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
which is the only user of the parse_numa() helper. While at it, avoid
the duplication of MAX_NUMA.
In preparation to remove the cyclic include of PVE::QemuServer in the
memory module.
No functional change intended.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
10 minutes is not long enough when disks are large and/or network
storages are used and preallocation is not disabled. The default is
metadata preallocation for qcow2, so there are still reports of the
issue [0][1]. If allocation really does not finish like the comment
describing the timeout feared, just let the user cancel it.
Also note that when restoring a PBS backup, there is no timeout for
disk allocation, and there don't seem to be any user complaints yet.
The 5 second timeout for receiving the config from vma is kept,
because certain corruptions in the VMA header can lead to the
operation hanging there.
There is no need for the $tmp variable before setting back the old
timeout, because that is at least one second, so we'll always be able
to set the $oldtimeout variable to undef in time in practice.
Currently, there shouldn't even be an outer timeout in the first
place, because the only call path leading to here is via the create
API (also used by qmrestore), both of which don't set a timeout.
[0]: https://forum.proxmox.com/threads/126825/
[1]: https://forum.proxmox.com/threads/128093/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Skip the software TPM startup when starting a template VM for performing
a backup. This fixes an error that occurs when the TPM state disk is
write-protected.
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Quoting from QEMU commit 4271f40383 ("virtio-net: correctly report
maximum tx_queue_size value"):
> Maximum value for tx_queue_size depends on the backend type.
> 1024 for vDPA/vhost-user, 256 for all the others.
> So the parameter is silently ignored and ethtool reports a different
> value than the one provided by the user.
Indeed, for a non-vDPA/vhost-user netdev, the guest will see TX: 256
instead of the specified 1024 here. With the mentioned QEMU commit (in
master and will be part of 8.1), using 1024 will be a hard error:
> Invalid tx_queue_size (= 1024), must be a power of 2 between 256 and 256
Since neither vhost-user, nor vhost-vdpa netdev types are exposed by
Proxmox VE, just changing the limit to the correct 256 should be fine.
No obvious issue during live-migration found.
Fixes: 620d6b32 ("virtio-net: increase defaults rx|tx-queue-size to 1024")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
While the comment stated
> # order of precedence, filtered by whether storage supports it:
> # 1. explicit requested format
> # 2. format of current volume
> # 3. default format of storage
the code did not fall back to the default format in the case of remote
migration, because the format was already set and the code used
> $format //= $defFormat;
This made remote migration from dir with qcow2 to e.g. LVM-thin fail.
Move extracting the format from the volume name to the call side for
local migration. This allows the logic here to be much simpler.
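An illustrative sketch of the intended precedence (variable names are
assumptions):

    my $format = $requested_format;        # 1. explicitly requested format
    $format //= $current_volume_format;    # 2. format of the current volume (local migration)
    $format = $storage_default_format
        if !defined($format) || !$supported_formats->{$format}; # 3. fall back to the storage default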
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Previously, qemu_img_format() was called with the target storage's
$scfg and the source storage's volume name.
This mismatch should only be relevant for certain special kinds of
storage plugins:
- no path, but does support an additional QEMU image format besides
'raw', in short NPAF.
- no path, volume name can match QEMU_FORMAT_RE, in short NPVM.
Note that all integrated plugins are neither NPAF nor NPVM.
Note that for NPAF plugins, qemu_img_format() already always returns
'raw' because there is no path. It's a bit unlikely such a plugin
exists, because there were no bug reports about qemu_img_format()
misbehaving there yet.
Let's go through the cases:
- If source and target storage both have or don't have a path,
qemu_img_format($scfg, $volname) returns the same for both $scfg's.
- If source storage has a path, but target storage does not, the
format hint was previously 'raw', but can only be more correct now
(being what the source image actually is):
- For non-NPAF targets, since we know there is no path, it follows
that 'raw' is the only supported QEMU image format.
- For NPAF targets, the format will be preserved now (if actually
supported).
- If source storage does not have a path, but target storage does, the
format hint will be 'raw' now.
- For non-NPVM sources, QEMU_FORMAT_RE didn't match when
qemu_img_format() was called with the target storage's $scfg, so
the hint also was 'raw' before this commit.
- For NPVM sources, qemu_img_format() might've guessed a format from
the source volume name when called with the target's $scfg before
this commit. If the target storage supports the previously guessed
format, it was preserved before this commit, but will not be
anymore. In theory, the guess might've also been wrong, and in
this case, this commit avoids the wrong guess.
To summarize, there is only one edge case with an exotic kind of third
party storage plugin where format preservation would be lost and in
another edge case, format preservation is gained.
In preparation to simplify the format fallback logic implementation.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The new ciupgrade option was missing in $cloudinitoptions in
PVE::API2::Qemu, so $check_vm_modify_config_perm defaulted to
requiring root@pam for modifying the option. To fix this, add
ciupgrade to $cloudinitoptions. This also fixes an issue where
ciupgrade was missing in the output of `qm cloudinit pending`,
as it also relies on $cloudinitoptions.
This issue was originally reported in the forum [0].
Also add a comment to avoid similar issues when adding new options in
the future.
[0]: https://forum.proxmox.com/threads/131043/
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Only unit 0 for IDE is supported with machine type q35. Currently,
QEMU will fail startup with machine type q35 with an error like
> Can't create IDE unit 1, bus supports only 1 units
when ide1 or ide3 is configured.
Make sure to keep backwards compat for migration by leaving ide0 and
ide2 fixed. Since starting with ide1 or ide3 never worked, they can be
moved to a controller with a higher ID without issue.
Reported in the community forum:
https://forum.proxmox.com/threads/124615/post-543127
https://forum.proxmox.com/threads/130815/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Since we don't scan all storages for matching disk images anymore for a
migration, we don't have any images found via storage alone; they will
be referenced in the config somewhere.
Therefore, there is no need for the 'storage' ref.
The 'referenced_in_config' ref is not really needed and can apply to
both attached and unused disk images.
Therefore, QemuServer::foreach_volid() will change the
'referenced_in_config' attribute to an 'is_attached' one that only
applies to disk images that are in the _main_ config part and are not
unused.
In QemuMigrate::scan_local_volumes() we can then quite easily map the
refs to each state, attached, unused, referenced_in_{pending,snapshot}.
The refs are mostly used informationally, to print out in the logs why
a disk image is part of the migration, except for the 'attached' case.
In the future the extra step of the refs in QemuMigrate could probably
be streamlined even more.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
All calling sites except for QemuConfig.pm::get_replicatable_volumes()
already enabled it. Making it the non-configurable default results in a
change in the VM replication. Now a disk image only referenced in the
pending section will also be replicated.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Make it possible to optionally iterate over disks in the pending section
of VMs, similar to how snapshots are already handled.
This is for example useful in the migration if we don't want to rely on
the scanning of all storages.
All calling sites are adapted and enable it, except for
QemuConfig::get_replicatable_volumes as that would cause a change for
the replication if pending disks would be included.
The following lists the calling sites and if they should be fine with
the change (source [0]):
1. QemuMigrate: scan_local_volumes(): needed to include pending disk
images
2. API2/Qemu.pm: check_vm_disks_local() for migration precondition:
related to migration, so more consistent with pending
3. QemuConfig.pm: get_replicatable_volumes(): would change the behavior
of the replication, will not use it for now.
4. QemuServer.pm: get_vm_volumes(): is used multiple times by:
4a. vm_stop_cleanup() to deactivate/unmap: should also be fine with
including pending
4b. QemuMigrate.pm: in prepare(): part of migration, so more consistent
with pending
4c. QemuMigrate.pm: in phase3_cleanup() for deactivation: part of
migration, so more consistent with pending
[0] https://lists.proxmox.com/pipermail/pve-devel/2023-May/056868.html
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Commit efa3355d ("fix #3428: cloudinit: add parameter for upgrade on
boot") changed the default, but this is a breaking change. The bug
report was only about making the option configurable.
The commit doesn't give an explicit reason for why, and arguably,
doing the upgrade is not an issue for most users. It also leads to a
different cloud-init instance ID, because of the different setting,
which in turn leads to ssh host key regeneration within the VM.
Reported-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The scope can get into a failed state for some issues like OOM kills of
the whole scope; in that case a user cannot re-start the VM until
they manually reset it.
Do this inline for now to avoid a pve-common bump as done in [0]
(the location was suggested by me, thinking we could maybe do it over
D-Bus, but as we have a stop command here already it probably doesn't
matter)
[0]: https://lists.proxmox.com/pipermail/pve-devel/2023-June/057770.html
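A minimal sketch of the inline reset, assuming PVE::Tools::run_command;
the unit name is illustrative:

    my $unit = "$vmid.scope";
    # clear a possible 'failed' state so the scope name can be reused
    PVE::Tools::run_command(['/bin/systemctl', 'reset-failed', $unit], noerr => 1);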
Originally-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to allow early checking of the merged config, if the backup archive
passed in is a proper volume where extraction is possible.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
by adding them to their own list, saving the nodes where they are not
allowed, and returning those on 'wantarray', so we don't break existing
callers that don't expect it.
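The wantarray pattern in a minimal sketch (variable names are assumptions):

    # old callers in scalar context keep getting just the node list
    return wantarray ? ($nodelist, $not_allowed_nodes) : $nodelist;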
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Markus Frank <m.frank@proxmox.com>
this patch allows configuring pci devices that are mapped via cluster
resource mapping when the user has 'Resource.Use' on the ACL path
'/mapping/pci/{ID}' (in addition to the usual required vm config
privileges)
When given multiple mappings in the config, we use them as alternatives
for the passthrough, and will select the first free one on startup.
It uses our regular PCI reservation mechanism for regular devices, and
we introduce a selection mechanism for mediated devices.
A few changes to the inner workings were required to make this work well:
* parse_hostpci now returns a different structure where we have a list
of lists (the first level is for the different alternatives, the second
level for the different devices that should be passed through
together); see the illustrative sketch after this list
* factor out the 'parse_hostpci_devices' which parses each device from
the config and does some precondition checks
* reserve_pci_usage now behaves slightly differently when trying to
reserve a device for the same VMID it's already reserved for, since
for checking which alternative we can use, we already must
reserve one (this means that qm showcmd can actually reserve devices,
albeit only for up to 10 seconds)
* configuring a mediated device on a multifunction device is not
supported anymore, and results in failure to start (previously, it
just chose the first device to do it). This is a breaking change
* configuring a single PCI device twice on different hostpci slots now
fails during command line generation instead of on QEMU start, so we had
to adapt one test where this occurred (it could never have worked
anyway)
* we now check permissions during clone/restore, meaning raw/real
devices can only be cloned/restored by root@pam from now on.
This is a breaking change.
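An illustrative example of the new parse_hostpci structure; key names are
assumptions, only the nesting (alternatives, each holding the devices
passed through together) matters:

    my $parsed_hostpci = {
        ids => [
            # alternative 1: two functions of one card, passed through together
            [ { id => '0000:01:00.0' }, { id => '0000:01:00.1' } ],
            # alternative 2: a single device
            [ { id => '0000:02:00.0' } ],
        ],
    };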
Fixes #3574: Improve SR-IOV usability
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Markus Frank <m.frank@proxmox.com>
this patch allows configuring usb devices that are mapped via
cluster resource mapping when the user has 'Mapping.Use' on the ACL
path '/mapping/usb/{ID}' (in addition to the usual required vm config
privileges)
for now, this is only valid if there is exactly one mapping for the
host, since we don't track passed-through USB devices yet
This now also checks permissions on clone/restore, meaning a
'non-mapped' device can only be cloned/restored as root@pam user.
That is a breaking change.
Refactor the checks for restoring into a sub, so we have a central place
where we can add such checks
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Markus Frank <m.frank@proxmox.com>
similar to how we handle the PCI module and format. This makes the
'verify_usb_device' method and format unnecessary since
we simply check the format with a regex.
While doing this, I noticed that we don't correctly check for the
case-insensitive variant of 'spice' during hotplug, so fix that too
With this we can also remove some parameters from the get_usb_devices
and get_usb_controllers functions
While we're at it, refactor the permission checks for the USB config too
and use the new 'my sub' style for the functions
also make print_usbdevice_full parse the device itself, so we don't have
to do it in multiple places (especially in places where we don't see
that this is needed)
No functional change intended
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Markus Frank <m.frank@proxmox.com>
Using the word 'agent' is highly confusing here as there is no QMP
agent and thus wrongly suggests that the value is related to the
guest agent[0].
[0]: https://forum.proxmox.com/threads/123590/post-537716
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This was not only rather inefficient (getting the config from the
archive twice) but also wrong, as we can override options on restore,
so we can do the check only when the backed-up config and override
config have been merged.
If this is too late from the POV of volume deletion or the like, then
the issue is that those things happen too early, as we can only know
what to do with the actual target config, so destructive actions that
happen before that are wrong by design.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
for convenience. These options do not influence the QEMU instance
directly, but are only used for migration, so no need to keep them in
pending.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
instead use a recent example that served as a workaround in #4625.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
like the deprecation message printed by QEMU suggests.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
like the deprecation message printed by QEMU suggests.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Commit 7246e8f9 ("Set zero $size and continue if volume_resize()
returns false") mentions that this is needed for "some storages with
backing block devices to do online resize" and since this patch came
together [0] with pve-storage commit a4aee43 ("Fix RBD resize with
krbd option enabled."), it's safe to assume that RBD with krbd is
meant. But it should be the same situation for any external plugin
relying on the same behavior.
Other storages backed by block devices like LVM(-thin) and ZFS return
1 and the new size respectively, and the code is older than the above
mentioned commits. So really, the RBD plugin just should have returned
a positive value to be in-line with those and there should be no need
to pass 0 to the block_resize QMP command either.
Actually, it's a hack, because the block_resize QMP command does not
actually do special handling for the value 0. It's just that in the
case of a block device, QEMU won't try to resize it (and not fail for
shrinkage). But the size in the raw driver's BlockDriverState is
temporarily set to 0 (which is not nice), until the sector count is
refreshed, where raw_co_getlength is called, which queries the new
size and sets the size in the raw driver's BlockDriverState again as a
side effect. It's not known to cause any issues, but bdrv_getlength is
a coroutine wrapper starting from QEMU 8.0.0, and it's just better to
avoid setting a completely wrong value even temporarily. Just pass the
actually requested size like is done for LVM(thin) and ZFS.
[0]: https://lists.proxmox.com/pipermail/pve-devel/2017-January/025060.html
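A minimal sketch of the changed QMP call, assuming mon_cmd(); the device
ID is a placeholder:

    # pass the actually requested size instead of 0
    mon_cmd($vmid, 'block_resize', device => 'drive-scsi0', size => int($size));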
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Handle and warn about network interfaces which are not attached to
any bridge because the user actively removed it from the VM config.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
With the recent pve-storage commit d70d814 ("api: fix get content call response
type for RBD/ZFS/iSCSI volumes"), the volume_size_info call for RBD in
list context is much slower than before (from a quick test, about twice as long
without snapshots, even longer with snapshots; untested, but when using an
external cluster with an image not having the fast-diff feature, it should be
worse still) and thus increases the likelihood to run into timeouts here.
None of the callers here actually need the more expensive call, so just
avoid calling in list context.
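Illustrative: calling volume_size_info() in scalar context only returns
the size and avoids the more expensive extra information (5 is an example
timeout in seconds):

    my $size = PVE::Storage::volume_size_info($storecfg, $volid, 5);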
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
in some NVIDIA GRID drivers (e.g. 14.4 and 15.x), their kernel module
tries to clean up the mdev device when the VM is shut down, and if it
cannot do that (e.g. because we already cleaned it up), their removal
process cancels with an error such that the vGPU does still exist inside
their book-keeping, but can't be used/recreated/freed until a reboot.
Since there seems to be no obvious way to detect if that's the case besides
either parsing dmesg (which is racy) or the NVIDIA kernel module
version (which I'd rather not do), we simply test the PCI device vendor
for NVIDIA and add a 10s sleep. That should give the driver enough time
to clean up, and we will then not find the path anymore and skip the cleanup.
This way, it works with both the newer and older versions of the driver
(some of the older drivers are LTS releases, so they're still
supported).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of using the mdev UUID. The NVIDIA driver does not actually care
that it's the same as the mdev, and in QEMU the uuid parameter
overwrites the smbios1 UUID internally, so we should have been reusing
that in the first place.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Respecting bandwidth limit for offline clone was implemented by commit
56d16f16 ("fix #4249: make image clone or conversion respect bandwidth
limit"). It's still not respected for EFI disks, but those are small,
so just ignore it.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Previously, cloning a stopped VM didn't respect bwlimit. Passing the -r
(ratelimit) parameter to qemu-img convert fixes this issue.
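A minimal sketch of the added parameter, assuming $bwlimit is in KiB/s
(as elsewhere in PVE) and that qemu-img expects the rate in bytes per
second:

    push @$convert_cmd, '-r', $bwlimit * 1024 if $bwlimit;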
Signed-off-by: Leo Nunner <l.nunner@proxmox.com>
[ T: reword subject line slightly ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Avoid pretending that an MTU change on an existing network device gets
applied live to a running VM.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[ T: reworded and expanded commit message slightly ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Required because there's one single EFI image for ARM, and the 2m/4m
difference doesn't seem to apply.
Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
[ T: move description to format and reword subject ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
AFAICT, previously, errors from swtpm would not show up in any logs,
because they were just printed to the stderr of the daemonized
invocation here.
The 'truncate' option is not used, so that the log is not immediately
lost when a new instance is started. This increases the chance that
the relevant errors are still present when requesting the log from a
user.
Log level 1 contains the most relevant errors and seems to be quiet
for working-as-expected invocations. Log level 2 already includes
logging full TPM commands, some of which are 1024 bytes long. Thus,
log level 1 was chosen.
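A minimal sketch of the added option; the log path is a placeholder:

    push @$swtpm_cmd, '--log', "file=/run/qemu-server/$vmid-swtpm.log,level=1";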
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The guest will be running, so it's misleading to fail the start task
here. Also ensures that we clean up the hibernation state upon resume
even if there is an error here, which did not happen previously[0].
[0]: https://forum.proxmox.com/threads/123159/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The target of the drive-mirror operation is opened with (essentially)
the same flags as the source in QEMU, in particular whether io_uring
should be used is inherited.
But io_uring currently causes problems in combination with certain
storage types, sometimes even leading to crashes (LVM with Linux 6.1).
Just disallow live cloning of drives when the source uses io_uring and
the target storage is not ready for it. There is one exception, namely
when source and target storage are the same. In that case, just assume
it will keep working for the target.
Migration does not seem to be affected, because there, the target VM
opens the images with the checked aio setting and then NBD exports of
those are used as the targets for mirroring.
It can be that the default determined for the source is not what's
actually used, because after a drive-mirror to a storage with a
different default, it will still use the default from the old storage.
Unfortunately, aio doesn't seem to be part of the 'query-block' QMP
command's result, so just tolerate this edge case.
The check can be removed if either
1. drive-mirror learns to open the target with a different aio setting
or more ideally
2. there are no more bad storages for io_uring.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Previously, changing aio would be applied to the configuration, but
the drive would still be using the old setting.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In the web UI, this was fixed years ago by pve-manager commit c11c4a40
("fix #1631: change units to binary prefix").
Quickly checked with the 'query-memory-size-summary' QMP command that
this is actually the case.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The error messages for missing OVMF_CODE and OVMF_VARS files were
inconsistent, and the error for the missing base var file did not tell
you the expected path.
Signed-off-by: Noel Ullreich <n.ullreich@proxmox.com>
The 'nbd-server-add' QMP command has been deprecated since QEMU 5.2 in
favor of a more general 'block-export-add'.
When using 'nbd-server-add', QEMU internally converts the parameters
and calls blk_exp_add() which is also used by 'block-export-add'. It
does one more thing, namely calling nbd_export_set_on_eject_blk() to
auto-remove the export from the server when the backing drive goes
away. But that behavior is not needed in our case, stopping the NBD
server removes the exports anyways.
It was checked with a debugger that the parameters to blk_exp_add()
are still the same after this change. Well, the block node names are
autogenerated and not consistent across invocations.
The alternative to using 'query-block' would be specifying a
predictable 'node-name' for our '-drive' commandline. It's not that
difficult for this use case, but in general one needs to be careful
(e.g. it can't be specified for an empty CD drive, but would need to
be set when inserting a CD later). Querying the actual 'node-name'
seemed a bit more future-proof.
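A minimal sketch of the replacement QMP call, assuming mon_cmd(); export
id, name and node-name are placeholders (JSON::true from the JSON module
provides a proper boolean):

    mon_cmd($vmid, 'block-export-add',
        id => 'drive-scsi0',
        type => 'nbd',
        'node-name' => $node_name, # queried via 'query-block' beforehand
        name => 'drive-scsi0',
        writable => JSON::true,
    );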
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
when a VM is configured to use a physical CD-ROM drive but there is no
such drive, a cryptic "uninitialized value" error is thrown. This is
due to `$path` being undefined in `sub print_drive_commandline_full`.
Warn that no CD-ROM drive is available instead.
Note that the error was cosmetic, as the VM would start just fine.
forum thread: https://forum.proxmox.com/threads/119592/
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>