Convert the machine parameter to a property-string and use the machine
type as the default key for backward compatibility.
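For illustration, a minimal sketch of what backward compatibility via
the default key means for a VM config (the additional property name
is made up here):
    # old style, still valid: the bare value is parsed as the 'type' key
    machine: q35
    # property-string style, with room for future sub-properties
    machine: type=q35,someprop=somevalue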
Signed-off-by: Markus Frank <m.frank@proxmox.com>
'$entry->{host}' can be empty, so we have to check for that before
doing a regex check, otherwise we get ugly errors in the log
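A minimal sketch of the guard, with the actual pattern standing in as
a placeholder:
    # only run the regex when the host entry is actually set, avoiding
    # "Use of uninitialized value in pattern match" warnings in the log
    if (defined($entry->{host}) && $entry->{host} =~ m/\S/) {
        # ... handle the host entry as before ...
    }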
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of a "pbs-backing" parameter we now have a
"live-restore-backing" parameter containing the `-blockdev` arg and
its name, which also means we print the blockdev earlier
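Roughly, the parameter now carries both pieces of information per
drive (the shape and names here are illustrative, not the exact
structure):
    # sketch: one entry per drive being live-restored
    $live_restore_backing->{$ds} = {
        name => $blockdev_name,            # node name referenced later
        blockdev => "-blockdev $blockdev", # printed early on the command line
    };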
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
A network device of a VM does not necessarily have to be connected to
an actual bridge, so when a new pending value is set we need to use
the undef-safe compare helpers when checking if there was a change
between old and new value, as otherwise one gets ugly "use of
uninitialized value in string ne" warnings.
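For reference, a simplified sketch of what such an undef-safe
comparison does (the actual helpers live in the shared PVE code):
    # undef vs. undef is equal, undef vs. defined differs, and only when
    # both sides are defined is a plain string comparison done
    sub safe_string_ne_sketch {
        my ($old, $new) = @_;
        return 0 if !defined($old) && !defined($new);
        return 1 if !defined($old) || !defined($new);
        return $old ne $new;
    }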
Link: https://forum.proxmox.com/threads/143072/
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
could be a better fit in PVE::Tools, as proposed by Filip, but OTOH
Tools is already crowded as is, so wait until we need it in more
places outside of qemu-server.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Make the default value for 'kvm' consistent, taking into account
whether the VM will run on the same CPU architecture as the host.
This would be a breaking change to CPU hotplug for VMs with a
different CPU architecture running on an x86_64 host, as in this case
the default CPU type for CPU hotplug changes from 'kvm64' to 'qemu64'.
However, CPU hotplug of non x86_64 architectures is not supported
anyway, so this is not a breaking change after all.
It should be noted that this change does alter the CPU hotplug
behaviour when emulating an x86_64 CPU on a non-x86_64 host. This is
however not officially supported in Proxmox VE.
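A rough sketch of the intended default, assuming a helper that
compares the VM architecture to the host architecture (see the
follow-up patch below):
    # KVM acceleration is only a sensible default when the guest
    # architecture matches the host architecture
    my $kvm = $conf->{kvm} // is_native_arch($arch);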
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Move is_native from PVE::QemuServer to PVE::Tools and rename it to
is_native_arch to be more descriptive.
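Roughly, the moved helper boils down to:
    # in PVE::Tools (sketch)
    sub is_native_arch {
        my ($arch) = @_;
        return get_host_arch() eq $arch;
    }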
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Instead of starting a VM with a 32-bit CPU type and a 64-bit OVMF image,
throw an error before starting the VM telling the user that OVMF is not
supported on 32-bit CPU types.
To obtain a list of 32-bit CPU types, refer to the builtin_x86_defs in
target/i386/cpu.c of QEMU. Exclude any entries that have the long mode
feature (CPUID_EXT2_LM).
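A sketch of the resulting check (the list below is abridged and only
illustrative, and the error wording may differ):
    # x86 CPU types without the long mode feature (CPUID_EXT2_LM), abridged
    my @cpus_32bit = qw(486 pentium pentium2 pentium3 coreduo kvm32 qemu32);
    if ($conf->{bios} && $conf->{bios} eq 'ovmf'
        && grep { $_ eq $cputype } @cpus_32bit) {
        die "OVMF is not supported on 32-bit CPU types\n";
    }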
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
When rebooting a VM from PVE (via CLI/API), the reboot code is called
under a guest lock, which creates a reboot request, shuts down the VM
and then calls the regular cleanup code, which includes the mdev
cleanup.
In parallel, the qmeventd observes that the VM process has gone, and
starts 'qm cleanup', which (among other tasks) also starts the VM
again if a reboot from the PVE side is pending.
The qmeventd synchronizes this through a lock on the guest, with a
default timeout of 10 seconds.
Since we currently also always wait 10 seconds for the NVIDIA driver
to clean up the mdev, this creates a race condition for the cleanup
lock. IOW, when the call to `qm cleanup` starts before we begin to
sleep for 10 seconds, it will not be able to acquire its lock and thus
will not start the VM again.
To avoid the race condition in practice, do two things:
* increase the timeout in `qm cleanup` to 60 seconds.
Technically this still might run into a timeout, as we can configure
up to 16 mediated devices with each delaying 10 seconds in the worst
case, but realistically most users won't configure more than two or
three of them, if even that.
* change the hard-coded `sleep 10` to a loop sleeping for 1 second at
a time before checking the state again. This shortens the wait when
the NVIDIA driver did not require the full 10s to finish the
clean-up.
Further, add a bit of logging, so one can properly see in the task log
what is happening at which point in time.
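A rough sketch of the changed wait (the sysfs path check is a
stand-in for the actual condition):
    # poll once per second instead of a flat 10 second sleep, so the
    # wait ends as soon as the driver has finished the mdev cleanup
    for my $i (1..10) {
        last if !-e $mdev_path;    # stand-in: mdev already gone?
        print "waiting for NVIDIA driver to clean up mediated device ($i/10)\n";
        sleep 1;
    }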
Fixes: 49c51a60 (pci: workaround nvidia driver issue on mdev cleanup)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Mira Limbeck <m.limbeck@proxmox.com>
[ TL: change warn to print, reword commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
adds vendor and product information for SCSI devices to the JSON
schema and checks in the VM create/update API call whether it is
possible to pass these to QEMU as a device option
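For illustration, a drive line using the new properties could look
like this (values made up):
    scsi0: local-lvm:vm-100-disk-0,vendor=ACME,product=FastDisk42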
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
[FE: add missing space to exception message
use config option for exception e.g. scsi0 rather than 'product'
style fixes]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
since we always determine the deviceid, passing in a possibly wrong value makes
no sense and could actually re-introduce bugs.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
The QMP command needs to be issued for the device where the disk is
currently attached, not for the device where the disk was attached at
the time the snapshot was taken.
Fixes the following scenario with a disk image for which
do_snapshots_with_qemu() is true (i.e. qcow2 or RBD+krbd=0):
1. Take snapshot while disk image is attached to a given bus+ID.
2. Detach disk image.
3. Attach disk image to a different bus+ID.
4. Remove snapshot.
Previously, this would result in an error like:
> blockdev-snapshot-delete-internal-sync' failed - Cannot find device=drive-scsi1 nor node_name=drive-scsi1
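With the fix, the QMP call references the drive the disk is attached
to now, roughly (assuming the disk moved from scsi1 to virtio0 as in
the scenario above):
    mon_cmd($vmid, 'blockdev-snapshot-delete-internal-sync',
        device => 'drive-virtio0', name => $snapname);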
While the $running parameter for volume_snapshot_delete() is planned
to be removed on the next storage plugin APIAGE reset, it currently
causes an immediate return in Storage/Plugin.pm. So passing a truthy
value would prevent removing a snapshot from an unused qcow2 disk that
was still used at the time the snapshot was taken. Thus, and because
some exotic third party plugin might be using it for whatever reason,
it's necessary to keep passing the same value as before.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Encapsulate the functionality for determining the SCSI device type in
a new function for reusability in QemuServer/Drive.pm
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
Currently, volume activation, PCI reservation and resetting the
systemd scope happen in between reserving the migration port and QEMU
actually binding to it, so the 5 second expiretime used for the port
reservation is not always enough.
It's possible to defer telling QEMU where it should listen for
migration and do so after it has been started via QMP. Therefore, the
port reservation can be moved very close to the actual usage.
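Roughly, the target VM is now started with a deferred incoming
migration and the listen URI is set only once the port has actually
been reserved (sketch, not the exact call sites):
    # QEMU is started with: -incoming defer
    # later, right after reserving the port:
    mon_cmd($vmid, 'migrate-incoming', uri => "tcp:${ip}:${port}");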
Mentioned here for completeness and can still be done as an additional
change later if desired: next_migrate_port could be modified to
optionally return the open socket and it should be possible to pass
the file descriptor directly to QEMU, but that would require accepting
the connection on the Perl side first (otherwise it leads to ENOTCONN
107). While that would avoid any races, it's not the most elegant
approach, and the change at hand should be enough in all practical
situations.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Acked-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Tested-by: Hannes Duerr <h.duerr@proxmox.com>
We want to notify the guest of the change, so it can resubmit a DHCP
request or send a gratuitous ARP, etc.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
add an option to use the QEMU vdagent implementation to enable the
VNC clipboard. When enabled with SPICE, the spice-vdagent gets
replaced with the QEMU implementation.
This patch does not solve #1406, but it does allow copy and paste with
a running X-session when spice-vdagent is installed on the guest.
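Under the hood this maps to QEMU's built-in vdagent chardev, along
these lines (ids and exact arguments in the patch may differ):
    -device virtio-serial-pci
    -chardev qemu-vdagent,id=vdagent,clipboard=on
    -device virtserialport,chardev=vdagent,name=com.redhat.spice.0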
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
While there already is a warning from QEMU proper, that one is not
visible as a task warning and it's not straightforward to make it
one, because QEMU is started inside a run_fork(). It's also more
future-proof to have the detection explicit on our side and the
documentation can be referenced.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This can seemingly take a bit longer than expected, and waiting a bit
longer is better than erroring out during migration.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
The vCPUs are passed as devices with a specific id only when CPU
hot-plug is enabled at cold start. So we can't enable/disable
allow-hotplug online, as the vCPU hotplug API would then throw errors
about not finding the core id.
Not enforcing this could also lead to migration failure, as the QEMU
command line for the target VM could end up different from the one the
VM was actually running with, causing a crash of the target as Fiona
observed [0].
observed [0].
[0]: https://lists.proxmox.com/pipermail/pve-devel/2023-October/059434.html
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[ TL: Reflowed & expanded commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fix races with ACPI-suspended VMs which could wake up during migration
or during a suspend-mode backup.
Revert the prevention of ACPI-suspended VMs automatically resuming
after migration, which was introduced by 7ba974a682. That commit
introduced a potential problem: a suspended VM that wakes up during
migration can remain paused after the migration finishes.
This can be fixed once QEMU preserves the 'suspended' runstate during
migration (current patch on the qemu-devel list [0]) by checking for
the 'suspended' runstate on the target after migration.
Furthermore, the commit increased the race window during the
preparation of a suspend-mode backup, when a suspended VM wakes up
between the vm_is_paused check in PVE::VZDump::QemuServer::prepare and
PVE::VZDump::QemuServer::qga_fs_freeze. This causes the code to skip
fs-freeze even if the VM has woken up, potentially leaving the file
system in an inconsistent state.
To prevent this, do not treat the suspended runstate as paused when
migrating or archiving a VM.
[0]: https://lists.nongnu.org/archive/html/qemu-devel/2023-08/msg05260.html
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
[ TL: massage in Fiona's extra info into commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add checks for "suspended" and "prelaunch" runstates when checking
whether a VM is paused.
This fixes the following issues:
* ACPI-suspended VMs automatically resuming after migration
* Shutdown and reboot commands timing out instead of failing
immediately on suspended VMs
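A simplified sketch of the extended check (the actual helper queries
the runstate via QMP):
    my $status = mon_cmd($vmid, 'query-status');
    my $is_paused =
        $status->{status} =~ m/^(paused|suspended|prelaunch)$/ ? 1 : 0;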
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
In preparation for adding more properties to the memory
configuration, like maximum hotpluggable memory and whether virtio-mem
devices should be used.
This also allows getting rid of the cyclic include of PVE::QemuServer
in the memory module.
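For illustration, the memory option then becomes a property string
with the current size as the default key, so existing configurations
keep parsing (the property names shown here are illustrative):
    # old style, still valid
    memory: 4096
    # property-string style with room for future sub-properties
    memory: current=4096,max=65536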
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[FE: also convert new usage in get_derived_property
remove cyclic include of PVE::QemuServer
add commit message]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
moving qemu_{device,object}{add,del} helpers there for now.
In preparation to remove the cyclic include of PVE::QemuServer in the
memory module and generally for better modularity in the future.
No functional change intended.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
which is the only user of the parse_numa() helper. While at it, avoid
the duplication of MAX_NUMA.
In preparation to remove the cyclic include of PVE::QemuServer in the
memory module.
No functional change intended.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
10 minutes is not long enough when disks are large and/or network
storages are used and preallocation is not disabled. The default is
metadata preallocation for qcow2, so there are still reports of the
issue [0][1]. If allocation really does not finish, as the comment
describing the timeout feared, just let the user cancel it.
Also note that when restoring a PBS backup, there is no timeout for
disk allocation, and there don't seem to be any user complaints yet.
The 5 second timeout for receiving the config from vma is kept,
because certain corruptions in the VMA header can lead to the
operation hanging there.
There is no need for the $tmp variable before setting back the old
timeout, because that is at least one second, so we'll always be able
to set the $oldtimeout variable to undef in time in practice.
Currently, there shouldn't even be an outer timeout in the first
place, because the only call path leading to here is via the create
API (also used by qmrestore), both of which don't set a timeout.
[0]: https://forum.proxmox.com/threads/126825/
[1]: https://forum.proxmox.com/threads/128093/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Skip the software TPM startup when starting a template VM for performing
a backup. This fixes an error that occurs when the TPM state disk is
write-protected.
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Quoting from QEMU commit 4271f40383 ("virtio-net: correctly report
maximum tx_queue_size value"):
> Maximum value for tx_queue_size depends on the backend type.
> 1024 for vDPA/vhost-user, 256 for all the others.
> So the parameter is silently ignored and ethtool reports a different
> value than the one provided by the user.
Indeed, for a non-vDPA/vhost-user netdev, the guest will see TX: 256
instead of the specified 1024 here. With the mentioned QEMU commit (in
master and will be part of 8.1), using 1024 will be a hard error:
> Invalid tx_queue_size (= 1024), must be a power of 2 between 256 and 256
Since neither vhost-user, nor vhost-vdpa netdev types are exposed by
Proxmox VE, just changing the limit to the correct 256 should be fine.
No obvious issue during live-migration found.
Fixes: 620d6b32 ("virtio-net: increase defaults rx|tx-queue-size to 1024")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
While the comment stated
> # order of precedence, filtered by whether storage supports it:
> # 1. explicit requested format
> # 2. format of current volume
> # 3. default format of storage
the code did not fall back to the default format in the case of remote
migration, because the format was already set and the code used
> $format //= $defFormat;
This made remote migration from dir with qcow2 to e.g. LVM-thin fail.
Move extracting the format from the volume name to the call side for
local migration. This allows the logic here to be much simpler.
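The resulting precedence then reads as a straightforward chain
(variable names illustrative):
    # 1. explicitly requested format
    # 2. format of the current volume (now extracted at the call site
    #    for local migration)
    # 3. default format of the target storage
    $format = $requested_format // $volume_format // $default_format;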
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Previously, qemu_img_format() was called with the target storage's
$scfg and the source storage's volume name.
This mismatch should only be relevant for certain special kinds of
storage plugins:
- no path, but does support an additional QEMU image format besides
'raw', in short NPAF.
- no path, volume name can match QEMU_FORMAT_RE, in short NPVM.
Note that all integrated plugins are neither NPAF nor NPVM.
Note that for NPAF plugins, qemu_img_format() already always returns
'raw' because there is no path. It's a bit unlikely such a plugin
exists, because there were no bug reports about qemu_img_format()
misbehaving there yet.
Let's go through the cases:
- If source and target storage both have or don't have a path,
qemu_img_format($scfg, $volname) returns the same for both $scfg's.
- If source storage has a path, but target storage does not, the
format hint was previously 'raw', but can only be more correct now
(being what the source image actually is):
- For non-NPAF targets, since we know there is no path, it follows
that 'raw' is the only supported QEMU image format.
- For NPAF targets, the format will be preserved now (if actually
supported).
- If source storage does not have a path, but target storage does, the
format hint will be 'raw' now.
- For non-NPVM sources, QEMU_FORMAT_RE didn't match when
qemu_img_format() was called with the target storage's $scfg, so
the hint also was 'raw' before this commit.
- For NPVM sources, qemu_img_format() might've guessed a format from
the source volume name when called with the target's $scfg before
this commit. If the target storage supports the previously guessed
format, it was preserved before this commit, but will not be
anymore. In theory, the guess might've also been wrong, and in
this case, this commit avoids the wrong guess.
To summarize, there is only one edge case with an exotic kind of third
party storage plugin where format preservation would be lost and in
another edge case, format preservation is gained.
In preparation to simplify the format fallback logic implementation.
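In other words, the format hint is now derived from matching
source-side arguments, roughly:
    # before (mismatched): target storage's config, source volume name
    $format = qemu_img_format($target_scfg, $source_volname);
    # after: both arguments refer to the source volume
    $format = qemu_img_format($source_scfg, $source_volname);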
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The new ciupgrade option was missing in $cloudinitoptions in
PVE::API2::Qemu, so $check_vm_modify_config_perm defaulted to
requiring root@pam for modifying the option. To fix this, add
ciupgrade to $cloudinitoptions. This also fixes an issue where
ciupgrade was missing in the output of `qm cloudinit pending`,
as it also relies on $cloudinitoptions.
This issue was originally reported in the forum [0].
Also add a comment to avoid similar issues when adding new options in
the future.
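Roughly, the fix boils down to listing the new option alongside the
existing ones (abridged sketch of the hash in PVE::API2::Qemu):
    my $cloudinitoptions = {
        cicustom => 1,
        cipassword => 1,
        citype => 1,
        ciupgrade => 1,  # was missing, causing the root@pam requirement
        ciuser => 1,
        # ...
    };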
[0]: https://forum.proxmox.com/threads/131043/
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>