The machine handling was transformed into a full-fledged property
string with a (sub) format, but the single call site of print_machine
was seemingly not tested, as it could never have worked due to a
missing import of the print_property_string helper.
Fixes: 8082eb8 ("config: define machine schema as property-string")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Management for fleecing images is implemented here. If the fleecing
option is set, for each disk (except EFI disk and TPM state) a new
fleecing image is allocated on the configured fleecing storage (the
same storage as the original disk by default). The disk is attached to
QEMU with the 'size' parameter, because the block node in QEMU has to
be exactly the same size, while the newly allocated image might be
bigger if the storage has a coarser allocation granularity or rounds
the size up. After backup, the disks are detached and removed from the
storage.
If the storage supports qcow2, use that as the fleecing image format.
This allows saving some space even on storages that do not properly
support discard, like, for example, older versions of NFS.
Since there can be multiple volumes with the same volume name on
different storages, the fleecing image's name cannot be just based on
the original volume's name. Instead, the naming scheme
vm-ID-fleece-N(.FORMAT), with N incrementing for each disk, is used.
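For illustration, the name could be generated roughly like this (the
helper name and the suffix handling are assumptions; allocation itself
goes through the storage layer):

    sub fleecing_volname {
        my ($vmid, $n, $format) = @_;
        # vm-<ID>-fleece-<N>, with an optional format suffix, e.g. '.qcow2' on file-based storages
        my $name = "vm-${vmid}-fleece-${n}";
        $name .= ".${format}" if defined($format);
        return $name;
    }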
Partially inspired by the existing handling of the TPM state image
during backup.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
since commit
1f743141 ("fix #1905: Allow moving unused disks")
we want to check the source drive name for 'unused', but in case of
importing a volume from the 'import' content type (e.g. from ESXi),
there is no source drive name, so we have to first check whether it is
defined.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The previous wording made it sound like all "visible" tasks were
aborted, which is not the case: A user with Sys.Audit but without
Sys.Modify may see a task that was started by a different user, but
overrule-shutdown would not abort the task.
Change wording to better reflect that not all visible tasks may be
aborted.
Also, add a full-stop that was previously missing.
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
The new `overrule-shutdown` parameter is boolean and defaults to 0. If
it is 1, all active `qmshutdown` tasks for the same VM (which are
visible to the user/token) are aborted before attempting to stop the
VM.
Passing `overrule-shutdown=1` is forbidden for HA resources.
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
In the past, moving unused disks to another storage was prohibited due
to oversights in the handling of unused disks. This commit rectifies
this limitation by allowing the movement of unused disks.
Historical context:
* 16 Sep 2010 r5164 qemu-server/pve2: The disknames sub was removed.
* 17 Sep 2010 r5170 qemu-server/pve2: Unused disks were introduced.
* 28 Jan 2011 r5461 qemu-server/pve2: The same disknames sub that was
removed in r5164 was brought back. Since unused disks were not around
yet in r5164 the disknames sub did not consider unused disks.
* 6-8 Aug 2012 c1175c92..f91b2e45 qemu-server.git: Disk resize was
introduced. In commit c1175c92 in sub qemu_block_resize unused disks
were not taken into account and in commit 2f48a4f5 (8 Aug 2012) the
resize API call was changed to only allow disks matching the ones in
the disknames sub. Since sub disknames did not contain any unused
disks, those were not allowed at all in the resize API call.
* 27 May 2013 586bfa78 qemu-server.git: Disk move was introduced. The
API call implementation borrowed heavily from disk resize, including
the behaviour of not taking unused disks into account. Thus, unused
disks could not be moved, which persists to this day.
In summary, this behaviour was introduced because the handling of unused
disks was overlooked and it was never changed.
There is no inherent reason why unused disks should be restricted from
being moved to another storage. These disks cannot use the
qemu_drive_mirror, but they can still be moved with qemu_img_convert,
the same way as any other disk of a stopped VM.
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
vIOMMU enables passing through PCI devices to L2 VMs inside L1 VMs
via nested virtualisation and adds an extra layer of isolation.
Uses the new property-string from the "config: define machine schema
as property-string"-commit to add the viommu option to the machine
parameter.
Currently there are two vIOMMU implementations in QEMU to choose
from: intel or virtio.
Virtio-iommu is more recent, but less used in production than intel-iommu.
The assert_valid_machine_property function prevents using intel-iommu with
i440fx.
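For illustration, assuming 'viommu' is the key within the machine
property string as described above, a VM config entry could look like:

    machine: q35,viommu=intel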
Signed-off-by: Markus Frank <m.frank@proxmox.com>
[ TL: tiny coding style fix to extract variable inside if expr ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Convert the machine parameter to a property-string and use the machine
type as the default key for backward compatibility.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Upon obtaining the device type, a check is performed to determine if it
is a CD drive. It is important to note that Cloudinit drives are always
assigned as CD drives. If the drive has not yet been allocated, the test
will fail due to the unset cd attribute.
To avoid this, an explicit check is now performed to determine if it is
a Cloudinit drive that has not yet been assigned.
Fixes: d1feab4 ("fix #4957: add vendor and product information passthrough for SCSI-Disks")
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
'$entry->{host}' can be empty, so we have to check for that before
doing a regex check, otherwise we get ugly errors in the log.
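A minimal sketch of such a guard (the field name is taken from the
description above, the surrounding loop is an assumption):

    for my $entry (@$entries) {
        # skip entries without a host value instead of matching against undef
        next if !defined($entry->{host}) || $entry->{host} !~ m/\S/;
        # ... actual handling of $entry->{host} ...
    }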
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
When attempting a CPU hotplug on an architecture other than x86_64, die
with a clean error instead of attempting a hotplug with a known
non-working device command line. Also move the corresponding FIXME up to
the error.
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Add a command that can be used together with volumes from the new
'import' content type of storage plugins.
For now only the new ESXi plugin exposes that content type, but in the
long run it's planned to migrate over the existing OVF/OVA infra and
extend it so that it will replace the 'ovfimport' command.
Originally-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
[ TL: split out to separate commit and add message, fix completing
VMID to propose unused ones, note explicitly when in dry-run mode ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
instead of a "pbs-backing" parameter we now have a
"live-restore-backing" parameter containing the `-blockdev` arg and
its name, which also means we print the blockdev earlier
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
A network device of a VM does not necessarily have to be connected to
an actual bridge, so when a new pending value is set we need to use
the undef-safe compare helpers when checking if there was a change
between old and new value, as otherwise one gets ugly "use of
uninitialized value in string ne" warnings.
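A minimal illustration of what such an undef-safe comparison does (the
helper name here is hypothetical, the real helpers live in our shared
Perl modules):

    sub safe_string_ne {
        my ($left, $right) = @_;
        return 0 if !defined($left) && !defined($right);
        return 1 if !defined($left) || !defined($right);
        return $left ne $right ? 1 : 0;
    }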
Link: https://forum.proxmox.com/threads/143072/
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
could be a better fit in PVE::Tools, like proposed by Filip, but OTOH
Tools is already crowded as is, so wait until we need it in more
places outside of qemu-server.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The value might've been questionable back when it was first added in
commit 9d689077 ("use long timeouts for snapshot monitor command"),
but nowadays it is well-established. Changing it would affect
quite a few operations, so that should not be done without good
reason and is likely better done for the specific operation.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Make the default value for 'kvm' consistent, taking into account
whether the VM will run on the same CPU architecture as the host.
This would be a breaking change to CPU hotplug for VMs with a
different CPU architecture running on an x86_64 host, as in this case
the default CPU type for CPU hotplug changes from 'kvm64' to 'qemu64'.
However, CPU hotplug of non-x86_64 architectures is not supported
anyway, so this is not a breaking change after all.
It should be noted that this change does alter the CPU hotplug
behaviour when emulating an x86_64 CPU on a non-x86_64 host. This is
however not officially supported in Proxmox VE.
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Move is_native from PVE::QemuServer to PVE::Tools and rename it to
is_native_arch to be more descriptive.
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Instead of starting a VM with a 32-bit CPU type and a 64-bit OVMF image,
throw an error before starting the VM telling the user that OVMF is not
supported on 32-bit CPU types.
To obtain a list of 32-bit CPU types, refer to the builtin_x86_defs in
target/i386/cpu.c of QEMU. Exclude any entries that have the long mode
feature (CPUID_EXT2_LM).
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
When rebooting a VM from PVE (via CLI/API), the reboot code is called
under a guest lock, which creates a reboot request, shuts down the VM
and then calls the regular cleanup code, which includes the mdev
cleanup.
In parallel, qmeventd observes that the VM process has gone and
starts 'qm cleanup', which (among other tasks) also starts the VM
again if a reboot from the PVE side is pending.
The qmeventd synchronizes this through a lock on the guest, with a
default timeout of 10 seconds.
Since we currently also always wait 10 seconds for the NVIDIA driver
to clean up the mdev, this creates a race condition for the cleanup
lock. IOW, when the call to `qm cleanup` starts before we start to
sleep for 10 seconds, it will not be able to acquire its lock and thus
will not start the VM again.
To avoid the race condition in practice, do two things:
* increase the timeout in `qm cleanup` to 60 seconds.
Technically this still might run into a timeout, as we can configure
up to 16 mediated devices with each delaying 10 seconds in the worst
case, but realistically most users won't configure more than two or
three of them, if even that.
* change the hard-coded `sleep 10` to a loop sleeping for 1 second
each before checking the state again. This shortens the timeout when
the NVIDIA driver did not require the full 10s to finish the
clean-up.
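A minimal sketch of the polling loop from the second point (the sysfs
path layout and variable names are assumed here):

    my $mdev_path = "/sys/bus/pci/devices/${pciid}/${uuid}";
    for (my $i = 0; $i < 10; $i++) {
        last if !-e $mdev_path; # driver finished cleaning up, stop waiting early
        sleep 1;
    }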
Further, add a bit of logging, so one can properly see in the task log
what is happening at which point in time.
Fixes: 49c51a60 (pci: workaround nvidia driver issue on mdev cleanup)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Mira Limbeck <m.limbeck@proxmox.com>
[ TL: change warn to print, reword commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Make the post-if check for the target not already running more
prominent by using a full if block.
Also comment on why we ignore the error here. While the commit
changing that explained it well, this is one of the things that might
be better off with an in-code comment (as the deactivation is
described as important here, one might wonder why the code continues
if it fails).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When a template with disks on LVM is cloned to another node, the
volumes are first activated, then cloned and deactivated again after
cloning.
However, if clones of this template are now created in parallel to
other nodes, it can happen that one of the tasks can no longer
deactivate the logical volume because it is still in use. The reason
for this is that we use a shared lock.
Since the failed deactivation does not necessarily have consequences,
we downgrade the error to a warning, which means that the clone tasks
will continue to be completed successfully.
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
by fixing the SCSI feature compatibility check helper. The helper is
also called for disks using import-from, so it has to use the extended
schema when parsing the drive.
Fixes: d1feab4a ("fix #4957: add vendor and product information passthrough for SCSI-Disks")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
PVE::Storage::path() neither activates the storage of the passed-in volume, nor
does it ensure that the returned value is actually a file or block device, so
this actually fixes two issues. PVE::Storage::abs_filesystem_path() actually
takes care of both, while still calling path() under the hood (since $volid
here is always a proper volid, unless we change the cicustom schema at some
point in the future).
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
During migration, the volume names may change if the name is already in
use at the target location. We therefore want to save the original names
so that we can deactivate the original volumes afterwards.
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
adds vendor and product information for SCSI devices to the json schema
and checks in the VM create/update API call if it is possible to add
these to QEMU as a device option
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
[FE: add missing space to exception message
use config option for exception e.g. scsi0 rather than 'product'
style fixes]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
since we always determine the deviceid, passing in a possibly wrong value makes
no sense and could actually re-introduce bugs.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
The QMP command needs to be issued for the device where the disk is
currently attached, not for the device where the disk was attached at
the time the snapshot was taken.
Fixes the following scenario with a disk image for which
do_snapshots_with_qemu() is true (i.e. qcow2 or RBD+krbd=0):
1. Take snapshot while disk image is attached to a given bus+ID.
2. Detach disk image.
3. Attach disk image to a different bus+ID.
4. Remove snapshot.
Previously, this would result in an error like:
> blockdev-snapshot-delete-internal-sync' failed - Cannot find device=drive-scsi1 nor node_name=drive-scsi1
While the $running parameter for volume_snapshot_delete() is planned
to be removed on the next storage plugin APIAGE reset, it currently
causes an immediate return in Storage/Plugin.pm. So passing a truthy
value would prevent removing a snapshot from an unused qcow2 disk that
was still used at the time the snapshot was taken. Thus, and because
some exotic third party plugin might be using it for whatever reason,
it's necessary to keep passing the same value as before.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Encapsulate the functionality for determining the SCSI device type
in a new function, for reuse in QemuServer/Drive.pm.
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
Currently, volume activation, PCI reservation and resetting systemd
scope happen in between, so the 5 second expiretime used for port
reservation is not always enough.
It's possible to defer telling QEMU where it should listen for
migration and do so after it has been started via QMP. Therefore, the
port reservation can be moved very close to the actual usage.
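Roughly, and assuming the helper and argument names used here, the
deferred setup looks like this: start the target QEMU with
'-incoming defer', reserve the port as late as possible and only then
tell QEMU where to listen via QMP:

    my $migrate_port = PVE::Tools::next_migrate_port();
    mon_cmd($vmid, 'migrate-incoming', uri => "tcp:[::]:${migrate_port}");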
Mentioned here for completeness and can still be done as an additional
change later if desired: next_migrate_port could be modified to
optionally return the open socket and it should be possible to pass
the file descriptor directly to QEMU, but that would require accepting
the connection before on the Perl side (otherwise leads to ENOTCONN
107). While it would avoid any races, it's not the most elegant
and the change at hand should be enough in all practical situations.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Acked-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Tested-by: Hannes Duerr <h.duerr@proxmox.com>
It is not yet supported for QEMU's vdagent device which is used for
the VNC clipboard.
The migration precondition API call will now treat the VNC clipboard
as a local resource. Thus the GUI blocks migration and shows:
"Can't migrate VM with local resources: clipboard=vnc"
QemuMigrate's prepare function will also abort live migration early
when using the VNC clipboard.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
[FE: adapt commit message a bit]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
We want to notify the guest of the change, so it can resubmit a DHCP
request, send a gratuitous ARP, etc.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
This can be used by noVNC to check if a clipboard is available.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
add option to use the qemu vdagent implementation to enable the VNC
clipboard. When enabled with SPICE the spice-vdagent gets replaced
with the QEMU implementation.
This patch does not solve #1406, but does allow copy and paste with a
running X-session, when spice-vdagent is installed on the guest.
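For illustration, assuming 'clipboard' is the new key on the vga
property string, enabling it would look like:

    vga: std,clipboard=vnc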
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
While there already is a warning from QEMU proper, that one is not
visible as a task warning and it's not straightforward to make it be
one, because QEMU is started inside a run_fork(). It's also more
future-proof to have the detection explicit on our side and the
documentation can be referenced.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
by adding a comment and grouping the code better. See the PVE QEMU
patch "PVE: Allow version code in machine type" for reference. The way
the code was written previously made it look like a bug where
$pve_version might be overwritten multiple times.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This can seemingly take a bit longer than expected, and waiting a bit
longer is better than erroring out during migration.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
The vCPUs are passed as devices with a specific id only when CPU
hot-plug is enabled at cold start.
So, we can't enable/disable allow-hotplug online, as the vCPU hotplug
API would then throw errors about not finding the core id.
Not enforcing this could also lead to migration failure, as the QEMU
command line for the target VM could be made different than the one it
was actually running with, causing a crash of the target as Fiona
observed [0].
[0]: https://lists.proxmox.com/pipermail/pve-devel/2023-October/059434.html
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[ TL: Reflowed & expanded commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
by including the errno. Might make it clearer what the issue is in
cases like: https://forum.proxmox.com/threads/135261/
Also add the missing newlines, the missing "to" in the second message,
switch to the more common "or die" and avoid line bloat while at it.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Fix races with ACPI-suspended VMs which could wake up during migration
or during a suspend-mode backup.
Revert the prevention of ACPI-suspended VMs automatically resuming
after migration, introduced by 7ba974a682. That commit introduced a
potential problem causing a suspended VM that wakes up during
migration to remain paused after the migration finishes.
This can be fixed once QEMU preserves the 'suspended' runstate during
migration (current patch on the qemu-devel list [0]) by checking for
the 'suspended' runstate on the target after migration.
Furthermore the commit increased the race window during the
preparation of a suspend-mode backup, when a suspended VM wakes up
between the vm_is_paused check in PVE::VZDump::QemuServer::prepare and
PVE::VZDump::QemuServer::qga_fs_freeze. This causes the code to skip
fs-freeze even if the VM has woken up, potentially leaving the file
system in an inconsistent state.
To prevent this, do not treat the suspended runstate as paused when
migrating or archiving a VM.
[0]: https://lists.nongnu.org/archive/html/qemu-devel/2023-08/msg05260.html
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
[ TL: massage in Fiona's extra info into commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since commit 2dc0eb61 ("qm: assume correct VNC setup in 'vncproxy',
disallow passwordless"), 'qm vncproxy' will just fail when the
LC_PVE_TICKET environment variable is not set. Since it is not only
required in combination with websocket, drop that conditional.
For the non-serial case, this was the last remaining effect of the
'websocket' parameter, so update the parameter description.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Since commit 3e7567e0 ("do not use novnc wsproxy"), the websocket
upgrade is done via the HTTP server.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Add checks for "suspended" and "prelaunch" runstates when checking
whether a VM is paused.
This fixes the following issues:
* ACPI-suspended VMs automatically resuming after migration
* Shutdown and reboot commands timing out instead of failing
immediately on suspended VMs
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
The default VM startup timeout is `max(30, VM memory in GiB)` seconds.
Multiple reports in the forum [0] [1] and the bug tracker [2] suggest
this is too short when using PCI passthrough with a large amount of VM
memory, since QEMU needs to map the whole memory during startup (see
comment #2 in [2]). As a result, VM startup fails with "got timeout".
To work around this, set a larger default timeout if at least one PCI
device is passed through. The question remains how to choose an
appropriate timeout. Users reported the following startup times:
ref | RAM | time | ratio (s/GiB)
---------------------------------
[1] | 60G | 135s | 2.25
[1] | 70G | 157s | 2.24
[1] | 80G | 277s | 3.46
[2] | 65G | 213s | 3.28
[2] | 96G | >290s | >3.02
The data does not really indicate any simple (e.g. linear)
relationship between RAM and startup time (even data from the same
source). However, to keep the heuristic simple, assume linear growth
and multiply the default timeout by 4 if at least one `hostpci[n]`
option is present, obtaining `4 * max(30, VM memory in GiB)`. This
covers all cases above, and should still leave some headroom.
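A minimal sketch of the heuristic (the helper name and its inputs are
assumptions):

    use List::Util qw(max);

    sub heuristic_start_timeout {
        my ($memory_gib, $has_hostpci) = @_;
        my $timeout = max(30, $memory_gib);
        $timeout *= 4 if $has_hostpci; # at least one hostpci[n] option present
        return $timeout;
    }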
[0]: https://forum.proxmox.com/threads/83765/post-552071
[1]: https://forum.proxmox.com/threads/126398/post-592826
[2]: https://bugzilla.proxmox.com/show_bug.cgi?id=3502
Suggested-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
In preparation to add more properties to the memory configuration like
maximum hotpluggable memory and whether virtio-mem devices should be
used.
This also allows getting rid of the cyclic include of PVE::QemuServer
in the memory module.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[FE: also convert new usage in get_derived_property
remove cyclic include of PVE::QemuServer
add commit message]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
moving qemu_{device,object}{add,del} helpers there for now.
In preparation to remove the cyclic include of PVE::QemuServer in the
memory module and generally for better modularity in the future.
No functional change intended.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
PVE::QemuServer::check_running() does both
PVE::QemuConfig::assert_config_exists_on_node()
PVE::QemuServer::Helpers::vm_running_locally()
The former one isn't needed here when doing hotplug, because the API
already asserts that the VM config exists. It also would introduce a
new cyclic dependency between PVE::QemuServer::Memory <->
PVE::QemuConfig with the proposed virtio-mem patch set.
In preparation to remove the cyclic include of PVE::QemuServer in the
memory module.
No functional change intended.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
which is the only user of the parse_numa() helper. While at it, avoid
the duplication of MAX_NUMA.
In preparation to remove the cyclic include of PVE::QemuServer in the
memory module.
No functional change intended.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
10 minutes is not long enough when disks are large and/or network
storages are used and preallocation is not disabled. The default is
metadata preallocation for qcow2, so there are still reports of the
issue [0][1]. If allocation really does not finish like the comment
describing the timeout feared, just let the user cancel it.
Also note that when restoring a PBS backup, there is no timeout for
disk allocation, and there don't seem to be any user complaints yet.
The 5 second timeout for receiving the config from vma is kept,
because certain corruptions in the VMA header can lead to the
operation hanging there.
There is no need for the $tmp variable before setting back the old
timeout, because that is at least one second, so we'll always be able
to set the $oldtimeout variable to undef in time in practice.
Currently, there shouldn't even be an outer timeout in the first
place, because the only call path leading to here is via the create
API (also used by qmrestore), both of which don't set a timeout.
[0]: https://forum.proxmox.com/threads/126825/
[1]: https://forum.proxmox.com/threads/128093/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Skip the software TPM startup when starting a template VM for performing
a backup. This fixes an error that occurs when the TPM state disk is
write-protected.
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Quoting from QEMU commit 4271f40383 ("virtio-net: correctly report
maximum tx_queue_size value"):
> Maximum value for tx_queue_size depends on the backend type.
> 1024 for vDPA/vhost-user, 256 for all the others.
> So the parameter is silently ignored and ethtool reports a different
> value than the one provided by the user.
Indeed, for a non-vDPA/vhost-user netdev, the guest will see TX: 256
instead of the specified 1024 here. With the mentioned QEMU commit (in
master and will be part of 8.1), using 1024 will be a hard error:
> Invalid tx_queue_size (= 1024), must be a power of 2 between 256 and 256
Since neither vhost-user, nor vhost-vdpa netdev types are exposed by
Proxmox VE, just changing the limit to the correct 256 should be fine.
No obvious issue during live-migration found.
Fixes: 620d6b32 ("virtio-net: increase defaults rx|tx-queue-size to 1024")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
While the comment stated
> # order of precedence, filtered by whether storage supports it:
> # 1. explicit requested format
> # 2. format of current volume
> # 3. default format of storage
the code did not fall back to the default format in the case of remote
migration, because the format was already set and the code used
> $format //= $defFormat;
This made remote migration from dir with qcow2 to e.g. LVM-thin fail.
Move extracting the format from the volume name to the call side for
local migration. This allows the logic here to be much simpler.
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Previously, qemu_img_format() was called with the target storage's
$scfg and the source storage's volume name.
This mismatch should only be relevant for certain special kinds of
storage plugins:
- no path, but does support an additional QEMU image format besides
'raw', in short NPAF.
- no path, volume name can match QEMU_FORMAT_RE, in short NPVM.
Note that all integrated plugins are neither NPAF nor NPVM.
Note that for NPAF plugins, qemu_img_format() already always returns
'raw' because there is no path. It's a bit unlikely such a plugin
exists, because there were no bug reports about qemu_img_format()
misbehaving there yet.
Let's go through the cases:
- If source and target storage both have or don't have a path,
qemu_img_format($scfg, $volname) returns the same for both $scfg's.
- If source storage has a path, but target storage does not, the
format hint was previously 'raw', but can only be more correct now
(being what the source image actually is):
- For non-NPAF targets, since we know there is no path, it follows
that 'raw' is the only supported QEMU image format.
- For NPAF targets, the format will be preserved now (if actually
supported).
- If source storage does not have a path, but target storage does, the
format hint will be 'raw' now.
- For non-NPVM sources, QEMU_FORMAT_RE didn't match when
qemu_img_format() was called with the target storage's $scfg, so
the hint also was 'raw' before this commit.
- For NPVM sources, qemu_img_format() might've guessed a format from
the source volume name when called with the target's $scfg before
this commit. If the target storage supports the previously guessed
format, it was preserved before this commit, but will not be
anymore. In theory, the guess might've also been wrong, and in
this case, this commit avoids the wrong guess.
To summarize, there is only one edge case with an exotic kind of third
party storage plugin where format preservation would be lost and in
another edge case, format preservation is gained.
In preparation to simplify the format fallback logic implementation.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Trying to regenerate a cloudinit drive as a non-root user via the API
currently throws a Perl error, as reported in the forum [1]. This is
due to a type mismatch in the permission check, where a string is
passed but an array is expected.
[1] https://forum.proxmox.com/threads/regenerate-cloudinit-by-put-api-return-500.124099/
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
The new ciupgrade option was missing in $cloudinitoptions in
PVE::API2::Qemu, so $check_vm_modify_config_perm defaulted to
requiring root@pam for modifying the option. To fix this, add
ciupgrade to $cloudinitoptions. This also fixes an issue where
ciupgrade was missing in the output of `qm cloudinit pending`,
as it also relies on $cloudinitoptions.
This issue was originally reported in the forum [0].
Also add a comment to avoid similar issues when adding new options in
the future.
[0]: https://forum.proxmox.com/threads/131043/
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Only unit 0 for IDE is supported with machine type q35. Currently,
QEMU will fail startup with machine type q35 with an error like
> Can't create IDE unit 1, bus supports only 1 units
when ide1 or ide3 is configured.
Make sure to keep backwards compat for migration by leaving ide0 and
ide2 fixed. Since starting with ide1 or ide3 never worked, they can be
moved to a controller with a higher ID without issue.
Reported in the community forum:
https://forum.proxmox.com/threads/124615/post-543127
https://forum.proxmox.com/threads/130815/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
factor the common checks for disk-less and "normal" backups out into
a single helper, avoiding code duplication and ensuring that the
messages and checks stay in sync.
The use sites for key and master key are a bit clearer, as it all
just depends on them being defined or not.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
.. and make it use a warn level, which can then also mark the whole
task as potentially problematic as with a new enough pve-guest-common
the REST environment worker warn counters are then increased.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
our backup logs are still quite noisy at the task start, so avoid
logging twice that the task is running with encryption enabled for
the master-key feature.
The definedness check on master_keyfile isn't required anymore; it
never was for the no-disk case, and for the standard case it hasn't
been since 781fb80 ("vzdump: error out for master-key backup but no
QEMU support").
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Our QEMU gained master-key support for Proxmox VE 6.4 with the
initial QEMU 5.2.0 packaging version, in 0b8da68 ("add PBS master key
support").
As we're now two major releases further and any VM needs to run with
a newer QEMU version anyway, we can just make this a hard error, as
there really should be no use-case left. After all, we only support
upgrading directly to the next major release, so one needs to do at
least a migration (or shutdown) of the VM to reboot the node for
upgrading to Proxmox VE 8, so the lowest QEMU version baseline is 6.0
for Proxmox VE 8 (i.e., the version from PVE 7.0).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
these are backed up directly with proxmox-backup-client, and the invocation was
lacking the key parameters.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Aliased volids can lead to unexpected behavior in a migration.
An aliased volid can happen if we have two storage configurations,
pointing to the same place. The resulting 'path' for a disk image
will be the same.
Therefore, stop the migration in such a case.
The check works by comparing the path returned by the storage plugin.
We decided against checking the storages themselves being aliased. It is
not possible to infer that reliably from just the storage configuration
options alone.
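A minimal sketch of the path-based check (the surrounding variables
are assumed):

    my %path_to_volid;
    for my $volid (@volids) {
        my $path = PVE::Storage::path($storecfg, $volid);
        die "cannot migrate: aliased volids detected ($volid and $path_to_volid{$path})\n"
            if defined($path_to_volid{$path});
        $path_to_volid{$path} = $volid;
    }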
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Since we don't scan all storages for matching disk images anymore for
a migration, we don't have any images found via storage alone. They
will be
referenced in the config somewhere.
Therefore, there is no need for the 'storage' ref.
The 'referenced_in_config' ref is not really needed and can apply to
both attached and unused disk images.
Therefore the QemuServer::foreach_volid() will change the
'referenced_in_config' attribute to an 'is_attached' one that only
applies to disk images that are in the _main_ config part and are not
unused.
In QemuMigrate::scan_local_volumes() we can then quite easily map the
refs to each state, attached, unused, referenced_in_{pending,snapshot}.
The refs are mostly used informationally, to print out in the logs
why a disk image is part of the migration, except for the 'attached'
case.
In the future the extra step of the refs in QemuMigrate could probably
be streamlined even more.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
When scanning all configured storages for disk images belonging to the
VM, the migration could easily fail if a storage is not available, but
enabled. That storage might not even be used by the VM at all.
By not scanning all storages and only looking at the disk images
referenced in the VM config, we can avoid unnecessary failures.
Some information that used to be provided by the storage scanning
needs to be fetched explicitly (size, format).
Behaviorally the biggest change is that unreferenced disk images will
not be migrated anymore. Only images referenced in the config will be
migrated.
The tests have been adapted accordingly.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
All calling sites except for QemuConfig.pm::get_replicatable_volumes()
already enabled it. Making it the non-configurable default results in a
change in the VM replication. Now a disk image only referenced in the
pending section will also be replicated.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Make it possible to optionally iterate over disks in the pending section
of VMs, similar as to how snapshots are handled already.
This is for example useful in the migration if we don't want to rely on
the scanning of all storages.
All calling sites are adapted and enable it, except for
QemuConfig::get_replicatable_volumes as that would cause a change for
the replication if pending disks would be included.
The following lists the calling sites and if they should be fine with
the change (source [0]):
1. QemuMigrate: scan_local_volumes(): needed to include pending disk
images
2. API2/Qemu.pm: check_vm_disks_local() for migration precondition:
related to migration, so more consistent with pending
3. QemuConfig.pm: get_replicatable_volumes(): would change the behavior
of the replication, will not use it for now.
4. QemuServer.pm: get_vm_volumes(): is used multiple times by:
4a. vm_stop_cleanup() to deactivate/unmap: should also be fine with
including pending
4b. QemuMigrate.pm: in prepare(): part of migration, so more consistent
with pending
4c. QemuMigrate.pm: in phase3_cleanup() for deactivation: part of
migration, so more consistent with pending
[0] https://lists.proxmox.com/pipermail/pve-devel/2023-May/056868.html
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Commit efa3355d ("fix #3428: cloudinit: add parameter for upgrade on
boot") changed the default, but this is a breaking change. The bug
report was only about making the option configurable.
The commit doesn't give an explicit reason for why, and arguably,
doing the upgrade is not an issue for most users. It also leads to a
different cloud-init instance ID, because of the different setting,
which in turn leads to ssh host key regeneration within the VM.
Reported-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The scope can get into a failed state for some issues like OOM kills
of the whole scope; in that case a user cannot re-start the VM until
they manually reset it.
Do this inline for now to avoid a pve-common bump as done in [0]
(the location was suggested by me thinking we could maybe do it over
D-Bus, but as we have a stop command here already it probably doesn't
matter).
[0]: https://lists.proxmox.com/pipermail/pve-devel/2023-June/057770.html
Originally-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to allow early checking of the merged config, if the backup archive
passed in is a proper volume where extraction is possible.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
to avoid duplicate work, always set 'volid' to the backup volume's volid, if it
was successfully parsed as such.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
they can only be migrated to nodes where there exists a mapping and if
the migration is done offline
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Markus Frank <m.frank@proxmox.com>
for offline migration, limit the allowed nodes to the ones where the
mapped resources are available
this adds new info to the API call, namely the 'mapped-resources'
list, as well as the 'unavailable-resources' info in the
'not_allowed_nodes' object.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Markus Frank <m.frank@proxmox.com>
by adding them to their own list, saving the nodes where they are not
allowed, and returning those on 'wantarray', so we don't break
existing callers that don't expect it.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Markus Frank <m.frank@proxmox.com>
this patch allows configuring pci devices that are mapped via cluster
resource mapping when the user has 'Resource.Use' on the ACL path
'/mapping/pci/{ID}' (in addition to the usual required vm config
privileges)
When given multiple mappings in the config, we use them as alternatives
for the passthrough, and will select the first free one on startup.
It is using our regular pci reservation mechanism for regular devices and
we introduce a selection mechanism for mediated devices.
A few changes to the inner workings were required to make this work well:
* parse_hostpci now returns a different structure where we have a list
of lists (first level is for the different alternatives and second
level is for the different devices that should be passed through
together)
* factor out the 'parse_hostpci_devices' which parses each device from
the config and does some precondition checks
* reserve_pci_usage now behaves slightly differently when trying to
reserve a device that's already reserved for the same VMID, since
for checking which alternative we can use, we already must
reserve one (this means that qm showcmd can actually reserve devices,
albeit only for up to 10 seconds)
* configuring a mediated device on a multifunction device is not
supported anymore, and results in failure to start (previously, it
just chose the first device to do it). This is a breaking change
* configuring a single pci device twice on different hostpci slots now
fails during commandline generation instead on qemu start, so we had
to adapt one test where this occurred (it could never have worked
anyway)
* we now check permissions during clone/restore, meaning raw/real
devices can only be cloned/restored by root@pam from now on.
This is a breaking change.
Fixes #3574: Improve SR-IOV usability
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Markus Frank <m.frank@proxmox.com>
this patch allows configuring usb devices that are mapped via
cluster resource mapping when the user has 'Mapping.Use' on the ACL
path '/mapping/usb/{ID}' (in addition to the usual required vm config
privileges)
for now, this is only valid if there is exactly one mapping for the
host, since we don't track passed through usb devices yet
This now also checks permissions on clone/restore, meaning a
'non-mapped' device can only be cloned/restored as root@pam user.
That is a breaking change.
Refactor the checks for restoring into a sub, so we have a central
place where we can add such checks.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Markus Frank <m.frank@proxmox.com>
similar to how we handle the PCI module and format. This makes the
'verify_usb_device' method and format unnecessary since
we simply check the format with a regex.
while doing this, I noticed that we don't correctly check for the
case-insensitive variant of 'spice' during hotplug, so fix that too.
With this we can also remove some parameters from the get_usb_devices
and get_usb_controllers functions
while we're at it, refactor the permission checks for the USB config
too and use the new 'my sub' style for the functions.
also make print_usbdevice_full parse the device itself, so we don't have
to do it in multiple places (especially in places where we don't see
that this is needed)
No functional change intended
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Markus Frank <m.frank@proxmox.com>
We use this in a few places. By factoring it into its own function, we
can avoid running slightly different checks in various places.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Using the word 'agent' is highly confusing here as there is no QMP
agent and thus wrongly suggests that the value is related to the
guest agent[0].
[0]: https://forum.proxmox.com/threads/123590/post-537716
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
https://gitlab.com/x86-psABIs/x86-64-ABI/
https://lists.gnu.org/archive/html/qemu-devel/2021-06/msg01592.html
"
In 2020, AMD, Intel, Red Hat, and SUSE worked together to define
three microarchitecture levels on top of the historical x86-64
baseline:
* x86-64: original x86_64 baseline instruction set
* x86-64-v2: vector instructions up to Streaming SIMD
Extensions 4.2 (SSE4.2) and Supplemental
Streaming SIMD Extensions 3 (SSSE3), the
POPCNT instruction, and CMPXCHG16B
* x86-64-v3: vector instructions up to AVX2, MOVBE,
and additional bit-manipulation instructions.
* x86-64-v4: vector instructions from some of the
AVX-512 variants.
"
This patch adds new builtin models derived from the qemu64 model,
to be compatible between Intel/AMD.
mandatory flags from qemu-doc generator:
https://gitlab.com/qemu/qemu/-/blob/master/scripts/cpu-x86-uarch-abi.py
levels = [
[ # x86-64 baseline
"cmov",
"cx8",
"fpu",
"fxsr",
"mmx",
"syscall",
"sse",
"sse2",
],
[ # x86-64-v2
"cx16",
"lahf-lm",
"popcnt",
"pni",
"sse4.1",
"sse4.2",
"ssse3",
],
[ # x86-64-v3
"avx",
"avx2",
"bmi1",
"bmi2",
"f16c",
"fma",
"abm",
"movbe",
"xsave" #missing from qemu doc currently
],
[ # x86-64-v4
"avx512f",
"avx512bw",
"avx512cd",
"avx512dq",
"avx512vl",
],
]
x86-64-v1 : I'm skipping it, as it's basically qemu64|kvm64 -vme,-cx16
for compatibility with Opteron_G1 from 2004; qemu64|kvm64 or higher do
not work on Opteron_G1 anyway.
x86-64-v2 : Derived from qemu, +popcnt;+pni;+sse4.1;+sse4.2;+ssse3
min intel: Nehalem
min amd : Opteron_G3
x86-64-v2-AES : Derived from qemu, +aes;+popcnt;+pni;+sse4.1;+sse4.2;+ssse3
min intel: Westmere
min amd : Opteron_G3
x86-64-v3 : Derived from qemu64 +aes;+popcnt;+pni;+sse4.1;+sse4.2;+ssse3;+avx;+avx2;+bmi1;+bmi2;+f16c;+fma;+abm;+movbe+xsave
min intel: Haswell
min amd : EPYC_v1
x86-64-v4 : Derived from qemu64 +aes;+popcnt;+pni;+sse4.1;+sse4.2;+ssse3;+avx;+avx2;+bmi1;+bmi2;+f16c;+fma;+abm;+movbe;+xsave;+avx512f;+avx512bw;+avx512cd;+avx512dq;+avx512vl
min intel: Skylake
min amd : EPYC_v4
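For example, selecting one of the new builtin models in a VM config
would look like:

    cpu: x86-64-v2-AES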
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
It can also be a permission issue, so the current error can be
a bit confusing.
Reported in the community forum:
https://forum.proxmox.com/threads/120619/post-562660
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This was not only rather inefficient (getting the config from the
archive twice) but also wrong, as we can override options on restore,
so we can do the check only when the backed-up config and override
config got merged.
If this is too late from the POV of volume deletion or the like, then
the issue is that those things happen too early, as we can only know
what to do with the actual target config, so destructive actions that
happen before that are wrong by design.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
See the corresponding commit in guest-common for more information.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The commands snapshot-drive and delete-drive-snapshot have been unused
by qemu-server since commit eba2b721 ("use qemu's blockdev-snapshot
functions") and are now going to be dropped in our QEMU builds too, so
get rid of these left-overs.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
If no FQDN is provided, we simply set it to the current hostname. This
ensures that the hostname *really* gets set, since we encountered an
issue on Fedora and CentOS based systems where no hostname got set at
all.
When there's no FQDN set in the cloudinit config, this leads to the
following entry:
127.0.1.1 <hostname> <hostname>
Which doesn't seem to cause any issues.
Tested on:
- Ubuntu 23.04
- CentOS 8
- Fedora 38
- Debian 11
- SUSE 15.4
Signed-off-by: Leo Nunner <l.nunner@proxmox.com>
Similar to the corresponding endpoint for containers. Because disks
are involved, this can be a longer running operation, as is also
indicated by the 60 seconds timeout used in qemu_block_resize() which
is called by this endpoint.
This is a breaking API change.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Support for this was added in QEMU 5.1 by commit 7fa140abf6 ("qcow2:
Allow resize of images with internal snapshots").
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
for convenience. These options do not influence the QEMU instance
directly, but are only used for migration, so no need to keep them in
pending.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This patch partially reverts commit 1b5706cd16,
by reintroducing the old format for return values (key, value, pending,
delete), but drops the "force-delete" return value. Right now, this
endpoint does not conform to its own format, because the return values
are as follows:
    {
        key => {
            old => 'foo',
            new => 'bar',
        },
        […]
    }
While the format specified is
    [
        {
            key => 'baz',
            old => 'foo',
            new => 'bar',
        },
        […]
    ]
This leads to the endpoint being broken when used through 'qm' and
'pvesh'. Using the API works fine, because the format doesn't get
verified there. Reverting this change brings the advantage that we can
also use PVE::GuestHelpers::format_pending when calling the endpoint
through qm again.
Signed-off-by: Leo Nunner <l.nunner@proxmox.com>
these config keys only affect the cloudinit drive contents (and state of the
guest inside the VM), they are not used anywhere on the hypervisor side, so
they should not require VM.Config.Network (which allows a lot more, such as
changing vNIC VLAN tags or the bridges they are connected to).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
instead use a recent example that served as a workaround in #4625.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
like the deprecation message printed by QEMU suggests.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
like the deprecation message printed by QEMU suggests.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we don't want to use the '-alist' formats anymore in favor of real arrays
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Commit 7246e8f9 ("Set zero $size and continue if volume_resize()
returns false") mentions that this is needed for "some storages with
backing block devices to do online resize" and since this patch came
together [0] with pve-storage commit a4aee43 ("Fix RBD resize with
krbd option enabled."), it's safe to assume that RBD with krbd is
meant. But it should be the same situation for any external plugin
relying on the same behavior.
Other storages backed by block devices like LVM(-thin) and ZFS return
1 and the new size respectively, and the code is older than the above
mentioned commits. So really, the RBD plugin just should have returned
a positive value to be in-line with those and there should be no need
to pass 0 to the block_resize QMP command either.
Actually, it's a hack, because the block_resize QMP command does not
actually do special handling for the value 0. It's just that in the
case of a block device, QEMU won't try to resize it (and not fail for
shrinkage). But the size in the raw driver's BlockDriverState is
temporarily set to 0 (which is not nice), until the sector count is
refreshed, where raw_co_getlength is called, which queries the new
size and sets the size in the raw driver's BlockDriverState again as a
side effect. It's not known to cause any issues, but bdrv_getlength is
a coroutine wrapper starting from QEMU 8.0.0, and it's just better to
avoid setting a completely wrong value even temporarily. Just pass the
actually requested size like is done for LVM(thin) and ZFS.
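A minimal sketch of the resulting QMP call (the monitor helper and
the device naming are assumed here):

    mon_cmd($vmid, 'block_resize', device => "drive-${deviceid}", size => int($size));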
[0]: https://lists.proxmox.com/pipermail/pve-devel/2017-January/025060.html
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Handle and warn about network interfaces which are not attached to
any bridge because the user actively removed the bridge from the VM
config.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
This makes the description consistent with the other places that
have bwlimit as a parameter as well.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
which is the case for passed-through disks. The qemu_img_format()
function cannot correctly handle those, and it's not safe to assume
they are raw (it's most likely, but not guaranteed), so just use the
storage method for the format like it was done before commit
efa3aa24 ("avoid list context for volume_size_info calls"). This will
use 'qemu-img info' to get the actual format.
Reported in the community forum:
https://forum.proxmox.com/threads/124794/
https://forum.proxmox.com/threads/124823/
https://forum.proxmox.com/threads/124818/
Fixes: efa3aa24 ("avoid list context for volume_size_info calls")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
With the recent pve-storage commit d70d814 ("api: fix get content call response
type for RBD/ZFS/iSCSI volumes"), the volume_size_info call for RBD in
list context is much slower than before (from a quick test, about twice as long
without snapshots, even longer with snapshots; untested, but when using an
external cluster with an image not having the fast-diff feature, it should be
worse still) and thus increases the likelihood to run into timeouts here.
None of the callers here actually need the more expensive call, so just
avoid calling in list context.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
While, usually, the slot should match the ID, it's not explicitly
guaranteed and relies on QEMU internals. Using the numerical ID is
more future-proof and more consistent with plugging, where no slot
information (except the maximum limit) is relied upon.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
current qemu_dimm_list can return any kind of memory device.
Make it more generic, with an optional device type parameter.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
in some NVIDIA GRID drivers (e.g. 14.4 and 15.x), the kernel module
tries to clean up the mdev device when the VM is shut down, and if it
cannot do that (e.g. because we already cleaned it up), the removal
process aborts with an error such that the vGPU still exists inside
the driver's book-keeping, but can't be used/recreated/freed until a
reboot.
Since there seems to be no obvious way to detect if that's the case
besides either parsing dmesg (which is racy) or the NVIDIA kernel
module version (which I'd rather not do), we simply test the PCI
device vendor for NVIDIA and add a 10s sleep. That should give the
driver enough time to clean up, so we will not find the path anymore
and skip the cleanup.
This way, it works with both the newer and older versions of the driver
(some of the older drivers are LTS releases, so they're still
supported).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of using the mdev uuid. The nvidia driver does not actually care
that it's the same as the mdev, and in qemu the uuid parameter
overwrites the smbios1 uuid internally, so we should have been reusing
that in the first place.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Respecting bandwidth limit for offline clone was implemented by commit
56d16f16 ("fix #4249: make image clone or conversion respect bandwidth
limit"). It's still not respected for EFI disks, but those are small,
so just ignore it.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Previously, cloning a stopped VM didn't respect bwlimit. Passing the -r
(ratelimit) parameter to qemu-img convert fixes this issue.
Signed-off-by: Leo Nunner <l.nunner@proxmox.com>
[ T: reword subject line slightly ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Avoid pretending that an MTU change on an existing network device
gets applied live to a running VM.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[ T: reworded and expanded commit message slightly ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Required because there's one single efi for ARM, and the 2m/4m
difference doesn't seem to apply.
Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
[ T: move description to format and reword subject ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
AFAICT, previously, errors from swtpm would not show up in any logs,
because they were just printed to the stderr of the daemonized
invocation here.
The 'truncate' option is not used, so that the log is not immediately
lost when a new instance is started. This increases the chance that
the relevant errors are still present when requesting the log from a
user.
Log level 1 contains the most relevant errors and seems to be quiet
for working-as-expected invocations. Log level 2 already includes
logging full TPM commands, some of which are 1024 bytes long. Thus,
log level 1 was chosen.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The guest will be running, so it's misleading to fail the start task
here. Also ensures that we clean up the hibernation state upon resume
even if there is an error here, which did not happen previously[0].
[0]: https://forum.proxmox.com/threads/123159/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The target of the drive-mirror operation is opened with (essentially)
the same flags as the source in QEMU, in particular whether io_uring
should be used is inherited.
But io_uring currently causes problems in combination with certain
storage types, sometimes even leading to crashes (LVM with Linux 6.1).
Just disallow live cloning of drives when the source uses io_uring and
the target storage is not ready for it. There is one exception, namely
when source and target storage are the same. In that case, just assume
it will keep working for the target.
Migration does not seem to be affected, because there, the target VM
opens the images with the checked aio setting and then NBD exports of
those are used as the targets for mirroring.
It can be that the default determined for the source is not what's
actually used, because after a drive-mirror to a storage with a
different default, it will still use the default from the old storage.
Unfortunately, aio doesn't seem to be part of the 'query-block' QMP
command's result, so just tolerate this edge case.
The check can be removed if either
1. drive-mirror learns to open the target with a different aio setting
or more ideally
2. there are no more bad storages for io_uring.
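Roughly, the check boils down to the following sketch (helper and
variable names are assumptions here, not the exact clone code):

    # $src_aio: the aio setting of the source after resolving the storage default
    sub assert_io_uring_clone_ok {
        my ($src_aio, $src_storeid, $dst_storeid, $dst_supports_io_uring) = @_;
        return if $src_storeid eq $dst_storeid; # same storage: assume it keeps working
        return if ($src_aio // '') ne 'io_uring'; # target only inherits io_uring from such a source
        die "drive-mirror to storage '$dst_storeid' not allowed: not ready for io_uring\n"
            if !$dst_supports_io_uring;
    }
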
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Previously, changing aio would be applied to the configuration, but
the drive would still be using the old setting.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Since commit 9b6efe43 ("migrate: add live-migration of replicated
disks") live-migration with replicated volumes is possible. When
handling the replication, it is checked that all local volumes
previously detected as replicatable are actually replicated. So the
check if migration with snapshots is possible can just allow volumes
that are detected as replicatable.
Note that VM state files are also replicated.
If there is an invalid configuration with a non-replicatable volume or
state file and replication is enabled, then replication will fail, and
thus migration will fail early.
Trying to live-migrate to a non-replication target (needs --force)
will still fail if there are snapshots, because they are (correctly)
detected as non-replicated.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In the web UI, this was fixed years ago by pve-manager commit c11c4a40
("fix #1631: change units to binary prefix").
Quickly checked with the 'query-memory-size-summary' QMP command that
this is actually the case.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The error messages for missing OVMF_CODE and OVMF_VARS files were
inconsistent; additionally, the error for the missing base var file
did not tell you the expected path.
Signed-off-by: Noel Ullreich <n.ullreich@proxmox.com>
The 'nbd-server-add' QMP command has been deprecated since QEMU 5.2 in
favor of a more general 'block-export-add'.
When using 'nbd-server-add', QEMU internally converts the parameters
and calls blk_exp_add() which is also used by 'block-export-add'. It
does one more thing, namely calling nbd_export_set_on_eject_blk() to
auto-remove the export from the server when the backing drive goes
away. But that behavior is not needed in our case; stopping the NBD
server removes the exports anyway.
It was checked with a debugger that the parameters to blk_exp_add()
are still the same after this change. Well, the block node names are
autogenerated and not consistent across invocations.
The alternative to using 'query-block' would be specifying a
predictable 'node-name' for our '-drive' commandline. It's not that
difficult for this use case, but in general one needs to be careful
(e.g. it can't be specified for an empty CD drive, but would need to
be set when inserting a CD later). Querying the actual 'node-name'
seemed a bit more future-proof.
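For illustration, a minimal sketch of the resulting QMP usage (the
export id/name are illustrative, mon_cmd as provided by
PVE::QemuServer::Monitor):

    use JSON;
    use PVE::QemuServer::Monitor qw(mon_cmd);

    my ($vmid, $deviceid) = (100, 'scsi0'); # illustrative
    # look up the auto-generated block node name of the drive ...
    my $blockinfo = mon_cmd($vmid, 'query-block');
    my ($entry) = grep { $_->{device} eq "drive-$deviceid" } @$blockinfo;
    die "drive-$deviceid not found\n" if !$entry;
    my $node_name = $entry->{inserted}->{'node-name'};
    # ... and export it via the generic block-export-add
    mon_cmd($vmid, 'block-export-add',
        id => "drive-$deviceid",
        type => 'nbd',
        'node-name' => $node_name,
        name => "drive-$deviceid",
        writable => JSON::true,
    );
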
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
when a vm is configured to use a physical cd-rom drive but there is
no such drive, a cryptic "uninitialized value" error is thrown. this
is due to `$path` being undefined in `sub print_drive_commandline_full`.
warn that no cd-rom drive is available instead.
note that the error was cosmetic, as the vm would start just fine.
forum thread: https://forum.proxmox.com/threads/119592/
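a minimal sketch of the idea (placement and helper usage assumed):

    use PVE::QemuServer::Drive qw(drive_is_cdrom);

    # warn instead of producing a cryptic error when the path of a
    # passed-through cd-rom drive could not be resolved
    sub check_cdrom_path {
        my ($drive, $path) = @_;
        if (drive_is_cdrom($drive) && !defined($path)) {
            warn "no physical CD-ROM drive available, ignoring it\n";
            return 0; # caller can simply skip building the -drive argument
        }
        return 1;
    }
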
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
if it is present. This should give more information for some failures,
and even if it's not present, that fact helps to narrow things down.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Since we can now differentiate between 'suspended' and 'suspending',
it is possible to ignore the 'suspended' lock when destroying a VM.
It shouldn't matter whether the VM is locked because of hibernation
when you want to remove it. Therefore we can safely ignore the lock.
When $d->{'pci_bridge'}->{devices} is undef, @-dereferencing it will
die with:
> Can't use an undefined value as an ARRAY reference
This can happen (at least) when the VM is in 'prelaunch' state. The
QAPI definition for '@PciBridgeInfo' also declares the 'devices'
member as optional.
Before commit 721624b ("collect device list for nested pci-bridges"),
there was no issue, because $d->{'pci_bridge'}->{devices} was used in
foreach, so auto-vivified if undef.
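The fix boils down to defaulting to an empty list before
dereferencing, roughly:

    # $d: one device entry of the 'query-pci' QMP result ('@PciDeviceInfo')
    my $d = {}; # illustrative
    my $bridge_devices = $d->{'pci_bridge'}->{devices} // []; # 'devices' is optional
    for my $dev (@$bridge_devices) {
        # recurse into nested bridges as before
    }
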
Fixes: f721624b ("collect device list for nested pci-bridges")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
When rolling back to the snapshot of a VM that includes RAM, the VM
gets started by the rollback task anyway, so no additional start task
is needed. Previously, when rolling back with the start parameter and
the VM snapshot included RAM, a start task was created. That task
failed because the VM had already been started by the rollback task.
Additionally, this behaviour is now documented in the description of
the start parameter.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
We can now do a few things that were not really possible, or would at
least have messed with readability, when this was still mostly inline
in config2command; this shaves off quite a few lines of code.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
in preparation for reworking the new separate method for OVMF cmd
assembly, do this in a separate, very targeted commit to make it
clearer that the next reworking commit doesn't mess with our tests at
all.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the fix for the recently introduced requirement of loading the VM config while
migrating was incomplete, since the vmlist node value could already be out of
date by the time load_config is called.
extend the fallback behaviour even further, by doing the following sequence:
- try regular load_config (likely case, rename already fully processed)
- if it fails, get node from vmlist, and load_config using that
- if that fails, invalidate the PVE::Cluster cache and retry the regular load_config
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
was only explained in git history and vm_stop, add comments in other
relevant places to avoid future breakage.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
it's not deterministic whether the rename/move of the VM config
triggered on the source side of a migration is already visible on the
target side when vm_resume is executed. check the vmlist for the node
where the config is currently located if $nocheck is set - the config
is now needed there to add the forwarding DB entries to the bridge.
this fixes an issue on busier or slower clusters, where pmxcfs hasn't
yet processed the rename, and resuming would fail with an error about
the config not existing.
Reported-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
When applying the series introducing those calls, the helper was moved
to pve-common's CGroup.pm (see 07c10d5 ("cgroup: move get_cpuunits
helper from qemu-server as clamp_cpu_shares") in pve-common) instead
of pve-guest-common's GuestHelpers.pm. But these calls were not
updated.
Reported in the community forum:
https://forum.proxmox.com/threads/118267
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
we need to handle OVS setups differently, so for now just ignore it
there (behavior as it was in 7.2)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Mira found out that with 41 phys-bits the limit is pretty much the
same as with 40. As such odd sizes are a bit unexpected anyway, let's
mask the LSB and use that as the base; that way we're good again.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
QEMU 7.1 introduced some actual checks for the max memory value in
1caab5cf86bd ("i386/pc: bounds check phys-bits against max used GPA")
and, while correct, it breaks our by-luck-working hard-coded max mem of
4 TB for cases with smaller phys-bit address sizes, like some older
CPUs have, or like most CPU types get per default if not 'host' or
'max'.
QEMU uses 40 bits per default if the CPU isn't set to host or
phys-bits is not set explicitly.
For 40 bits it seems that, depending on the machine type, one has a
max possible mem of: i440 -> 752 GiB, q35 -> 722 GiB. But instead of
reducing it to 704 GiB (512+128+64) in a hard-coded way, we actually
check for the bit size that will probably be used and use that to
determine the maximum usable memory size.
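A rough sketch of the idea only (the real check additionally has to
leave room for device/MMIO regions, which is why the usable maximum
ends up below this raw ceiling):

    # derive the addressable ceiling from the physical address bits,
    # masking the LSB so odd sizes like 41 behave like 40 (see the fixup above)
    sub addressable_bytes {
        my ($phys_bits) = @_;
        $phys_bits &= ~1;
        return 1 << $phys_bits; # e.g. 40 bits -> 1 TiB of addressable space
    }
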
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
which wraps the remote_migrate_vm API endpoint, but does the
precondition checks that can be done up front itself.
this now just leaves the fingerprint retrieval and target node name
lookup to the sync part of the API endpoint, which should be doable in
<30s.
an example invocation:
$ qm remote-migrate 1234 4321 'host=123.123.123.123,apitoken=PVEAPIToken=user@pve!incoming=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee,fingerprint=aa:bb:cc:dd:ee:ff:aa:bb:cc:dd:ee:ff:aa:bb:cc:dd:ee:ff:aa:bb:cc:dd:ee:ff:aa:bb:cc:dd:ee:ff:aa:bb' --target-bridge vmbr0 --target-storage zfs-a:rbd-b,nfs-c:dir-d,zfs-e --online
will migrate the local VM 1234 to the host 123.123.123.123 using the
given API token, mapping the VMID to 4321 on the target cluster, all its
virtual NICs to the target vm bridge 'vmbr0', any volumes on storage
zfs-a to storage rbd-b, any on storage nfs-c to storage dir-d, and any
other volumes to storage zfs-e. the source VM will be stopped but remain
on the source node/cluster after the migration has finished.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
entry point for the remote migration on the source side, mainly
preparing the API client that gets passed to the actual migration code
and doing some parameter parsing.
querying of the remote side's resources (like available storages, free
VMIDs, lookup of endpoint details for specific nodes, ...) should be
done by the client - see next commit with CLI example.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
remote migration uses a websocket connection to a task worker running on
the target node instead of commands via SSH to control the migration.
this websocket tunnel is started earlier than the SSH tunnel, and allows
adding UNIX-socket forwarding over additional websocket connections
on-demand.
the main differences to regular intra-cluster migration are:
- source VM config and disks are only removed upon request via --delete
- shared storages are treated like local storages, since we can't
assume they are shared across clusters (with potential to extend this
by marking storages as shared)
- NBD migrated disks are explicitly pre-allocated on the target node via
tunnel command before starting the target VM instance
- in addition to storages, network bridges and the VMID itself are
mapped via a user-defined mapping
- all commands and migration data streams are sent via a WS tunnel proxy
- pending changes and snapshots are discarded on the target side (for
the time being)
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
no semantic changes intended, except for:
- no longer passing the main migration UNIX socket to SSH twice for
forwarding
- dropping the 'unix:' prefix in start_remote_tunnel's timeout error message
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the following two endpoints are used for migration on the remote side
POST /nodes/NODE/qemu/VMID/mtunnel
which creates and locks an empty VM config, and spawns the main qmtunnel
worker which binds to a VM-specific UNIX socket.
this worker handles JSON-encoded migration commands coming in via this
UNIX socket:
- config (set target VM config)
-- checks permissions for updating config
-- strips pending changes and snapshots
-- sets (optional) firewall config
- disk (allocate disk for NBD migration)
-- checks permission for target storage
-- returns drive string for allocated volume
- disk-import, query-disk-import, bwlimit
-- handled by PVE::StorageTunnel
- start (returning migration info)
- fstrim (via agent)
- ticket (creates a ticket for a WS connection to a specific socket)
- resume
- stop
- nbdstop
- unlock
- quit (+ cleanup)
this worker serves as a replacement for both 'qm mtunnel' and various
manual calls via SSH. the API call will return a ticket valid for
connecting to the worker's UNIX socket via a websocket connection.
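Purely as an illustration of the flow, a sketch of what a client
exchange over that UNIX socket could look like (the socket path and
field names are assumptions, not the exact wire format):

    use IO::Socket::UNIX;
    use JSON;

    my $vmid = 4321; # illustrative target VMID
    my $sock = IO::Socket::UNIX->new(Peer => "/run/qemu-server/$vmid.mtunnel")
        or die "unable to connect to tunnel socket: $!\n";
    # send one JSON-encoded command per line and read back a JSON reply
    print $sock to_json({ cmd => 'bwlimit', storage => 'local-lvm' }), "\n";
    my $reply = from_json(scalar <$sock>);
    die "tunnel command failed\n" if !$reply->{success};
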
GET+WebSocket upgrade /nodes/NODE/qemu/VMID/mtunnelwebsocket
gets called for connecting to a UNIX socket via websocket forwarding,
i.e. once for the main command mtunnel, and once each for the memory
migration and each NBD drive-mirror/storage migration.
access is guarded by a short-lived ticket binding the authenticated user
to the socket path. such tickets can be requested over the main mtunnel,
which keeps track of socket paths currently used by that
mtunnel/migration instance.
each command handler should check privileges for the requested action if
necessary.
both mtunnel and mtunnelwebsocket endpoints are not proxied; the
client/caller is responsible for ensuring that the passed 'node'
parameter and the endpoint handling the call match.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
in case of remote migration, we use the `update_vm_api` helper for
checking permissions on the incoming config. this would also cause an
incoming cloud-init image to be overwritten, since the VM is not running
yet at this point.
provide a parameter which can be set by an incoming *remote* migration
to avoid having inconsistent cloud init images on the source and target
side.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
cloudinit generation needs to see the cloudinit drive so we
need to pass a config with it already updated
Fixes: 4b785da1a9 ("delay cloudinit generation in hotplug")
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
from GuestHelpers. This function checks all necessary permissions and
raises an exception if the user does not have the correct ones.
This is necessary for the new 'privileged' tags and 'user-tag-access'
permissions to work.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The permission checks for editing Cloud-init drives were inconsistent:
adding a drive required the VM.Config.Disk permission, while removing
one required the VM.Config.CDROM permission. The regex
in drive_is_cloudinit needed to be adapted since the drive names have
different formats before/after they are actually generated.
Due to the regex letting names fall through before, Cloud-init drives
were being checked as disks, even though they are actually treated as
CDROM drives. Due to this, it makes more sense to check for
VM.Config.CDROM instead, while also requiring VM.Config.Cloudinit, since
generating a Cloud-init drive already generates default values that are
passed to the VM.
Signed-off-by: Leo Nunner <l.nunner@proxmox.com>
- The forced-remove flag wasn't really used AFAICT and makes
no sense IMO.
- Whether or not we care about non-MAC changes does not
belong here, but should instead be taken into account in the
actual hotplug path recording the cloud-init state (in other
words, into $cloudinit_record_changed()).
(This is not done here atm.)
- It seems much simpler to just have:
* 'old' = the old value if it's not a new value
* 'new' = the new value unless it's being deleted
* If only one of them is set it's an addition or removal.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
It performs schema validation (and normalization).
We only ever write values into it which came from an
already validated config, and we also add an additional
"added" key which is not covered by the schema, so this
would fail.
Simply skip it.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
introducing an 'added' value in the cloudinit section for
values which were not present when the cloudinit image
was generated
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Hotplugging generated a cloudinit image based on old values
in order to attach the device and later update it again, but
the update was only done if cloudinit hotplug was enabled.
This is weird, let's not.
Also introduce 'apply_cloudinit_config', which also writes the
config and which, as it turns out, is currently the only thing
we actually need anyway.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This partially reverts commit 95a5135dad.
Particularly the unprotected write to the config when
generating the cloudinit file. We leave the rest as is for
now and update the callers to deal with the config later.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Changing the read-only status of a disk is not possible through QMP, so
it needs to be exempt from the hotpluggable values so as to notify
the user.
Signed-off-by: Leo Nunner <l.nunner@proxmox.com>
The former to ensure that the manager which depends on the newer
qemu-server is actually installed, and the latter to avoid false
positives.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
max supported queues tx + rx = 256, so 128 for combined
https://lists.gnu.org/archive/html/qemu-devel/2015-03/msg03917.html
But from the above link it also seems that x86 only supports 80 pairs
in practice, so for now "only" quadruple the limit to 64 and see if we
get user feedback requesting more.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[ T: reduce from 128 to 64 and add short rationale for that ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
there's no guarantee that we're locked here and it also produces
unnecessary extra IO in most cases.
While at it, also avoid that a special:cloudinit section is added on
start to *every* VM, which caused another bug to trigger (see prev.
commit) and is just odd for users that aren't using cloudinit.
Note in two call sites that we may indeed need to write the config out
there on the caller side.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we now always write out a new cloudinit special section on start (to
be fixed), independent of whether any cloudinit drive/config is
configured or not, and thus always run into that section after a VM
was started with the new qemu-server installed, which in turn always
set the description to undef.
Fixes: 95a5135 ("cloudinit: add cloudinit section for current generated config.")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
that function also caches the value, and it recently was changed to
be importable, so we can just import and drop this once a new enough
pve-common is available.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This reduces packet drop at high pps, and is also needed for DPDK.
Red Hat has already used it by default in RHEV and its OpenStack
platform since 2019.
I have been using it in production for 6 months and have not seen any
performance regression.
Fixes (the reports ask for a custom option, but setting it by default
seems fine to me):
https://bugzilla.proxmox.com/show_bug.cgi?id=1546
https://bugzilla.proxmox.com/show_bug.cgi?id=2349
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
at the end of a live migration, we need to remove the old MAC entries
on the source host (the VM is not yet stopped there) before resuming
the VM on the target host
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[T: resolve conflicts and rework on apply ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In theory we can have a config with netX records that do not specify
a `macaddr` property; we just auto-generate one in config2cmd for
startup transitively, but don't save it explicitly back to the config.
So while we could parse /proc/$pid/cmdline or try to get the info from
QMP (not fully straightforward), it seems rather a hassle, especially
considering that this cannot happen via the API FWICT, as there a
"deletion" *saves* a newly auto-generated value out to the config, and
the same holds for cloning a VM and restoring a backup.
So, in basically all reasonable cases we have the `macaddr` available.
But if we don't, it makes no sense to add an FDB entry for a MAC
*newly* generated by the parse_net call, as the VM won't use it (well,
at least if one doesn't get "lucky" and it randomly re-generates the
same one as on startup). So allow telling parse_net to skip
auto-generating MACs and use that in the add-fdb-entries helper.
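A sketch of how a caller could use this (the flag position and the
FDB helper name in pve-common are assumptions here):

    use PVE::Network;
    use PVE::QemuServer;

    my ($conf, $vmid, $i) = ({ net0 => 'virtio,bridge=vmbr0' }, 100, 0); # illustrative
    # parse without auto-generating a missing MAC, and only add an FDB
    # entry if one is actually configured
    my $net = PVE::QemuServer::parse_net($conf->{"net$i"}, 1); # 1 = skip MAC auto-generation
    if ($net && $net->{macaddr}) {
        PVE::Network::add_bridge_fdb("tap${vmid}i$i", $net->{macaddr});
    }
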
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
On plain VM start (no live migration), we can simply add the MAC
address into the FDB. In case of a live migration, we add the MAC
address
just before the resume.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
If the config doesn't contain the cloud-init disk anymore after the
rollback, we have to clean it up since otherwise no further disk can be
attached unless the one still existing on the storage is deleted.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Reviewed-by: Stefan Hanreich <s.hanreich@proxmox.com>
Tested-by: Stefan Hanreich <s.hanreich@proxmox.com>
same as with the extended support for more usb devices, allow
hotplugging for guests that can use the qemu-xhci controller, which
requires a machine type >= 7.1 and an ostype of l26 or windows > 7
if no usb device was passed through on startup, dynamically add
the xhci controller (and remove it again when the last usb device is
unplugged)
so that live migration is still possible
much of the usb hotplug code was already there, but it still needed
a few adaptions, for example we have to add a chardev when adding
a spice redir port (that gets automatically removed when the
usb-redir device gets removed)
since the spice devices use the id 'usbredirdevX' instead of 'usbX', we
have to manually map between the two
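As a rough illustration of the dynamic controller handling (ids and
parameters here are assumptions, not the exact hotplug code):

    use PVE::QemuServer::Monitor qw(mon_cmd);

    my $vmid = 100; # illustrative
    # add the controller on demand before the first USB device ...
    mon_cmd($vmid, 'device_add', driver => 'qemu-xhci', id => 'xhci');
    # ... then plug a host USB device onto its root hub bus
    mon_cmd($vmid, 'device_add',
        driver => 'usb-host', id => 'usb0', bus => 'xhci.0', hostbus => 3, hostport => '1');
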
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
for machine versions >= 7.1 and ostype linux or windows > 7, we use the
qemu-xhci controller where we have up to 14 usable ports, so make them
available to the user
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>