Commit Graph

2570 Commits

Author SHA1 Message Date
Thomas Lamprecht
20fc9811ec style fix: improve device-type variable name
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-04-10 13:56:50 +02:00
Hannes Duerr
6906c2ab33 drive: style fix the name of the get_scsi_device_type method
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-04-10 13:56:50 +02:00
Hannes Duerr
7e2065956f fix #5363: cloudinit: make creation of scsi cloudinit disks possible again
Upon obtaining the device type, a check is performed to determine if it
is a CD drive. It is important to note that Cloudinit drives are always
assigned as CD drives, but if the drive has not yet been allocated, the
check fails because the cd attribute is not set yet.
To avoid this, an explicit check is now performed to determine whether
it is a Cloudinit drive that has not yet been allocated.
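
A minimal sketch of the idea (field names and the cloudinit pattern are
illustrative; the real helpers in QemuServer/Drive.pm differ):

    use strict;
    use warnings;

    # A cloudinit drive is always a CD drive, but before allocation its
    # 'media' attribute is not set yet, so test for cloudinit explicitly.
    sub is_cd_or_cloudinit {
        my ($drive) = @_;
        return 1 if $drive->{media} && $drive->{media} eq 'cdrom';
        return 1 if $drive->{file} && $drive->{file} =~ m/cloudinit/;
        return 0;
    }

    # not yet allocated: no 'media' set, but clearly a cloudinit drive
    print is_cd_or_cloudinit({ file => 'local-lvm:cloudinit' }), "\n"; # 1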

Fixes: d1feab4 ("fix #4957: add vendor and product information passthrough for SCSI-Disks")
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
2024-04-10 13:56:41 +02:00
Dominik Csapak
f0923f49e9 usb: fix undef error on string match
'$entry->{host}' can be empty, so we have to check for that before
doing a regex match; otherwise, we get ugly errors in the log
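
A minimal sketch of the guard (the actual pattern matched differs):

    use strict;
    use warnings;

    my $entry = { host => undef }; # 'host' may be empty/undef

    # guard against undef before matching, otherwise Perl logs
    # "Use of uninitialized value in pattern match (m//)"
    if (defined($entry->{host}) && $entry->{host} =~ m/^(0x)?\w{4}:(0x)?\w{4}$/) {
        print "looks like a vendor:device ID\n";
    }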

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-03-22 14:17:53 +01:00
Filip Schauer
caa88bc80a cpu config: die on hotplug of non x86_64 CPUs
When attempting a CPU hotplug on an architecture other than x86_64, die
with a clean error instead of attempting a hotplug with a known
non-working device command line. Also move the corresponding FIXME up to
the error.

Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
2024-03-14 14:01:21 +01:00
Fiona Ebner
f6039cedf1 disk import: warn when fallback is used instead of requested format
Might avoid some confusion. Reported in the community forum:
https://forum.proxmox.com/threads/142988/

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-03-14 14:01:12 +01:00
Wolfgang Bumiller
ddca7afe61 import: remove useless typoed error message
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-03-14 13:37:16 +01:00
Wolfgang Bumiller
81b984433b also support live-import with absolute paths
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-03-14 13:37:16 +01:00
Thomas Lamprecht
e8710a9ae7 qm: add VM import command
Add a command that can be used together with volumes from the new
'import' content type of storage plugins.

For now only the new ESXi storage plugin exposes that content type, but
in the long run it's planned to migrate over the existing OVF/OVA infra
and extend it so that it will replace the 'ovfimport' command.

Originally-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
 [ TL: split out to separate commit and add message, fix completing
   VMID to propose unused ones, note explicitly when in dry-run mode ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-03-14 13:35:11 +01:00
Wolfgang Bumiller
eb06e48657 support live-import for 'import-from' disk options on create
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-03-13 16:29:58 +01:00
Wolfgang Bumiller
5b8d01f575 generalize live restore code
instead of a "pbs-backing" parameter we now have a
"live-restore-backing" parameter containing the `-blockdev` arg and
its name, which also means we print the blockdev earlier

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-03-11 10:41:42 +01:00
Thomas Lamprecht
4f2404057e config: update network: code-style & readability improvements
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-03-10 18:29:53 +01:00
Thomas Lamprecht
7c0f763d0c config: apply pending: code-style & readability improvements
among other things, avoid one indentation level by returning early
from the eval.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-03-10 18:29:26 +01:00
Thomas Lamprecht
3fde43d2ec config: pending network: avoid undef-warning on old/new comparison
A network device of a VM does not necessarily have to be connected to
an actual bridge, so when a new pending value is set we need to use
the undef-safe compare helpers when checking if there was a change
between old and new value, as otherwise one gets ugly "use of
uninitialized value in string ne" warnings.
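
A self-contained sketch of such an undef-safe compare helper (the real
code uses shared helpers; this stands in for them):

    use strict;
    use warnings;

    # returns 1 if the values differ, treating undef as "unset" instead
    # of letting `ne` warn about uninitialized values
    sub safe_string_ne {
        my ($x, $y) = @_;
        return 0 if !defined($x) && !defined($y);
        return 1 if !defined($x) || !defined($y);
        return $x ne $y ? 1 : 0;
    }

    my $old = undef;    # device not connected to any bridge
    my $new = 'vmbr0';  # new pending value
    print "bridge changed\n" if safe_string_ne($old, $new);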

Link: https://forum.proxmox.com/threads/143072/
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-03-10 18:27:22 +01:00
Wolfgang Bumiller
9a1b5d0e71 add missing import
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2024-03-10 16:00:43 +01:00
Thomas Lamprecht
5ebb6018ed cpu config: implement is_native_arch locally for now
could be a better fit in PVE::Tools, as proposed by Filip, but OTOH,
Tools is already crowded as is, so wait until we need it in more places
outside of qemu-server.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-03-08 15:16:40 +01:00
Fiona Ebner
c7c2e4dbd1 QMP client: sort commands with 10 minutes timeout alphabetically
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-03-08 14:37:29 +01:00
Fiona Ebner
bb600b7bf2 QMP client: add missing use statement for UNIX Sockets module
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-03-08 14:37:21 +01:00
Fiona Ebner
0d00383d0e QMP client: remove unnecessary question mark from comment
There might've been a question back when it got first added in commit
9d689077 ("use long timeouts for snapshot monitor command"). But
nowadays, the value is well-established. Changing it would affect
quite a few operations, so that should not be done without good
reason and is likely better done for the specific operation.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-03-08 14:35:16 +01:00
Fiona Ebner
b0d1a00a24 QMP client: increase default timeout for drive-mirror to 10 minutes
like for other block operations.

Reported in the community forum:
https://forum.proxmox.com/threads/141238/

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-03-08 14:35:16 +01:00
Filip Schauer
d1a7abd07c cpu config: Unify the default value for 'kvm'
Make the default value for 'kvm' consistent, taking into account
whether the VM will run on the same CPU architecture as the host.

This would be a breaking change to CPU hotplug for VMs with a
different CPU architecture running on an x86_64 host, as in this case
the default CPU type for CPU hotplug changes from 'kvm64' to 'qemu64'.
However, CPU hotplug of non x86_64 architectures is not supported
anyway, so this is not a breaking change after all.

It should be noted that this change does alter the CPU hotplug
behaviour when emulating an x86_64 CPU on a non-x86_64 host. This is
however not officially supported in Proxmox VE.
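
A minimal sketch of the unified default (illustrative; the real helper
consults the host architecture via uname):

    use strict;
    use warnings;
    use POSIX qw(uname);

    # does the host architecture match the VM's requested architecture?
    sub is_native_arch {
        my ($arch) = @_;
        my (undef, undef, undef, undef, $machine) = uname();
        return $machine eq $arch;
    }

    # unified default: enable KVM exactly when the VM runs the host's
    # CPU architecture, regardless of the configured CPU type
    sub kvm_default {
        my ($configured_kvm, $vm_arch) = @_;
        return $configured_kvm // (is_native_arch($vm_arch) ? 1 : 0);
    }

    print kvm_default(undef, 'x86_64'), "\n"; # 1 on an x86_64 host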

Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
2024-03-08 14:24:37 +01:00
Filip Schauer
bb42334981 Move is_native from PVE::QemuServer to PVE::Tools
Move is_native from PVE::QemuServer to PVE::Tools and rename it to
is_native_arch to be more descriptive.

Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
2024-03-08 14:24:32 +01:00
Filip Schauer
89d5b1c90b prevent starting a 32-bit VM using a 64-bit OVMF BIOS
Instead of starting a VM with a 32-bit CPU type and a 64-bit OVMF image,
throw an error before starting the VM telling the user that OVMF is not
supported on 32-bit CPU types.

To obtain a list of 32-bit CPU types, refer to the builtin_x86_defs in
target/i386/cpu.c of QEMU. Exclude any entries that have the long mode
feature (CPUID_EXT2_LM).

Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
2024-03-08 14:24:32 +01:00
Filip Schauer
5416ff700f cpu config: add helper to get the default CPU type
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
2024-03-08 14:24:32 +01:00
Dominik Csapak
a672c578e0 mediated device pass-through: fix race condition on VM reboot
When rebooting a VM from PVE (via CLI/API), the reboot code is called
under a guest lock, which creates a reboot request, shuts down the VM
and then calls the regular cleanup code, which includes the mdev
cleanup.

In parallel, qmeventd observes that the VM process has gone and starts
'qm cleanup', which (among other tasks) also starts the VM again if a
reboot from the PVE side is pending.
The qmeventd synchronizes this through a lock on the guest, with a
default timeout of 10 seconds.

Since we currently also always wait 10 seconds for the NVIDIA driver
to clean up the mdev, this creates a race condition for the cleanup
lock. IOW, when the call to `qm cleanup` starts before we begin the
10-second sleep, it will not be able to acquire its lock and will not
start the VM again.

To avoid the race condition in practice, do two things:
* increase the timeout in `qm cleanup` to 60 seconds.
  Technically this still might run into a timeout, as we can configure
  up to 16 mediated devices with each delaying 10 seconds in the worst
  case, but realistically most users won't configure more than two or
  three of them, if even that.

* change the hard-coded `sleep 10` to a loop that sleeps for 1 second
  before checking the state again (see the sketch below). This
  shortens the wait when the NVIDIA driver did not require the full
  10s to finish the clean-up.
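
A minimal sketch of the polling loop (the state check is a stand-in;
the real code inspects whether the driver released the mediated
device):

    use strict;
    use warnings;

    # hypothetical check standing in for the real mdev state inspection
    sub mdev_cleanup_done { return -e '/tmp/mdev-released'; }

    # poll once per second instead of a fixed `sleep 10`, returning as
    # soon as the driver is done
    my $done = 0;
    for my $tries (1..10) {
        if (mdev_cleanup_done()) {
            print "mdev cleanup finished after ${tries}s\n";
            $done = 1;
            last;
        }
        sleep 1;
    }
    warn "mdev cleanup still pending after 10s\n" if !$done;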

Further, add a bit of logging, so one can properly see in the task log
what is happening at which point in time.

Fixes: 49c51a60 (pci: workaround nvidia driver issue on mdev cleanup)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Mira Limbeck <m.limbeck@proxmox.com>
 [ TL: change warn to print, reword commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-03-08 14:15:38 +01:00
Thomas Lamprecht
04736ecbd5 api: clone vm: comment and style clean-up deactivation error-handling
Make the post-if check for the target not already running more
prominent by using a full if block.

Also comment on why we ignore the error here. While the commit
changing that explained it well, this is one of the things that might
be better off with an in-code comment (as the deactivation is
described as important here, one might wonder why the code continues
if it fails).

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-03-08 13:51:30 +01:00
Hannes Duerr
9d6126e8db fix #1734: clone VM: if deactivation fails demote error to warning
When a template with disks on LVM is cloned to another node, the
volumes are first activated, then cloned and deactivated again after
cloning.

However, if clones of this template are now created in parallel to
other nodes, it can happen that one of the tasks can no longer
deactivate the logical volume because it is still in use.  The reason
for this is that we use a shared lock.
Since the failed deactivation does not necessarily have consequences,
we downgrade the error to a warning, which means that the clone tasks
will continue to be completed successfully.

Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
2024-03-08 13:40:42 +01:00
Fiona Ebner
40786ff967 api: fix using import-from with SCSI disks
by fixing the SCSI feature compatibility check helper. The helper is
also called for disks using import-from, so it has to use the extended
schema when parsing the drive.

Fixes: d1feab4a ("fix #4957: add vendor and product information passthrough for SCSI-Disks")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-01-31 12:30:37 +01:00
Fabian Grünbichler
9946d6fa57 fix #4085: properly activate cicustom storage(s)
PVE::Storage::path() neither activates the storage of the passed-in volume, nor
does it ensure that the returned value is actually a file or block device, so
this actually fixes two issues. PVE::Storage::abs_filesystem_path() actually
takes care of both, while still calling path() under the hood (since $volid
here is always a proper volid, unless we change the cicustom schema at some
point in the future).

Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-01-31 12:28:46 +01:00
Hannes Duerr
629923d025 migration: secure and use source volume names for deactivation
During migration, the volume names may change if the name is already in
use at the target location. We therefore want to save the original names
so that we can deactivate the original volumes afterwards.

Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
2024-01-30 10:41:45 +01:00
Fiona Ebner
613fed51fa drive: product/vendor: add comment with rationale for limits
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-01-26 11:28:32 +01:00
Hannes Duerr
d1feab4aa2 fix #4957: add vendor and product information passthrough for SCSI-Disks
adds vendor and product information for SCSI devices to the json schema
and checks in the VM create/update API call if it is possible to add
these to QEMU as a device option

Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
[FE: add missing space to exception message
     use config option for exception e.g. scsi0 rather than 'product'
     style fixes]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-01-26 11:12:58 +01:00
Fabian Grünbichler
c1b2092d5e qemu_volume_snapshot_delete: drop (now) unused parameter
since we always determine the deviceid, passing in a possibly wrong value makes
no sense and could actually re-introduce bugs.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
2024-01-09 10:25:11 +01:00
Fiona Ebner
c60586838a fix #2258: select correct device when removing drive snapshot via QEMU
The QMP command needs to be issued for the device where the disk is
currently attached, not for the device where the disk was attached at
the time the snapshot was taken.

Fixes the following scenario with a disk image for which
do_snapshots_with_qemu() is true (i.e. qcow2 or RBD+krbd=0):
1. Take snapshot while disk image is attached to a given bus+ID.
2. Detach disk image.
3. Attach disk image to a different bus+ID.
4. Remove snapshot.

Previously, this would result in an error like:
> blockdev-snapshot-delete-internal-sync' failed - Cannot find device=drive-scsi1 nor node_name=drive-scsi1

While the $running parameter for volume_snapshot_delete() is planned
to be removed on the next storage plugin APIAGE reset, it currently
causes an immediate return in Storage/Plugin.pm. So passing a truthy
value would prevent removing a snapshot from an unused qcow2 disk that
was still used at the time the snapshot was taken. Thus, and because
some exotic third party plugin might be using it for whatever reason,
it's necessary to keep passing the same value as before.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-01-09 10:11:17 +01:00
Hannes Duerr
a17c753528 drive: Create get_scsi_devicetype
Encapsulate the functionality for determining the SCSI device type in
a new function, for reuse in QemuServer/Drive.pm

Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
2024-01-05 16:26:25 +01:00
Hannes Duerr
028aee4554 Move NEW_DISK_RE to QemuServer/Drive.pm
Prepare for introduction of new helper

Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
2024-01-05 16:26:25 +01:00
Hannes Duerr
ad464e0b87 Move path_is_scsi to QemuServer/Drive.pm
Prepare for introduction of new helper

Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
2024-01-05 16:26:25 +01:00
Fiona Ebner
498d7da470 fix #4501: TCP migration: start vm: move port reservation and usage closer together
Currently, volume activation, PCI reservation and resetting systemd
scope happen in between, so the 5 second expiretime used for port
reservation is not always enough.

It's possible to defer telling QEMU where it should listen for
migration and do so after it has been started via QMP. Therefore, the
port reservation can be moved very close to the actual usage.

Mentioned here for completeness and can still be done as an additional
change later if desired: next_migrate_port could be modified to
optionally return the open socket and it should be possible to pass
the file descriptor directly to QEMU, but that would require accepting
the connection before on the Perl side (otherwise leads to ENOTCONN
107). While it would avoid any races, it's not the most elegant
and the change at hand should be enough in all practical situations.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Acked-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Tested-by: Hannes Duerr <h.duerr@proxmox.com>
2024-01-03 11:31:53 +01:00
Alexandre Derumier
2f2da05217 cpu config: add QEMU 8.1 cpu models
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[FE: add prefix to commit title, capitalize QEMU]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-12-12 11:27:42 +01:00
Fiona Ebner
2754b7e4d6 schema: mention that migration with VNC clipboard is not yet supported
as this might be surprising to users.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-11-29 10:25:16 +01:00
Markus Frank
bb30eedfc6 migration: do not allow live-migration with clipboard=vnc
It is not yet supported for QEMU's vdagent device which is used for
the VNC clipboard.

The migration precondition API call will now treat the VNC clipboard
as a local resource. Thus the GUI blocks migration and shows:
"Can't migrate VM with local resources: clipboard=vnc"

QemuMigrate's prepare function will also abort live migration early
when using the VNC clipboard.

Signed-off-by: Markus Frank <m.frank@proxmox.com>
[FE: adapt commit message a bit]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-11-29 10:18:43 +01:00
Stefan Lendl
0b034c15f4 sdn: pass vmid and hostname to add_dhcp_mapping
if no DHCP mapping was found in IPAM, it will request a new IP, which
requires these values.

Signed-off-by: Stefan Lendl <s.lendl@proxmox.com>
2023-11-21 20:51:56 +01:00
Wolfgang Bumiller
8497232bf5 fixup an sdn call outside the have_sdn guard
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2023-11-21 13:25:51 +01:00
Alexandre Derumier
3f14f206a0 nic online bridge/vlan change: link disconnect/reconnect
We want to notify the guest of the change, so it can resubmit a DHCP
request, send a gratuitous ARP, etc.

Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
2023-11-21 13:24:50 +01:00
Alexandre Derumier
e6e6c2934d nic hotplug: add_dhcp_mapping
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
2023-11-21 13:24:48 +01:00
Alexandre Derumier
89229af914 vm_destroy: delete ip from ipam
Co-Authored-By: Stefan Hanreich <s.hanreich@proxmox.com>
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
2023-11-21 13:24:46 +01:00
Alexandre Derumier
ee8d2dea5c api2: create|restore|clone: add_free_ip
Co-Authored-by: Stefan Lendl <s.lendl@proxmox.com>
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
2023-11-21 13:24:44 +01:00
Alexandre Derumier
c944fef885 vmnic add|remove : add|del ip in ipam
Co-Authored-by: Stefan Lendl <s.lendl@proxmox.com>
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
2023-11-21 13:24:41 +01:00
Markus Frank
bfd4595553 api: add clipboard variable to return at status/current
This can be used by noVNC to check if a clipboard is available.

Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
2023-11-20 16:20:26 +01:00
Markus Frank
b62997a171 config: enable VNC clipboard parameter in vga_fmt
add an option to use the QEMU vdagent implementation to enable the VNC
clipboard. When enabled together with SPICE, the spice-vdagent gets
replaced with the QEMU implementation.

This patch does not solve #1406, but does allow copy and paste with a
running X-session, when spice-vdagent is installed on the guest.

Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
2023-11-20 16:20:08 +01:00
Fiona Ebner
dec371d96c vm start: add warning about deprecated machine version
While there already is a warning from QEMU proper, that one is not
visible as a task warning and it's not straightforward to make it be
one, because QEMU is started inside a run_fork(). It's also more
future-proof to have the detection explicit on our side and the
documentation can be referenced.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-11-12 18:48:01 +01:00
Fiona Ebner
da84155405 machine: get current: add flag if current machine is deprecated in list context
Will be used for a warning.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-11-12 18:48:01 +01:00
Fiona Ebner
be690b7a91 machine: get current: return early from loop if possible
No point iterating through the rest if we already got the current
machine.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-11-12 18:48:01 +01:00
Fiona Ebner
7d6a629269 machine: get current: make it clear that pve-version only exists for the current machine
by adding a comment and grouping the code better. See the PVE QEMU
patch "PVE: Allow version code in machine type" for reference. The way
the code was written previously made it look like a bug where
$pve_version might be overwritten multiple times.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-11-12 18:48:01 +01:00
Fiona Ebner
081eed3b79 machine: get current: improve naming and style
No functional change intended.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-11-12 18:48:01 +01:00
Thomas Lamprecht
c351659d87 add some comments for legacy 2MB OVMF image builds
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-11-11 15:09:42 +01:00
Alexandre Derumier
c2f4482053 migration: add missing eval on nbdstop with tunnel v2
It was already done in tunnel v1.

This avoids aborting the migration (which would keep both the source
and target VM locked) if an nbdstop error occurs:

2023-09-28 16:20:39 ERROR: error - tunnel command '{"cmd":"nbdstop"}' failed - failed to handle 'nbdstop' command - VM 140 qmp command 'nbd-server-stop' failed - got timeout
2023-09-28 16:20:39 ERROR: migration finished with problems (duration 00:01:42)

Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
2023-11-06 19:46:43 +01:00
Alexandre Derumier
6cb2338f53 nbd-stop: increase timeout to 25s
This can seemingly take a bit longer than expected, and waiting a bit
longer is better than erroring out during migration.

Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
2023-11-06 19:45:55 +01:00
Alexandre Derumier
5bb8815a3f add|del_bridge_fdb: remove unused firewall param 2023-10-25 13:02:45 +02:00
Alexandre Derumier
d797bb62c3 cpu hotplug: cannot change feature online
The vCPUs are passed as devices with a specific ID only when CPU
hot-plug is enabled at cold start.

So, we can't enable/disable allow-hotplug online, as the vCPU hotplug
API will then throw errors about not finding the core ID.

Not enforcing this could also lead to migration failure, as the QEMU
command line for the target VM could be made different than the one it
was actually running with, causing a crash of the target as Fiona
observed [0].

[0]: https://lists.proxmox.com/pipermail/pve-devel/2023-October/059434.html

Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
 [ TL: Reflowed & expanded commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-10-24 11:46:52 +02:00
Fiona Ebner
b7bbeff3f9 vzdump: assemble: improve error messages
by including the errno. Might make it clearer what the issue is in
cases like: https://forum.proxmox.com/threads/135261/

Also add the missing newlines, the missing "to" in the second message,
switch to the more common "or die" and avoid line bloat while at it.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-10-23 17:09:41 +02:00
Filip Schauer
6f0627d4bd backup, migrate: fix races with suspended VMs that can wake up
Fix races with ACPI-suspended VMs which could wake up during migration
or during a suspend-mode backup.

Revert the prevention of ACPI-suspended VMs automatically resuming
after migration, introduced by 7ba974a682. The commit introduced a
potential problem that causes a suspended VM that wakes up during
migration to remain paused after the migration finishes.

This can be fixed once QEMU preserves the 'suspended' runstate during
migration (current patch on the qemu-devel list [0]) by checking for
the 'suspended' runstate on the target after migration.

Furthermore the commit increased the race window during the
preparation of a suspend-mode backup, when a suspended VM wakes up
between the vm_is_paused check in PVE::VZDump::QemuServer::prepare and
PVE::VZDump::QemuServer::qga_fs_freeze. This causes the code to skip
fs-freeze even if the VM has woken up, potentially leaving the file
system in an inconsistent state.

To prevent this, do not treat the suspended runstate as paused when
migrating or archiving a VM.

[0]: https://lists.nongnu.org/archive/html/qemu-devel/2023-08/msg05260.html

Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
 [ TL: massage in Fiona's extra info into commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-10-16 16:48:47 +02:00
Fiona Ebner
62c1904921 fix #4522: api: vncproxy: also set environment variable for ticket without websocket
Since commit 2dc0eb61 ("qm: assume correct VNC setup in 'vncproxy',
disallow passwordless"), 'qm vncproxy' will just fail when the
LC_PVE_TICKET environment variable is not set. Since it is not only
required in combination with websocket, drop that conditional.

For the non-serial case, this was the last remaining effect of the
'websocket' parameter, so update the parameter description.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-10-16 16:43:47 +02:00
Fiona Ebner
876d993886 api: vncproxy: update description of websocket parameter
Since commit 3e7567e0 ("do not use novnc wsproxy"), the websocket
upgrade is done via the HTTP server.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-10-16 16:43:47 +02:00
Thomas Lamprecht
263c803dd1 api: reduce overly lengthy comment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-10-10 12:57:18 +02:00
Filip Schauer
7ba974a682 Fix ACPI-suspended VMs resuming after migration
Add checks for "suspended" and "prelaunch" runstates when checking
whether a VM is paused.

This fixes the following issues:
* ACPI-suspended VMs automatically resuming after migration
* Shutdown and reboot commands timing out instead of failing
  immediately on suspended VMs

Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
2023-10-10 09:27:18 +02:00
Friedrich Weber
95f1de689e vm start: set higher timeout if using PCI passthrough
The default VM startup timeout is `max(30, VM memory in GiB)` seconds.
Multiple reports in the forum [0] [1] and the bug tracker [2] suggest
this is too short when using PCI passthrough with a large amount of VM
memory, since QEMU needs to map the whole memory during startup (see
comment #2 in [2]). As a result, VM startup fails with "got timeout".

To work around this, set a larger default timeout if at least one PCI
device is passed through. The question remains how to choose an
appropriate timeout. Users reported the following startup times:

ref | RAM | time  | ratio (s/GiB)
---------------------------------
[1] | 60G |  135s |  2.25
[1] | 70G |  157s |  2.24
[1] | 80G |  277s |  3.46
[2] | 65G |  213s |  3.28
[2] | 96G | >290s | >3.02

The data does not really indicate any simple (e.g. linear)
relationship between RAM and startup time (even data from the same
source). However, to keep the heuristic simple, assume linear growth
and multiply the default timeout by 4 if at least one `hostpci[n]`
option is present, obtaining `4 * max(30, VM memory in GiB)`. This
covers all cases above, and should still leave some headroom.
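
A minimal sketch of the resulting heuristic (assuming, as in the VM
config, that memory is given in MiB; names are illustrative):

    use strict;
    use warnings;
    use List::Util qw(max);

    # default startup timeout: max(30, VM memory in GiB) seconds;
    # quadrupled when at least one hostpci[n] device is configured
    sub config_aware_timeout {
        my ($conf) = @_;
        my $timeout = max(30, ($conf->{memory} // 512) / 1024);
        $timeout *= 4 if grep { /^hostpci\d+$/ } keys %$conf;
        return $timeout;
    }

    my $conf = { memory => 65536, hostpci0 => '0000:01:00.0' };
    print config_aware_timeout($conf), "\n"; # 4 * 64 = 256 seconds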

[0]: https://forum.proxmox.com/threads/83765/post-552071
[1]: https://forum.proxmox.com/threads/126398/post-592826
[2]: https://bugzilla.proxmox.com/show_bug.cgi?id=3502

Suggested-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
2023-10-06 18:12:07 +02:00
Alexandre Derumier
7f8c808772 add memory parser
In preparation to add more properties to the memory configuration like
maximum hotpluggable memory and whether virtio-mem devices should be
used.

This also allows getting rid of the cyclic include of PVE::QemuServer
in the memory module.

Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[FE: also convert new usage in get_derived_property
     remove cyclic include of PVE::QemuServer
     add commit message]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-09-18 17:08:48 +02:00
Fiona Ebner
87b0f305e7 introduce QMPHelpers module
moving qemu_{device,object}{add,del} helpers there for now.

In preparation to remove the cyclic include of PVE::QemuServer in the
memory module and generally for better modularity in the future.

No functional change intended.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-09-18 17:08:48 +02:00
Fiona Ebner
2951c45ec0 memory: replace deprecated check_running() call
PVE::QemuServer::check_running() does both
PVE::QemuConfig::assert_config_exists_on_node()
PVE::QemuServer::Helpers::vm_running_locally()

The former one isn't needed here when doing hotplug, because the API
already asserts that the VM config exists. It also would introduce a
new cyclic dependency between PVE::QemuServer::Memory <->
PVE::QemuConfig with the proposed virtio-mem patch set.

In preparation to remove the cyclic include of PVE::QemuServer in the
memory module.

No functional change intended.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-09-18 17:08:48 +02:00
Fiona Ebner
0e32bf5bc2 move NUMA-related code into memory module
which is the only user of the parse_numa() helper. While at it, avoid
the duplication of MAX_NUMA.

In preparation to remove the cyclic include of PVE::QemuServer in the
memory module.

No functional change intended.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-09-18 17:08:48 +02:00
Fiona Ebner
261b67e4aa move parse_number_sets() helper to helpers module
In preparation to move parse_numa() to the memory module.

No functional change intended.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-09-18 17:08:48 +02:00
Fiona Ebner
61b172d806 restore vma: inline one timeout variable and move other closer to usage
No functional change intended.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-09-15 16:27:44 +02:00
Fiona Ebner
e003f28404 restore vma: add comment describing timeout
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-09-15 16:27:44 +02:00
Fiona Ebner
853757ccec fix #2816: restore: remove timeout when allocating disks
10 minutes is not long enough when disks are large and/or network
storages are used and preallocation is not disabled. The default is
metadata preallocation for qcow2, so there are still reports of the
issue [0][1]. If allocation really does not finish, as the comment
describing the timeout feared, just let the user cancel it.

Also note that when restoring a PBS backup, there is no timeout for
disk allocation, and there don't seem to be any user complaints yet.

The 5 second timeout for receiving the config from vma is kept,
because certain corruptions in the VMA header can lead to the
operation hanging there.

There is no need for the $tmp variable before setting back the old
timeout, because that is at least one second, so we'll always be able
to set the $oldtimeout variable to undef in time in practice.
Currently, there shouldn't even be an outer timeout in the first
place, because the only call path leading to here is via the create
API (also used by qmrestore), both of which don't set a timeout.

[0]: https://forum.proxmox.com/threads/126825/
[1]: https://forum.proxmox.com/threads/128093/

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-09-15 16:27:44 +02:00
Filip Schauer
6d1ac42b95 drive: Fix typo in description of efitype
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
2023-09-06 11:55:11 +02:00
Alexandre Derumier
a132d50ea3 memory: use static_memory in foreach_dimm
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
2023-09-01 12:46:02 +02:00
Filip Schauer
a55d0f71b2 fix #3963: Skip TPM startup for template VMs
Skip the software TPM startup when starting a template VM for performing
a backup. This fixes an error that occurs when the TPM state disk is
write-protected.

Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
2023-08-11 09:16:21 +02:00
Fiona Ebner
089aed811d cfg2cmd: netdev: fix value for tx_queue_size
Quoting from QEMU commit 4271f40383 ("virtio-net: correctly report
maximum tx_queue_size value"):

> Maximum value for tx_queue_size depends on the backend type.
> 1024 for vDPA/vhost-user, 256 for all the others.

> So the parameter is silently ignored and ethtool reports a different
> value than the one provided by the user.

Indeed, for a non-vDPA/vhost-user netdev, the guest will see TX: 256
instead of the specified 1024 here. With the mentioned QEMU commit (in
master and will be part of 8.1), using 1024 will be a hard error:

> Invalid tx_queue_size (= 1024), must be a power of 2 between 256 and 256

Since neither vhost-user, nor vhost-vdpa netdev types are exposed by
Proxmox VE, just changing the limit to the correct 256 should be fine.
No obvious issue during live-migration found.

Fixes: 620d6b32 ("virtio-net: increase defaults rx|tx-queue-size to 1024")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-07-27 13:14:12 +02:00
Fiona Ebner
0d4e8cbde0 migration: alloc nbd disks: fix fall-back for remote live migration
While the comment stated
>    # order of precedence, filtered by whether storage supports it:
>    # 1. explicit requested format
>    # 2. format of current volume
>    # 3. default format of storage

the code did not fall back to the default format in the case of remote
migration, because the format was already set and the code used
> $format //= $defFormat;

This made remote migration from dir with qcow2 to e.g. LVM-thin fail.

Move extracting the format from the volume name to the call side for
local migration. This allows the logic here to be much simpler.
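
A minimal sketch of the intended precedence (names are illustrative):

    use strict;
    use warnings;

    # order of precedence, filtered by whether the target storage
    # supports the format:
    # 1. explicitly requested format
    # 2. format of the current volume
    # 3. default format of the target storage
    sub resolve_format {
        my ($requested, $current, $default, $supported) = @_;
        my %ok = map { $_ => 1 } @$supported;
        for my $format ($requested, $current, $default) {
            return $format if defined($format) && $ok{$format};
        }
        die "no supported format found\n";
    }

    # migrating a qcow2 image to a storage that only supports raw:
    # falls back to the storage default instead of keeping 'qcow2'
    print resolve_format(undef, 'qcow2', 'raw', ['raw']), "\n"; # raw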

Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-07-27 10:05:20 +02:00
Fiona Ebner
8056ee3030 migration: alloc nbd disks: base format hint off source storage
Previously, qemu_img_format() was called with the target storage's
$scfg and the source storage's volume name.

This mismatch should only be relevant for certain special kinds of
storage plugins:
- no path, but does support an additional QEMU image format besides
  'raw', in short NPAF.
- no path, volume name can match QEMU_FORMAT_RE, in short NPVM.

Note that all integrated plugins are neither NPAF nor NPVM.

Note that for NPAF plugins, qemu_img_format() already always returns
'raw' because there is no path. It's a bit unlikely such a plugin
exists, because there were no bug reports about qemu_img_format()
misbehaving there yet.

Let's go through the cases:
- If source and target storage both have or don't have a path,
  qemu_img_format($scfg, $volname) returns the same for both $scfg's.
- If source storage has a path, but target storage does not, the
  format hint was previously 'raw', but can only be more correct now
  (being what the source image actually is):
  - For non-NPAF targets, since we know there is no path, it follows
    that 'raw' is the only supported QEMU image format.
  - For NPAF targets, the format will be preserved now (if actually
    supported).
- If source storage does not have a path, but target storage does, the
  format hint will be 'raw' now.
  - For non-NPVM sources, QEMU_FORMAT_RE didn't match when
    qemu_img_format() was called with the target storage's $scfg, so
    the hint also was 'raw' before this commit.
  - For NPVM sources, qemu_img_format() might've guessed a format from
    the source volume name when called with the target's $scfg before
    this commit. If the target storage supports the previously guessed
    format, it was preserved before this commit, but will not be
    anymore. In theory, the guess might've also been wrong, and in
    this case, this commit avoids the wrong guess.

To summarize, there is only one edge case with an exotic kind of third
party storage plugin where format preservation would be lost and in
another edge case, format preservation is gained.

In preparation to simplify the format fallback logic implementation.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-07-27 10:05:20 +02:00
Friedrich Weber
dafabbd01f fix: api: fix permission check for cloudinit drive update
Trying to regenerate a cloudinit drive as a non-root user via the API
currently throws a Perl error, as reported in the forum [1]. This is
due to a type mismatch in the permission check, where a string is
passed but an array is expected.
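
A minimal sketch of the mismatch (the checker is a stand-in and the
privilege name is used for illustration only):

    use strict;
    use warnings;

    # stand-in for the permission checker; like the real one, it
    # expects an array reference of privilege names
    sub check_perm {
        my ($path, $privs) = @_;
        die "internal error: expected ARRAY ref\n" if ref($privs) ne 'ARRAY';
        print "checking @$privs on $path\n";
        return 1;
    }

    # broken: a plain string where an array ref is expected
    # check_perm('/vms/100', 'VM.Config.Cloudinit');  # dies

    # fixed: wrap the privilege in an array reference
    check_perm('/vms/100', ['VM.Config.Cloudinit']);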

[1] https://forum.proxmox.com/threads/regenerate-cloudinit-by-put-api-return-500.124099/

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
2023-07-25 17:23:17 +02:00
Friedrich Weber
92c02f6c64 cloudinit: allow non-root users to set ciupgrade option
The new ciupgrade option was missing in $cloudinitoptions in
PVE::API2::Qemu, so $check_vm_modify_config_perm defaulted to
requiring root@pam for modifying the option. To fix this, add
ciupgrade to $cloudinitoptions. This also fixes an issue where
ciupgrade was missing in the output of `qm cloudinit pending`,
as it also relies on $cloudinitoptions.
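
A minimal sketch of the fix (the option list shown is abridged and
illustrative):

    use strict;
    use warnings;

    # every cloudinit option must be listed here: missing entries
    # default to requiring root@pam and are omitted from
    # `qm cloudinit pending`
    my $cloudinitoptions = {
        cicustom   => 1,
        cipassword => 1,
        citype     => 1,
        ciupgrade  => 1, # was missing before this fix
        ciuser     => 1,
    };

    my $option = 'ciupgrade';
    print "non-root users may modify '$option'\n"
        if $cloudinitoptions->{$option};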

This issue was originally reported in the forum [0].

Also add a comment to avoid similar issues when adding new options in
the future.

[0]: https://forum.proxmox.com/threads/131043/

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
2023-07-25 11:42:07 +02:00
Fiona Ebner
b155086bd8 fix #4620: cfg2cmd: drive device: correctly handle IDE for q35
Only unit 0 for IDE is supported with machine type q35. Currently,
QEMU will fail startup with machine type q35 with an error like
> Can't create IDE unit 1, bus supports only 1 units
when ide1 or ide3 is configured.

Make sure to keep backwards compat for migration by leaving ide0 and
ide2 fixed. Since starting with ide1 or ide3 never worked, they can be
moved to a controller with a higher ID without issue.
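
One collision-free mapping that satisfies these constraints (a sketch
only; the actual code may map ide1/ide3 differently):

    use strict;
    use warnings;

    # q35 supports only unit 0, ide0/ide2 keep their previous
    # controller IDs (0 and 1) for migration compatibility, and
    # ide1/ide3, which never worked on q35, move to higher IDs
    my %q35_ide_controller = (0 => 0, 1 => 2, 2 => 1, 3 => 3);

    sub q35_ide_bus_unit {
        my ($index) = @_;
        die "unknown ide index $index\n"
            if !exists $q35_ide_controller{$index};
        return ("ide.$q35_ide_controller{$index}", 0); # unit always 0
    }

    my ($bus, $unit) = q35_ide_bus_unit(3);
    print "ide3 -> bus=$bus, unit=$unit\n";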

Reported in the community forum:
https://forum.proxmox.com/threads/124615/post-543127
https://forum.proxmox.com/threads/130815/

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-07-20 12:20:25 +02:00
Fabian Grünbichler
b704d82578 update_vm_api: properly wrap arguments
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2023-07-19 12:22:04 +02:00
Fiona Ebner
de5af2d3b2 api: update: also check access for currently configured bridge
Relevant when modifying or removing an existing network device.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-07-19 11:53:36 +02:00
Thomas Lamprecht
3234f5639d vzdump: pbs: factor out getting and checking encryption keys
factor the common checks for disk-less and "normal" backups out into
its own helper, avoiding code duplication and ensuring that the
messages and checks stay in sync.

The use sites for key and master key are a bit clearer, as it all
just depends on them being defined or not.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-07-06 17:44:18 +02:00
Thomas Lamprecht
76690de1e9 vzdump: reword "master-key but no encryption key" message
.. and make it use a warn level, which can then also mark the whole
task as potentially problematic as with a new enough pve-guest-common
the REST environment worker warn counters are then increased.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-07-06 15:39:30 +02:00
Thomas Lamprecht
b241ddbf15 vzdump: log only once that encryption is enabled
our backup logs are still quite noisy at the task start, so avoid
logging twice that the task is running with encryption enabled for
the master-key feature.

The definedness check on master_keyfile isn't required anymore; it
never was for the no-disk case, and for the standard case it hasn't
been since 781fb80 ("vzdump: error out for master-key backup but no
QEMU support")

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-07-06 15:01:13 +02:00
Thomas Lamprecht
781fb80309 vzdump: error out for master-key backup but no QEMU support
Our QEMU gained master-key support for Proxmox VE 6.4 with the initial
QEMU 5.2.0 packaging, in 0b8da68 ("add PBS master key support").
As we're now two major releases in the future, any VM needs to run
with a newer QEMU version, so we can just make this a hard error, as
there really should be no use-case left. After all, we only support
upgrading directly to the next major release, so one needs to do at
least a migration (or shutdown) of the VM to reboot the node for
upgrading to Proxmox VE 8, so the lowest QEMU version baseline is 6.0
for Proxmox VE 8 (i.e., the version from PVE 7.0).

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-07-06 15:01:13 +02:00
Thomas Lamprecht
b84587f312 vzdump: drop unused pathlist variable for PBS no-disk code path
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-07-06 15:01:13 +02:00
Fabian Grünbichler
fbd3dde735 fix #4822: vzdump: fix pbs encryption for no-disk guests
these are backed up directly with proxmox-backup-client, and the invocation was
lacking the key parameters.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-07-06 15:01:13 +02:00
Fiona Ebner
e9f12f967a migration: fix issue with qcow2 cloudinit disk live migration
The check for with_snapshots for qcow2 needs to happen for these too.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-06-21 12:48:11 +02:00
Fiona Ebner
1e5405838a migration: add trailing newline to aliased volumes error message
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-06-21 12:48:11 +02:00
Fiona Ebner
219719aada foreach volid helper: make $pending parameter behave like a boolean
Avoids potential future mistake with passing in an explicit 0.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2023-06-21 12:48:11 +02:00
Aaron Lauterer
07f2e57c98 migration: fail when aliased volume is detected
Aliased volids can lead to unexpected behavior in a migration.

An aliased volid can happen if we have two storage configurations,
pointing to the same place. The resulting 'path' for a disk image
will be the same.
Therefore, stop the migration in such a case.

The check works by comparing the path returned by the storage plugin.

We decided against checking the storages themselves being aliased. It is
not possible to infer that reliably from just the storage configuration
options alone.

Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2023-06-21 12:48:11 +02:00
Aaron Lauterer
6e9c4929be qemuserver: migration: test_volid: change attr name and ref handling
Since we don't scan all storages for matching disk images anymore for
a migration, we don't have any images found via storage alone. They
will be referenced in the config somewhere.

Therefore, there is no need for the 'storage' ref.
The 'referenced_in_config' ref is not really needed and can apply to
both attached and unused disk images.

Therefore the QemuServer::foreach_volid() will change the
'referenced_in_config' attribute to an 'is_attached' one that only
applies to disk images that are in the _main_ config part and are not
unused.

In QemuMigrate::scan_local_volumes() we can then quite easily map the
refs to each state, attached, unused, referenced_in_{pending,snapshot}.

The refs are mostly used informationally, to print out in the logs
why a disk image is part of the migration, except for the 'attached'
case.

In the future the extra step of the refs in QemuMigrate could probably
be streamlined even more.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2023-06-21 12:48:11 +02:00
Aaron Lauterer
a0dbed5a6d migration: only migrate disks used by the guest
When scanning all configured storages for disk images belonging to the
VM, the migration could easily fail if a storage is not available, but
enabled. That storage might not even be used by the VM at all.

By not scanning all storages and only looking at the disk images
referenced in the VM config, we can avoid unnecessary failures.
Some information that used to be provided by the storage scanning
needs to be fetched explicitly (size, format).

Behaviorally the biggest change is that unreferenced disk images will
not be migrated anymore. Only images referenced in the config will be
migrated.

The tests have been adapted accordingly.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2023-06-21 12:48:11 +02:00
Aaron Lauterer
0b7a0b78db qemuserver: foreach_volid: always include pending disks
All calling sites except for QemuConfig.pm::get_replicatable_volumes()
already enabled it. Making it the non-configurable default results in a
change in the VM replication.  Now a disk image only referenced in the
pending section will also be replicated.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2023-06-21 12:48:11 +02:00
Aaron Lauterer
6328c554c1 qemuserver: foreach_volid: include pending volumes
Make it possible to optionally iterate over disks in the pending
section of VMs, similar to how snapshots are already handled.

This is for example useful in the migration if we don't want to rely on
the scanning of all storages.
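
A much-simplified sketch of the optional iteration (the real helper
also walks snapshot sections and collects per-volume attributes):

    use strict;
    use warnings;

    sub foreach_volid {
        my ($conf, $include_pending, $func) = @_;
        my $scan = sub {
            my ($section, $ref) = @_;
            for my $key (sort keys %$section) {
                next if $key !~ m/^(?:ide|sata|scsi|virtio|efidisk|tpmstate|unused)\d+$/;
                $func->($section->{$key}, $ref);
            }
        };
        $scan->($conf, 'config');
        # pending disks are only visited when explicitly requested
        $scan->($conf->{pending} // {}, 'pending') if $include_pending;
    }

    my $conf = {
        scsi0 => 'local:100/vm-100-disk-0.qcow2',
        pending => { scsi1 => 'local:100/vm-100-disk-1.qcow2' },
    };
    foreach_volid($conf, 1, sub { print "$_[1]: $_[0]\n" });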

All calling sites are adapted and enable it, except for
QemuConfig::get_replicatable_volumes as that would cause a change for
the replication if pending disks would be included.

The following lists the calling sites and if they should be fine with
the change (source [0]):

1. QemuMigrate: scan_local_volumes(): needed to include pending disk
   images
2. API2/Qemu.pm: check_vm_disks_local() for migration precondition:
   related to migration, so more consistent with pending
3. QemuConfig.pm: get_replicatable_volumes(): would change the behavior
   of the replication, will not use it for now.
4. QemuServer.pm: get_vm_volumes(): is used multiple times by:
4a. vm_stop_cleanup() to deactivate/unmap: should also be fine with
    including pending
4b. QemuMigrate.pm: in prepare(): part of migration, so more consistent
    with pending
4c. QemuMigrate.pm: in phase3_cleanup() for deactivation: part of
    migration, so more consistent with pending

[0] https://lists.proxmox.com/pipermail/pve-devel/2023-May/056868.html

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2023-06-21 12:48:11 +02:00