Commit Graph

1523 Commits

Author SHA1 Message Date
Fiona Ebner
c5d4b11f3e machine: rename machine_version() function to is_machine_version_at_least()
The old name does not make it clear what exactly the function does.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-01-17 19:24:02 +01:00
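
For illustration, a minimal sketch of a call site after the rename; the module path and the exact signature (machine type string plus major/minor version) are assumptions based on the new name:

    use PVE::QemuServer::Machine;

    my $machine_type = 'pc-q35-9.0';
    # the new name makes the boolean >= comparison obvious at the call site
    if (PVE::QemuServer::Machine::is_machine_version_at_least($machine_type, 9, 0)) {
        # enable behavior that requires machine version >= 9.0
        print "machine is at least 9.0\n";
    }
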
Fiona Ebner
8c4f436b3c move meta information handling to its own module
Like this, it can be used by modules that cannot depend on
QemuServer.pm without creating a cyclic dependency.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-01-17 19:24:02 +01:00
Fiona Ebner
db528966c9 move get_vm_machine() function to machine module
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-01-17 19:24:02 +01:00
Fiona Ebner
5c8407b0dc move windows_get_pinned_machine_version() function to machine module
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-01-17 19:24:02 +01:00
Fiona Ebner
85d47ae2fe move get_installed_machine_version() helper to machine module
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-01-17 19:24:02 +01:00
Fiona Ebner
8532a50890 machine: add default_machine_for_arch() helper
There are already other places where 'aarch64' and 'x86_64' are
checked to be the only valid architectures, for example, the
get_command_for_arch() helper, so the new error scenario for an
unknown arch should not cause any regressions.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-01-17 19:24:02 +01:00
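
A minimal sketch of how such a helper could look; the concrete default machine names ('pc' for x86_64, 'virt' for aarch64) are assumptions for illustration:

    my $default_machines = {
        x86_64 => 'pc',
        aarch64 => 'virt',
    };

    sub default_machine_for_arch {
        my ($arch) = @_;

        my $machine = $default_machines->{$arch}
            or die "unsupported architecture '$arch'\n";

        return $machine;
    }
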
Fiona Ebner
82dff842f0 move get_vm_arch() helper to helpers module
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-01-17 19:24:02 +01:00
Fiona Ebner
1715febd33 move kvm_user_version() function to helpers module
Add an export, since the function is rather commonly used (in
particular inlined in function calls, where prefixing with the module
name would hurt readability) and there won't be much potential for
confusion name-wise.

This was the only user of stat(), so remove the File::stat include.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-01-17 19:24:02 +01:00
Fiona Ebner
0050aa8735 move get_command_for_arch() helper to helpers module
The is_native_arch() helper cannot be used inside the function anymore;
to avoid a cyclic dependency between the 'CPUConfig' and 'Helpers'
modules, inline it instead.

While at it, improve the variable name for the mapping.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-01-17 19:24:02 +01:00
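
A sketch with the native-arch check inlined instead of calling CPUConfig's is_native_arch(); the binary paths and the use of PVE::Tools::get_host_arch() are assumptions:

    use PVE::Tools qw(get_host_arch);

    my $arch_to_qemu_binary = {
        aarch64 => '/usr/bin/qemu-system-aarch64',
        x86_64 => '/usr/bin/qemu-system-x86_64',
    };

    sub get_command_for_arch {
        my ($arch) = @_;

        # inlined native check: on the host's own architecture, use the kvm wrapper
        return '/usr/bin/kvm' if get_host_arch() eq $arch;

        my $cmd = $arch_to_qemu_binary->{$arch}
            or die "don't know how to emulate architecture '$arch'\n";

        return $cmd;
    }
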
Fiona Ebner
1cb9f2cb89 machine: drop unused parameter from assert_valid_machine_property() helper
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2025-01-17 19:24:02 +01:00
Daniel Kral
6b2091da7f vmstatus: make memory description consistent with pve-container
Fixes a small typo and uses the same wording as used in pve-container's
description of the `mem` property.

Signed-off-by: Daniel Kral <d.kral@proxmox.com>
2024-12-18 16:41:23 +01:00
Dominik Csapak
1b0df64e87 vmstatus: document more return types
namely 'cpu' and 'mem'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-12-17 14:21:42 +01:00
Fabio Fantoni via pve-devel
03614a8992 fix vm shutdown when agent conf is enabled but is not running in the vm
Checking only the VM configuration to choose the shutdown method causes
shutdown to always fail, after reaching the timeout, if the QEMU guest
agent option is enabled in the VM configuration but the agent is not
installed and active in the guest. As seen with Windows VMs, the agent
also crashes in some cases, so shutdown does not only fail when the
guest agent is not installed or not started.

Add a check that the agent is active when choosing the agent shutdown
method, to avoid certain shutdown failure in those cases.

Signed-off-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
[FE: do not set flag to suppress warning when agent is not running]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-12-17 13:50:20 +01:00
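
A rough sketch of the resulting decision; the agent_active() helper name is hypothetical and stands in for whatever check verifies the agent actually responds:

    use PVE::QemuServer;
    use PVE::QemuServer::Monitor qw(mon_cmd);

    sub choose_shutdown_method {
        my ($vmid, $conf, $timeout) = @_;

        # configured *and* actually responding -> agent shutdown, else ACPI powerdown
        my $use_agent = PVE::QemuServer::get_qga_key($conf, 'enabled')
            && agent_active($vmid); # hypothetical liveness check

        if ($use_agent) {
            mon_cmd($vmid, 'guest-shutdown', timeout => $timeout);
        } else {
            mon_cmd($vmid, 'system_powerdown');
        }
    }
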
Fabian Grünbichler
4182c3da78 swtpm: drop unused $volname variable
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
2024-12-12 10:47:44 +01:00
Fabian Grünbichler
1fe55a9f6c swtpm: check that format of tpmstate volume is raw
since swtpm currently doesn't support anything else, and might otherwise
overwrite a file using the qcow2 or vmdk format by accident.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
2024-12-12 10:47:42 +01:00
Fiona Ebner
034768882f remove dead qemu_img_format() helper
All callers have been switched to get the format from the storage
layer using checked_volume_format() and friends.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-12-09 09:07:17 +01:00
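
For illustration, the call-site pattern before and after this series; the exact signature of checked_volume_format() and the variable names are assumptions:

    # before: format guessed from the file extension (removed helper)
    # my $format = qemu_img_format($scfg, $volname);

    # after: ask the storage layer, which is authoritative for managed volumes
    my $format = checked_volume_format($storecfg, $volid);
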
Fiona Ebner
1f961e51a8 resolve destination disk format helper: use volume format from storage layer
Avoid using the extension-based qemu_img_format() helper.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-12-09 09:07:17 +01:00
Fiona Ebner
9cefe5d7bf drive mirror: use volume format from storage layer
Avoid using the extension-based qemu_img_format() helper.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-12-09 09:07:17 +01:00
Fiona Ebner
2d62759f6c backup: parse backup hints: use volume format from storage layer
Avoid using the extension-based qemu_img_format() helper. Failure is
not critical, because this is just the hint for what format the
restored target image should be allocated with, so fall back to 'raw'.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-12-09 09:07:17 +01:00
Fiona Ebner
0da8f110e8 migration: get nbd disks helper: use volume format from storage layer
Avoid using the extension-based qemu_img_format() helper.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-12-09 09:07:17 +01:00
Fiona Ebner
d6e76d1b60 cfg2cmd: ovmf drive: use volume format from storage layer
Avoid using the extension-based qemu_img_format() helper.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-12-09 09:07:17 +01:00
Fiona Ebner
9c5e02e5dd image convert: use volume format from storage layer
Avoid using the extension-based qemu_img_format() helper.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-12-09 09:07:17 +01:00
Fiona Ebner
c9398d15ff clone: be explicit about source format when cloning EFI disk
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-12-09 09:07:17 +01:00
Fiona Ebner
9f887d3738 cfg2cmd: print drive commandline: improve format detection
For a Proxmox VE managed volume, prefer the format from the storage
layer over the 'format' option set on the drive. Fail if there is a
mismatch between the detected and the configured format, because this
is not expected for managed volumes. Having this early hard failure
protects against undesirable issues with live migration and reboot,
where the format of a drive would suddenly be different.

For a volume not managed by Proxmox VE, use the same logic as before,
i.e. use the 'format' option of the drive with 'raw' as a fallback:
only root can configure such devices.

Both also apply to the case where the 'cdrom' flag is set to avoid
autodetection by QEMU.

Reported-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[FG: typo fix in comment]
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2024-12-09 09:07:08 +01:00
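
A sketch of the described precedence; the checked_volume_format() signature and the variable names are illustrative:

    use PVE::Storage;

    my ($storeid) = PVE::Storage::parse_volume_id($volid, 1); # noerr
    my $format = $drive->{format};

    if ($storeid) {
        # Proxmox VE managed volume: the storage layer is authoritative
        my $storage_format = checked_volume_format($storecfg, $volid);
        die "drive '$ds': detected format '$storage_format' does not match "
            . "configured format '$format'\n"
            if defined($format) && $format ne $storage_format;
        $format = $storage_format;
    } else {
        # not managed by Proxmox VE: only root can configure such devices
        $format //= 'raw';
    }
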
Fiona Ebner
7dc33415e1 resolve destination disk format helper: fix indentation
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-12-05 17:27:05 +01:00
Fiona Ebner
1cd6990f9b print drive commandline: fix indentation
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-12-05 15:43:41 +01:00
Thomas Lamprecht
8770febbef tree-wide: fix various typos in comments
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-12-05 12:33:24 +01:00
Fiona Ebner
c2f1e06ab9 cfg2cmd: drop superfluous check for QEMU binary version 4.1
The minimum supported version for Proxmox VE 8 nodes is QEMU 8.0 and
the beginning of the config_to_command() function already has a check
for at least version 5.0. No other caller of get_vm_machine() passes
in the parameter, so it can be removed from there as well.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
2024-12-05 12:13:20 +01:00
Fiona Ebner
86b1f1c24c cfg2cmd: require at least QEMU binary version 5.0
The minimum supported version for a Proxmox VE 8 node is QEMU 8.0.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
2024-12-05 12:13:20 +01:00
Fiona Ebner
2263b8548d cfg2cmd: require at least QEMU binary version 4.0
The minimum supported version for a Proxmox VE 8 node is QEMU 8.0.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
2024-12-05 12:13:20 +01:00
Fiona Ebner
e1bdb0ad44 code cleanup: drop unused parameter from get_vm_machine()
The parameter was added by ac0077cc ("Use 'QEMU version' ->
'+pve-version' mapping for machine types") but it doesn't seem like
there ever was a caller. In particular, none of the current callers
pass in a value and it's not clear when one would require passing a
different version than the KVM binary version.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
2024-12-05 12:13:20 +01:00
Thomas Lamprecht
136eb3bce8 config: non-migratable resource check: join blockers when printing them
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-18 21:56:15 +01:00
Thomas Lamprecht
9304dc09e5 Revert "schema: add fleecing-images config property"
This reverts commit fca0ba5d77, quoting Fiona verbatim:

> Regarding the patch "schema: add fleecing-images config property",
> Fabian off-list suggested using a config section "special:fleecing"
> instead of a property, so that it is truly internal-only. If we go for
> that, the commit should be reverted. Which approach do you prefer?
-- https://lore.proxmox.com/pve-devel/5126c251-64fd-44fe-b1a6-fda9074eb9a1@proxmox.com/

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-11-18 21:29:48 +01:00
Markus Frank
5fc635cc6d migration: add check_non_migratable_resources function
The function checks for resources that cannot be migrated, snapshotted,
or suspended.

To run this function while the snapshot lock is active, the
pve-guest-common patch 'AbstractConfig: add abstract method to check for
resources preventing a snapshot.' is required.

Signed-off-by: Markus Frank <m.frank@proxmox.com>
2024-11-18 21:26:39 +01:00
Markus Frank
5d7288a415 config: add AMD SEV support
This patch is for enabling AMD SEV (Secure Encrypted Virtualization)
support in QEMU.

VM-Config-Examples:
amd_sev: type=std,no-debug=1,no-key-sharing=1
amd_sev: es,no-debug=1,kernel-hashes=1

kernel-hashes, reduced-phys-bits & cbitpos correspond to the variables
with the same name in QEMU.

kernel-hashes=1 adds kernel hashes to enable measured Linux kernel
launch, since it is off by default for backward compatibility.

reduced-phys-bits and cbitpos are system-specific and are read out by
the query-machine-capabilities C program and saved to the
/run/qemu-server/host-hw-capabilities.json file. This file is parsed
and then used by qemu-server to correctly start an AMD SEV VM.

type=std stands for standard SEV, to differentiate it from SEV-ES (es)
or SEV-SNP (snp) once support for those is upstream.

QEMU's sev-guest policy gets calculated with the parameters no-debug
& no-key-sharing. These parameters correspond to policy-bits 0 & 1.
If the type is 'es', then policy bit 2 gets set to 1 to activate SEV-ES.
Policy bit 3 (nosend) is always set to 1, because migration features
for SEV are not upstream yet and are attackable.

SEV-ES is highly experimental since it could not be tested.

see coherent doc patch

Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
2024-11-18 21:26:39 +01:00
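
For illustration, the policy computation described above as a small Perl sketch (bit 0 = no-debug, bit 1 = no-key-sharing, bit 2 = SEV-ES, bit 3 = nosend, always set); the function name is hypothetical:

    sub compute_sev_policy {
        my ($sev) = @_; # parsed 'amd_sev' option hash

        my $policy = 0b1000;                            # bit 3: nosend, always set
        $policy |= 0b0001 if $sev->{'no-debug'};        # bit 0
        $policy |= 0b0010 if $sev->{'no-key-sharing'};  # bit 1
        $policy |= 0b0100 if $sev->{type} eq 'es';      # bit 2: SEV-ES

        return $policy;
    }
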
Dominik Csapak
0d41c7f5a5 api: create: implement extracting disks when needed for import-from
when 'import-from' contains a disk image that needs extraction
(currently only from an 'ova' archive), do that in 'create_disks'
and overwrite the '$source' volid.

Collect the extracted names into a 'delete_sources' list, which we use
later to clean them up again (either when we're finished with importing
or in an error case).

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2024-11-18 18:55:54 +01:00
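
A rough sketch of the flow in create_disks(); the helper names volume_needs_extraction() and extract_volume_from_ova() are hypothetical:

    my $delete_sources = [];

    if (volume_needs_extraction($storecfg, $source)) {
        my $extracted = extract_volume_from_ova($storecfg, $source, $vmid);
        push @$delete_sources, $extracted;
        $source = $extracted; # overwrite the 'import-from' volid
    }

    # ... import from $source ...

    # cleanup, both after a successful import and in the error path
    PVE::Storage::vdisk_free($storecfg, $_) for @$delete_sources;
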
Daniel Kral
68b82f021f templates: add documentation to template_create
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
2024-11-17 19:53:08 +01:00
Fiona Ebner
fca0ba5d77 schema: add fleecing-images config property
to be used internally to record volume IDs of fleecing images
allocated during backup.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-11-17 19:38:32 +01:00
Fiona Ebner
f6a390ed08 parse config: allow config keys with minus sign
In preparation for the upcoming 'fleecing-images' key. To avoid mixing
options with '-' and options with '_', which is not very user-friendly,
it would be nice to add aliases for the existing options with '_'. And
long-term, backup restore handlers could switch to the modern keys
with '-'.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-11-17 19:38:29 +01:00
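
Illustrative only; the real parser's regex differs, this just shows the idea of additionally admitting '-' in configuration key names:

    my $conf = {};
    if ($line =~ m/^([a-z][a-z_\-]*\d*):\s*(.*\S)\s*$/) {
        my ($key, $value) = ($1, $2);
        $conf->{$key} = $value;
    }
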
Fiona Ebner
30681f147e restore: die early when there is no size for a device
Makes it a clean error for buggy (external) backup providers where the
size might not be set at all.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-11-12 10:27:52 +01:00
Fiona Ebner
dde471e142 move nbd_stop helper to QMPHelpers module
Like this, nbd_stop() can be called from a module that cannot include
QemuServer.pm.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-11-11 14:54:06 +01:00
Daniel Kral
6f32c3fa7a vm_start: add syslog info with which PID a VM was started
Adds a syslog entry to log the process ID that has been given to the
QEMU VM process at start. This is helpful debugging information if the
PID shows up in other places, like a kernel stack trace, while the VM
has been running, but cannot be retrieved anymore (e.g. the pidfile has
been deleted or only the syslog is available).

The syslog has been put in the `PVE::QemuServer::vm_start_nolock`
subroutine to make sure that the PID is logged not only when the VM has
been started by the API endpoint `vm_start`, but also when the VM is
started by a remote migration.

Suggested-by: Hannes Dürr <h.duerr@proxmox.com>
Suggested-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Daniel Herzig <d.herzig@proxmox.com>
2024-11-10 20:16:23 +01:00
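
A sketch of the added log call (exact wording and placement are assumptions), using the syslog() helper from PVE::SafeSyslog:

    use PVE::SafeSyslog; # exports syslog()

    my ($vmid, $pid) = (100, 12345); # placeholders; in vm_start_nolock() these
                                     # come from the config and the started process
    syslog('info', "VM $vmid started with PID $pid.");
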
Dominik Csapak
48ada6982f pci: mdev: adapt to NVIDIA's modern interface with kernel >= 6.8
Since kernel 6.8, NVIDIA's vGPU driver does not use the generic mdev
interface anymore, because it relied on a feature there which is not
available anymore. IIUC, the kernel [0] recommends that drivers implement
their own device-specific features, since putting everything into the
generic interface does not make sense.

They now have an 'nvidia' folder in the device's sysfs path, which
contains the files `creatable_vgpu_types`/`current_vgpu_type` to
control the model of the virtual function, and then the whole virtual
function has to be passed through (although without resetting it and
changing to the vfio-pci driver).

This patch implements changes so that, from a config perspective, it
still is a mediated device, and we map the functionality iff the device
has no mediated devices but the new NVIDIA sysfs API, and the model
name is 'nvidia-<..>'.

It behaves a bit differently from mdevs and normal PCI passthrough, as
we have to choose the correct device immediately since it's bound to
the PCI ID, but we must not bind the device to vfio-pci, as the NVIDIA
driver implements this functionality itself.

When cleaning up, we iterate over all reserved devices (since for a
mapping we can't know at this point which one was chosen besides looking
at the reservations) and reset the vGPU model to '0', so the reservation
is freed up on NVIDIA's side. (We also do that in a loop, since it's
not always immediately ready after QEMU closes.)

A general problem (but that was previously also the case) is that a
showcmd (for a not running guest) reserves the PCI IDs, which might
block the start of a different real VM. This is now a bit more
problematic, as we (temporarily) set the vGPU type then.

0: https://docs.kernel.org/driver-api/vfio-pci-device-specific-driver-acceptance.html

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Christoph Heiss <c.heiss@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-10-24 18:43:52 +02:00
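
A sketch of the sysfs interaction described above; the sysfs file names come from the commit message, while the helper usage and the placeholder values are assumptions:

    use PVE::Tools qw(file_read_firstline);
    use PVE::SysFSTools;

    my $pciid = '0000:01:00.4'; # placeholder virtual function address
    my $type_id = 42;           # placeholder vGPU type id

    my $nvidia_dir = "/sys/bus/pci/devices/$pciid/nvidia";

    # which vGPU types can currently be created on this function
    my $creatable = file_read_firstline("$nvidia_dir/creatable_vgpu_types");

    # select the wanted model by writing its type id
    PVE::SysFSTools::file_write("$nvidia_dir/current_vgpu_type", $type_id);

    # on cleanup, reset to '0' so the reservation is freed on NVIDIA's side
    PVE::SysFSTools::file_write("$nvidia_dir/current_vgpu_type", "0");
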
Dominik Csapak
fc23c72a42 pci: device selection: don't reserve PCI IDs when VM is already running
The only way this could happen is when we're being called from
'qm showcmd', and there we don't want to reserve or create anything.

In case the VM was not running, we actually reserve the devices, so we
want to call 'cleanup_pci_devices' afterwards to remove those again. This
minimizes the timespan where those devices are not available for real VM
starts.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Christoph Heiss <c.heiss@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-10-24 18:39:37 +02:00
Thomas Lamprecht
c8a37e1993 status: reword description of some properties
clarify a few units and avoid "since the process start", as it's not
really clear which process is meant and "since the guest was started"
is telling enough too. And since we do a full stop+start cycle on CT
reboot, it's true for that case as well.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-10-24 15:19:36 +02:00
Dominik Csapak
50a1d704e1 status: add some missing description for status return properties
I omitted the 'disk' property, since it's currently non-functional:
we don't query the disk usage here (it is complicated to calculate
depending on the storage, or requires guest agent support, which is
also non-trivial).

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
 [ TL: avoid having netin twice, change to netout once ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2024-10-24 15:05:57 +02:00
Maximiliano Sandoval
be8c868f0c fix typos in user-visible strings
This includes docs and strings printed to stderr or stdout.

These were caught with:

    typos --exclude test --exclude changelog

Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
2024-10-24 13:15:06 +02:00
Fiona Ebner
60e1b142fb migration: avoid crash with heavy IO on local VM disk
There is a possibility that the drive-mirror job is not yet done when
the migration wants to inactivate the source's block drives:

> bdrv_co_write_req_prepare: Assertion `!(bs->open_flags & BDRV_O_INACTIVE)' failed.

This can be prevented by using the 'write-blocking' copy mode (also
called active mode) for the mirror. However, with active mode, the
guest write speed is limited by the synchronous writes to the mirror
target. For this reason, a way to start out in the faster 'background'
mode and later switch to active mode was introduced in QEMU 8.2.

The switch is done once the mirror job for all drives is ready to be
completed to reduce the time spent where guest IO is limited.

The loop waiting for actively-synced to become true is not an endless
loop: Once the remaining dirty parts have been mirrored by the
background iteration, the actively-synced flag will be set. Because
the 'block-job-change' QMP command already succeeded, new writes will
be done synchronously to the target and thus not lead to new dirty
parts. If the job fails or vanishes (shouldn't actually happen,
because auto-dismiss is false), the loop will be exited and the error
propagated.

Reported rarely, but steadily over the years:
https://forum.proxmox.com/threads/78954/post-353651
https://forum.proxmox.com/threads/78954/post-380015
https://forum.proxmox.com/threads/100020/post-431660
https://forum.proxmox.com/threads/111831/post-482425
https://forum.proxmox.com/threads/111831/post-499807
https://forum.proxmox.com/threads/137849/

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-07-30 21:19:51 +02:00
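
A sketch of the switch to active mode once the mirror jobs are ready (QEMU >= 8.2); mon_cmd() is the usual QMP helper, the VM ID and job hash are placeholders:

    use PVE::QemuServer::Monitor qw(mon_cmd);

    my $vmid = 100;                         # placeholder
    my $jobs = { 'drive-scsi0' => {} };     # placeholder: job id => state

    for my $job_id (sort keys %$jobs) {
        mon_cmd(
            $vmid,
            'block-job-change',
            id => $job_id,
            type => 'mirror',
            'copy-mode' => 'write-blocking',
        );
    }
    # afterwards, poll 'query-block-jobs' until 'actively-synced' is true for
    # all jobs before completing or cancelling the mirror
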
Fiona Ebner
7b4fac1275 drive mirror: prevent wrongly logging success when completion fails differently
Currently, when completing a drive mirror job, only errors matching
"cannot be completed" will be handled. Other errors are ignored and
a wrong message that the job was completed successfully will be
printed to the log. An instance of this popped up in the community
forum [0].

The QMP command used for completing the job is either
'block-job-complete' or 'block-job-cancel'. The former causes the VM
to switch to the target drive, the latter doesn't, e.g. migration uses
the latter to not switch the source instance over to the target drive.
The 'block-job-cancel' command doesn't even have the same "cannot be
completed" message, but returns immediately.

The timeout for both 'block-job-cancel' and 'block-job-complete' is
set to 10 minutes in the QMPClient module, which should be enough.

[0]: https://forum.proxmox.com/threads/151518/

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-07-30 21:15:15 +02:00
Fiona Ebner
0b50d3d29f resume: bump timeout for query-status
As reported in the community forum [0], after migration, the VM might
not immediately be able to respond to QMP commands, which means the VM
could fail to resume and stay in paused state on the target.

The reason is that activating the block drives in QEMU can take a bit
of time. For example, it might be necessary to invalidate the caches
(where for raw devices a flush might be needed) and the request
alignment and size of the block device need to be queried.

In [0], an external Ceph cluster with krbd is used, and the initial
read to the block device after migration, for probing the request
alignment, takes a bit over 10 seconds[1]. Use 60 seconds as the new
timeout to be on the safe side for the future.

All callers are inside workers or via the 'qm' CLI command, so bumping
beyond 30 seconds is fine.

[0]: https://forum.proxmox.com/threads/149610/

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
2024-07-29 19:15:29 +02:00
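
A sketch of a per-command timeout bump in the QMP client; the surrounding structure and the fallback value are assumptions, only the 60 seconds for 'query-status' reflects the change described above:

    if (!defined($timeout)) {
        if ($cmd->{execute} eq 'query-status') {
            # activating block drives after migration can take well over 10 seconds
            $timeout = 60;
        } else {
            $timeout = 5; # assumed generic default
        }
    }
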