Avoid using the extension-based qemu_img_format() helper. Failure is
not critical, because this is just a hint for the format the restored
target image should be allocated with, so fall back to 'raw'.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
For a Proxmox VE managed volume, prefer the format from the storage
layer rather than the 'format' option set on the drive. Fail if there
is a mismatch between the detected and configured format, because this
is not expected for managed volumes. Having this early hard failure
protects against undesirable issues with live migration and reboot
where the format of a drive would suddenly be different.
For a volume not managed by Proxmox VE, use the same logic as before,
i.e. use the 'format' option for the drive with 'raw' as a fallback.
Only root can configure such devices.
Both changes also apply to the case where the 'cdrom' flag is set, to
avoid autodetection by QEMU.
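A minimal Perl sketch of the logic described above (the function name
is made up for illustration and the helper usage is simplified; the
actual qemu-server code is more involved):

    use PVE::Storage;

    sub drive_format {
        my ($storecfg, $drive) = @_;

        my $volid = $drive->{file};
        # returns undef for the storage ID if not a managed volume
        my ($storeid) = PVE::Storage::parse_volume_id($volid, 1);

        if ($storeid) { # managed by Proxmox VE - ask the storage layer
            my $format = (PVE::Storage::parse_volname($storecfg, $volid))[6];
            die "drive format '$drive->{format}' does not match detected format '$format'\n"
                if defined($drive->{format}) && $drive->{format} ne $format;
            return $format;
        }

        # not managed by Proxmox VE - only root can configure such devices
        return $drive->{format} // 'raw';
    }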
Reported-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[FG: typo fix in comment]
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The minimum supported version for Proxmox VE 8 nodes is QEMU 8.0 and
the beginning of the config_to_command() function already has a check
for at least version 5.0. No other caller of get_vm_machine() passes
in the parameter, so it can be removed from there as well.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
The minimum supported version for a Proxmox VE 8 node is QEMU 8.0.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
The minimum supported version for a Proxmox VE 8 node is QEMU 8.0.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
The parameter was added by ac0077cc ("Use 'QEMU version' ->
'+pve-version' mapping for machine types") but it doesn't seem like
there ever was a caller. In particular, none of the current callers
pass in a value and it's not clear when one would require passing a
different version than the KVM binary version.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
This reverts commit fca0ba5d77, quoting
Fiona verbatim:
> Regarding the patch "schema: add fleecing-images config property",
> Fabian off-list suggested using a config section "special:fleecing"
> instead of a property, so that it is truly internal-only. If we go for
> that, the commit should be reverted. Which approach do you prefer?
-- https://lore.proxmox.com/pve-devel/5126c251-64fd-44fe-b1a6-fda9074eb9a1@proxmox.com/
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The function checks for resources that cannot be migrated,
snapshotted, or suspended.
To run this function while the snapshot lock is active, the
pve-guest-common patch 'AbstractConfig: add abstract method to check for
resources preventing a snapshot.' is required.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
This patch enables AMD SEV (Secure Encrypted Virtualization) support
in QEMU.
VM-Config-Examples:
amd_sev: type=std,no-debug=1,no-key-sharing=1
amd_sev: es,no-debug=1,kernel-hashes=1
kernel-hashes, reduced-phys-bits & cbitpos correspond to the variables
with the same name in QEMU.
kernel-hashes=1 adds kernel hashes to enable measured Linux kernel
launch, since it is off by default for backward compatibility.
reduced-phys-bits and cbitpos are system-specific and are read out by
the query-machine-capabilities C program and saved to the
/run/qemu-server/host-hw-capabilities.json file. This file is parsed
and then used by qemu-server to correctly start an AMD SEV VM.
type=std stands for standard SEV to differentiate it from sev-es (es)
or sev-snp (snp) when support is upstream.
QEMU's sev-guest policy gets calculated from the parameters no-debug
& no-key-sharing. These parameters correspond to policy bits 0 & 1.
If the type is 'es', then policy bit 2 gets set to 1 to activate
SEV-ES. Policy bit 3 (nosend) is always set to 1, because migration
features for SEV are not upstream yet and are attackable.
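The resulting policy calculation can be sketched as follows (the
helper name and exact parameter handling are assumptions for
illustration; the bit values follow the description above):

    # policy bits: bit 0 - no-debug, bit 1 - no-key-sharing,
    # bit 2 - SEV-ES, bit 3 - nosend (always set)
    sub get_sev_policy {
        my ($sev) = @_;

        my $policy = 0b1000; # nosend - migration is attackable
        $policy |= 0b0001 if $sev->{'no-debug'};
        $policy |= 0b0010 if $sev->{'no-key-sharing'};
        $policy |= 0b0100 if $sev->{type} eq 'es';

        return $policy;
    }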
SEV-ES is highly experimental since it could not be tested.
see the corresponding doc patch
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
when 'import-from' contains a disk image that needs extraction
(currently only from an 'ova' archive), do that in 'create_disks'
and overwrite the '$source' volid.
Collect the names into a 'delete_sources' list, which we later use to
clean them up again (either when the import is finished or in an
error case).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
to be used internally to record volume IDs of fleecing images
allocated during backup.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In preparation for the upcoming 'fleecing-images' key. To avoid
mixing options with '-' and options with '_', which is not very
user-friendly, it would be nice to add aliases for existing options
with '_'. And long-term, backup restore handlers could switch to the
modern keys with '-'.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Adds a syslog entry to log the process ID that has been given to the
QEMU VM process at start. This is helpful debugging information if
the PID shows up in other places, like a kernel stack trace, while
the VM has been running, but cannot be retrieved anymore (e.g. the
pidfile has been deleted or only the syslog is available).
The syslog has been put in the `PVE::QemuServer::vm_start_nolock`
subroutine to make sure that the PID is logged not only when the VM has
been started by the API endpoint `vm_start`, but also when the VM is
started by a remote migration.
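Sketch of the added log line (how $pid is obtained inside
vm_start_nolock is simplified here):

    use PVE::SafeSyslog;

    # log the PID right after the QEMU process has been started
    syslog('info', "VM $vmid started with PID $pid.");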
Suggested-by: Hannes Dürr <h.duerr@proxmox.com>
Suggested-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Daniel Herzig <d.herzig@proxmox.com>
Since kernel 6.8, NVIDIA's vGPU driver does not use the generic mdev
interface anymore, since it relied on a feature there which is no
longer available. IIUC, the kernel documentation [0] recommends that
drivers implement their own device-specific features, since putting
everything into the generic interface does not make sense.
They now have an 'nvidia' folder in the device's sysfs path, which
contains the files `creatable_vgpu_types`/`current_vgpu_type` to
control the virtual function's model; the whole virtual function then
has to be passed through (although without resetting it or changing
to the vfio-pci driver).
This patch implements changes so that, from a config perspective, it
still is a mediated device, and we map the functionality iff the
device has no mediated devices but NVIDIA's new sysfs API, and the
model name is 'nvidia-<..>'.
It behaves a bit differently from mdevs and normal PCI passthrough,
as we have to choose the correct device immediately since the model
is bound to the PCI ID, but we must not bind the device to vfio-pci,
as the NVIDIA driver implements this functionality itself.
When cleaning up, we iterate over all reserved devices (since for a
mapping we can't know at this point which one was chosen besides
looking at the reservations) and reset the vGPU model to '0', which
frees up the reservation on NVIDIA's side. (We also do that in a
loop, since it's not always immediately ready after QEMU closes.)
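The sysfs interaction described above could look roughly like this
(paths per the description; error handling and the selection logic
are omitted):

    use PVE::Tools qw(file_get_contents file_set_contents);

    my $nvidia_dir = "/sys/bus/pci/devices/$pciid/nvidia";

    # check which vGPU models this virtual function can still provide
    my $creatable = file_get_contents("$nvidia_dir/creatable_vgpu_types");

    # select the model, which binds it to this PCI ID
    file_set_contents("$nvidia_dir/current_vgpu_type", $type_id);

    # on cleanup, reset to '0' to free the reservation on NVIDIA's side
    file_set_contents("$nvidia_dir/current_vgpu_type", '0');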
A general problem (which was previously also the case) is that a
showcmd (for a guest that is not running) reserves the PCI IDs, which
might block the start of a different real VM. This is now a bit more
problematic, as we (temporarily) set the vGPU type then.
0: https://docs.kernel.org/driver-api/vfio-pci-device-specific-driver-acceptance.html
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Christoph Heiss <c.heiss@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since the only way this could happen is when we're being called from
'qm showcmd', and there we don't want to reserve or create anything.
In case the VM was not running, we actually reserve the devices, so
we want to call 'cleanup_pci_devices' afterwards to remove those
again. This minimizes the timespan during which those devices are not
available for real VM starts.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Christoph Heiss <c.heiss@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
clarify a few units and avoid "since the process start", as it's not
really clear which process is meant and "since the guest was started"
is telling enough too; and as we do a full stop+start cycle on CT
reboot, it's true for that too.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
I omitted the 'disk' property, since it's currently non-functional:
we don't query the disk usage here (it's complicated to calculate
depending on the storage, or requires guest agent support, which is
also non-trivial).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ TL: avoid having netin twice, change to netout once ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This includes docs, and strings printed to stderr or stdout.
These were caught with:
typos --exclude test --exclude changelog
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
There is a possibility that the drive-mirror job is not yet done when
the migration wants to inactivate the source's block drives:
> bdrv_co_write_req_prepare: Assertion `!(bs->open_flags & BDRV_O_INACTIVE)' failed.
This can be prevented by using the 'write-blocking' copy mode (also
called active mode) for the mirror. However, with active mode, the
guest write speed is limited by the synchronous writes to the mirror
target. For this reason, a way to start out in the faster 'background'
mode and later switch to active mode was introduced in QEMU 8.2.
The switch is done once the mirror job for all drives is ready to be
completed to reduce the time spent where guest IO is limited.
The loop waiting for actively-synced to become true is not an endless
loop: Once the remaining dirty parts have been mirrored by the
background iteration, the actively-synced flag will be set. Because
the 'block-job-change' QMP command already succeeded, new writes will
be done synchronously to the target and thus not lead to new dirty
parts. If the job fails or vanishes (shouldn't actually happen,
because auto-dismiss is false), the loop will be exited and the error
propagated.
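A sketch of the switch and the subsequent wait, using mon_cmd (job
lookup and error handling are simplified here):

    use PVE::QemuServer::Monitor qw(mon_cmd);

    # switch the ready mirror job to synchronous writes (QEMU 8.2+)
    mon_cmd($vmid, 'block-job-change',
        id => $job_id, type => 'mirror', 'copy-mode' => 'write-blocking');

    # wait until the remaining dirty parts have been mirrored
    while (1) {
        my $jobs = mon_cmd($vmid, 'query-block-jobs');
        my ($job) = grep { $_->{device} eq $job_id } $jobs->@*;
        die "lost track of block job '$job_id'\n" if !defined($job);
        last if $job->{'actively-synced'};
        sleep 1;
    }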
Reported rarely, but steadily over the years:
https://forum.proxmox.com/threads/78954/post-353651
https://forum.proxmox.com/threads/78954/post-380015
https://forum.proxmox.com/threads/100020/post-431660
https://forum.proxmox.com/threads/111831/post-482425
https://forum.proxmox.com/threads/111831/post-499807
https://forum.proxmox.com/threads/137849/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Currently, when completing a drive mirror job, only errors matching
"cannot be completed" will be handled. Other errors are ignored and a
wrong message stating that the job completed successfully will be
printed to the log. An instance of this popped up in the community
forum [0].
The QMP command used for completing the job is either
'block-job-complete' or 'block-job-cancel'. The former causes the VM
to switch to the target drive; the latter doesn't, e.g. migration
uses the latter so as not to switch the source instance over to the
target drive. The 'block-job-cancel' command doesn't even have the
same "cannot be completed" message, but returns immediately.
The timeout for both 'block-job-cancel' and 'block-job-complete' is
set to 10 minutes in the QMPClient module, which should be enough.
[0]: https://forum.proxmox.com/threads/151518/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
As reported in the community forum [0], after migration, the VM might
not immediately be able to respond to QMP commands, which means the VM
could fail to resume and stay in paused state on the target.
The reason is that activating the block drives in QEMU can take a bit
of time. For example, it might be necessary to invalidate the caches
(where for raw devices a flush might be needed) and the request
alignment and size of the block device needs to be queried.
In [0], an external Ceph cluster with krbd is used, and the initial
read to the block device after migration, for probing the request
alignment, takes a bit over 10 seconds [1]. Use 60 seconds as the new
timeout to be on the safe side for the future.
All callers are inside workers or via the 'qm' CLI command, so bumping
beyond 30 seconds is fine.
[0]: https://forum.proxmox.com/threads/149610/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
When detaching and attaching the network device on update, the
link_down setting is not considered and the network device always gets
attached to the guest - even if link_down is set.
Fixes: 3f14f206 ("nic online bridge/vlan change: link disconnect/reconnect")
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Systemd reapplies its known values on reload, so we cannot simply call
into PVE::CGroup. Call systemd's SetUnitProperties method via dbus
instead.
The hotplug and startup code also calculated different values, as one
operated within systemd's value framework (documented in
systemd.resource-control(5)) and one worked with cgroup values
(distinguishing between cgroup v1 and v2 manually).
This is now unified by overriding `change_cpu_quota()` and
`change_cpu_shares()` via `PVE::QemuServer::CGroup` which now takes
systemd-based values and sends those directly via dbus.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Updating the NIC of a VM led to duplicate IPAM entries for the same
network device when the following conditions were met:
* the VM is turned off
* the NIC is on a bridge that uses automatic DHCP
* the bridge is left unchanged
This is due to the fact that add_next_free_cidr() always ran when
applying pending network changes.
Now we only add a new IPAM entry if either:
* the value of the bridge or MAC address changed
* the network device has been newly added
This way, no duplicate IPAM entries should get created.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
tpmstate0 is already included in `get_vm_volumes`, and our only
storage plugin that has unmap_volume implemented is the RBDPlugin,
where we call unmap in `deactivate_volume`. So it's already unmapped
by the `deactivate_volumes` calls above.
For third-party storage plugins, it's natural to expect that
deactivate_volume() would also remove a mapping for the volume just
like RBDPlugin does.
While there is an explicit map_volume() call in start_swtpm(), a
third-party plugin might expect an explicit unmap_volume() call too.
However, the order of calls right now is
1. activate_volume()
2. map_volume()
3. deactivate_volume()
4. unmap_volume()
Which seems like it could cause problems already for third-party
plugins relying on an explicit unmap call.
All that said, it is unlikely that a third-party plugin breaks. If it
really happens, it can still be discussed/adapted to the actual
needs.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Acked-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
While archives with unknown or undetermined subtype could be shown,
this is only for autocompletion, so users can still specify those
manually if required.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Some callers like the move disk API endpoint do not pass an explicit
completion argument. This is not an issue in general, because
qemu_drive_mirror_monitor() defaults to 'complete'. However, there
was a string comparison for the cloudinit case that can trigger a
warning about the value being uninitialized.
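The fix amounts to a defensive default before any comparison
(sketch):

    # avoid 'use of uninitialized value' when callers don't pass it
    $completion //= 'complete';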
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
this was a stray search-and-replace for job -> job_id that should
have only changed variable names.
Fixes: 0ea24bf ("mirror monitor: refactoring/code cleanup")
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
upon failure. Otherwise, the job would disappear too quickly from the
job list and could no longer be queried for the actual error.
Relevant part of the error in an actual example:
Before:
> VM 106 qmp command 'blockdev-del' failed - Node 'drive-scsi0-restore' is busy: node is used as backing hd of '#block655'
After:
> block job (stream) error: restore-scsi0: No space left on device (io-status: ok)
Note that previously, it was not even detected that the stream job
failed and the error message is because the subsequent cleanup failed.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
upon failure. Otherwise, the job would disappear too quickly from the
job list and could no longer be queried for the actual error.
Relevant part of the error in an actual example:
Before:
> VM 112 qmp command 'blockdev-del' failed - Node 'drive-scsi0-pbs' is busy: node is used as backing hd of '#block046'
After:
> block job (stream) error: restore-drive-scsi0: No space left on device (io-status: ok)
Note that previously, it was not even detected that the stream job
failed and the error message is because the subsequent cleanup failed.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
upon failure. Otherwise, the job would disappear too quickly from the
job list and could no longer be queried for the actual error.
Relevant part of the error in actual examples (note that the fact that
it's a mirror job is already mentioned earlier in the full error, with
"block job (mirror) error:"):
Before:
> 'mirror' has been cancelled
> 'mirror' has been cancelled
After:
> Source and target image have different sizes (io-status: ok)
> No space left on device (io-status: ok)
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
When auto-dismiss=true (the default), a failed job can disappear very
quickly from the job list and there might not be any chance to see the
error in the result of 'query-block-jobs'. For jobs with $completion
being 'auto', like 'block-stream', it couldn't even be detected that
the job failed.
Jobs with auto-dismiss=false on the other hand, will wait in
'concluded' state until manually dismissed. For those, it will be
possible to query the error if the job failed.
There doesn't seem to be a way to have only failed jobs stay around,
e.g. something like auto-dismiss=on-success.
Planned to be used for the 'drive-mirror' and 'block-stream' jobs
initially.
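Sketch of the intended usage (parameter plumbing is simplified; the
error read-out happens while the job waits in 'concluded' state):

    use JSON;
    use PVE::QemuServer::Monitor qw(mon_cmd);

    # keep failed jobs around so their error can still be queried
    mon_cmd($vmid, 'drive-mirror', %$params, 'auto-dismiss' => JSON::false);

    # later: read out the error and dismiss the concluded job manually
    for my $job (mon_cmd($vmid, 'query-block-jobs')->@*) {
        next if !defined($job->{status}) || $job->{status} ne 'concluded';
        warn "block job error: $job->{error}\n" if $job->{error};
        mon_cmd($vmid, 'job-dismiss', id => $job->{device});
    }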
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
templates can only be started in the context of a PBS backup, and
there we don't need or want to use most of the config, since they
cannot be started normally anyway.
We minimize the config by copying some specific relevant options (see
the comments for why the options were chosen) and all disk
configurations.
Since we change the QEMU command line for templates, we now have to
adapt the tests involving templates.
Without this, users can get into a situation where the template
cannot be backed up when some resources are not available (such as
CPU cores, KVM, PCI devices, etc.), even if the backup process does
not need them.
This change has some nice side effects, e.g. we don't need to
allocate the full amount of memory anymore for templates that have a
hostpci device configured, the configured bridges don't have to
exist, etc.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
With high IO pressure, 5 seconds might not be enough, even if the
request is small.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The generic "got timeout" message cannot be associated to a certain
code path and also isn't very user-friendly. Use dedicated messages
for each stage and also suggest why the timeout for reading the header
might have happened, i.e. because it was corrupted.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The default timeout for HMP commands is 5 seconds.
While it should be rather fast to attach a new drive to QEMU, a busy
system might take longer, so future-proof and increase to 60 seconds.
On the other hand, detaching a drive needs to complete any pending IO
on it, so use the same 10-minute timeout that's used for
drive-related QMP commands.
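Sketch (assuming hmp_cmd() gained a timeout parameter in this series;
the exact signature is an assumption):

    # attaching should be fast, but allow for a busy system
    PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_add auto \"$drivestr\"", 60);

    # detaching needs to complete pending IO - same 10 minutes as for
    # drive-related QMP commands
    PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_del drive-$deviceid", 10 * 60);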
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>