This reverts commit fca0ba5d77, quoting Fiona verbatim:
> Regarding the patch "schema: add fleecing-images config property",
> Fabian off-list suggested using a config section "special:fleecing"
> instead of a property, so that it is truly internal-only. If we go for
> that, the commit should be reverted. Which approach do you prefer?
-- https://lore.proxmox.com/pve-devel/5126c251-64fd-44fe-b1a6-fda9074eb9a1@proxmox.com/
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The function checks for resources that cannot be migrated, snapshotted,
or suspended.
To run this function while the snapshot lock is active, the
pve-guest-common patch 'AbstractConfig: add abstract method to check for
resources preventing a snapshot.' is required.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
This patch enables AMD SEV (Secure Encrypted Virtualization) support
in QEMU.
VM-Config-Examples:
amd_sev: type=std,no-debug=1,no-key-sharing=1
amd_sev: es,no-debug=1,kernel-hashes=1
kernel-hashes, reduced-phys-bits & cbitpos correspond to the variables
with the same name in QEMU.
kernel-hashes=1 adds kernel hashes to enable measured Linux kernel
launch; it is off by default for backward compatibility.
reduced-phys-bits and cbitpos are system specific and are read out by
the query-machine-capabilities C program and saved to the
/run/qemu-server/host-hw-capabilities.json file. This file is parsed
and then used by qemu-server to correctly start an AMD SEV VM.
type=std stands for standard SEV, to differentiate it from sev-es (es)
or sev-snp (snp) once support is upstream.
QEMU's sev-guest policy gets calculated with the parameters no-debug
& no-key-sharing. These parameters correspond to policy-bits 0 & 1.
If the type is 'es', then policy bit 2 is set to 1 to activate SEV-ES.
Policy bit 3 (nosend) is always set to 1, because migration features
for SEV are not upstream yet and are attackable.
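As a rough sketch of the computation (config access is illustrative):

    my $policy = 0;
    $policy |= 1 << 0 if $sev->{'no-debug'};        # bit 0: NODBG
    $policy |= 1 << 1 if $sev->{'no-key-sharing'};  # bit 1: NOKS
    $policy |= 1 << 2 if $sev->{type} eq 'es';      # bit 2: SEV-ES
    $policy |= 1 << 3;                              # bit 3: NOSEND, always set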
SEV-ES is highly experimental since it could not be tested.
see the accompanying doc patch
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Add a systemd service that runs the query-machine-capabilities binary
at boot time to ensure that the machine capabilities are stored in the
host-hw-capabilities.json file.
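A minimal sketch of such a unit (unit name, binary path and ordering
are illustrative):

    [Unit]
    Description=Query host hardware capabilities
    Before=pve-guests.service

    [Service]
    Type=oneshot
    ExecStart=/usr/libexec/qemu-server/query-machine-capabilities

    [Install]
    WantedBy=multi-user.target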
Signed-off-by: Markus Frank <m.frank@proxmox.com>
This is to override the target storage for the optional disk
extraction used by 'import-from'. This way, if the target storage does
not support the content type 'images', one can specify an alternative
one.
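For illustration only; the parameter name and volume ID syntax below
are hypothetical:

    qm create 120 \
        --scsi0 local-lvm:0,import-from=local:import/appliance.ova/disk-0.vmdk \
        --import-working-storage local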
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when 'import-from' contains a disk image that needs extraction
(currently only from an 'ova' archive), do that in 'create_disks'
and overwrite the '$source' volid.
Collect the names into a 'delete_sources' list, which we use later to
clean them up again (either when we're finished with importing or in
the error case).
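Sketched (helper names are illustrative):

    # extract into a temporary volume and remember it for cleanup
    if (source_needs_extraction($source)) {  # e.g. a disk inside an 'ova'
        my $extracted = extract_from_archive($source, $extraction_storage);
        push @$delete_sources, $extracted;
        $source = $extracted;
    }
    # ... after importing, or in the error path:
    cleanup_extracted_images($delete_sources);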
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Creating non-raw disk images with arbitrary content is only possible
with raw access to the storage, but checking for references to
external files doesn't hurt in the case of non-PVE-managed volumes.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
[ DC: removed problematic checks for pve-managed volumes ]
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This will automatically convert imported volume disks and newly
allocated VM volume disks (i.e. no efidisks, tpmstate disks, cloudinit
images, etc.) to a base volume, if the VM is a template.
Previously, this required a user to manually convert the
imported/allocated disk with `qm template --disk <disk>`.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Automatically converts any imported volume disk to a base volume image
if the VM is a template and the volume was imported using the
"target-disk" option; "unused" disks do not need to be converted, as
they won't be cloned by either linked or full clones.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Implements the "target-disk" option for the importdisk command, which
allows a disk to be imported and directly used instead of marking it as
an unused disk (e.g. unused0), which is the default behavior.
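Example invocation (illustrative):

    # attach the imported disk directly as scsi1 instead of adding unused0:
    qm importdisk 100 /path/to/disk.qcow2 local-lvm --target-disk scsi1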
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
[ TL: squash in style-nit with parameter wrapping multiple lines ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to be used internally to record volume IDs of fleecing images
allocated during backup.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In preparation for the upcoming 'fleecing-images' key. To avoid mixing
options with - and options with _, which is not very user-friendly,
it would be nice to add aliases for existing options with _. And
long-term, backup restore handlers could switch to the modern keys
with -.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This can happen after a hard failure, e.g. if the vzdump task was
killed. The next backup (after unlocking the VM) would then fail with
> ERROR: VM 125 qmp command 'backup' failed - previous backup not finished
During the failure path of that attempt, 'backup-cancel' is executed
and the subsequent attempt would then work again. Do it up-front with
a warning instead of relying on this behavior.
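Roughly (a sketch; the check helper is hypothetical here):

    if (backup_job_running($vmid)) {
        log_warn("left-over backup job found - canceling it first");
        mon_cmd($vmid, 'backup-cancel');
    }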
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In preparation to use it to conditionally issue a QMP 'backup-cancel'
should a previous backup still be running.
While at it, avoid using the compat-only check_running() helper.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This moves all error output to stderr while at it and fixes some error
messages that referenced wrong paths.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
do not propagate that absolute mess of mixing tabs and spaces to new
programs that ain't Perl and thus don't need to suffer.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Implement a C program that extracts AMD SEV hardware information such
as reduced-phys-bits and cbitpos from CPUID, checks whether SEV,
SEV-ES & SEV-SNP are enabled, and outputs these details as JSON to
/run/qemu-server/host-hw-capabilities.json.
This program can also be used to read and save other hardware
information.
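The resulting file might look something like this (keys and values are
illustrative):

    {
        "amd-sev": {
            "cbitpos": 51,
            "reduced-phys-bits": 1,
            "sev-support": true,
            "sev-support-es": true,
            "sev-support-snp": false
        }
    }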
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Co-authored-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
When the VM is only started for backup, it will be stopped again at
that point. While the detach helpers do not warn about errors
currently, that might change in the future. This is also in
preparation for other cleanup QMP helpers that are more verbose about
failure.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Since pve-common commit eff5957 ("sysfstools: file_write: properly
catch errors"), this check now fails when the reset does not work. It
turns out
that resetting the device is not always necessary, and we previously
ignored most errors when trying to do so.
To restore that functionality, downgrade this `die` to a warning.
If the device really needs a reset to work, it will either fail later
during startup, or not work correctly in the guest, but that behavior
existed before and is AFAIK not really detectable from our side.
Also improve the warning message a bit to not scare users and explain
that we're continuing.
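That is, roughly (a sketch; the exact call and message differ in the
code):

    eval { PVE::SysFSTools::pci_dev_reset($info) };
    log_warn("failed to reset PCI device '$pciid', but trying to continue"
        . " as not all devices need a reset: $@") if $@;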
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ TL: fine-tune error message a bit and avoid parenthesis ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds a syslog entry to log the process ID that the QEMU VM process was
given at start. This is helpful debugging information if the PID shows
up in other places, like a kernel stack trace, while the VM has been
running, but cannot be retrieved anymore (e.g. the pidfile has been
deleted or only the syslog is available).
The syslog has been put in the `PVE::QemuServer::vm_start_nolock`
subroutine to make sure that the PID is logged not only when the VM has
been started by the API endpoint `vm_start`, but also when the VM is
started by a remote migration.
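In essence (sketch):

    syslog('info', "VM $vmid started with PID $pid.");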
Suggested-by: Hannes Dürr <h.duerr@proxmox.com>
Suggested-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Daniel Herzig <d.herzig@proxmox.com>
To ensure the new behavior of our sysfs related helper is available
for the changes in commit a28e6fe ("pci: make variable name slightly
easier to read")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since kernel 6.8, NVIDIA's vGPU driver does not use the generic mdev
interface anymore, since it relied on a feature there which is not
available anymore. IIUC, the kernel [0] recommends that drivers
implement their own device-specific features, since putting everything
into the generic interface does not make sense.
They now have an 'nvidia' folder in the device sysfs path, which
contains the files `creatable_vgpu_types`/`current_vgpu_type` to
control the virtual functions model, and then the whole virtual function
has to be passed through (although without resetting and changing to the
vfio-pci driver).
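Selecting a model then roughly amounts to (a sketch; paths and
variables are illustrative):

    my $nvidia = "/sys/bus/pci/devices/$pciid/nvidia";
    # pick one of the IDs offered in "$nvidia/creatable_vgpu_types", then:
    PVE::Tools::file_set_contents("$nvidia/current_vgpu_type", $typeid);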
This patch implements changes so that, from a config perspective, it
is still a mediated device, and we map the functionality iff the
device has no mediated devices but exposes NVIDIA's new sysfs API and
the model name is 'nvidia-<..>'.
It behaves a bit differently than mdevs and normal PCI passthrough, as
we have to choose the correct device immediately since the type is
bound to the PCI ID, but we must not bind the device to vfio-pci, as
the NVIDIA driver implements this functionality itself.
When cleaning up, we iterate over all reserved devices (since for a
mapping we can't know at this point which one was chosen besides
looking at the reservations) and reset the vGPU model to '0', so the
reservation is freed up on NVIDIA's side. (We also do that in a loop,
since it's not always immediately ready after QEMU exits.)
A general problem (which also existed previously) is that a showcmd
(for a guest that is not running) reserves the PCI IDs, which might
block the start of a different real VM. This is now a bit more
problematic, as we (temporarily) set the vGPU type as well.
0: https://docs.kernel.org/driver-api/vfio-pci-device-specific-driver-acceptance.html
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Christoph Heiss <c.heiss@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add an optional parameter to the helper that removes PCI reservations,
so that we can partially release IDs again. This will be necessary for
NVIDIA's new sysfs API.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Christoph Heiss <c.heiss@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The only way this could happen is when we're being called from
'qm showcmd', and there we don't want to reserve or create anything.
In case the VM was not running, we actually reserve the devices, so we
want to call 'cleanup_pci_devices' after to remove those again. This
minimizes the timespan where those devices are not available for real vm
starts.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Christoph Heiss <c.heiss@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
clarify a few units and avoid "since the process start" as it's not
really clear which process is meant and "since the guest was started"
is telling enough too, and as we do a full stop+start cycle on CT
reboot it's true for that too.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
I omitted the 'disk' property, since it's currently non-functional: we
don't query the disk usage here (it is complicated to calculate,
depends on the storage, or requires guest agent support, which is also
non-trivial).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ TL: avoid having netin twice, change to netout once ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This includes docs, and strings printed to stderr or stdout.
These were caught with:
typos --exclude test --exclude changelog
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
As reported in the community forum [0], when a remote migration
request comes in via an API client, the -T flag for Perl is set, so an
insecure dependency in a call like unlink() in forward_unix_socket()
will fail with:
> failed to write forwarding command - Insecure dependency in unlink while running with -T switch
To fix it, untaint the problematic socket addresses coming from the
remote side. Require that all sockets are below '/run/qemu-server/'
and end with '.migrate' with the main socket being matched more
strictly. This allows extensions in the future while still being quite
strict.
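The untainting itself is a plain pattern match with capture, roughly
(the actual pattern for the main socket is stricter):

    my ($untainted) = $socket_addr =~ m!^(/run/qemu-server/.*\.migrate)$!
        or die "unexpected socket address '$socket_addr'\n";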
[0]: https://forum.proxmox.com/threads/123048/post-691958
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The version of the running QEMU binary is not related to the machine
version and so it's a bit confusing to have the helper in the
'Machine' module. It cannot live in the 'Helpers' module, because that
would lead to a cyclic inclusion Helpers <-> Monitor. Thus,
'QMPHelpers' is chosen as the new home.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
There is a possibility that the drive-mirror job is not yet done when
the migration wants to inactivate the source's blockdrives:
> bdrv_co_write_req_prepare: Assertion `!(bs->open_flags & BDRV_O_INACTIVE)' failed.
This can be prevented by using the 'write-blocking' copy mode (also
called active mode) for the mirror. However, with active mode, the
guest write speed is limited by the synchronous writes to the mirror
target. For this reason, a way to start out in the faster 'background'
mode and later switch to active mode was introduced in QEMU 8.2.
The switch is done once the mirror job for all drives is ready to be
completed to reduce the time spent where guest IO is limited.
The loop waiting for actively-synced to become true is not an endless
loop: Once the remaining dirty parts have been mirrored by the
background iteration, the actively-synced flag will be set. Because
the 'block-job-change' QMP command already succeeded, new writes will
be done synchronously to the target and thus not lead to new dirty
parts. If the job fails or vanishes (shouldn't actually happen,
because auto-dismiss is false), the loop will be exited and the error
propagated.
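Sketched with the QMP helpers (the wait-loop internals are
illustrative):

    mon_cmd($vmid, 'block-job-change', id => $job_id, type => 'mirror',
        'copy-mode' => 'write-blocking');
    while (1) {
        my $jobs = mon_cmd($vmid, 'query-block-jobs');
        my ($job) = grep { $_->{device} eq $job_id } @$jobs;
        die "mirror job '$job_id' vanished\n" if !$job;
        last if $job->{'actively-synced'};
        sleep 1;
    }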
Reported rarely, but steadily over the years:
https://forum.proxmox.com/threads/78954/post-353651
https://forum.proxmox.com/threads/78954/post-380015
https://forum.proxmox.com/threads/100020/post-431660
https://forum.proxmox.com/threads/111831/post-482425
https://forum.proxmox.com/threads/111831/post-499807
https://forum.proxmox.com/threads/137849/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Currently, when completing a drive mirror job, only errors matching
"cannot be completed" will be handled. Other errors are ignored and
a wrong message that the job was completed successfully will be
printed to the log. An instance of this popped up in the community
forum [0].
The QMP command used for completing the job is either
'block-job-complete' or 'block-job-cancel'. The former causes the VM
to switch to the target drive, the latter doesn't, e.g. migration uses
the latter to not switch the source instance over to the target drive.
The 'block-job-cancel' command doesn't even have the same "cannot be
completed" message, but returns immediately.
The timeout for both 'block-job-cancel' and 'block-job-complete' is
set to 10 minutes in the QMPClient module, which should be enough.
[0]: https://forum.proxmox.com/threads/151518/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Cloudbase-Init, a cloud-init reimplementation for Windows, supports only
a subset of the configuration options of cloud-init. Some features
depend on support by the Metadata Service (ConfigDrive2 here) and have
further limitations [0].
To support a basic setup the following changes were made:
- password is saved as plaintext for any Windows guests (ostype)
- DNS servers are added to each of the interfaces
- SSH public keys are passed via metadata
Network and metadata generation for Cloudbase-Init is separate from the
default ConfigDrive2 one so as to not interfere with any other OSes that
depend on the current ConfigDrive2 implementation.
DNS search domains were removed because Cloudbase-Init's ENI parser
doesn't handle them at all.
The password set via `cipassword` is used for the admin user
configured in the cloudbase-init.conf inside the guest, while the
`ciuser` parameter is ignored; the admin user has to be set in the
cloudbase-init.conf file instead. Specifying a different user does not
work.
For the password to work, the `ostype` needs to be set to a Windows
variant before `cipassword` is set. Otherwise, the password will be
encrypted, and the encrypted password used as the plaintext password
in the guest.
The `citype` needs to be `configdrive2`, which is the default for
Windows guests, for the generated configs to be compatible with
Cloudbase-Init.
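For example (illustrative ordering):

    qm set 100 --ostype win11 --citype configdrive2
    # only set the password after the ostype is a Windows variant:
    qm set 100 --cipassword 'S3cr3t!'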
[0] https://cloudbase-init.readthedocs.io/en/latest/index.html
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
URI is used in multiple files:
PVE/API2/Qemu.pm
PVE/CLI/qm.pm
PVE/QemuServer.pm
PVE/QemuServer/Cloudinit.pm
Dependencies of qemu-server already have it as a dependency, but
there's no explicit dependency in qemu-server yet.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
As reported in the community forum [0], after migration, the VM might
not immediately be able to respond to QMP commands, which means the VM
could fail to resume and stay in paused state on the target.
The reason is that activating the block drives in QEMU can take a bit
of time. For example, it might be necessary to invalidate the caches
(where for raw devices a flush might be needed) and the request
alignment and size of the block device needs to be queried.
In [0], an external Ceph cluster with krbd is used, and the initial
read to the block device after migration, for probing the request
alignment, takes a bit over 10 seconds[1]. Use 60 seconds as the new
timeout to be on the safe side for the future.
All callers are inside workers or via the 'qm' CLI command, so bumping
beyond 30 seconds is fine.
[0]: https://forum.proxmox.com/threads/149610/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
When detaching and attaching the network device on update, the
link_down setting is not considered and the network device always gets
attached to the guest - even if link_down is set.
Fixes: 3f14f206 ("nic online bridge/vlan change: link disconnect/reconnect")
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>