file_size_info() returns the size of the image based on guessing the
format. When importing via the API, the correct format is already
known, so it's better to pass it in. The root-only 'qm' commands for
disk import and OVF import will still use auto-detection for backwards
compatibility. It might make sense to allow explicitly specifying the
format there too, to get the correct size in all cases.
New callers should detect the size according to the appropriate format
first.
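For illustration, a minimal sketch of such a call (the exact signature
lives in pve-storage and may differ):

    # detect the size according to the known format instead of guessing
    my $size = PVE::Storage::file_size_info($path, $timeout, $format);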
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Keep format auto-detection for backwards compatibility. Only root is
allowed to use such images as source images and the untrusted checks
will be done here.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Use the same logic here as the actual import uses later.
Has the nice side effect of getting rid of the unnecessary manual
dispatch to the plugin too.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
With auto-detection, there might be false positives for raw images
that contain data looking like another image format. These are not
problematic though, because such images will always be treated as raw
images.
Since commit "image convert: use volume format from storage layer",
actual importing will happen using the storage layer format, so use
that here too.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Avoid having file_size_info() do auto-detection via qemu-img.
This also adapts to the new argument order.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Since commit "print drive commandline: improve format detection" such
mismatches will lead to being unable to start the VM, so catch the
issue early.
Suggested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
For a Proxmox VE managed volume, prefer the format from the storage
layer rather than the 'format' option set on the drive. Fail if there
is a mismatch between the detected and configured format, because this
is not expected for managed volumes. Having this early hard failure
protects against undesirable issues with live migration and reboot
where the format of a drive would suddenly be different.
For a volume not managed by Proxmox VE, use the same logic as before,
i.e. use the 'format' option for the drive with 'raw' as a fallback;
only root can configure such devices.
Both also apply to the case where the 'cdrom' flag is set to avoid
autodetection by QEMU.
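A simplified sketch of the resulting logic (names here are
illustrative, not the actual patch):

    my $format;
    if ($is_managed_volume) { # hypothetical flag for a PVE-managed volume
        $format = $storage_format; # format reported by the storage layer
        die "drive format mismatch\n"
            if defined($drive->{format}) && $drive->{format} ne $format;
    } else {
        # not PVE-managed; only root can configure such devices
        $format = $drive->{format} // 'raw';
    }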
Reported-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
FG: typo fix in comment
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
First step towards using the storage layer format instead of the
extension based format from qemu_img_format() as a source of truth
everywhere. Currently, some callers use qemu_img_format() and some
use parse_volname().
For import, special handling is needed, because the format can be a
combined ova+$extracted_format.
There is a fallback for 'raw' format when no format is returned by the
storage layer for backwards compatibility, e.g. ISOs. Formats that are
not part of the $QEMU_FORMAT_RE are not allowed.
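An illustrative sketch of the special import handling (using the
$QEMU_FORMAT_RE mentioned above):

    # import formats can be combined, e.g. 'ova+qcow2'
    $format = $1 if $format =~ m/^ova\+(.+)$/;
    $format //= 'raw'; # backwards compatibility, e.g. for ISOs
    die "unexpected volume format '$format'\n"
        if $format !~ m/^$QEMU_FORMAT_RE$/;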
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The minimum supported version for Proxmox VE 8 nodes is QEMU 8.0 and
the beginning of the config_to_command() function already has a check
for at least version 5.0. No other caller of get_vm_machine() passes
in the parameter, so it can be removed from there as well.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
The minimum supported version for a Proxmox VE 8 node is QEMU 8.0.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
The minimum supported version for a Proxmox VE 8 node is QEMU 8.0.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
The parameter was added by ac0077cc ("Use 'QEMU version' ->
'+pve-version' mapping for machine types") but it doesn't seem like
there ever was a caller. In particular, none of the current callers
pass in a value and it's not clear when one would require passing a
different version than the KVM binary version.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
There have been some reports about `qm start` timeouts on VMs that have a
lot of NICs assigned.
This patch considers the number of NICs when calculating the config-specific
timeout. Since the increase in start time is linearly related to the number
of NICs, a constant timeout increment per NIC was chosen.
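A minimal sketch of the idea (the constant increment here is
hypothetical, the actual value is defined in the patch):

    my $nic_count = scalar(grep { /^net\d+$/ } keys %$conf);
    $timeout += $nic_count * 5; # hypothetical: 5 extra seconds per NIC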
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
as it may be the only cause of the clone incompatibility.
Example:
# qm clone 101 102 --full --snapname foo
Before:
> Full clone feature is not supported for 'local-zfs:base-100-disk-2/vm-101-disk-2' (tpmstate0)
After:
> Full clone feature is not supported for a snapshot of 'local-zfs:base-100-disk-2/vm-101-disk-2' (tpmstate0)
Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
So far, the error message only contained the name of the disk
(tpmstate0, efidisk0, ...), which can also lead to the assumption that a
specific disk type is the problem. Now the volume ID is primarily
listed.
Example:
# qm clone 101 102 --full --snapname foo
Before:
> Full clone feature is not supported for drive 'tpmstate0'
After:
> Full clone feature is not supported for 'local-zfs:base-100-disk-2/vm-101-disk-2' (tpmstate0)
Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
When cloning was repeatedly attempted, the error message indicated a
different unsupported volume each time. The hash is now sorted to always
mention the same volume as long as it has not been fixed.
Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
[FE: replace old-style 'foreach' with 'for' while at it]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This check was added to guard the config format migration to a
dedicated section for cloudinit. The respective package version set
required for that to be understood is guaranteed to be available with
pve-manager 7.2-13 or newer, as that release raised the versioned
dependencies accordingly.
This hedged against a migration from a node with a newer version to
one with an older version; the effect would basically be that the name
argument in a cloudinit section would override the current one, as the
old parser interprets it as belonging to the main section, not the
cloudinit section.
We normally are cautious with removing such guards, and communicate
stricter requirements than we check, to safeguard users with a certain
ignorance or willingness to care for proper and periodic timely
upgrades.
But due to:
- PVE 7 being EOL for a few months now
- PVE 7.2 being EOL for well over a year
- the documented requirement to upgrade to latest PVE 7.4 before an
upgrade to PVE 8
- The relatively harmless effects when this check is voided
we can drop that check more than safely now.
Reported-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The UI and API talk about an 'import working storage', but the error
here still said 'for extraction'. Improve the message by unifying the
wording and adding the storage name to it too.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This reverts commit fca0ba5d77, quoting
Fiona in verbatim:
> Regarding the patch "schema: add fleecing-images config property",
> Fabian off-list suggested using a config section "special:fleecing"
> instead of a property, so that it is truly internal-only. If we go for
> that, the commit should be reverted. Which approach do you prefer?
-- https://lore.proxmox.com/pve-devel/5126c251-64fd-44fe-b1a6-fda9074eb9a1@proxmox.com/
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The function checks for resources that cannot be migrated,
snapshotted, or suspended.
To run this function while the snapshot lock is active, the
pve-guest-common patch 'AbstractConfig: add abstract method to check for
resources preventing a snapshot.' is required.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
This patch is for enabling AMD SEV (Secure Encrypted Virtualization)
support in QEMU.
VM-Config-Examples:
amd_sev: type=std,no-debug=1,no-key-sharing=1
amd_sev: es,no-debug=1,kernel-hashes=1
kernel-hashes, reduced-phys-bits & cbitpos correspond to the variables
with the same name in QEMU.
kernel-hashes=1 adds kernel hashes to enable measured linux kernel
launch since it is per default off for backward compatibility.
reduced-phys-bits and cbitpos are system-specific and are read out by
the query-machine-capabilities C program and saved to the
/run/qemu-server/host-hw-capabilities.json file. This file is parsed
and then used by qemu-server to correctly start an AMD SEV VM.
type=std stands for standard sev to differentiate it from sev-es (es)
or sev-snp (snp) when support is upstream.
QEMU's sev-guest policy gets calculated with the parameters no-debug
& no-key-sharing. These parameters correspond to policy-bits 0 & 1.
If type is 'es', then policy-bit 2 gets set to 1 to activate SEV-ES.
Policy bit 3 (nosend) is always set to 1, because migration features
for sev are not upstream yet and are attackable.
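Illustrative policy computation following the description above
(config hash access is schematic):

    my $policy = 0b1000; # bit 3 (nosend) is always set
    $policy |= 1 << 0 if $amd_sev->{'no-debug'};
    $policy |= 1 << 1 if $amd_sev->{'no-key-sharing'};
    $policy |= 1 << 2 if $amd_sev->{type} eq 'es';
    # e.g. 'es,no-debug=1' yields policy 0b1101 = 13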
SEV-ES is highly experimental since it could not be tested.
see coherent doc patch
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
This is to override the target extraction storage for the disk
extraction done for 'import-from'. This way, if the storage does not
support the content type 'images', one can give an alternative one.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when 'import-from' contains a disk image that needs extraction
(currently only from an 'ova' archive), do that in 'create_disks'
and overwrite the '$source' volid.
Collect the names into a 'delete_sources' list, that we use later
to clean it up again (either when we're finished with importing or in an
error case).
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Creating non-raw disk images with arbitrary content is only possible
with raw access to the storage, but checking for references to
external files doesn't hurt in the case of non-pve-managed volumes.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
[ DC: removed problematic checks for pve-managed volumes ]
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This will automatically convert imported volume disks and newly
allocated VM volume disks (i.e. no efidisks, tpmstate disks, cloudinit
images, etc.) to a base volume, if the VM is a template.
Previously, this required a user to manually convert the
imported/allocated disk with `qm template --disk <disk>`.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Automatically converts any imported volume disk to a base volume image
if the VM is a template and the volume was imported using the
"target-disk" option, as "unused" disks are not needed to be converted
as they won't be cloned with either linked nor full clones.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Implements the "target-disk" option for the importdisk command, which
allows a disk to be imported and directly used instead of marking it as
an unused disk (e.g. unused0), which is the default behavior.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
[ TL: squash in style-nit with parameter wrapping multiple lines ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to be used internally to record volume IDs of fleecing images
allocated during backup.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In preparation for the upcoming 'fleecing-images' key. To avoid mixing
of options with - and options with _, which is not very user-friendly,
it would be nice to add aliases for existing options with _. And
long-term, backup restore handlers could switch to the modern keys
with -.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This can happen after a hard failure, e.g. if the vzdump task was
killed. The next backup (after unlocking the VM) would then fail with
> ERROR: VM 125 qmp command 'backup' failed - previous backup not finished
During the failure path of that attempt, 'backup-cancel' is executed
and the subsequent attempt would then work again. Do it up-front with
a warning instead of relying on this behavior.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In preparation to use it to conditionally issue a QMP 'backup-cancel'
should a previous backup still be running.
While at it, avoid using the compat-only check_running() helper.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
When the VM is only started for backup, the VM will be stopped at that
point again. While the detach helpers do not warn about errors
currently, that might change in the future. This is also in
preparation for other cleanup QMP helpers that are more verbose about
failure.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Since pve-common commit:
eff5957 (sysfstools: file_write: properly catch errors)
this check now fails when the reset does not work. It turns out
that resetting the device is not always necessary, and we previously
ignored most errors when trying to do so.
To restore that functionality, downgrade this `die` to a warning.
If the device really needs a reset to work, it will either fail later
during startup, or not work correctly in the guest, but that behavior
existed before and is AFAIK not really detectable from our side.
Also improve the warning message a bit to not scare users and explain
that we're continuing.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ TL: fine-tune error message a bit and avoid parenthesis ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds a syslog entry to log the process id that has been given to the
QEMU VM process at start. This is helpful debugging information if the
pid shows up at other places, like a kernel stack trace, while the VM
has been running, but cannot be retrieved anymore (e.g. the pidfile has
been deleted or only the syslog is available).
The syslog has been put in the `PVE::QemuServer::vm_start_nolock`
subroutine to make sure that the PID is logged not only when the VM has
been started by the API endpoint `vm_start`, but also when the VM is
started by a remote migration.
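Illustrative log call (the exact wording in the patch may differ):

    PVE::Tools::syslog('info', "VM $vmid started with PID $pid.");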
Suggested-by: Hannes Dürr <h.duerr@proxmox.com>
Suggested-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Daniel Herzig <d.herzig@proxmox.com>
Since kernel 6.8, NVIDIA's vGPU driver does not use the generic mdev
interface anymore, since it relied on a feature there which is no
longer available. IIUC, the kernel documentation [0] recommends that
drivers implement their own device-specific features, since putting
everything in the generic interface does not make sense.
They now have an 'nvidia' folder in the device sysfs path, which
contains the files `creatable_vgpu_types`/`current_vgpu_type` to
control the virtual functions model, and then the whole virtual function
has to be passed through (although without resetting and changing to the
vfio-pci driver).
This patch implements changes so that, from a config perspective, it
still is a mediated device, and we map the functionality iff the
device has no mediated devices but NVIDIA's new sysfs API, and the
model name is 'nvidia-<..>'.
It behaves a bit differently than mdevs and normal PCI passthrough, as we
have to choose the correct device immediately since it's bound to the
pciid, but we must not bind the device to vfio-pci as the NVIDIA driver
implements this functionality itself.
When cleaning up, we iterate over all reserved devices (since for a
mapping we can't know at this point which was chosen besides looking at
the reservations) and reset the vgpu model to '0', so it frees up the
reservation from NVIDIA's side. (We also do that in a loop, since it's
not always immediately ready after QEMU closes)
A general problem (but that was previously also the case) is that a
showcmd (for a non-running guest) reserves the pciids, which might block
an execution of a different real vm. This is now a bit more problematic
as we (temporarily) set the vgpu type then.
0: https://docs.kernel.org/driver-api/vfio-pci-device-specific-driver-acceptance.html
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Christoph Heiss <c.heiss@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add an optional parameter to the helper that removes PCI reservations
so that we can partially release IDs again. This will be necessary for
NVIDIA's new sysfs API.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Christoph Heiss <c.heiss@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since the only way this could happen is when we're being called
from 'qm showcmd' and there we don't want to reserve or create anything.
In case the VM was not running, we actually reserve the devices, so we
want to call 'cleanup_pci_devices' after to remove those again. This
minimizes the timespan where those devices are not available for real vm
starts.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Christoph Heiss <c.heiss@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
clarify a few units and avoid "since the process start" as it's not
really clear which process is meant and "since the guest was started"
is telling enough too, and as we do a full stop+start cycle on CT
reboot it's true for that too.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
I omitted the 'disk' property, since it's non-functional currently,
since we don't query the disk usage here (complicated to calculate,
depending on the storage, or requires guest agent support, which is also
non-trivial)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ TL: avoid having netin twice, change to netout once ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This includes docs, and strings printed to stderr or stdout.
These were caught with:
typos --exclude test --exclude changelog
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
As reported in the community forum [0], when a remote migration
request comes in via an API client, the -T flag for Perl is set, so an
insecure dependency in a call like unlink() in forward_unix_socket()
will fail with:
> failed to write forwarding command - Insecure dependency in unlink while running with -T switch
To fix it, untaint the problematic socket addresses coming from the
remote side. Require that all sockets are below '/run/qemu-server/'
and end with '.migrate' with the main socket being matched more
strictly. This allows extensions in the future while still being quite
strict.
[0]: https://forum.proxmox.com/threads/123048/post-691958
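Illustrative untaint via regex capture (the actual patterns, in
particular for the main socket, are stricter):

    ($socket_addr) = $socket_addr =~ m!^(/run/qemu-server/.+\.migrate)$!
        or die "failed to untaint socket address\n";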
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The version of the running QEMU binary is not related to the machine
version and so it's a bit confusing to have the helper in the
'Machine' module. It cannot live in the 'Helpers' module, because that
would lead to a cyclic inclusion Helpers <-> Monitor. Thus,
'QMPHelpers' is chosen as the new home.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
There is a possibility that the drive-mirror job is not yet done when
the migration wants to inactivate the source's block drives:
> bdrv_co_write_req_prepare: Assertion `!(bs->open_flags & BDRV_O_INACTIVE)' failed.
This can be prevented by using the 'write-blocking' copy mode (also
called active mode) for the mirror. However, with active mode, the
guest write speed is limited by the synchronous writes to the mirror
target. For this reason, a way to start out in the faster 'background'
mode and later switch to active mode was introduced in QEMU 8.2.
The switch is done once the mirror job for all drives is ready to be
completed to reduce the time spent where guest IO is limited.
The loop waiting for actively-synced to become true is not an endless
loop: Once the remaining dirty parts have been mirrored by the
background iteration, the actively-synced flag will be set. Because
the 'block-job-change' QMP command already succeeded, new writes will
be done synchronously to the target and thus not lead to new dirty
parts. If the job fails or vanishes (shouldn't actually happen,
because auto-dismiss is false), the loop will be exited and the error
propagated.
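Illustrative QMP call for the switch (requires QEMU 8.2):

    mon_cmd($vmid, 'block-job-change', id => $job_id, type => 'mirror',
        'copy-mode' => 'write-blocking');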
Reported rarely, but steadily over the years:
https://forum.proxmox.com/threads/78954/post-353651
https://forum.proxmox.com/threads/78954/post-380015
https://forum.proxmox.com/threads/100020/post-431660
https://forum.proxmox.com/threads/111831/post-482425
https://forum.proxmox.com/threads/111831/post-499807
https://forum.proxmox.com/threads/137849/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Currently, when completing a drive mirror job, only errors matching
"cannot be completed" will be handled. Other errors are ignored and
a wrong message that the job was completed successfully will be
printed to the log. An instance of this popped up in the community
forum [0].
The QMP command used for completing the job is either
'block-job-complete' or 'block-job-cancel'. The former causes the VM
to switch to the target drive, the latter doesn't, e.g. migration uses
the latter to not switch the source instance over to the target drive.
The 'block-job-cancel' command doesn't even have the same "cannot be
completed" message, but returns immediately.
The timeout for both 'block-job-cancel' and 'block-job-complete' is
set to 10 minutes in the QMPClient module, which should be enough.
[0]: https://forum.proxmox.com/threads/151518/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Cloudbase-Init, a cloud-init reimplementation for Windows, supports only
a subset of the configuration options of cloud-init. Some features
depend on support by the Metadata Service (ConfigDrive2 here) and have
further limitations [0].
To support a basic setup the following changes were made:
- password is saved as plaintext for any Windows guests (ostype)
- DNS servers are added to each of the interfaces
- SSH public keys are passed via metadata
Network and metadata generation for Cloudbase-Init is separate from the
default ConfigDrive2 one so as to not interfere with any other OSes that
depend on the current ConfigDrive2 implementation.
DNS search domains were removed because Cloudbase-Init's ENI parser
doesn't handle them at all.
The password set via `cipassword` is used for the Admin user configured
in the cloudbase-init.conf in the guest while the `ciuser` parameter is
ignored. The Admin user has to be set in the cloudbase-init.conf file
instead.
Specifying a different user does not work.
For the password to work the `ostype` needs to be any Windows variant
before `cipassword` is set. Otherwise the password will be encrypted and
the encrypted password used as plaintext password in the guest.
The `citype` needs to be `configdrive2`, which is the default for
Windows guests, for the generated configs to be compatible with
Cloudbase-Init.
[0] https://cloudbase-init.readthedocs.io/en/latest/index.html
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
As reported in the community forum [0], after migration, the VM might
not immediately be able to respond to QMP commands, which means the VM
could fail to resume and stay in paused state on the target.
The reason is that activating the block drives in QEMU can take a bit
of time. For example, it might be necessary to invalidate the caches
(where for raw devices a flush might be needed) and the request
alignment and size of the block device needs to be queried.
In [0], an external Ceph cluster with krbd is used, and the initial
read to the block device after migration, for probing the request
alignment, takes a bit over 10 seconds[1]. Use 60 seconds as the new
timeout to be on the safe side for the future.
All callers are inside workers or via the 'qm' CLI command, so bumping
beyond 30 seconds is fine.
[0]: https://forum.proxmox.com/threads/149610/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
When detaching and attaching the network device on update, the
link_down setting is not considered and the network device always gets
attached to the guest - even if link_down is set.
Fixes: 3f14f206 ("nic online bridge/vlan change: link disconnect/reconnect")
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Systemd reapplies its known values on reload, so we cannot simply call
into PVE::CGroup. Call systemd's SetUnitProperties method via dbus
instead.
The hotplug and startup code also calculated different values, as one
operated within systemd's value framework (documented in
systemd.resource-control(5)) and one worked with cgroup values
(distinguishing between cgroup v1 and v2 manually).
This is now unified by overriding `change_cpu_quota()` and
`change_cpu_shares()` via `PVE::QemuServer::CGroup` which now takes
systemd-based values and sends those directly via dbus.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
With the last change in the permission check, I accidentally broke the
check for 'spice' host value, since in the if/elsif/else this will fall
through to the else case which was only intended for when neither 'host'
nor 'mapping' was set.
This made 'spice' only settable by root@pam since there we return early.
To fix this, move the spice check into the 'host' branch, but only error
out in case it's not spice.
Fixes: e3971865 (enable cluster mapped USB devices for guests)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Updating the NIC of a VM when the following conditions were met:
* VM is turned off
* NIC is on a bridge that uses automatic dhcp
* Leave bridge unchanged
led to duplicate IPAM entries for the same network device.
This is due to the fact that the add_next_free_cidr always ran on
applying pending network changes.
Now we only add a new ipam entry if either:
* the value of the bridge or mac address changed
* the network device has been newly added
This way no duplicate IPAM entries should get created.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
tpmstate0 is already included in `get_vm_volumes`, and our only storage
plugin that has unmap_volume implemented is the RBDPlugin, where we call
unmap in `deactivate_volume`. So it's already unmapped by the
`deactivate_volumes` calls above.
For third-party storage plugins, it's natural to expect that
deactivate_volume() would also remove a mapping for the volume just
like RBDPlugin does.
While there is an explicit map_volume() call in start_swtpm(), a
third-party plugin might expect an explicit unmap_volume() call too.
However, the order of calls right now is
1. activate_volume()
2. map_volume()
3. deactivate_volume()
4. unmap_volume()
Which seems like it could cause problems already for third-party
plugins relying on an explicit unmap call.
All that said, it is unlikely that a third-party plugin breaks. If it
really happens, it can be discussed/adapted to the actual needs still.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Acked-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
In Proxmox VE 8, the oldest supported QEMU version is 8.0, so a check
for version 4.0.1 is not required anymore.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In Proxmox VE 8, the oldest supported QEMU version is 8.0, so a
check for version 4.2 is not required anymore. The check was also
wrong, because it checked the installed version and not the currently
running one.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
While archives with unknown or undetermined subtype could be shown,
this is only for autocompletion, so users can still specify those
manually if required.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Some callers like the move disk API endpoint do not pass an explicit
completion argument. This is not an issue in general, because
qemu_drive_mirror_monitor() defaults to 'complete'. However, there was
a string comparison for the cloudinit case that can trigger a warning
about the value being uninitialized.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
this was a stray search and replace for job -> job_id that should have only
changed variable names..
Fixes: 0ea24bf ("mirror monitor: refactoring/code cleanup")
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
upon failure. Otherwise, the job would disappear too quickly from the
job list and cannot be queried for the actual error anymore.
Relevant part of the error in an actual example:
Before:
> VM 106 qmp command 'blockdev-del' failed - Node 'drive-scsi0-restore' is busy: node is used as backing hd of '#block655'
After:
> block job (stream) error: restore-scsi0: No space left on device (io-status: ok)
Note that previously, it was not even detected that the stream job
failed and the error message is because the subsequent cleanup failed.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
upon failure. Otherwise, the job would disappear too quickly from the
job list and cannot be queried for the actual error anymore.
Relevant part of the error in an actual example:
Before:
> VM 112 qmp command 'blockdev-del' failed - Node 'drive-scsi0-pbs' is busy: node is used as backing hd of '#block046'
After:
> block job (stream) error: restore-drive-scsi0: No space left on device (io-status: ok)
Note that previously, it was not even detected that the stream job
failed and the error message is because the subsequent cleanup failed.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
upon failure. Otherwise, the job would disappear too quickly from the
job list and cannot be queried for the actual error anymore.
Relevant part of the error in actual examples (note that the fact that
it's a mirror job is already mentioned earlier in the full error, with
"block job (mirror) error:"):
Before:
> 'mirror' has been cancelled
> 'mirror' has been cancelled
After:
> Source and target image have different sizes (io-status: ok)
> No space left on device (io-status: ok)
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
When auto-dismiss=true (the default), a failed job can disappear very
quickly from the job list and there might not be any chance to see the
error in the result of 'query-block-jobs'. For jobs with $completion
being 'auto', like 'block-stream', it couldn't even be detected that
the job failed.
Jobs with auto-dismiss=false on the other hand, will wait in
'concluded' state until manually dismissed. For those, it will be
possible to query the error if the job failed.
There doesn't seem to be a way to have only failed jobs stay around,
e.g. something like auto-dismiss=on-success.
Planned to be used for the 'drive-mirror' and 'block-stream' jobs
initially.
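For illustration, a schematic job start with auto-dismiss disabled
(using JSON::false for the boolean):

    mon_cmd($vmid, 'block-stream', 'job-id' => $job_id, device => $device,
        'auto-dismiss' => JSON::false);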
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
templates can only be started in the context of a PBS backup, and
there we don't need or want to use most of the config, since they
started normally anyway.
We minimize the config by copying some specific relevant options (see
the comments for why the options were chosen) and all disk
configurations.
Since we change the qemu commandline for templates, we now have to adapt
the tests involving templates.
Without this, users can get into a situation where the template cannot
be backed up when there are some resources not available (such as cpu
cores, kvm, pci devices, etc.) even if the backup process does not need
them.
This change has some nice side effects, such as not needing to
allocate the full amount of memory anymore for templates that have a
hostpci device configured, the configured bridges don't have to exist,
etc.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
After the TPM state has been created (to be precise, initialized by
swtpm) it is not possible to change the version anymore. Doing so will
lead to failure starting the associated VM. While documented in the
description, it's better to enforce this via API.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Since the check in start_swtpm() only checks for an explicitly
configured v2.0 to opt-in to version 2, the actual default is v1.2
and not v2.0 like the schema stated.
Of course, it would be nicer to have the default be v2.0, but changing
the check to use that default would break any TPM state without an
explicitly configured version.
There doesn't seem to be any code beside start_swtpm() accessing the
version.
Fixes: f9dde219 ("fix #3075: add TPM v1.2 and v2.0 support via swtpm")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
With high IO pressure, 5 seconds might not be enough, even if the
request is small.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The generic "got timeout" message cannot be associated to a certain
code path and also isn't very user-friendly. Use dedicated messages
for each stage and also suggest why the timeout for reading the header
might have happened, i.e. because it was corrupted.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The default timeout for HMP commands is 5 seconds.
While it should be rather fast to attach a new drive to QEMU, a busy
system might take longer, so future-proof and increase to 60 seconds.
On the other hand, detaching a drive needs to complete any pending IO
on it, so use the same 10 minutes timeout that's used for
drive-related QMP commands.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The default timeout is 5 seconds, but some HMP commands (e.g.
disk-related ones) might take longer than that. It's still an
interactive session, so use 30 seconds for now. Should there be any
user complaints about frequent timeouts, it could still be increased
further.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The default timeout for HMP commands is 5 seconds and while it should
be rather fast to attach a new drive to QEMU, a system can be very
busy during backup, so future-proof and increase to 60 seconds.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The default timeout is 5 seconds, but some HMP commands (e.g.
disk-related ones) might take longer than that. The API call is
synchronous, so has to complete within 30 seconds, and since there is
no other costly operation, use 25 seconds.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Passing the timeout key with an explicit value of undef is fine,
because both the absence of the timeout key and an explicit value of
undef will lead to $timeout being undef in the qmp_cmd() function.
In preparation to increase the timeout for certain (e.g. disk-related)
HMP commands.
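Illustrative shape of the command hash (schematic, the timeout key is
consumed by qmp_cmd()):

    my $cmd = {
        execute => 'human-monitor-command',
        arguments => {
            'command-line' => $cmdline,
            timeout => $timeout, # may be undef, same as omitting the key
        },
    };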
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The savevm-end command also fails when no snapshot operation was
started before. In particular, this is the case when savevm-start
failed early, because of unmigratable devices.
Avoid potentially leaving an orphaned volume and snapshot-related
configuration keys around by continuing with cleanup instead.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
by wrapping the properties from the command definition to get an
actual schema definition.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Since commit 865ef132 ("implement dynamic migration_downtime") the
migration downtime will be automatically increased when migration
cannot converge at the very end. Update the description to reflect
reality.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This fixes the broken prevention of starting a VM with a 32-bit CPU
using a 64-bit OVMF (UEFI) BIOS.
Fixes: 89d5b1c9 ("prevent starting a 32-bit VM using a 64-bit OVMF BIOS")
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
[FE: add Fixes trailer, add prefix to title]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The machine handling was transformed into a full-fledged property
string with a (sub) format, but the single call-site for print_machine
was seemingly not tested, as this could have never worked due to a
missing import of the print_property_string helper.
Fixes: 8082eb8 ("config: define machine schema as property-string")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Management for fleecing images is implemented here. If the fleecing
option is set, for each disk (except EFI disk and TPM state) a new
fleecing image is allocated on the configured fleecing storage (same
storage as original disk by default). The disk is attached to QEMU
with the 'size' parameter, because the block node in QEMU has to be
the exact same size and the newly allocated image might be bigger if
the storage has a coarser allocation granularity or rounds up. After backup, the
disks are detached and removed from the storage.
If the storage supports qcow2, use that as the fleecing image format.
This allows saving some space even on storages that do not properly
support discard, like, for example, older versions of NFS.
Since there can be multiple volumes with the same volume name on
different storages, the fleecing image's name cannot be just based on
the original volume's name. The schema vm-ID-fleece-N(.FORMAT) with N
incrementing for each disk is used.
Partially inspired by the existing handling of the TPM state image
during backup.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
since commit
1f743141 (fix #1905: Allow moving unused disks)
we want to check the source drive name for 'unused', but in case of
importing a volume from the 'import' content type (e.g. from esxi),
there is no source drive name. So we have to first check if it's
defined.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The previous wording made it sound like all "visible" tasks were
aborted, which is not the case: A user with Sys.Audit but without
Sys.Modify may see a task that was started by a different user, but
overrule-shutdown would not abort the task.
Change wording to better reflect that not all visible tasks may be
aborted.
Also, add a full-stop that was previously missing.
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
The new `overrule-shutdown` parameter is boolean and defaults to 0. If
it is 1, all active `qmshutdown` tasks for the same VM (which are
visible to the user/token) are aborted before attempting to stop the
VM.
Passing `overrule-shutdown=1` is forbidden for HA resources.
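Example:
# qm stop 101 --overrule-shutdown 1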
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
In the past, moving unused disks to another storage was prohibited due
to oversights in the handling of unused disks. This commit rectifies
this limitation by allowing the movement of unused disks.
Historical context:
* 16 Sep 2010 r5164 qemu-server/pve2: The disknames sub was removed.
* 17 Sep 2010 r5170 qemu-server/pve2: Unused disks were introduced.
* 28 Jan 2011 r5461 qemu-server/pve2: The same disknames sub that was
removed in r5164 was brought back. Since unused disks were not around
yet in r5164 the disknames sub did not consider unused disks.
* 6-8 Aug 2012 c1175c92..f91b2e45 qemu-server.git: Disk resize was
introduced. In commit c1175c92 in sub qemu_block_resize unused disks
were not taken into account and in commit 2f48a4f5 (8 Aug 2012) the
resize API call was changed to only allow disks matching the ones in
the disknames sub. Since sub disknames did not contain any unused
disks, those were not allowed at all in the resize API call.
* 27 May 2013 586bfa78 qemu-server.git: Disk move was introduced. The
API call implementation borrowed heavily from disk resize, including
the behaviour of not taking unused disks into account. Thus, unused
disks could not be moved, which persists to this day.
In summary, this behaviour was introduced because the handling of unused
disks was overlooked and it was never changed.
There is no inherent reason why unused disks should be restricted from
being moved to another storage. These disks cannot use
qemu_drive_mirror, but they can still be moved with qemu_img_convert,
the same way as any other disk of a stopped VM.
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
vIOMMU enables the option to pass through PCI devices to L2 VMs in L1
VMs via nested virtualization and adds extra isolation.
Uses the new property-string from the "config: define machine schema
as property-string"-commit to add the viommu option to the machine
parameter.
Currently, there are two vIOMMU implementations in QEMU to choose
from: intel or virtio.
Virtio-iommu is more recent but less used in production than intel-iommu.
The assert_valid_machine_property function prevents using intel-iommu with
i440fx.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
[ TL: tiny coding style fix to extract variable inside if expr ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Convert the machine parameter to a property-string and use the machine
type as the default key for backward compatibility.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Upon obtaining the device type, a check is performed to determine if it
is a CD drive. It is important to note that Cloudinit drives are always
assigned as CD drives. If the drive has not yet been allocated, the test
will fail due to the unset cd attribute.
To avoid this, an explicit check is now performed to determine if it is
a Cloudinit drive that has not yet been assigned.
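Illustrative form of the check (helper names as in
PVE::QemuServer::Drive):

    my $is_cdrom = drive_is_cdrom($drive) || drive_is_cloudinit($drive);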
Fixes: d1feab4 ("fix #4957: add vendor and product information passthrough for SCSI-Disks")
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
'$entry->{host}' can be empty, so we have to check for that before
doing a regex check, otherwise we get ugly errors in the log.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
When attempting a CPU hotplug on an architecture other than x86_64, die
with a clean error instead of attempting a hotplug with a known
non-working device command line. Also move the corresponding FIXME up to
the error.
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Add a command that can be used together with volumes from the new
'import' content type of storage plugins.
For now only the new ESXi exposes that content type, but in the long
run it's planned to migrate over the existing OVF/OVA infra and extend
it so that it will replace the 'ovfimport' command.
Originally-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
[ TL: split out to separate commit and add message, fix completing
VMID to propose unused ones, note explicitly when in dry-run mode ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
instead of a "pbs-backing" parameter we now have a
"live-restore-backing" parameter containing the `-blockdev` arg and
its name, which also means we print the blockdev earlier
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
A network device of a VM does not necessarily have to be connected to
an actual bridge, so when a new pending value is set we need to use
the undef-safe compare helpers when checking if there was a change
between old and new value, as otherwise one gets ugly "use of
uninitialized value in string ne" warnings.
Link: https://forum.proxmox.com/threads/143072/
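Illustrative use of one of the undef-safe helpers (from
PVE::GuestHelpers):

    if (PVE::GuestHelpers::safe_string_ne($old->{bridge}, $new->{bridge})) {
        # react to the change without 'uninitialized value' warnings
    }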
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
could be a better fit in PVE::Tools, like proposed by Filip, but
OTOH, Tools is already crowded as is, so wait until we need it in more
places outside of qemu-server.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
There might've been a question back when it got first added in commit
9d689077 ("use long timeouts for snapshot monitor command"). But
nowadays, the value is well-established. Changing it would affect
quite a few operations, so that should not be done without good
reason and is likely better done for the specific operation.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Make the default value for 'kvm' consistent, taking into account
whether the VM will run on the same CPU architecture as the host.
This would be a breaking change to CPU hotplug for VMs with a
different CPU architecture running on an x86_64 host, as in this case
the default CPU type for CPU hotplug changes from 'kvm64' to 'qemu64'.
However, CPU hotplug of non-x86_64 architectures is not supported
anyway, so this is not a breaking change after all.
It should be noted that this change does alter the CPU hotplug
behaviour when emulating an x86_64 CPU on a non-x86_64 host. This is
however not officially supported in Proxmox VE.
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Move is_native from PVE::QemuServer to PVE::Tools and rename it to
is_native_arch to be more descriptive.
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Instead of starting a VM with a 32-bit CPU type and a 64-bit OVMF image,
throw an error before starting the VM telling the user that OVMF is not
supported on 32-bit CPU types.
To obtain a list of 32-bit CPU types, refer to the builtin_x86_defs in
target/i386/cpu.c of QEMU. Exclude any entries that have the long mode
feature (CPUID_EXT2_LM).
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
When rebooting a VM from PVE (via CLI/API), the reboot code is called
under a guest lock, which creates a reboot request, shuts down the VM
and then calls the regular cleanup code, which includes the mdev
cleanup.
In parallel, the qmeventd observes that the VM process has gone, and
starts 'qm cleanup', which (among other tasks) also starts the VM
again if a reboot from the PVE side is pending.
The qmeventd synchronizes this through a lock on the guest, with a
default timeout of 10 seconds.
Since we currently also always wait 10 seconds for the NVIDIA driver
to clean up the mdev, this creates a race condition for the cleanup
lock. IOW, when the call to `qm cleanup` starts before we started to
sleep for 10 seconds, it will not be able to acquire its lock and will
not start the VM again.
To avoid the race condition in practice, do two things:
* increase the timeout in `qm cleanup` to 60 seconds.
Technically this still might run into a timeout, as we can configure
up to 16 mediated devices with each delaying 10 seconds in the worst
case, but realistically most users won't configure more than two or
three of them, if even that.
* change the hard-coded `sleep 10` to a loop sleeping for 1 second
each before checking the state again. This shortens the timeout when
the NVIDIA driver did not require the full 10s to finish the
clean-up.
Further, add a bit of logging, so one can properly see in the task log
what is happening at which point in time.
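Illustrative polling loop replacing the fixed sleep (the readiness
check is hypothetical):

    for (my $i = 0; $i < 10; $i++) {
        last if vgpu_cleanup_done($pciid); # hypothetical check
        sleep(1);
    }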
Fixes: 49c51a60 (pci: workaround nvidia driver issue on mdev cleanup)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Mira Limbeck <m.limbeck@proxmox.com>
[ TL: change warn to print, reword commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Make the post-if check for the target not already running more
prominent by using a full if block.
Also comment on why we ignore the error here; while the commit
changing that explained it well, this is one of the things that might
be better off with an in-code comment (as the deactivation is
described as important here, one might wonder why the code continues
if it fails).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When a template with disks on LVM is cloned to another node, the
volumes are first activated, then cloned and deactivated again after
cloning.
However, if clones of this template are now created in parallel to
other nodes, it can happen that one of the tasks can no longer
deactivate the logical volume because it is still in use. The reason
for this is that we use a shared lock.
Since the failed deactivation does not necessarily have consequences,
we downgrade the error to a warning, which means that the clone tasks
will continue to be completed successfully.
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
by fixing the SCSI feature compatibility check helper. The helper is
also called for disks using import-from, so it has to use the extended
schema when parsing the drive.
Fixes: d1feab4a ("fix #4957: add vendor and product information passthrough for SCSI-Disks")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
PVE::Storage::path() neither activates the storage of the passed-in volume, nor
does it ensure that the returned value is actually a file or block device, so
this actually fixes two issues. PVE::Storage::abs_filesystem_path() actually
takes care of both, while still calling path() under the hood (since $volid
here is always a proper volid, unless we change the cicustom schema at some
point in the future).
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
During migration, the volume names may change if the name is already in
use at the target location. We therefore want to save the original names
so that we can deactivate the original volumes afterwards.
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
adds vendor and product information for SCSI devices to the json schema
and checks in the VM create/update API call if it is possible to add
these to QEMU as a device option
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
[FE: add missing space to exception message
use config option for exception e.g. scsi0 rather than 'product'
style fixes]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
since we always determine the deviceid, passing in a possibly wrong value makes
no sense and could actually re-introduce bugs.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
The QMP command needs to be issued for the device where the disk is
currently attached, not for the device where the disk was attached at
the time the snapshot was taken.
Fixes the following scenario with a disk image for which
do_snapshots_with_qemu() is true (i.e. qcow2 or RBD+krbd=0):
1. Take snapshot while disk image is attached to a given bus+ID.
2. Detach disk image.
3. Attach disk image to a different bus+ID.
4. Remove snapshot.
Previously, this would result in an error like:
> blockdev-snapshot-delete-internal-sync' failed - Cannot find device=drive-scsi1 nor node_name=drive-scsi1
While the $running parameter for volume_snapshot_delete() is planned
to be removed on the next storage plugin APIAGE reset, it currently
causes an immediate return in Storage/Plugin.pm. So passing a truthy
value would prevent removing a snapshot from an unused qcow2 disk that
was still used at the time the snapshot was taken. Thus, and because
some exotic third party plugin might be using it for whatever reason,
it's necessary to keep passing the same value as before.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Encapsulate the functionality for determining the SCSI device type
in a new function for reusability in QemuServer/Drive.pm.
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
Currently, volume activation, PCI reservation and resetting systemd
scope happen in between, so the 5 second expiretime used for port
reservation is not always enough.
It's possible to defer telling QEMU where it should listen for
migration and do so after it has been started via QMP. Therefore, the
port reservation can be moved very close to the actual usage.
Mentioned here for completeness and can still be done as an additional
change later if desired: next_migrate_port could be modified to
optionally return the open socket and it should be possible to pass
the file descriptor directly to QEMU, but that would require accepting
the connection before on the Perl side (otherwise leads to ENOTCONN
107). While it would avoid any races, it's not the most elegant
and the change at hand should be enough in all practical situations.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Acked-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Tested-by: Hannes Duerr <h.duerr@proxmox.com>