Attaching an ISO image to a VM is usually done for one of two reasons:
* booting an installer image
* supplying additional drivers to an installer (e.g. virtio)
Both of these cases (the latter at least with SeaBIOS and the Windows
installer) require the disk to be marked as bootable.
For this reason, enable the bootable flag for all new CDROM drives
attached to a VM by adding them to the bootorder list. They are appended
to the end, as putting them first would cause new drives to boot before
already existing boot targets, which would be a more drastic (and IMO
bad) behaviour change.
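For illustration (the existing entries and the ide2 slot are just examples),
a config line like
boot: order=scsi0;net0
would, after attaching a new ide2 CDROM drive, become
boot: order=scsi0;net0;ide2
so existing boot targets keep their priority.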
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
There may be a kernel issue or a bug in how QEMU uses io_uring, but
we have users reporting crashes, which f.ebner could reproduce on some
workloads. The crashes are not really deterministic, and with newer
kernel versions (5.12+) they seem to turn into hangs instead.
While we're closing in on the actual issue here (which could be the
same as for RBD), let's disable io_uring for LVM.
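Roughly speaking (the drive options shown are abbreviated and purely
illustrative), a disk on LVM that would previously have been started with
...,aio=io_uring,...
on the QEMU command line now falls back to one of the classic modes
(native or threads, depending on the cache setting).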
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
That bit of code seems to be enough here, tested with
qm set VMID --net1 e1000e=EA:93:42:22:10:D8,bridge=vmbr0
on an Alpine Linux and a Windows Server 2016 VM.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In v2 the valid range is [1, 10000], but the API still allows the old
limits of 2 to 262144, so clamp the upper bound for v2.
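In other words (a sketch, not the literal code): the value used for v2 is
effectively min(value, 10000); the lower bound needs no clamping, since
anything that passed the old lower limit of 2 already lies within the new
range.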
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The efidisk never got restored correctly before, since we don't use the
generic print_drive_commandline_full for it, and as such it didn't get a
backing image attached. This not only causes the efidisk data to be lost
on restore, but also leads to an error at the end, since we try to remove
a non-existing PBS blockdev.
Since it is attached differently from a regular drive, adding PBS backing
would be more difficult. But not to worry: an efidisk is small enough
that it doesn't hurt performance to just restore it via the regular
mechanism before starting the VM and to exclude it from the live restore
entirely.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
otherwise a user with only VM.Config.CDROM can detach a disk from a VM
by updating it to a cdrom drive
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
otherwise it'll produce a whole lot of checksum errors.
And while this would be nice as a storage feature check,
it's hard to be 100% accurate there anyway, since a directory
storage can point anywhere, for instance at a btrfs
directory, causing the same issue...
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
this allows effectively setting ALL volumes as read-only, even if the
disk controller does not support it. Without it, IDE and SATA disks
with (base) volumes that are marked read-only/immutable on the storage
level prevent the template VM from starting for backup purposes.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
ensuring the current behaviour:
templates will pass readonly=on to QEMU, except for SATA and IDE drives,
which don't support that flag.
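As a rough sketch (drive options abbreviated, IDs illustrative), a
template's SCSI disk ends up with something like
-drive file=...,id=drive-scsi0,readonly=on,...
while the equivalent IDE or SATA drive line is generated without the
readonly flag.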
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
otherwise backups of templates using UEFI fail with storages like
LVM-thin, where the volumes are not writable. Disk controllers like IDE
and SATA that don't support being read-only are still broken for UEFI.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
[ drop the readonly=off when not required, resolve merge conflict
from Dominik's EFI disk cache mode fix ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
for unprivileged users (and possibly some root setups). Reading from
pmxcfs now results in a hard error for unprivileged users, so there
might be some more of these lurking somewhere...
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
KillMode 'none' is deprecated, and systemd loudly complains about that
in the journal. To avoid the warning, but keep the behaviour the same,
use KillMode 'process'.
This mode does two things differently, both of which we have to stop it
from doing:
* it sends SIGTERM right when the scope is cancelled (e.g. on shutdown)
-> but only to the "root" process, which in our case is the worker
instance forking QEMU, so it is already dead by the time this happens
* it sends SIGKILL to *all* children after a timeout
-> can be avoided by setting either SendSIGKILL to false, or
TimeoutStopUSec to infinity - for safety, we do both
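As a sketch, the scope then effectively carries properties along the
lines of (systemd unit syntax, the exact generated unit may differ):
KillMode=process
SendSIGKILL=no
TimeoutStopSec=infinity
where the last two settings avoid the SIGKILL described in the second
point.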
In my testing, this replicated the previous behaviour exactly, but
without using the deprecated 'none' mode.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
The 'aio' setting is not visible to the guest, and so can be changed
during migrations or snapshots without issue. It is thus only
dependent on the actual QEMU version being >= 6.0, not on the machine
version.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Note that the value in this enum directly represents the value passed to
QEMU, so we need to use the underscore.
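As a sketch (the other enum values are assumed here, not quoted from the
schema):
aio => { enum => ['native', 'threads', 'io_uring'], ... }
so a configured aio=io_uring can be passed through verbatim to the QEMU
drive option.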
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
and use it for the vdisk_list call too. This avoids scanning (and picking up
volumes from!) storages that are not even configured to hold images.
Previously, the content type was only enforced when a storage map was present.
This also serves as a bit of preparation for enforcing the content type on
guest startup, because migration now fails early instead of only when
trying to start the guest on the remote node.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
storage_check_enabled simply checks for the 'disable' option and then calls
storage_check_node.
While not strictly necessary for a second call where only the storage differs,
e.g. in case of clone, it is more future-proof: if support for a target storage
is added at some point, it might be easy to miss adapting the call.
For the migration checks, the situation is improved by now always catching
disabled (target) storages.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
after upgrading to bullseye, the cfs_read_file call within
restore_update_config_line() results in an error:
Is a directory!
when done as an unprivileged user.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
to avoid potential problems with stringified numbers in JavaScript and
elsewhere.
The vmid was not always an integer as the return schema expects, namely
when there was an opt_vmid argument, because the 'ne' comparison then
coerced the vmid into a string.
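A minimal illustration of the Perl pitfall (variable names hypothetical):
after a comparison like $vmid ne $opt_vmid, the numeric $vmid also carries
a string representation, which the JSON encoder then emits as "100"
instead of 100; casting with int($vmid) before returning avoids that.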
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reported in the community forum[0].
In QEMU's hw/scsi/vmw_pvscsi.c in the SCSIBusInfo struct, the max_lun property
is set to 0. This means that in our stack, one currently cannot have
multiple disks and use 'scsihw: pvscsi', as kvm would fail with
bad scsi device lun: 1
Instead of increasing the lun number, increase the scsi-id, as we already do for
lsi.* (in hw/scsi/lsi53c895a.c the max_lun property is also 0).
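Roughly (device line abbreviated, values illustrative), the second disk is
now attached with
-device scsi-hd,bus=scsihw0.0,scsi-id=1,lun=0,...
rather than with scsi-id=0,lun=1, which pvscsi rejects because of its
max_lun of 0.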
[0]: https://forum.proxmox.com/threads/kvm-bad-scsi-device-lun-1.84318/
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
Tested-by: Stefan Reiter <s.reiter@proxmox.com>