Just outside of the diff context, we already save the result of
machine_type_is_q35 into the $q35 variable, but never use it.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reuse PVE::CpuSet to validate the cpuset formatting.
Add a new QEMU property called 'affinity' to store the cpuset.
Push the taskset command in front of kvm if 'affinity' is set.
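For illustration, the command prefixing boils down to something like the following (a minimal sketch; the variable names and config access are simplified and not the actual code):

# prepend taskset if an affinity cpuset is configured (illustrative sketch)
if (defined(my $affinity = $conf->{affinity})) {
    # e.g. affinity '0-3,8' restricts the kvm process to those host CPUs
    unshift @$cmd, '/usr/bin/taskset', '--cpu-list', $affinity;
}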
Signed-off-by: Daniel Bowder <daniel@bowdernet.com>
If the preparation of PCI devices or the start of the VM fails, we need
to clean up the PCI devices (reservations *and* mdevs), or else there
might be leftovers which have to be removed manually.
To also include mdevs now, refactor the cleanup code from
'vm_stop_cleanup' into its own function, and call that instead of
only 'remove_pci_reservation'.
This also simplifies the code, since it now removes all PCI IDs
reserved for that VMID; we cannot have multiple VMs with the
same VMID anyway.
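Conceptually, the refactored helper looks roughly like this (function and helper names suffixed with _sketch are placeholders, and signatures are simplified, not the actual code):

# sketch: clean up PCI reservations and created mdevs for a VM
sub cleanup_pci_devices_sketch {
    my ($vmid, $conf) = @_;

    for my $key (grep { /^hostpci\d+$/ } keys %$conf) {
        my $device = parse_hostpci_sketch($conf->{$key}); # placeholder parser
        next if !$device || !$device->{mdev};
        # mediated devices are removed via their sysfs 'remove' attribute
        eval { remove_mdev_sketch($vmid, $device) };       # placeholder helper
        warn $@ if $@;
    }

    # drop all PCI reservations held for this VMID; there can never be two
    # running VMs with the same VMID, so this is safe
    remove_pci_reservation($vmid);
}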
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Previously, only a plaintext line in the task log showed something was off.
Now, the GUI will show it as a warning.
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
This allows regenerating the config drive if pending values exist
when we change VM options.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
This allows regenerating the config drive with a single API call.
It also avoids having to delete the drive first and recreate it again.
As it's a read-only drive, we can simply update it live
and eject/replace it via the QEMU monitor.
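For illustration, the live update conceptually boils down to regenerating the image on the storage and then swapping the medium via the monitor, e.g. with human-monitor commands along these lines (the drive ID and path are placeholders, not necessarily what the code uses):

eject drive-ide2
change drive-ide2 /var/lib/vz/images/100/vm-100-cloudinit.qcow2 qcow2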
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Instead of using the VM's pending options for the pending cloud-init generated config,
write the currently generated cloud-init config to a new [special:cloudinit] SECTION.
Currently, some options like the VM name or a NIC's MAC address can be hotplugged,
so there is no way to know whether the cloud-init disk has already been updated.
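For illustration, the generated values then live in their own section of the VM config, conceptually like this (the keys shown are only examples):

[special:cloudinit]
name: myvm
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0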
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
while making it take the value directly instead of the config.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since kernel 5.15, there is an issue with io_uring when used in
combination with CIFS [0]. Unfortunately, the kernel developers did
not suggest any way to resolve the issue and didn't comment on my
proposed one. So for now, just disable io_uring when the storage is
CIFS, as is done for other storage types that had problematic
interactions.
It is rather easy to reproduce when writing large amounts of data
within the VM. I used
dd if=/dev/urandom of=file bs=1M count=1000
to reproduce it consistently, but your mileage may vary.
There are some forum reports of users running into the issue [1][2][3].
[0]: https://www.spinics.net/lists/linux-cifs/msg26734.html
[1]: https://forum.proxmox.com/threads/109848/
[2]: https://forum.proxmox.com/threads/110464/
[3]: https://forum.proxmox.com/threads/111382/
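A minimal sketch of the kind of check this amounts to (variable names are illustrative; the exact fallback backend depends on the other drive/cache settings):

# avoid io_uring on CIFS-backed storages (illustrative sketch)
if (($scfg->{type} // '') eq 'cifs') {
    $aio = 'threads'; # fall back to a non-io_uring AIO backend
}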
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
If the preparation of PCI devices or the start of the VM fails, we need
to clean up the PCI devices (reservations *and* mdevs), or else
there might be leftovers which have to be removed manually.
To also include mdevs now, refactor the cleanup code from 'vm_stop_cleanup'
into its own function, and call that instead of only 'remove_pci_reservation'.
Also print the errors of the cleanup steps with 'warn'; otherwise we
might discard important errors.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
When passing through an NVIDIA vGPU via mediated devices, their
software needs the qemu process to have the 'uuid' parameter set to the
one of the vGPU. Since it's currently not possible to pass through multiple
vGPUs to one VM (seems to be an NVIDIA driver limitation at the moment),
we don't have to worry about that.
Sadly, because of where we do this, it does not show up in 'qm showcmd', as we
don't (want to) query the PCI devices in that case and then we don't
have a way of knowing whether it's an NVIDIA card or not. But since this
is purely informational for QEMU anyway, I'd say we can ignore that.
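For illustration, what this amounts to when assembling the command line (variable names are placeholders):

# pass the vGPU's mdev UUID so the NVIDIA driver can find its device
push @$cmd, '-uuid', $mdev_uuid if defined($mdev_uuid);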
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Before, each of the two strings was one single string, rather than multiple
lines separated by newlines.
In the docs, this looked very strange, as there were line breaks and the
dots were shown. This can be seen, e.g., in the API viewer under /nodes/{node}/qemu/{vmid}/config.
Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
We forgot to pass the namespace parameter here, so do that.
While we're at it, pass the PBS options as a hash instead of adding
yet another parameter.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
and exit early if they are not met.
The necessary libraries were taken from Thomas' post in our community
forum:
https://forum.proxmox.com/threads/.61801/post-466767 (ff)
The /dev/dri/renderD.* check is based on util/drm.c in the current
qemu source code.
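A minimal sketch of the early exit for the render-node part of the check (the library check is omitted here; see the forum post for the actual list):

# bail out early if no DRM render node is available (illustrative sketch)
my @render_nodes = glob('/dev/dri/renderD*');
die "no DRM render node found, GL display not usable\n"
    if !scalar(@render_nodes);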
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Generalizes fd95d780 ("migrate: send updated TPM state volid to target
node") to also handle other offline migrated disks appearing in the
VM config, which currently should only be cloud-init.
Breaks migration new -> old under similar (edge-case) conditions as
fd95d780 did.
Keep sending the 'tpmstate0' STDIN parameter to avoid breaking new ->
old in the scenario fd95d780 fixed.
Keep parsing the vm_start 'tpmstate0' STDIN parameter to avoid
breaking old -> new, and to be able to keep sending it.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
To be consistent with PBS's implementation of multi-line comments,
remove "\s*" here too. Since the regex isn't lazy, .* already matches
everything \s* would anyway. (Note that newlines occur after "$".)
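A quick Perl illustration of that point (not the actual regex from the code):

my $line = "some comment   ";
$line =~ /^(.*)\s*$/;  # $1 is "some comment   ": the greedy .* already took the trailing spaces
$line =~ /^(.*)$/;     # $1 is exactly the same without the "\s*"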
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
by re-using the same hash that's used when allocating/activating the
disks in the helpers doing the opposite.
Also in preparation to allow skipping certain disks upon restore.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
It's only available since QEMU 6.2 and doing a check here rather than
bumping the package dependency allows for easy downgrades.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
via the special syntax <storeid>:<size>.
Not worth it by itself, but this is anticipating a new 'import-from'
parameter which is only used upon import/allocation, but shouldn't be
part of the schema for the config or other API endpoints.
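As a usage illustration (storage name and size are just examples), allocating a new volume with that syntax could look like:

qm set 100 --scsi1 local-lvm:32

which would allocate a new 32 GiB volume on the 'local-lvm' storage.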
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Co-developed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
[split into its own patch + minor improvements/style fixes]
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
[renamed API handler, since it's not an index]
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
because when the VM ID of target and source are the same,
qemu_drive_mirror_monitor() switches the QEMU device node over to the
new backing image. The planned import-from functionality makes it
possible to run into this, although for a somewhat unusual use case.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Necessary to import from an existing storage using block-device
volumes like ZFS.
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
[split into its own patch]
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
For disk import, it should be based on the disk properties that are
passed in rather than on those of a possibly pre-existing disk in the
config.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
and also when source and target drive names are different. In those
cases, it is done via qemu-img convert/dd.
In preparation to allow import from existing PVE-managed disks.
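For illustration, the fallback copy is conceptually equivalent to something like the following (paths and formats are placeholders):

qemu-img convert -p -n -f raw -O raw /dev/zvol/rpool/vm-100-disk-0 /dev/pve/vm-101-disk-0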
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
It's confusing that the config associated to the destination is
actually a reference to the source config for both existing callers.
Also, disk import will need to base the calculation on the passed-in
drive parameters and not just the current config, so this change is in
preparation for that too.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Avoids the error
adding drive failed: Duplicate ID 'drive-scsi1' for drive
that could happen when switching over to a new disk (e.g. via qm set),
if unplugging wasn't fast enough.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The refactoring in 36d4bdcb86 missed
this. The check is already done as part of the following check_storage
call.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
When restoring a backup and the storage the disks would be created on
doesn't allow 'images', the process errors without cleanup.
This is the same behaviour we currently have when the storage is
disabled.
Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
preparation for also clamping on hotplug and lower the minimum in the
schema so that the full v2 range can be used.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
when passing a config from one cluster to another, we want to be strict
when parsing - it's better to fail the migration early and upgrade the
target node instead of failing the migration later (when significant
work for transferring disks and/or state has already been done) or not
at all, but silently lose config settings that the target doesn't
understand.
this also might be helpful in other cases - e.g. when restoring from a
backup.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
since we are going to reuse the same mechanism/code for network bridge
mapping and pve-container.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
While existing callers are not using the parameter after the call,
the modification is rather unexpected and could lead to bugs quickly.
Also avoid setting an undef value in the hash, but use delete instead.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
We drop properties which we do not understand, and we call
`vmconfig_apply_pending` on stop and before start. So if a user tried
to edit the config or downgraded qemu-server, they may get stuff
dropped from the config just by doing a stop/start, which may be a
bit too confusing; the write is also just unnecessary then.
We also have the same skipping logic when starting VMs; this way we
avoid calling 'write_config' when there are no changes present to
commit.
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
The volid may change if local-storage migration is involved, we need
to tell the target node the new one and update the in-memory config
for starting the target VM accordingly.
Reported here: https://forum.proxmox.com/threads/99906/#post-431345
this possibly breaks migration new -> old iff
- spice is not used (else the explicit ticket wins because it comes
later)
- a local TPM state volume is used
- that local TPM state volume has a different volume id on the target
node (switched storage, volname already taken, ..)
because the target node will then misinterpret the tpmstate0 line as the
SPICE ticket and set it accordingly. If the old TPM state volume ID does
not exist on the target node, migration will fail. If it exists by
chance, it might work, albeit with a wrong SPICE ticket (new because of
this patch) and TPM state volume (pre-existing breakage).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
This patch fixes the wrong attempt to set up an NBD server for
the replicated TPM state volume. In contrast to the other volumes, the
TPM state is managed by swtpm and isn't available to QEMU for
block migration/bitmap tracking.
Note that we do migrate the state volume via a storage migration
anyway if necessary.
This code path was only triggered for replicated VMs with TPM.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Otherwise a user cannot use more than one mdev per card per host.
We do not need to reserve them at all, since sysfs will error out
on creation/reuse anyway.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Issue reported in the community forum [0][1]: like for the "serial[n]" display, we
also need to set this option for "none", otherwise we get a boot
loop.
[0]: https://forum.proxmox.com/threads/99508
[1]: https://forum.proxmox.com/threads/97310/post-427129
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
See commit 17858a1695 (hw/acpi/ich9: Set ACPI PCI hot-plug as default
on Q35)[0] in upstream QEMU repository for details about why the change
was made.
As that change affects systemd's predictable interface naming [1],
e.g., by going from a previously `ens18` name to `enp6s18`, it may
have rather bad effects for users that did not set up .link files
to enforce a specific naming based on more stable information like the
NIC's MAC address.
The alternative would be making the preferred mode of hotplug an
option like `hotplug-mode=<acpi|pcie>`, but it does not seem like
one would want to change that much in the first place...
Note the changes to the tests and especially the tests with q35
machines that did not change.
[0]: https://gitlab.com/qemu-project/qemu/-/commit/17858a1695
[1]: https://www.freedesktop.org/software/systemd/man/systemd.net-naming-scheme.html#Naming
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This is intended to be used to apply some workarounds for VMs with a
non-Windows ostype which we'd still like not to pin to a specific
machine version, as Linux et al. can normally cope with such changes
on a fresh boot just fine, and until now this was a once-every-few-years
issue (albeit systemd's "predictable" interface naming has some
potential to increase the churn frequency).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Currently we only add the creation time (ctime), which was requested
as a low-priority wish by some users from time to time.
Note that the meta info is not available in the update API endpoints,
and at the moment the code should not change/add/delete it in
any place either.
We may want to update it on actions like clone or backup-restore in
the future, e.g., to also save the time of that event and possibly
the original source VMID, but that can be thought out later.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
While Perl returns the (scalar) result of the last expression
automatically, it's still nicer to do so explicitly.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This allows a user to set a drive to 'read-only'. This can be useful
if a disk should not be written to, or if the backing file/source is
not writable (like a PBS backup mapped to /dev/loopX).
The option is named 'ro' for consistency with containers.
While this could also be achieved by setting 'snapshot=1', that would
create a temporary file in /var/tmp which can get quite big.
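As a usage illustration (VMID, controller, and volume name are just examples):

qm set 100 --scsi1 local-lvm:vm-100-disk-1,ro=1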
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
OVMF with SMM enabled will not boot on i440fx (it hangs on graphics
initialization), so load the non-SMM variant.
This should be no issue regarding live migration, since it never worked with
this anyway.
Also adapts the test and adds one with q35.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
Fix the classic indentation error on `additionalProperties` in the
main QEMU API.
Drop some not-so-useful empty lines to avoid making rather huge
methods even bigger (more intimidating, less on screen to grasp the
full picture).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
swtpm may take a little bit to daemonize, so the pidfile might not be
available right after run_command. This causes an ugly warning about using an
undefined value in a match, so wait up to 5s for it to appear.
Note that in testing this loop only ever got to the first or second
iteration, so I believe the timeout duration should be more than enough.
Also add a missing 'usleep' import; 'usleep' was used before but never
imported, so apparently the other code path never got triggered...
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
QEMU doesn't know about the tpmstate, so 'do_snapshots_with_qemu' should
never return true in that case. Note that inconsistencies related to
snapshot timing do not matter much, as the actual TPM data is exported
together with other device state by QEMU anyway.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
`properties` is a bit ambiguous, and as we have scope and start
runtime properties in the same scope, it's good to avoid that
ambiguity.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
On VM start, we reserve all PCI IDs that we use, and
remove the reservation again in vm_stop_cleanup:
first with only a time-based reservation, but after the VM has started,
we reserve again, this time with the PID.
For this, we have to move the start_timeout calculation above the
hostpci handling.
Also move the PCI initialization out of the config parsing loop,
so that we can reserve all IDs before we actually touch any of them.
While touching the lines, fix the indentation.
This way, when a VM is started with a PCI device that is already configured
for a different running VM, it will not be started and the user gets
the error that the device is already in use.
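Conceptually, the start path then looks roughly like this (helper name and signature are simplified placeholders, not the actual code):

# reserve all configured PCI IDs up front, with a time-based reservation
reserve_pci_usage($pci_ids, $vmid, $start_timeout);
# ... start the QEMU process ...
# once the VM is running, renew the reservation, now bound to the QEMU PID
reserve_pci_usage($pci_ids, $vmid, undef, $pid);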
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Provide support for secure boot by using the new "4m" and "4m-ms"
variants of the OVMF code/vars templates. This is specified on the
efidisk via the 'efitype' and 'ms-keys' parameters.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Starts an instance of swtpm per VM in its systemd scope; it will
terminate by itself if the VM exits, or be terminated manually if
startup fails.
Before first use, a TPM state is created via swtpm_setup. State is
stored in a 'tpmstate0' volume, treated much the same way as an efidisk.
It is migrated 'offline', the important part here is the creation of the
target volume, the actual data transfer happens via the QEMU device
state migration process.
Move-disk can only work offline, as the disk is not registered with
QEMU, so 'drive-mirror' wouldn't work. swtpm itself has no method of
moving a backing storage at runtime.
For backups, a bit of a workaround is necessary (this may later be
replaced by NBD support in swtpm): During the backup, we attach the
backing file of the TPM as a read-only drive to QEMU, so our backup
code can detect it as a block device and back it up as such, while
ensuring consistency with the rest of disk state ("snapshot" semantic).
The name for the ephemeral drive is specifically chosen as
'drive-tpmstate0-backup', diverging from our usual naming scheme with
the '-backup' suffix, to avoid it ever being treated as a regular drive
from the rest of the stack in case it gets left over after a backup for
some reason (shouldn't happen).
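On the QEMU side, wiring the VM up to swtpm amounts to arguments along these lines (socket path and IDs are placeholders):

-chardev socket,id=chrtpm,path=/var/run/qemu-server/100.swtpm
-tpmdev emulator,id=tpm0,chardev=chrtpm
-device tpm-tis,tpmdev=tpm0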
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
if a volume is only referenced in the pending section of a config it was
previously not removed when removing the VM, unless the non-default
'remove unreferenced disks' option was enabled.
keeping track of volume IDs which we attempt to remove gets rid of false
warnings in case a volume is referenced both in the config and the
pending section, or multiple times in the config for other reasons.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The assumption that the index of the controller matches that of the last
removed drive only holds for the virtio-scsi-single controller, which makes
the old code print a warning when removing the last drive of a
non-virtio-scsi-single controller, except when the indices line up by
chance.
We can simply call a simplified qemu_iothread_del only when removing a
SCSI disk of a VM with the virtio-scsi-single controller, and skip the
call for the other controllers, which don't support io-threads anyway.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The content of the ISO should be the same on both nodes, so offline
migrate the ISO, but don't regenerate it on VM start on the target node.
This way even with snippets the content will not change during live
migration.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
There may be a kernel issue or a bug in how QEMU uses io_uring, but
we have users that report crashes which f.ebner could reproduce on some
workloads. It's not really deterministic, though, and it seems that in newer
kernel versions (5.12+) the crash becomes a hang.
While we're closing in on the actual issue here (which could be the
same as for RBD), let's disable io_uring for LVM.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
That bit of code seems to be enough here, tested with
qm set VMID --net1 e1000e=EA:93:42:22:10:D8,bridge=vmbr0
on an Alpine Linux and a Windows Server 2016 VM.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In v2 the range is [1, 10000], but the API allows the old limits from
2 to 262144, so clamp the upper for v2.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The efidisk never got restored correctly before, since we don't use the
generic print_drive_commandline_full for it, and as such it didn't get a
backing image attached. This not only causes the efidisk data to be lost
on restore, but also an error at the end, since we try to remove a
non-existing PBS blockdev.
Since it is attached differently to a regular drive, adding PBS backing
would be more difficult, but not to worry: an efidisk is small enough
that it doesn't hurt performance to just restore it via the regular
mechanism before starting the VM, and simply excluding it from the live
restore entirely.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Otherwise it'll produce a whole lot of checksum errors.
And while this would be nice as a storage feature check,
it's hard to be 100% accurate there anyway, since a directory
storage can point anywhere, like for instance at a btrfs
directory, causing the same issue...
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
this allows effectively setting ALL volumes as read-only, even if the
disk controller does not support it. without it, IDE and SATA disks
with (base) volumes which are marked read-only/immutable on the storage
level prevent the template VM from starting for backup purposes.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
otherwise backups of templates using UEFI fail with storages like LVM
thin, where the volumes are not writable. disk controllers like IDE and
SATA that don't support being read-only are still broken for UEFI.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
[ drop the readonly=off when not required, resolve merge conflict
from Dominik's EFI disk cache mode fix ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
KillMode 'none' is deprecated, and systemd loudly complains about that
in the journal. To avoid the warning, but keep the behaviour the same,
use KillMode 'process'.
This mode does two things differently, which we have to stop it from
doing:
* it sends SIGTERM right when the scope is cancelled (e.g. on shutdown)
-> but only to the "root" process, which in our case is the worker
instance forking QEMU, so it is already dead by the time this happens
* it sends SIGKILL to *all* children after a timeout
-> can be avoided by setting either SendSIGKILL to false, or
TimeoutStopUSec to infinity - for safety, we do both
In my testing, this replicated the previous behaviour exactly, but
without using the deprecated 'none' mode.
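In terms of scope properties, the result looks roughly like this (illustrative; the properties are set when the scope is created):

KillMode=process
SendSIGKILL=no
TimeoutStopUSec=infinity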
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
The 'aio' setting is not visible to the guest, and so can be changed
during migrations or snapshots without issue. It is thus only
dependent on the actual QEMU version being >= 6.0, not the machine
version.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
and use it for the vdisk_list call too. This avoids scanning (and picking up
volumes from!) storages that are not even configured to hold images.
Previously, the content type was only enforced when a storage map was present.
Also serves a bit as a preparation to enforce content type on guest startup,
because now migration failure happens early and not only when trying to start
the guest on the remote node.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
storage_check_enabled simply checks for the 'disable' option and then calls
storage_check_node.
While not strictly necessary for a second call where only the storage differs,
e.g. in case of clone, it is more future-proof: if support for a target storage
is added at some point, it might be easy to miss adapting the call.
For the migration checks, the situation is improved by now always catching
disabled (target) storages.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
to avoid potential problems with stringified numbers in JavaScript and
elsewhere.
The vmid was not always an integer as the return schema expects, namely
when there was an opt_vmid argument, because the 'ne' comparison coerced the
vmid to be a string then.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reported in the community forum[0].
In QEMU's hw/scsi/vmw_pvscsi.c in the SCSIBusInfo struct, the max_lun property
is set to 0. This means that in our stack, one cannot have multiple disks and
use 'scsihw: pvscsi' currently, as kvm would fail with
bad scsi device lun: 1
Instead of increasing the lun number, increase the scsi-id, as we already do for
lsi.* (in hw/scsi/lsi53c895a.c the max_lun property is also 0).
[0]: https://forum.proxmox.com/threads/kvm-bad-scsi-device-lun-1.84318/
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
Tested-by: Stefan Reiter <s.reiter@proxmox.com>
On slower Ceph clusters, the write pattern of the OVMF boot process
slows down the boot of the VM, so we turn on caching by default.
It seems no other storage (until now) behaves like this. If one does in
the future, we can still add it too, or add a 'cache' property for
the efidisk.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The only caller that didn't use 'images' was removed as part of the migration
refactoring in commit 62a4c963b8, so this is not
even a breaking change as the 'PVE 7' comment might've suggested.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
To bring it better in line with regular restore, also log the
repository, the snapshot and the target for each drive.
While at it, adjust capitalization of existing log line and clean up
repeated '$1' use.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
It's arguably not likely in practice that only an unused volume is still in use
as a base image, but do it for completeness' sake.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
QEMU warns us about this:
kvm: -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait: warning: short-form boolean option 'server' deprecated
Please use server=on instead
kvm: -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait: warning: short-form boolean option 'nowait' deprecated
Please use wait=off instead
kvm: -vnc unix:/var/run/qemu-server/100.vnc,password: warning: short-form boolean option 'password' deprecated
Please use password=on instead
The new syntax is backwards compatible to at least QEMU 4.0.
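For example, the invocations quoted above then become:

-chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server=on,wait=off
-vnc unix:/var/run/qemu-server/100.vnc,password=on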
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>