Commit Graph

2336 Commits

Author SHA1 Message Date
Fabian Grünbichler
399ca0d66e migrate: improve start STDIN-parameter parsing
only do the compat fallback if no explicit spice ticket was given, and
warn on unknown parameters on STDIN.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-22 16:53:09 +01:00
Fabian Grünbichler
41c8671e78 migrate: skip tpmstate for NBD migration
This patch fixes the incorrect attempt to set up an NBD server for
the replicated TPM state volume. In contrast to the other volumes,
the TPM state is managed by swtpm and isn't available to QEMU for
block-migration/bitmap tracking.

Note that we do migrate the state volume via a storage migration
anyway if necessary.

This code path was only triggered for replicated VMs with TPM.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-16 14:03:01 +01:00
Dominik Csapak
1319908f5d exclude efidisk and tpmstate for boot disk selection
Otherwise we cannot create a VM without a disk but with a tpmstate/efidisk,
since the API tries to generate the default bootorder with them included.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-11-15 16:57:52 +01:00
Dominik Csapak
9c85548fa1 pci: do not reserve pci-ids for mediated devices
Otherwise a user cannot use more than one mdev per card per host.
We do not need to reserve them at all, since sysfs will error out
on creation/reuse anyway.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-11-15 15:01:12 +01:00
Thomas Lamprecht
ce3fbcd456 api: update: fix missing newline in background-delayed task error
This error path is mostly used for re-attaching disks and the like,
and the "check if task is already done" part uses a method to read
the task status that will never include a trailing newline, so add it
ourselves to avoid "... at /usr/share/perl5/PVE/API2/Qemu.pm line
1480. (500)"

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-15 09:21:50 +01:00
Oguz Bektas
bec8742495 cfg2cmd: disable SMM when both display=none and SeaBIOS are used
Issue reported in the community forum [0][1]: as with the "serial[n]"
display, we also need to set this option for "none", otherwise we get
a boot loop.

[0]: https://forum.proxmox.com/threads/99508
[1]: https://forum.proxmox.com/threads/97310/post-427129

Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-11 14:16:09 +01:00
Thomas Lamprecht
f519ab0b76 api: move disk: schema indentation and style-nit fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-11 14:01:04 +01:00
Thomas Lamprecht
c41439ac5b qm: move-disk: do not make reassign-specific options fixed
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-11 14:01:04 +01:00
Thomas Lamprecht
e849ff6f91 qm: style/indentation/cleanup fixes for command definition
and record some possible FIXMEs for a next point/major release

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-11 14:01:04 +01:00
Fabian Grünbichler
9fb295d095 migrate: factor out storage checks
to re-use them for incoming remote migrations.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-11 10:47:26 +01:00
Fabian Grünbichler
a4d828e35e adapt to renamed storage-pair format
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-11 10:47:26 +01:00
Fabian Ebner
e5a6919c38 cfg2cmd: turn smm off when SeaBIOS and serial display are used
Since commit 277d33454f77ec1d1e0bc04e37621e4dd2424b67 in pve-qemu,
smm=off is no longer the default, but with SeaBIOS and serial display,
this can lead to a boot loop.

Reported in the community forum [0] and reproduced with a Debian 10
VM.

[0]: https://forum.proxmox.com/threads/pve-7-0-all-vms-with-cloud-init-seabios-fail-during-boot-process-bootloop-disk-not-found.97310/post-427129
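
A minimal sketch of the relevant command-line fragment (machine type
is a placeholder, not taken from this commit):

    -machine type=pc-i440fx-6.0+pve0,smm=off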

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-11 10:32:17 +01:00
Aaron Lauterer
bf67da2bf7 disk reassign: add unused disks directly to config
Using $update_vm_api for unused disks will cause them to end up as a
pending change if the VM is running.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2021-11-11 10:08:26 +01:00
Fabian Grünbichler
f4e4c77984 disk reassign: fix assigning to unused slot
this broke with the previous simplification.

Tested-by: Aaron Lauterer <a.lauterer@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-10 12:41:04 +01:00
Fabian Grünbichler
a6273aa8bf reassign disk: more cleanup
avoid re-using the toplevel variable name

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-10 12:11:03 +01:00
Fabian Grünbichler
441024921e reassign disk: fix permission checks
with `storage` being optional (and not allowed for reassign operations),
the ACL path in the schema can end up as `/storage/-`, which is wrong.
replace it with an explicit check:

- target `storage` for move disk
- storage from source disk for reassign disk (we only rename here, but
  it's still a new volume on that storage after all)

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-10 11:49:36 +01:00
Fabian Grünbichler
dbc817ba4a reassign disk: various improvements
Some style fixes, some missing checks, and some duplication reduced a bit.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-11-10 11:49:36 +01:00
Aaron Lauterer
70c0ad6687 api: move-disk: cleanup very long lines
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2021-11-09 16:16:00 +01:00
Aaron Lauterer
a94532188d api: move-disk: add move to other VM
The goal of this is to expand the move-disk API endpoint to make it
possible to move a disk to another VM. Previously this was only possible
with manual intervention, either by renaming the VM disk or by manually
adding the disk's volid to the config of the other VM.
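
A hedged usage sketch of the expanded endpoint via the CLI (VM IDs,
drive names and option names are illustrative):

    qm move-disk 100 scsi1 --target-vmid 101 --target-disk scsi2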

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2021-11-09 16:16:00 +01:00
Aaron Lauterer
1071373027 Drive: add valid_drive_names_with_unused
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2021-11-09 16:16:00 +01:00
Aaron Lauterer
2817091d33 cli: qm: change move_disk to move-disk
Also add an alias to keep move_disk working.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2021-11-09 16:16:00 +01:00
Fabian Ebner
7a16336dd2 config: rollback is possible: add blockers parameter
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-11-08 10:34:00 +01:00
Thomas Lamprecht
cc18103635 cfg2cmd: switch off ACPI hotplug on bridges for q35 VMs
See commit 17858a1695 (hw/acpi/ich9: Set ACPI PCI hot-plug as default
on Q35)[0] in upstream QEMU repository for details about why the change
was made.

As that change affects systemd's predictable interface naming[1],
e.g., by going from a previous `ens18` name to `enp6s18`, it may
have rather bad effects for users that did not set up some .link files
to enforce a specific naming based on more stable information like the
NIC's MAC address.

The alternative would be making the preferred mode of hotplug an
option like `hotplug-mode=<acpi|pcie>`, but it does not seem like
one would like to change that much in the first place...

Note the changes to the tests and especially the tests with q35
machines that did not change.

[0]: https://gitlab.com/qemu-project/qemu/-/commit/17858a1695
[1]: https://www.freedesktop.org/software/systemd/man/systemd.net-naming-scheme.html#Naming
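
As an illustrative sketch, restoring the previous behaviour boils down
to a single machine-level override (property name assumed from the
referenced QEMU commit):

    -global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off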

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-04 15:30:30 +01:00
Thomas Lamprecht
af2a1a1cdb config: meta: also save the QEMU version installed during creation
This is intended to be used to apply some workarounds for VMs with a
non-Windows ostype, which we'd still like not to pin to a specific
machine version, as Linux et al. can normally cope with such changes
on a fresh boot just fine, and until now this was a once-every-few-years
issue (albeit systemd's "predictable" interface naming has some
potential to increase the churn frequency).
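
An illustrative resulting config line (values are placeholders):

    meta: creation-qemu=6.1.0,ctime=1635950000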

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-04 15:30:30 +01:00
Thomas Lamprecht
26b443c846 config: add new meta property with the VM creation time
Currently we only add the creation time (ctime), which was requested
as a low-priority wish by some users from time to time.

Note that the meta info is not available in the update API endpoints,
and at the moment the code should not change/add/delete it either in
any place.

We may want to update it on actions like clone or backup-restore in
the future, e.g., to also save the time of that event and possibly
the original source VMID, but that can be thought out later.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-04 15:30:22 +01:00
Thomas Lamprecht
115cb432bc cloud init: add comment regarding 3 MiB size limit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Originally-by: Mira Limbeck <m.limbeck@proxmox.com>
2021-11-04 13:14:17 +01:00
Constantin Herold
101beafe0d fix #2429: allow to specify cloud-init vendor snippet via cicustom
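
Illustrative usage (storage and snippet name are placeholders):

    qm set 100 --cicustom "vendor=local:snippets/vendor-data.yaml"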
Signed-off-by: Constantin Herold <proxmox8914@herold.me>
Reviewed-by: Mira Limbeck <m.limbeck@proxmox.com>
2021-11-04 12:46:07 +01:00
Thomas Lamprecht
33f8b88782 agent hotplug: small style cleanups & comment addition
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-04 08:46:22 +01:00
Alexandre Derumier
74ea2c65a9 qemu-agent: allow hotplug of fstrim_cloned_disk option.
This option doesn't have any impact on the device itself.

Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
2021-11-04 08:37:03 +01:00
Thomas Lamprecht
e8a268100b vm_commandline: reduce line bloat
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-03 16:35:57 +01:00
Thomas Lamprecht
6971c38ed9 print_keyboarddevice_full: drop unused machine parameter
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-03 16:35:57 +01:00
Thomas Lamprecht
f606d5bd6f scsi_inquiry: refactor and code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-03 16:35:57 +01:00
Thomas Lamprecht
8eb73377c1 kvm_user_version: add explicit return statement
While Perl returns the (scalar) result of the last expression
automatically, it's still nicer to do so explicitly.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-11-03 16:21:22 +01:00
Thomas Lamprecht
1f91f7b464 drives: ro: code reduction/refactor
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-27 17:00:24 +02:00
Dominik Csapak
12e1d472e3 drives: expose 'readonly' flag of qemu for scsi/virtio
This allows a user to set a drive to 'read-only'. This can be useful
if a disk should not be written to, or if the backing file/source is
not writable (like a PBS backup mapped to /dev/loopX).

The option is named 'ro', to achieve consistency with containers.

While this could also be achieved by setting 'snapshot=1', that would
create a temporary file in /var/tmp which can get quite big.
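
Illustrative usage (storage and volume are placeholders):

    qm set 100 --scsi1 local-lvm:vm-100-disk-1,ro=1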

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-10-27 13:53:11 +02:00
Stefan Reiter
6a5589010e vzdump: increase timeout for QMP 'cont' after backup start
Since 'backup' can now work asynchronously, QEMU may not be ready to
receive the next QMP command ('cont') immediately. Thus, increase the
timeout, to avoid aborted backups in slow environments.

There may be a deeper QEMU bug hidden under the covers here too, but at
least one user reported success with simply increasing the timeout:
https://forum.proxmox.com/threads/pve7-pbs2-backup-timeout-qmp-command-cont-failed-got-timeout.95212/page-2#post-426261

See also:
https://bugzilla.proxmox.com/show_bug.cgi?id=3693
https://forum.proxmox.com/threads/problem-seit-update-auf-7-0.97388/
https://forum.proxmox.com/threads/error-with-backup-when-backing-up-qmp-command-query-backup-failed-got-wrong-command-id.88017/page-3#post-416339

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-10-27 13:51:45 +02:00
Fabian Ebner
23bee97d05 vm start: only print tpm-related message if there is an instance
Otherwise, this can produce an undef warning and be misleading.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-10-27 13:23:18 +02:00
Dominik Csapak
25de70ae59 fix removing cpulimit on running vm
like in pve-container:
04a62bd ("fix #3506: config: fix removing the cpulimit of a running CT")

reported in the forums (no bug# yet):
https://forum.proxmox.com/threads/issue-with-removing-cpu-limit-from-running-vm.97799/

note that this will break CGv1 without the following fix installed:
https://git.proxmox.com/?p=pve-common.git;a=commitdiff;h=d37a71867

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Oguz Bektas <o.bektas@proxmox.com>
Reviewed-by: Oguz Bektas <o.bektas@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-22 11:55:08 +02:00
Dominik Csapak
90b20b152c use non SMM ovmf code file for i440fx machines
OVMF with SMM enabled will not boot on i440fx (it hangs on graphics
initialization), so load the non-SMM variant.

This should be no issue regarding live migration, since it never worked
with this anyway.

Adapts the tests and adds one with q35.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
2021-10-21 12:38:58 +02:00
Thomas Lamprecht
5a08fb9c8b config properties: refactor skipping internal options to declarative
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-21 08:23:11 +02:00
Thomas Lamprecht
3326ae19de code and indentation cleanups
fix the classic indentation error on `additionalProperties` in the
main QEMU API

drop some not so useful empty lines to avoid making rather huge
methods even bigger (more intimidating, less on screen to grasp the
full picture).

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-20 17:31:45 +02:00
Thomas Lamprecht
fa3b3ce067 config2cmd: code cleanup and indentation reduction
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-20 15:39:20 +02:00
Thomas Lamprecht
483ceeabef indentation and fixes
with some style/tw thrown in-between

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-20 12:56:59 +02:00
Thomas Lamprecht
8d88a59433 fix overly long/short lines and typos
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-20 08:24:08 +02:00
Stefan Reiter
179b9f1ba5 ostype: support Windows 11/Server 2022
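
Illustrative usage (VM ID is a placeholder):

    qm set 100 --ostype win11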
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-10-18 09:53:44 +02:00
Thomas Lamprecht
2c29655663 Revert "config_aware_timeout: add 5s if TPM is configured"
This reverts commit d4e1e1f862.

It's bogus, the VM start timeout is only starting to tick after we
started the TPM already...
2021-10-18 09:47:42 +02:00
Thomas Lamprecht
d4e1e1f862 config_aware_timeout: add 5s if TPM is configured
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-18 09:43:46 +02:00
Thomas Lamprecht
90c41bac8f swtpm: die early in startup check
no point in waiting another 50 ms if we know that we'd die already
anyway..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-18 09:43:46 +02:00
Thomas Lamprecht
6bbcd71f94 code style: readability cleanups
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-18 09:43:46 +02:00
Stefan Reiter
f85951dc82 swtpm: wait for pidfile
swtpm may take a little bit to daemonize, so the pidfile might not be
available right after run_command. Causes an ugly warning about using an
undefined value in a match, so wait up to 5s for it to appear.

Note that in testing this loop only ever got to the first or second
iteration, so I believe the timeout duration should be more than enough.

Also add a missing 'usleep' import; 'usleep' was used before but never
imported, so apparently the other case never got triggered...
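
A minimal Perl sketch of the described wait loop (variable names are
illustrative, not the actual code):

    use Time::HiRes qw(usleep);

    my $tries = 0;
    while (!-e $pidfile && $tries < 100) { # up to ~5s in 50 ms steps
        usleep(50_000);
        $tries++;
    }
    die "swtpm did not write pidfile '$pidfile'\n" if !-e $pidfile;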

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-10-18 09:43:46 +02:00
Stefan Reiter
9d83932d7c snapshot: fix tpmstate with rbd
QEMU doesn't know about the tpmstate, so 'do_snapshots_with_qemu' should
never return true in that case. Note that inconsistencies related to
snapshot timing do not matter much, as the actual TPM data is exported
together with other device state by QEMU anyway.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-10-18 09:43:45 +02:00
Thomas Lamprecht
212220a4fa vm_start: better name systemd scope property variable
`properties` is a bit ambiguous and as we have scope and start
runtime properties in the same scope it's good to avoid that
ambiguity.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-18 09:43:45 +02:00
Thomas Lamprecht
c077cc166e cloudinit: opennebula: refactor to reduce code bloat
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-15 19:58:16 +02:00
Thomas Lamprecht
2eee6748a0 cloudinit: better use of string variable interpolation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-15 19:58:16 +02:00
Thomas Lamprecht
d01de38cb6 pci: prepare: improve no-IOMMU error message
give some context

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-15 19:58:16 +02:00
Dominik Csapak
1fb1822ec9 fix #3258: block vm start when pci device is already in use
on vm start, we reserve all pciids that we use, and
remove the reservation again in vm_stop_cleanup

First with only a time-based reservation; after the VM is started,
we reserve again, but with the PID.

for this, we have to move the start_timeout calculation above the
hostpci handling.

also moved the pci initialization out of the conf parsing loop
so that we can reserve all ids before we actually touch any of them

while touching the lines, fix the indentation

This way, a VM that is started with a PCI device already configured
for a different running VM will not be started, and the user gets
the error that the device is already in use.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-15 19:58:16 +02:00
Thomas Lamprecht
a01593676c pci reservation: rework helpers style and readability wise
both style and readability are naturally subjective to a certain
degree...

Also, this patch mixes a bit much into one thing, but splitting that
up would mean lots of work I just wanted to avoid, sorry about that.

Among other things:

- avoid a level of indentation in the reserve loop
- rename pciids to reservation_list where it was a better fit
- make reserve set either pid or time to avoid suggesting that we
  save both
- rename parameters to requested/dropped IDs for easier understanding
  what's going on in the code
- avoid old_pid/pid, use running_pid and reserver_pid instead to
  clarify what they actually mean
- drop useless returns to avoid suggesting the return value has any
  use, and save some lines
- use a hash slice to delete all dropped IDs at once, shorter and
  faster
- use 5 second timeout for reservation, this does nothing intensive
  nor does it wait for anything, so the critical section should be
  really short, 5s is really long enough for a wait..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-15 19:58:16 +02:00
Thomas Lamprecht
bda0ebff2d pci reservation: move lock/reservation file into /run/qemu-server
lck needs to die, the days of any 8.3 file naming schemes are long
gone (in the server space, that is ;)

/var/run is /run, so use the shorter one, and while /var/lock is an OK
place for the locks, we try to keep lock and lock-object together
nowadays. The qemu-server sub-directory avoids overly cluttering the
already crowded top-level /run dir.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-15 18:17:34 +02:00
Thomas Lamprecht
cda95d5223 pci reservation: encode locklessness of parsers in name
to avoid that they're misused

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-15 14:44:50 +02:00
Dominik Csapak
3bfee796f4 pci: add helpers to (un)reserve pciids for a vm
saves a list of pciid <-> vmid mappings in /var/run
that we can check when we start a vm

if we're not given a pid but a timeout, we save the time when the
reservation will run out (current time + timeout + 5s) since each
vm start (until we can save the pid) varies from config to config

reserve_pci_usage and remove_pci_reservation always expect a list of ids
so that we can update the reservation for a vm all at once

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-10-11 09:07:52 +02:00
Thomas Lamprecht
71cb8e0f87 pci related code cleanups
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-11 08:39:28 +02:00
Thomas Lamprecht
e2b42bee6d pci: use local helper for generate_mdev_uuid
avoid (API) leaking qemu-server specific stuff into pve-common

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-11 08:38:28 +02:00
Thomas Lamprecht
82712fcd3c pci: prepare_pci_device: fixup parameter name
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-11 08:37:35 +02:00
Dominik Csapak
acd4b77745 pci: refactor pci device preparation
makes the vm start a bit less crowded

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-10-08 06:27:19 +02:00
Thomas Lamprecht
a064e5117f efi: use vendor-agnostic "pre-enrolled-keys" + description fix
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-05 18:35:25 +02:00
Stefan Reiter
b5099b4f6c ovmf: support secure boot with 4m and 4m-ms efidisk types
Provide support for secure boot by using the new "4m" and "4m-ms"
variants of the OVMF code/vars templates. This is specified on the
efidisk via the 'efitype' and 'ms-keys' parameters.
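
Illustrative usage with the final option names (the 'ms-keys' parameter
mentioned here was later renamed to 'pre-enrolled-keys', see the commit
above; storage is a placeholder):

    qm set 100 --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1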

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-10-05 18:04:03 +02:00
Dominik Csapak
a4d5b84c9c pci: do not capture first group in PCIRE
we do not need this group, but want to use the regex where we have
multiple groups, so make it a non-capture group

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-10-05 16:14:42 +02:00
Thomas Lamprecht
132683274a start: warn about terminating the swtpm instance
if only to notify the user about the PID if the termination fails

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-05 07:05:42 +02:00
Thomas Lamprecht
2b9ee9441a trivial: indentation/formatting fixup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-10-05 07:05:07 +02:00
Stefan Reiter
f9dde219f2 fix #3075: add TPM v1.2 and v2.0 support via swtpm
Starts an instance of swtpm per VM in its systemd scope; it will
terminate by itself if the VM exits, or be terminated manually if
startup fails.

Before first use, a TPM state is created via swtpm_setup. State is
stored in a 'tpmstate0' volume, treated much the same way as an efidisk.

It is migrated 'offline', the important part here is the creation of the
target volume, the actual data transfer happens via the QEMU device
state migration process.

Move-disk can only work offline, as the disk is not registered with
QEMU, so 'drive-mirror' wouldn't work. swtpm itself has no method of
moving a backing storage at runtime.

For backups, a bit of a workaround is necessary (this may later be
replaced by NBD support in swtpm): During the backup, we attach the
backing file of the TPM as a read-only drive to QEMU, so our backup
code can detect it as a block device and back it up as such, while
ensuring consistency with the rest of disk state ("snapshot" semantic).

The name for the ephemeral drive is specifically chosen as
'drive-tpmstate0-backup', diverging from our usual naming scheme with
the '-backup' suffix, to avoid it ever being treated as a regular drive
from the rest of the stack in case it gets left over after a backup for
some reason (shouldn't happen).
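
Illustrative setup of a TPM state volume (storage is a placeholder;
v1.2 and v2.0 are the versions named in the subject):

    qm set 100 --tpmstate0 local-lvm:1,version=v2.0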

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-10-05 06:51:02 +02:00
Fabian Grünbichler
d2ceac56b5 api: template: invert lock and fork
like for other API calls, repeat the cheap checks done for early abort
before forking and without locks after forking and obtaining the lock,
and only hold the flock in the forked worker instead of across the fork.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
2021-10-04 09:46:57 +02:00
Fabian Grünbichler
b297918ce2 api: return UPID in template call
as reported on the forum, this is currently missing, making status
queries via the API impossible:

https://forum.proxmox.com/threads/create-vm-via-api-interface.95942/#post-416084

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
2021-10-04 09:46:52 +02:00
Fabian Grünbichler
3e07c6d54b vm_destroy: remove pending volumes as well
if a volume is only referenced in the pending section of a config it was
previously not removed when removing the VM, unless the non-default
'remove unreferenced disks' option was enabled.

keeping track of volume IDs which we attempt to remove gets rid of false
warnings in case a volume is referenced both in the config and the
pending section, or multiple times in the config for other reasons.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-09-30 17:37:27 +02:00
Thomas Lamprecht
f8830c4d6e migrate: code style, use up to 100cc if it helps to reduce line-bloat
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-22 09:26:18 +02:00
Thomas Lamprecht
95b3583b5e migrate: simplify code and add comment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-22 09:25:53 +02:00
Fabian Ebner
d213ba299d migrate: use correct target storage id for checks
The '--targetstorage' parameter does not apply to shared storages.

Example for a problem solved with the enabled check: Given a VM with
images only on a shared storage 'storeA', not available on the target
node (i.e. restricted by the nodes property). Then using
'--targetstorage storeB' would make offline migration suddenly
"work", but of course the disks would not be accessible and then
trying to migrate back would fail...

Example for a problem solved with the content type check: if a
VM had a shared ISO image, and there was a '--targetstorage storeA'
option, availability of the 'iso' content type is checked for
'storeA', which is wrong as the ISO would not be moved to that
storage.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-09-22 08:57:35 +02:00
Thomas Lamprecht
a8d0fec3c2 whitespace/indentation fixes & cleanups
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-09-09 17:50:21 +02:00
Fabian Grünbichler
9a66c311ac fix #3608: unbreak removal of scsi controller
the assumption that the index of the controller matches that of the last
removed drive only holds for virtio-scsi-single controller, which makes
the old code print a warning when removing the last drive of a
non-virtio-scsi-single controller except when the indices line up by
chance.

We can simply call a simplified qemu_iothread_del only when removing a
SCSI disk of a VM with the virtio-scsi-single controller, and skip the
call for the other controllers, which don't support io-threads anyway.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-09-09 17:46:02 +02:00
Constantin Herold
ae776a6288 fix #3581: pass size via argument for memory-backend-ram qmp call
Signed-off-by: Constantin Herold <proxmox8914@herold.me>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-08-18 10:41:04 +02:00
Dominik Csapak
a2e22f9fb2 api2: only add ide drives for non-legacy bootorders
@bootorder only contains entries for non-legacy bootorder configs,
but the default one contains all cdroms anyway, and if the user
explicitly disabled cdroms, it is OK to not add them back
for the new cdrom drive.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-08-05 13:58:35 +02:00
Dominik Csapak
5170f6282d bootorder: fix double entry on cdrom edit
We unconditionally added an entry into the bootorder whenever we
edited the drive, even if it was already in there. Instead we only want to do
that if the bootorder list does not contain it already.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-07-30 15:07:10 +02:00
Dominic Jäger
8717d89d92 Fix #3371: parse ovf: Allow dots in VM name
Dots are allowed in PVE VM names, so they should not be dropped during import.

Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
2021-07-29 17:17:39 +02:00
Mira Limbeck
104f47a9f8 fix #2563: allow live migration with local cloud-init disk
The content of the ISO should be the same on both nodes, so offline
migrate the ISO, but don't regenerate it on VM start on the target node.

This way even with snippets the content will not change during live
migration.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2021-07-23 11:04:22 +02:00
Stefan Reiter
deb734e348 api: always add new CD drives to bootorder
Attaching an ISO image to a VM is usually/often done for two reasons:
* booting an installer image
* supplying additional drivers to an installer (e.g. virtio)

Both of these cases (the latter at least with SeaBIOS and the Windows
installer) require the disk to be marked as bootable.

For this reason, enable the bootable flag for all new CDROM drives
attached to a VM by adding it to the bootorder list. It is appended to
the end, as otherwise it would cause new drives to boot before already
existing boot targets, which would be a more grave (and IMO bad)
behaviour change.
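
For example, attaching an ISO as ide2 to a VM that previously booted
only from scsi0 would then yield a boot order roughly like (placeholder
device names):

    boot: order=scsi0;ide2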

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-07-23 11:04:19 +02:00
Stefan Reiter
55c7f9cf66 live-restore: fail early if target storage doesn't exist
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-07-23 11:04:15 +02:00
Thomas Lamprecht
78a3ada744 lvm: avoid the use of IO uring
There may be a kernel issue or a bug in how QEMU uses io_uring, but
we have users reporting crashes, which f.ebner could reproduce on some
workloads; it is not really deterministic though, and it seems that
with newer kernel versions (5.12+) the crash becomes a hang.

While we're closing in on the actual issue here (which could be the
same as for RBD), let's disable io_uring for LVM.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-23 10:55:20 +02:00
Thomas Lamprecht
e83dd50a36 nic: support e1000e
That bit of code seems to be enough here, tested with

qm set VMID --net1 e1000e=EA:93:42:22:10:D8,bridge=vmbr0

on an Alpine Linux and a Windows Server 2016 VM.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-16 19:10:01 +02:00
Thomas Lamprecht
f7bc17ca6d nic: one per line and sort
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-16 18:57:38 +02:00
Fabian Ebner
ec5d198e5b cfg2cmd: avoid io_uring with LVM and write{back, through} cache
Reported in the community forum[0]. Also tried with LVM-thin, but it
doesn't seem to be affected.

See also 628937f53a for the same fix for
krbd.

[0]: https://forum.proxmox.com/threads/after-upgrade-to-7-0-all-vms-dont-boot.92019/post-401017
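
The fallback corresponds to a drive configuration along these lines
(an explicit per-drive override a user could also set; storage and
volume are placeholders):

    scsi0: local-lvm:vm-100-disk-0,cache=writeback,aio=threads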

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-07-07 14:55:17 +02:00
Thomas Lamprecht
d3f9db4d7a fix cpuunits defaults regression
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-07 12:07:39 +02:00
Thomas Lamprecht
67498860a4 conf: cpuunits: adapt description and defaults for cgroup v2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-07 10:27:05 +02:00
Thomas Lamprecht
6c71a52acd cpu weight: clamp to maximum for cgroup v2
In v2 the range is [1, 10000], but the API allows the old limits from
2 to 262144, so clamp the upper for v2.
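
A minimal Perl sketch of the clamping (variable names are illustrative):

    use List::Util qw(min max);
    my $weight = min(10000, max(1, $cpuunits)); # cgroup v2 cpu.weight range is [1, 10000]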

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-07 09:36:20 +02:00
Alexandre Derumier
4a5cb613d3 api2: fix vmconfig_apply_pending errors handling
Commit
https://git.proxmox.com/?p=qemu-server.git;a=commit;h=eb5e482ded9ae6aeb6575de9441b79b90a5de531

introduced error handling for offline pending apply:

-               PVE::QemuServer::vmconfig_apply_pending($vmid, $conf, $storecfg, $running);
+               PVE::QemuServer::vmconfig_apply_pending($vmid, $conf, $storecfg, $running, $errors);

 sub vmconfig_apply_pending {
-    my ($vmid, $conf, $storecfg) = @_;
+    my ($vmid, $conf, $storecfg, $errors) = @_;

but the caller still passed the now-unused $running parameter, so $errors currently ends up in the wrong position and is not handled correctly.

Fixes: eb5e482ded ("vmconfig_apply_pending: add error handling")
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Tested-by: Oguz Bektas <o.bektas@proxmox.com>
2021-07-06 12:40:43 +02:00
Thomas Lamprecht
738dc81cba further improve on #3329, ensure write-back is used over write-around
Suggested-by: Rick Altherr <kc8apf@kc8apf.net>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-05 20:47:50 +02:00
Thomas Lamprecht
9de049b0ad live-restore: add another comment for efidisk special case just to be sure
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-07-01 12:57:11 +02:00
Stefan Reiter
6f94e1625b live-restore: preload efidisk before starting VM
The efidisk never got restored correctly before, since we don't use the
generic print_drive_commandline_full for it, and as such it didn't get a
backing image attached. This not only causes the efidisk data to be lost
on restore, but also an error at the end, since we try to remove a
non-existing PBS blockdev.

Since it is attached differently from a regular drive, adding PBS backing
would be more difficult, but not to worry: an efidisk is small enough
that it doesn't hurt performance to just restore it via the regular
mechanism before starting the VM, and simply excluding it from the live
restore entirely.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-07-01 12:54:32 +02:00
Stefan Reiter
628937f53a cfg2cmd/drive: don't use io_uring for krbd with wb/wt cache
As reported here and locally reproduced:
https://forum.proxmox.com/threads/efi-vms-wont-start-under-7-beta-with-writeback-cache.91629/

This configuration is currently broken. Until we figure out how to fix
it properly, we can just have this (luckily very narrow) config pattern
fall back to aio=threads as it used to.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-07-01 09:40:51 +02:00
Dominik Csapak
2c44ec4974 fix #2175: PVE/API2/Qemu: update_vm_api: check old drive for permissions too
otherwise a user with only VM.Config.CDROM can detach a disk from a VM
by updating it to a cdrom drive

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-24 18:56:04 +02:00
Dominik Csapak
bb660bc3ce PVE/API2/Qemu/update_vm_api: refactor drive permission check
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-24 18:56:04 +02:00
Wolfgang Bumiller
0fe779a62c don't default to O_DIRECT on btrfs without nocow
otherwise it'll produce a whole lot of checksum errors

and while this would be nice as a storage feature check,
it's hard to be 100% accurate there anyway since a directory
storage can point anywhere, like for instance a btrfs
directory, causing the same issue...

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-24 17:15:31 +02:00
Fabian Grünbichler
85fcf79e21 template: add -snapshot to KVM command
this allows effectively setting ALL volumes as read-only, even if the
disk controller does not support it. without it, IDE and SATA disks
with (base) volumes which are marked read-only/immutable on the storage
level prevent the template VM from starting for backup purposes.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-06-23 12:37:40 +02:00
Fabian Grünbichler
b4dc647557 template: mark efidisk as read-only
otherwise backups of templates using UEFI fail with storages like LVM
thin, where the volumes are not writable. disk controllers like IDE and
SATA that don't support being read-only are still broken for UEFI.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
[ drop the readonly=off when not required, resolve merger conflict
  from Dominik's EFI disk cache mode fix ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 12:36:20 +02:00
Fabian Grünbichler
75748d4492 drive: factor out read-only helper
we also need it for efidisks.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-06-23 12:27:54 +02:00
Fabian Ebner
872cfcf5bc api: update vm: correctly handle warnings status for delayed task
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-23 12:26:53 +02:00
Fabian Ebner
831ad442a2 cli tools: correctly handle warnings task status
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-23 12:26:49 +02:00
Wolfgang Bumiller
205dbf39b1 allow migrating raw btrfs volumes
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-06-23 12:26:40 +02:00
Thomas Lamprecht
db861a4617 migrate prepare: make content type check generic
to avoid false positives, e.g., from an ISO on an ISO-only storage.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 12:15:43 +02:00
Thomas Lamprecht
8a5bd88907 migrate prepare: use also explicit variable for storecfg
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 12:15:16 +02:00
Thomas Lamprecht
3148f0b053 check_storage_availability: make content type check generic
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 12:03:35 +02:00
Stefan Reiter
354e61aacc use KillMode 'process' for systemd scope
KillMode 'none' is deprecated, and systemd loudly complains about that
in the journal. To avoid the warning, but keep the behaviour the same,
use KillMode 'process'.

This mode does two things differently, which we have to stop it from
doing:
* it sends SIGTERM right when the scope is cancelled (e.g. on shutdown)
 -> but only to the "root" process, which in our case is the worker
 instance forking QEMU, so it is already dead by the time this happens
* it sends SIGKILL to *all* children after a timeout
 -> can be avoided by setting either SendSIGKILL to false, or
 TimeoutStopUSec to infinity - for safety, we do both

In my testing, this replicated the previous behaviour exactly, but
without using the deprecated 'none' mode.
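
Expressed as rough unit-file equivalents of the scope properties
described above (shown only for illustration; qemu-server sets them
programmatically when creating the scope):

    KillMode=process
    SendSIGKILL=no
    TimeoutStopSec=infinity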

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-23 12:02:59 +02:00
Lorenz Stechauner
3f11f0d7e2 vm_start: check if storages of volumes support correct content-type
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2021-06-23 12:02:50 +02:00
Stefan Reiter
6d5673c3b6 cfg2cmd: make io_uring default
The 'aio' setting is not visible to the guest, and so can be changed
during migrations or snapshots without issue. It is thus only
dependent on the actual QEMU version being >= 6.0, not on the machine
version.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-23 12:02:44 +02:00
Stefan Reiter
59e5934270 enable io-uring support
Note that the value in this enum directly represents the value passed to
QEMU, so we need to use the underscore.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-23 12:01:59 +02:00
Fabian Ebner
24b84b4766 migrate: enforce that image content type is available
and use it for the vdisk_list call too. This avoids scanning (and picking up
volumes from!) storages that are not even configured to hold images.

Previously, the content type was only enforced when a storage map was present.

Also serves a bit as a preparation to enforce content type on guest startup,
because now migration failure happens early and not only when trying to start
the guest on the remote node.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-21 11:17:48 +02:00
Fabian Ebner
0d2db08414 prefer storage_check_enabled over storage_check_node
storage_check_enabled simply checks for the 'disable' option and then calls
storage_check_node.

While not strictly necessary for a second call where only the storage differs,
e.g. in case of clone, it is more future-proof: if support for a target storage
is added at some point, it might be easy to miss adapting the call.

For the migration checks, the situation is improved by now always catching
disabled (target) storages.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-21 11:17:48 +02:00
Fabian Ebner
8a0addab87 vmstatus: don't set PID when VM is not running
by avoiding int(undef)

Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-18 14:05:35 +02:00
Thomas Lamprecht
a200af1084 config: limit description/comment length to 8 KiB
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-18 13:11:07 +02:00
Fabian Ebner
ad2cad72be vm status: force int where appropriate
to avoid potential problems with stringified numbers in JavaScript and
elsewhere.

The vmid was not always an integer as the return schema expects, namely
when there was an opt_vmid argument, because the 'ne' comparison then coerced
the vmid to be a string.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-06-17 14:47:35 +02:00
Fabian Ebner
ef88eaaa58 avoid setting lun number for drives when pvscsi controller is used
Reported in the community forum[0].

In QEMU's hw/scsi/vmw_pvscsi.c in the SCSIBusInfo struct, the max_lun property
is set to 0. This means that in our stack, one cannot have multiple disks and
use 'scsihw: pvscsi' currently, as kvm would fail with
    bad scsi device lun: 1

Instead of increasing the lun number, increase the scsi-id, as we already do for
lsi.* (in hw/scsi/lsi53c895a.c the max_lun property is also 0).

[0]: https://forum.proxmox.com/threads/kvm-bad-scsi-device-lun-1.84318/
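
Sketch of the resulting addressing for a second disk (other device
properties elided, values illustrative):

    before: -device scsi-hd,bus=scsihw0.0,scsi-id=0,lun=1,...  (rejected, max_lun=0)
    after:  -device scsi-hd,bus=scsihw0.0,scsi-id=1,lun=0,...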

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
Tested-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-16 20:26:27 +02:00
Thomas Lamprecht
26d717252a followup; shorter code for efidisk rbd cache handling
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-16 15:18:03 +02:00
Dominik Csapak
6aaad2306d fix #3329: turn on cache=writeback for efidisks on rbd
on slower ceph clusters, the write pattern of the ovmf booting process
slows down the boot of the vm, so we turn on caching by default

it seems no other storage (until now) behaves like this. if it does in
the future, we can still add them too, or add a 'cache' property for
the efidisk

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-06-16 15:14:30 +02:00
Fabian Ebner
16e66777a0 vm destroy: do not remove unreferenced disks by default
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 14:56:39 +02:00
Fabian Ebner
9a8ba1272c scan volids: remove superfluous parameter
The only caller that didn't use 'images' was removed as part of the migration
refactoring in commit 62a4c963b8, so this is not
even a breaking change as the 'PVE 7' comment might've suggested.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 14:56:30 +02:00
Fabian Ebner
692f604bb0 Revert "revert spice_ticket prefix change in 7827de4"
This reverts commit ff09c795ed. We wanted to wait
until PVE 7.0 for the change to not break migration new -> old until then.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
2021-06-08 14:56:10 +02:00
Fabian Grünbichler
9bf522bc1e vzdump: add master key support
running outdated VMs without master key support will generate a warning
but proceed with a backup without encrypted key upload.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-06-02 16:49:00 +02:00
Thomas Lamprecht
7908e50263 vzdump: drop legacy fallback logging for dirty-bitmap
Users need to reboot at least once for the upgrade to 7.0, so any VM
running is then using a new enough QEMU...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-02 16:48:18 +02:00
Thomas Lamprecht
daf829ecae live-restore: merge snapshot/repo log lines into one
Too many lines make the task log harder to read.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-28 12:14:37 +02:00
Stefan Reiter
88cabb624d live-restore: add more logging
To bring it better in line with regular restore, also log the
repository, the snapshot and the target for each drive.

While at it, adjust capitalization of existing log line and clean up
repeated '$1' use.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-28 12:12:50 +02:00
Fabian Ebner
3ab0f9252a destroy VM: also check if unused volumes are base images
It's arguably not likely in practice that only an unused volume is still in use
as a base image, but do it for completeness sake.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-05-28 12:12:50 +02:00
Fabian Ebner
ba1a198481 destroy VM: always remove (referenced) VM state volumes
With --destroy-unreferenced-disks 0 they were not removed yet, but no use in
keeping them around.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-05-28 12:12:50 +02:00
Stefan Reiter
2dc0eb61e8 qm: assume correct VNC setup in 'vncproxy', disallow passwordless
The QMP 'change' command is no longer available since QEMU 6.0, so this
cannot work - instead of replacing it, we can just remove it however.

The 'if' branch would only set the VNC socket path anew and enable
password mode, which is always set and enabled on startup already.
The 'else' branch was intended for certificate login (?), which
according to the FIXME comment is long gone anyway - simply forbid
'vncproxy' without the PVE ticket environment variable set.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-28 11:31:15 +02:00
Stefan Reiter
378ad769dd cfg2cmd: use long form QEMU parameters to avoid warning in 6.0
QEMU warns us about this:

kvm: -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait: warning: short-form boolean option 'server' deprecated
Please use server=on instead
kvm: -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait: warning: short-form boolean option 'nowait' deprecated
Please use wait=off instead
kvm: -vnc unix:/var/run/qemu-server/100.vnc,password: warning: short-form boolean option 'password' deprecated
Please use password=on instead

The new syntax is backwards compatible to at least QEMU 4.0.
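
The long-form equivalents of the flagged options then read:

    -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server=on,wait=off
    -vnc unix:/var/run/qemu-server/100.vnc,password=on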

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-05-28 11:31:15 +02:00
Fabian Ebner
75a2a42395 vmstatus: make template property optional
to avoid printing 'template: ' with 'qm status <id> --verbose' if it's false.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-05-26 17:40:28 +02:00
Lorenz Stechauner
1cb23b87b4 api: clone: sort vm disks to keep numbers consistent
reported by user in forum:
https://forum.proxmox.com/threads/problem-when-copying-template-with-2-discs.89851/

Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2021-05-26 17:32:52 +02:00
Fabian Grünbichler
3dc33a728a fix #2862: allow sata/ide template backups
for IDE and SATA, setting the whole drive into readonly mode is not
possible. skip the readonly flag for such drives as a workaround until
we find a better solution.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-04-29 16:15:53 +02:00
Dominik Csapak
30664f14ff fix bootdisk_size for new bootorder config scheme
Previously, we only ever had a single boot *disk*, while possibly
having multiple cdroms/nics in the boot order.

e.g. the config:

 boot: dnc
 bootdisk: scsi0
 ide0: media=cdrom,none
 scsi0: xxx
 net0: ...

would return the size of scsi0 even though it would first boot
from cdrom/network.

When editing the bootorder with such a legacy config, we
remove the 'bootdisk' property and replace the legacy notation
with an explicit order, but we only search the first disk
for the size now.

Restore that behaviour by iterating over all disks in the boot
order property string until we get one that is not a cdrom
and has a size.
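
A simplified Perl sketch of that iteration (helper names are
illustrative, not necessarily the real ones):

    sub first_bootdisk_size {
        my ($storecfg, $conf) = @_;
        my ($order) = ($conf->{boot} // '') =~ /order=(\S+)/;
        return undef if !$order;
        for my $dev (split(';', $order)) {
            next if $dev !~ /^(?:ide|sata|scsi|virtio)\d+$/;      # only disk-like keys
            my $drive = parse_drive($dev, $conf->{$dev}) or next; # hypothetical parse helper
            next if drive_is_cdrom($drive);                       # skip cdroms
            my $size = volume_size($storecfg, $drive->{file});    # hypothetical size helper
            return $size if $size;
        }
        return undef;
    }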

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-04-29 16:15:33 +02:00
Thomas Lamprecht
8f43ac4893 Revert "migration: do not set default speed limit"
The default was changed for 5.2, so while it is not 32 MiB/s anymore,
it is still 128 MiB/s, which I did not notice on my 1 Gbps (or < 125
MiB/s) setup. For users with links faster than one gigabit it now did
some limiting - so set up a very high limit, so that even 100G should
not max this out.

This reverts commit a89bd10084.
2021-04-29 15:48:21 +02:00
Fabian Ebner
9938d24df2 migrate: fix memory migration start time
The variable is only ever used for calculating the average speed of memory
migration, but it was already set before disk mirroring. The disk
sizes are not included in the calculation, however, resulting in (very) wrong values.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-23 15:00:44 +02:00
Fabian Ebner
6629f976ac qemu_img_convert: add missing newline for progress output
which was accidentally removed by b5e9d97bdf.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-22 11:37:00 +02:00
Dylan Whyte
ebce523987 fix #3369: auto-start vm after failed stopmode backup
Fixes an issue in which a VM/CT fails to automatically restart after a
failed stop-mode backup.

Also fixes a minor typo in a comment

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-04-22 11:35:20 +02:00
Stefan Reiter
f755117071 live-restore: hold 'create' lock during operation
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-21 17:46:05 +02:00
Stefan Reiter
fefd65a1d9 live-restore: don't remove VM on error
Potentially an admin can still recover some data, or wants to inspect
the state.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-21 17:46:05 +02:00
Stefan Reiter
020dd358f9 qmrestore: add live-restore option
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-21 17:46:05 +02:00
Fabian Ebner
ab5b97d8a8 drive: volume in-use check: remove unused closure parameter
and simplify the calling iteration.

Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-19 22:09:17 +02:00
Thomas Lamprecht
b68a957b2e migration: keep log rate steady if polling gets more frequent
Either we're done in a few seconds anyway, or if the VM dirties lots
of pages we need quite a bit of time, and then it does not help to
output roughly the same status 10 times a second...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-19 22:08:19 +02:00
Thomas Lamprecht
0fca250af0 migration: rework logging to be more human friendly, less spammy
* use render_bytes where possible, to get units printed that are
  quick to read and grasp
* xbzrle is only interesting if pages/bytes are actually sent using
  it, so only log in that case
* log if VM dirties more than we send
* log current speed we get from QEMU

In general there are less lines logged and huge integers are avoided.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-19 21:54:37 +02:00
Thomas Lamprecht
e693c49190 migration: factor out variable + code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-19 21:51:21 +02:00
Thomas Lamprecht
7de328c629 migration: log: s/migration_caps/migration capabilities/
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-19 21:48:31 +02:00
Thomas Lamprecht
a89bd10084 migration: do not set default speed limit
the claim that QEMU limits this to 32M otherwise is bogus, at least
with any current QEMU version..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-19 21:46:52 +02:00
Thomas Lamprecht
6539865a9d migration: refactor and tidy-up code
Use an early die so that the rest can lose an indentation level for
the actual migration status reporting code

Extract common used members of the stat hash for shorter code.

use `git show -w --word-diff=color --word-diff-regex='\w+'` for
getting a better view of actual changes

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-19 14:59:54 +02:00
Fabian Ebner
0783c3c271 migration: move finishing block jobs to phase2 for better/uniform error handling
Avoids the possibility of dying during phase3_cleanup; instead of needing to
duplicate the cleanup ourselves, we benefit from phase2_cleanup doing so.

The duplicate cleanup was also very incomplete: it didn't stop the remote kvm
process (leading to 'VM already running' when trying to migrate again
afterwards), but it removed its disks, and it didn't unlock the config, didn't
close the tunnel and didn't cancel the block-dirty bitmaps.

Since migrate_cancel should do nothing after the (non-storage) migrate process
has completed, even that cleanup step is fine here.

Since phase3 is empty at the moment, the order of operations is still the same.

Also add a test, that would complain about finish_tunnel not being called before
this patch. That test also checks that local disks are not already removed
before finishing the block jobs.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-18 18:30:41 +02:00
Fabian Ebner
a6be63ac9b migration: split out replication from scan_local_volumes
and avoid one loop over the config, by extending foreach_volid to include the
drivename.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-18 18:30:41 +02:00
Fabian Ebner
4b26ffbfa5 migration: keep track of replicated volumes via local_volumes
by extending filter_local_volumes.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-18 18:30:41 +02:00
Fabian Ebner
efe0d457c6 migration: use storage_migration for checks instead of online_local_volumes
Like this we don't need to worry about auto-vivification.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-18 18:30:41 +02:00
Fabian Ebner
eb5751ba02 migration: cleanup_remotedisks: simplify and include more disks
Namely, those migrated with storage_migrate by using the information from
volume_map. Call cleanup_remotedisks in phase1_cleanup as well, because that's
where we end if sync_offline_local_volumes fails, and some disks might already
have been transfered successfully. Note that the local disks are still here, so
this is fine.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-18 18:30:41 +02:00
Fabian Ebner
ad8b9d5e2d migration: simplify removal of local volumes and get rid of self->{volumes}
This also changes the behavior to remove the local copies of offline migrated
volumes only after the migration has finished successfully (this is relevant
for mixed settings, e.g. online migration with unused/vmstate disks).

local_volumes contains both, the volumes previously in $self->{volumes}
and the volumes in $self->{online_local_volumes}, and hence is the place
to look for which volumes we need to remove. Of course, replicated
volumes still need to be skipped.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-18 18:30:41 +02:00
Fabian Ebner
efbbe59da4 migration: add nbd migrated volumes to volume_map earlier
and avoid a little bit of duplication by creating a helper

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-18 18:30:41 +02:00
Fabian Ebner
c3417e3b6e migration: save targetstorage and bwlimit in local_volumes hash and re-use information
It is enough to call get_bandwidth_limit once for each source_storage.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-18 18:30:41 +02:00
Fabian Ebner
2c4ba4c3ee migration: fix calculation of bandwidth limit for non-disk migration
The case with:
1. no generic 'migration' limit from the storage plugin
2. a migrate_speed limit in the VM config
was broken. It would assign 0 to migrate_speed when picking the minimum value
and then default to the default value. Fix it by checking if bwlimit is 0
before picking the minimum.

Also, make it a bit more readable by avoiding the trick of //-assigning bwlimit
before the units match up and relying on getting back the original bwlimit value
as the minimum. Instead, only ||-assign after the units match up and don't rely
on other things.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-18 18:30:41 +02:00
Fabian Ebner
3276a43470 migration: split out config_update_local_disksizes from scan_local_volumes
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-18 18:30:41 +02:00
Fabian Ebner
62a4c963b8 migration: avoid re-scanning all volumes
by using the information obtained in the first scan. This
also makes sure we only scan local storages.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-18 18:30:41 +02:00
Fabian Ebner
d10b78f4d2 migration: split sync_disks into two functions
by making local_volumes class-accessible. One function is for scanning all local
volumes and one is for actually syncing offline volumes via storage_migrate. The
exception is replicated volumes, this still happens during the scan for now.

Also introduce a filter_local_volumes helper to make life easier.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-18 18:30:41 +02:00
Fabian Ebner
eabac302ba restore: update config: remove unused parameter
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-18 18:10:28 +02:00
Fabian Ebner
c62d7cf547 test: add tests for restoring config
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-18 18:10:28 +02:00
Fabian Ebner
d0ff75d9b4 filter by content type when using vdisk_list
except for migration, where it would be subtly backwards-incompatible. Since
there is a scan_volids call for migration, we can't default to filtering in
scan_volids just yet.

Also allows getting rid of the existing filtering hack in rescan().

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-18 18:03:44 +02:00
Stefan Reiter
d4be7f31b5 cfg2cmd: fix +pveN machine types with pxe
Pinned machine versions like "pc-i440fx-4.2+pve2.pxe" would otherwise
get a second "+pve0" suffix, which is incorrect.

Also deal with non-pve pinned versions correctly, i.e.
"pc-i440fx-5.2.pxe" becomes "pc-i440fx-5.2+pve0.pxe".

Handle .pxe suffixes in Machine.pm as well, and add two test cases.

Co-developed-by: Luca Berneking <luca@berneking.net>
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-18 17:58:56 +02:00
Fabian Ebner
a0d8d8592a drive: volume in-use check: fix fallback path comparison
When checking whether a volume is still referenced by a snapshot, the volid
itself is first checked. When the volid is different, we fall back to comparing
the path.

As the first value to be compared is a volume's path, the second value had
better be a volume's path too, and not a snapshot's path.

See also 77019edfe0 for historical context.

The error that led me here:
* had a VM with ZFS over iSCSI storage with an existing snapshot
* add a new unused drive
* try to remove the unused drive
* fails, because the ZFS (not Pool!) plugin does not support snapshot paths.
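
A rough sketch of the intended comparison order (not the actual helper; $cfg is assumed to be the storage configuration):

    use strict;
    use warnings;
    use PVE::Storage;

    sub volume_matches_snapshot_entry {
        my ($cfg, $volid, $referenced_volid) = @_;

        # cheap check first: identical volume IDs
        return 1 if $volid eq $referenced_volid;

        # fallback: compare the plain volume paths - deliberately without a
        # snapshot name, since e.g. the ZFS (over iSCSI) plugin cannot
        # generate snapshot paths
        my $path = PVE::Storage::path($cfg, $volid);
        my $other_path = PVE::Storage::path($cfg, $referenced_volid);

        return defined($path) && defined($other_path) && $path eq $other_path;
    }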

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-18 17:26:05 +02:00
Thomas Lamprecht
67daf6921b drive mirror: stop logging progress for a disk after it got ready
If, for whatever reason, it got "not-ready" again, we'd log it again the next round.

This improves the behavior for multiple disks, especially on migration,
where we mirror the local disks one by one but kept reporting on the
previous ones.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-15 17:52:54 +02:00
Thomas Lamprecht
b5e9d97bdf image convert: use human-readable units in progress report
similar to what the drive mirror monitor was changed to

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-15 17:51:04 +02:00
Thomas Lamprecht
fd70c84362 indentation line-length cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-15 17:50:13 +02:00
Thomas Lamprecht
bb419195f9 restore PBS: use actual PVE::QemuConfig interface for destroying a config on error
avoid further spaghettification of our code base...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-06 19:43:47 +02:00
Thomas Lamprecht
bfb1267858 pbs_live_restore: code cleanup, avoid prefixing local package
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-06 19:43:03 +02:00
Thomas Lamprecht
3b56383bdb mirror monitor: rework periodic status reporting
This orients on the backup output, which got reworked for PVE 6.2/6.3.

Avoid overwhelming the user with redundant information, and use human
readable units.

before:
> restore-drive-scsi5: transferred: 167772160 bytes remaining: 8422162432 bytes total: 8589934592 bytes progression: 1.95 % busy: 1 ready: 0

after:
> restore-drive-scsi0: transferred 720.0 MiB of 32.0 GiB (2.20%) in 12s

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-06 19:40:21 +02:00
Thomas Lamprecht
a09b39f163 live restore: slightly more status output
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-06 19:40:08 +02:00
Thomas Lamprecht
1057fc7436 mirror monitor: avoid overlong hash access, use intermediate variable
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-06 17:46:19 +02:00
Thomas Lamprecht
0ea24bf080 mirror monitor: refactoring/code cleanup
mostly s/\$job/$job_id/ and s/foreach/for/ + sort.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-06 16:59:18 +02:00
Thomas Lamprecht
8986e36e85 live restore: start/delete blockdev jobs in deterministic order
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-06 16:51:04 +02:00
Thomas Lamprecht
b973806ef1 api: restore: better error messages
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-06 10:20:26 +02:00
Thomas Lamprecht
a0e27afb5e api: restore: start and live-restore do not conflict
if live-restore is set, then the VM is actually started beforehand, so we
can just skip it.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-06 10:12:49 +02:00
Thomas Lamprecht
a183df68a5 print drive: prefix drive-ID on errors
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-06 10:12:08 +02:00
Stefan Reiter
26697640d6 live-restore: register qmeventd handle
Similar to backups, prevent QEMU from being killed by qmeventd during
the live-restore, so a guest can shut itself down without aborting the
restore operation.

Note that the 'close' is only there to be explicit; the handle will also be
closed in case an operation errors (i.e. when the 'eval' is left).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 10:58:13 +02:00
Stefan Reiter
65911545dd extract register_qmeventd_handle to QemuServer.pm
...to be reused by live-restore.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 10:58:13 +02:00
Stefan Reiter
26731a3c15 enable live-restore for PBS
Enables live-restore functionality using the 'alloc-track' QEMU driver.
This allows starting a VM immediately when restoring from a PBS
snapshot. The snapshot is mounted into the VM, so it can boot from that,
while guest reads and a 'block-stream' job handle the restore in the
background.

If an error occurs, the VM is deleted and all data written during the
restore is lost.

The VM remains locked during the restore, which automatically prohibits
any modifications to the config while restoring. Some modifications
might potentially be safe, however, this is experimental enough that I
believe this would cause more bad stuff(tm) than actually satisfy any
use cases.

Pool handling is slightly adjusted so the VM can be added to the pool
before the restore starts.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 10:58:13 +02:00
Stefan Reiter
5921764c26 cfg2cmd: allow PBS snapshots as backing files for drives
Uses the custom 'alloc-track' filter node to redirect writes to the
original drive's target, while unwritten blocks will be read from the
specified PBS snapshot.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 10:58:13 +02:00
Stefan Reiter
9e67172296 make qemu_drive_mirror_monitor more generic
...so it works with other block jobs as well. Intended use case is
block-stream, which also requires a new "auto" (wait only) completion
mode, since it finishes automatically anyway.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 10:58:13 +02:00
Fabian Ebner
98c3d99e64 schema: mention special syntax for allocating a new volume
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-03-30 18:37:16 +02:00
Mira Limbeck
988be8d052 fix #2670: cloudinit enable SLAAC
cloud-init's SLAAC option was disabled in 2018 because there was no
support for it. Now that cloud-init 19.4 or newer versions are more
widespread, we can finally reenable it.

Also include the minimum required cloud-init version for SLAAC support in
the format description.

Tested on Ubuntu 20.04 (ci 20.4), CentOS 8 (ci 19.4), Debian 10 (ci
20.2).

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2021-03-30 18:25:06 +02:00
Mira Limbeck
617a864ac2 fix #3314: IPv6 requires type 'static6'
A fix was also provided in bugzilla by user wsapplegate:
https://bugzilla.proxmox.com/show_bug.cgi?id=3314

Tested on Ubuntu 20.04, CentOS 8 and Debian 10.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2021-03-30 18:25:06 +02:00
Stefan Reiter
190c846141 increase timeout for QMP block_resize
In testing this usually completes almost immediately, but in theory this
is a storage/IO operation and as such can take a bit to finish. It's
certainly not unthinkable that it might take longer than the default *3
seconds* we've given it so far. Make it a minute.
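
The resulting call presumably looks roughly like this (a sketch using the mon_cmd helper; $vmid, $deviceid and $size are placeholders from the surrounding code):

    use PVE::QemuServer::Monitor qw(mon_cmd);

    # resize the block device backing the drive; storage/IO can be slow,
    # so allow up to a minute instead of the previous 3 second default
    mon_cmd(
        $vmid, 'block_resize',
        device => $deviceid,      # e.g. 'drive-scsi0'
        size => int($size),       # new size in bytes
        timeout => 60,
    );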

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-03-30 18:20:44 +02:00
Stefan Reiter
2cfb09053c vzdump: improve error logging for query-proxmox-support
Only show "not supported by QEMU version" message if we determine that
to be the actual cause, just print the error otherwise.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-03-24 06:51:40 +01:00
Fabian Ebner
bd61033e30 api: migrate: fix variable name
Commit abff03211f switched to iterating over the
values instead of the keys, but didn't update the variable name. Use target_sid,
because target is already in use for the target node.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-03-24 06:51:40 +01:00
Stefan Reiter
27a5be5376 snapshot: set migration caps before savevm-start
A "savevm" call (both our async variant and the upstream sync one) use
migration code internally. As such, they both expect migration
capabilities to be set.

This is usually not a problem, as the default set of capabilities is ok,
however, it leads to differing snapshot settings if one does a snapshot
after a machine has been live-migrated (as the capabilities will persist
from that), which could potentially lead to discrepancies in snapshots
(currently it seems to be fine, but it still makes sense to set them to
safeguard against future changes).

Note that we do set the "dirty-bitmaps" capability now (if
query-proxmox-support reports true), which has three effects:

1) PBS dirty-bitmaps are preserved in snapshots, enabling
   fast-incremental backups to work after rollback (as long as no newer
   backups exist), including for hibernate/resume
2) snapshots taken from now on, with a QEMU version supporting bitmap
   migration, *might* lead to incompatibility of these snapshots with
   QEMU versions that don't know about bitmaps at all (i.e. < 5.0 IIRC?)
   - forward compatibility is still given, and all other capabilities we
   set go back to very old versions
3) since we now explicitly disable bitmap saving if the version doesn't
   report support, we avoid crashes even with not-updated QEMU versions
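
Conceptually, the QMP call issued before savevm-start looks something like this (a sketch via the mon_cmd helper; $vmid and $has_bitmap_support are placeholders, and the capability list is abbreviated):

    use JSON;
    use PVE::QemuServer::Monitor qw(mon_cmd);

    # set a fixed, known set of migration capabilities instead of inheriting
    # whatever a previous live migration left behind
    mon_cmd($vmid, 'migrate-set-capabilities', capabilities => [
        # only keep dirty bitmaps if query-proxmox-support reported support,
        # otherwise disable explicitly to avoid crashes on older QEMU builds
        { capability => 'dirty-bitmaps',
          state => $has_bitmap_support ? JSON::true : JSON::false },
    ]);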

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-03-16 20:44:51 +01:00
Fabian Ebner
c89642784d restore vma: fix applying storage-specific bandwidth limit
At this stage, there are no keys in %storage_limits to iterate over. The
refactoring in commit 9f3d73bc35 broke the logic
by accident.

Also explicitly set zero if there is no limit to avoid repeating the
get_bandwidth_limit call for the same storage. When accessing the value later,
zero is already correctly handled as 'no limit'.
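
A minimal sketch of that caching (illustrative; @used_storages and $override stand in for the surrounding restore code):

    use PVE::Storage;

    my %storage_limits;   # storeid => limit, 0 = no limit

    for my $storeid (@used_storages) {
        next if exists $storage_limits{$storeid};
        my $limit = PVE::Storage::get_bandwidth_limit('restore', [$storeid], $override);
        # store an explicit 0 for "no limit", so the same storage is not queried again
        $storage_limits{$storeid} = $limit // 0;
    }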

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-03-15 13:22:58 +01:00
Thomas Lamprecht
0761e6194a improve windows VM version pinning on VM creation
unify code paths to ensure more consistent behavior, especially on
future changes.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-12 10:00:46 +01:00
Thomas Lamprecht
7f0285e133 qm status: sort hash keys on verbose output
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-11 15:04:26 +01:00
Fabian Ebner
98a4b3fbc4 restore: write new config to variable first
and use file_set_contents to really commit it afterwards. Mostly done as a
preparation for the later patch for sanitizing the config on restore, but
shouldn't hurt by itself either.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-03-08 17:10:49 +01:00
Stefan Reiter
46b676c0b1 vzdump: increase PBS 'backup' QMP call timeout
Commit "a941bbd0 client: raise HTTP_TIMEOUT to 120s" in proxmox-backup
did the same, however, we would now still fail after 60 seconds since
the QMP call would time out.

Increase the timeout here to the same +5 seconds to give some time to
receive a response, so if the HTTP call in proxmox-backup times out, we
can still get a useful error message instead of timing out the QMP call
too.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-03-08 17:10:07 +01:00
Fabian Ebner
0761ee013f api: create_vm: check serial and usb permissions
The existing check_vm_modify_config_perm doesn't do so anymore, but
the check only got re-added to the modify/delete paths. See commits
165be267eb and
e30f75c571 for context.

In the future, it might make sense to generalise the
check_vm_modify_config_perm and have it not only take keys, but both
new and old values, and use that generalised function everywhere.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-05 21:15:23 +01:00
Thomas Lamprecht
4dd1e83c75 always pin windows VMs to a machine version by default
A fix for violating an important standard for booting[0] in the recently
packaged QEMU 5.2 surfaced some issues with Windows based VMs in our
forum[1], which seem to be quite sensitive to such changes (it seems
they derive lots of their device assignment from ACPI).
User visible effects are the loss of any network configuration, due to
Windows thinking the NIC was swapped with a new one and starting with a
fresh config - this is mostly problematic for setups with static
address assignment.

There may be lots of other, more subtle, effects and the PVE admin is
also not always the VM admin, so we really need to avoid such
negative effects. Do this by pinning the version of any windows based
VMs to either the minimum of (5.1, kvm-version) for existing VMs or
the kvm-version at time of VM creation for new ones.

There are patches in pve-manager for users to be able to change the
pinned version themselves in the web interface, so this can also be
adapted more easily if any other issues surface (with new or old
versions) in the future.

0: https://lists.gnu.org/archive/html/qemu-devel/2021-02/msg08484.html
1: https://forum.proxmox.com/threads/warning-latest-patch-just-broke-all-my-windows-vms-6-3-4-patch-inside.84915/page-2#post-373331

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-05 20:46:46 +01:00
Thomas Lamprecht
1f5828f2de ostype schema: win10 is valid for win 2019 server too
the web interface has shown it like this for quite a while already.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-05 20:45:21 +01:00
Thomas Lamprecht
9edb618257 can_run_pve_machine_version: PVE version can really be optional
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-05 18:49:06 +01:00
Thomas Lamprecht
36b0269724 api: machine list: parse as JSON
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-05 16:33:08 +01:00
Stefan Reiter
304e51d369 api: add Machine module to query machine types
The file is provided by pve-qemu-kvm.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-03-05 16:25:28 +01:00
Fabian Ebner
949112c350 fix #3301: status: add currently running machine and QEMU version to full status
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
2021-03-04 13:57:17 +01:00
Fabian Ebner
ea71be24d6 machine: split out helper for handling query-machines qmp command result
to be re-used in the vmstatus() call.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
2021-03-04 13:57:11 +01:00
Fabian Ebner
f8d2a1ce99 config: parse: also warn about invalid lines
as we already do for containers.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-03-03 17:51:20 +01:00
Fabian Ebner
fdfdc80ece fix #3324: clone disk: use larger blocksize for EFI disk
Moving to Ceph is very slow when bs=1. Instead, use a larger block size in
combination with the (currently) PVE-specific osize option to specify the
desired output size.
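
The resulting copy then roughly boils down to a qemu-img dd invocation like the following (a sketch only; the block size shown and all path/size/format variables are placeholders, and osize= refers to the PVE-specific option mentioned above):

    use PVE::Tools;

    # copy the small EFI disk with a sane block size instead of bs=1;
    # osize= pads/truncates the output to the exact requested size
    my $cmd = [
        '/usr/bin/qemu-img', 'dd',
        '-f', $src_format, '-O', $dst_format,
        'bs=64k', "osize=$size",
        "if=$src_path", "of=$dst_path",
    ];
    PVE::Tools::run_command($cmd);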

Suggested-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-03-01 13:58:34 +01:00
Aaron Lauterer
ee9255601e API: update_vm_api: check for CDROM on disk delete
Since CDRoms and disks share the same config keys, we need to check if
it actually is a CDRom and then check the permissions accordingly.

Otherwise it is possible for someone without VM.Config.CDROM
permissions, but with VM.Config.Disk permissions to remove a CD drive
while being unable to create a CDRom drive.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2021-02-22 17:06:10 +01:00
Thomas Lamprecht
5c3f782554 snapshot: clear up log messages
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-11 14:06:02 +01:00
Thomas Lamprecht
983088730b snapshot: reduce logging rate after one minute
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-11 14:05:20 +01:00
Thomas Lamprecht
f97224b1ef snapshot: log storage where VM state is saved too
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-11 14:04:19 +01:00
Stefan Reiter
8828460b1d savevm: show information about drives during snapshot
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-08 16:35:24 +01:00
Stefan Reiter
969eb0b84d savevm: periodically print progress
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-08 16:35:24 +01:00
Stefan Reiter
f1aca33dc3 vzdump: use renderers from Tools instead of duplicating code
...taking care not to lose the custom precision for byte conversion.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-08 16:35:24 +01:00
Alexandre Derumier
e6ec384fa7 cloudinit: remove pending delete on online regenerate image
Currently, only pending changes are applied when we regenerate the
image on a running VM, but not pending deletes.

Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
2021-02-06 14:44:38 +01:00
Alexandre Derumier
545eec65cd cloudinit: add opennebula config format
This is an alternative cloud-init format used by OpenNebula,
https://cloudinit.readthedocs.io/en/latest/topics/datasources/opennebula.html

but it can also be used by OpenNebula context scripts

https://github.com/OpenNebula/addon-context-linux
https://github.com/OpenNebula/addon-context-windows

These context scripts are simple udev trigger/bash scripts
and allow live configuration changes.

Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
2021-02-06 14:44:38 +01:00
Stefan Reiter
4893f9b970 anchor CPU flag regex to avoid arbitrary flag suffixes
Previously one could specify a CPU flag like 'pcidfoobar' and it would
be accepted, even though we attempt to filter VM-only flags for
security. AFAICT none of the flags we allow can be turned into any
others just by appending text, but better safe than sorry.
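
In Perl terms the change boils down to anchoring the alternation; a self-contained illustration with an abbreviated flag list:

    use strict;
    use warnings;

    my @allowed = qw(pcid spec-ctrl ibpb ssbd);        # abbreviated example list
    my $flag_re = join('|', map { quotemeta } @allowed);

    for my $flag (qw(+pcid -spec-ctrl +pcidfoobar)) {
        # without the trailing $ the regex would also accept '+pcidfoobar'
        if ($flag =~ m/^([+-])($flag_re)$/) {
            print "accepted: $flag\n";
        } else {
            print "rejected: $flag\n";
        }
    }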

Reported-by: Oguz Bektas <o.bektas@proxmox.com>
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-01-26 19:27:05 +01:00
Dominik Csapak
b08c37c363 fix #2788: do not resume vms after backup if they were paused before
By checking if the VM is paused at the beginning and skipping the
resume, we now also skip the QGA freeze/thaw (which cannot work if the
VM is paused).

Moved the 'vm_is_paused' sub from the API to PVE/QemuServer.pm so it
is available everywhere we need it.

Since a suspend backup would pause the VM anyway, we can skip that
step as well.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
2021-01-26 18:41:11 +01:00
Thomas Lamprecht
99676a6c1a api: destroy VM: fixup parameter description language
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-26 18:40:20 +01:00
Thomas Lamprecht
47f9f50b75 style fix: missing trailing comma
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-25 16:24:52 +01:00
Thomas Lamprecht
3e9d173caa vm destroy: destroy also unusedX config entries
This was previously covered by the "let's destroy every disk which
matches the VMID" feature, which we disarmed a bit.

As unused disks are referenced in the config, destroying them is not
unexpected (and we always did so in the past), so fix that regression
again for explicitly referenced but unused disks.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-25 15:49:41 +01:00
Thomas Lamprecht
7585466269 vm destroy: allow opt-out of purging unreferenced disks
Since an old change released with a version bump on 2009-09-07, we
search all enabled storages for VMID-matching volumes on VM removal
and purge those too.

This has multiple pitfalls and may be quite unexpected for some
users.

It can cause problems when:
* on recovery a VM is created and, before disks are reattached, the admin
  notices some settings issues and chooses to just recreate the VM;
  but while destroying the dummy VM, all related disks get destroyed
  unconditionally, which may result in data loss. This actually
  happened and is the original reason for the decision to change
  this.

* a storage is shared between PVE instances (between a set of clusters
  and/or single nodes); while this is against our rules, it may still
  come as a surprise that destroying a VM on node A may destroy
  unrelated and unreferenced disks on the unrelated node B without
  asking or allowing one to avoid that.

As the removal of matching but unreferenced disks can result in
permanent data loss (up to the last backup) and may be too subtle and
unforgiving, allow opting out of it.

In the long run we want to make this opt-in, but that is an API
change and so needs to wait for the next major release. But we can
already adapt the GUI to make it opt-in there, catching most of the cases.

Side note: CTs do not have this behavior at all.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-25 15:22:46 +01:00
Thomas Lamprecht
4405703d89 qm: import ovf: removed all imported disks on error
While the do_import method cleans up the current disk it was
importing on any error, the following cases were not handled:
* multiple disks: the first few succeed, then one fails; only the last,
  failed one was taken care of before this patch
* an error after the import disk loop was not handled

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-25 14:58:57 +01:00
Mira Limbeck
1b485263b3 fix drive-mirror completion with cloudinit
On clone_vm when cloning the disks while the VM is running, we use
drive-mirror. We skip completion until the last disk, but with a
cloudinit disk there's no drive-mirror and so no completion is done. If it
is the last disk in the hash, we never complete the drive-mirror jobs
and no further cloning is possible as there are already active jobs
using the disks.

To fix it we have to call qemu_drive_mirror_monitor directly in the case
of cloudinit when completion is requested and there are jobs defined.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2021-01-25 14:30:35 +01:00
Gilles Pietri
211785ee50 audio: add the none audio backend
Signed-off-by: Gilles Pietri <contact+dev@gilouweb.com>
2021-01-12 12:30:31 +01:00
Aaron Lauterer
0a4aff09bd improve description of fstrim_cloned_disks
The phrasing left some room for speculation about when this would be
triggered, e.g. after cloning a full VM?

Currently, the only instances where it is used are when a disk is moved or
a VM is migrated.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-12-18 17:55:29 +01:00
Mira Limbeck
c997e24af5 fix cloning/restoring of cloudinit disks in raw format
We only added the format extension when it was not 'raw'. But on file-level
storages we always require it. To fix this, always add the format
extension if the storage provides the 'path' property.
This is the same logic we use in create_disks for cloudinit disks.
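
A sketch of that naming rule (illustrative only; $storecfg, $storeid, $vmid and $fmt stand in for the surrounding clone/restore code):

    use PVE::Storage;

    # file-level storages (those with a 'path') need the format suffix,
    # block-level storages (LVM, ZFS, RBD, ...) must not get one
    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
    my $name = "vm-$vmid-cloudinit";
    $name .= ".$fmt" if $scfg->{path};    # e.g. vm-100-cloudinit.qcow2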

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-12-15 16:17:32 +01:00
Fabian Ebner
eb3acec88a migration: sort volumes migrated with storage_migrate
Having a deterministic order here is useful for testing.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-12-15 15:21:37 +01:00
Fabian Ebner
7d730f953c migration: factor out starting remote tunnel
so it can be mocked when testing.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-12-15 15:21:37 +01:00
Fabian Ebner
27fa645e66 use new move_config_to_node method
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-12-15 15:21:37 +01:00
Fabian Ebner
b5688f69a0 clone_disk: fix offline clone of efidisk
by partially reverting 4df98f2f14 and fixing the
line-length issue differently. The commit didn't update two later usages of
$size, breaking the copying of the efidisk. The other usage, as a parameter to
qemu_img_convert(), is luckily only cosmetic, for progress output.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-12-15 14:50:06 +01:00
Thomas Lamprecht
e00319af4d api: adapt VM destroy description
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-15 14:49:41 +01:00
Dominic Jäger
3eecc92525 qm destroy: Extend --purge description
Add replication jobs & HA. This makes the enumeration complete.

Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
2020-12-15 14:46:14 +01:00
Dominik Csapak
fbec3f894a use get_repository from PVE::PBSClient
This fixes the issue that we did not generate the correct repository
URL for PBS storages that contain an IPv6 address or a port.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-12-03 17:25:32 +01:00
Thomas Lamprecht
3bae384f75 clone disk: avoid errors after disk was moved by QEMU
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-25 14:18:23 +01:00
Thomas Lamprecht
1b987638a8 api: cleanup code format of clone_disk call
showing off its monstrosity of a method signature; needs to be
cleaned up in a follow-up commit

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-25 14:18:23 +01:00
Thomas Lamprecht
a2af1bbe89 add and use get_qga_key
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-25 14:18:23 +01:00
Fabian Grünbichler
e5b18771b8 status: skip query-proxmox-support if VM is offline
otherwise pvestatd will print lots of warnings.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-25 11:26:37 +01:00
Stefan Reiter
6891fd70ed print query-proxmox-support result in 'full' status
Extends print_recursive_hash for the CLI to handle JSON booleans so the
result will actually show up in 'qm status --verbose'.
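
The gist of the fix, as a sketch (decoded QMP/JSON booleans are blessed objects, so plain printing needs a special case; not the actual helper):

    use strict;
    use warnings;
    use JSON;

    sub render_value {
        my ($value) = @_;
        # JSON booleans would otherwise print as an object reference;
        # turn them into 1/0 for the verbose status output
        return ($value ? 1 : 0) if JSON::is_bool($value);
        return $value;
    }

    print render_value(JSON::true), "\n";    # prints 1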

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-24 17:20:56 +01:00
Fabian Ebner
e219712561 deactivate volumes after storage_migrate
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-24 16:19:35 +01:00
Fabian Ebner
78bd57d9c3 adapt to new storage_migrate activation behavior
Offline migrated volumes are now activated within storage_migrate.
Online migrated volumes can be assumed to be already active.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-24 16:19:29 +01:00
Alexandre Derumier
6cbd3eb82c systemd scope: add CPUWeight for cgroupv2 2020-11-24 12:00:38 +01:00
Alexandre Derumier
5b65b00d04 replace cgroups_write by cgroup change_cpu_shares && change_cpu_quota 2020-11-24 12:00:38 +01:00
Wolfgang Bumiller
114d2e765a add PVE::QemuServer::Cgroup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-11-24 12:00:33 +01:00
Fabian Ebner
19ff368213 don't migrate replicated VM whose replication job is marked for removal
While it didn't actually fail, we probably want to avoid the following behavior:

With remove_job=full:
    * run_replication called during migration causes the replicated volumes to
      be removed
    * migration continues by fully copying all volumes

With remove_job=local:
    * run_replication called during migration causes the job (and local
      replication snapshots) to be removed
    * migration continues by fully copying all volumes and renaming them to
      avoid collision with the still existing remote volumes

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-09 10:08:22 +01:00
Fabian Ebner
c2c96d7378 fix checks for transfering replication state/switching job target
In some cases $self->{replicated_volumes} will be auto-vivified
to {} by checks like
next if $self->{replicated_volumes}->{$volid}
and then {} would evaluate to true in a boolean context.

Now the replication information is retrieved once in prepare,
and used to decide whether to make the calls or not.
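
For context, the Perl pitfall being avoided: merely reading a nested hash key auto-vivifies the parent hash, and an empty hash reference is still true in boolean context. A standalone illustration:

    use strict;
    use warnings;

    my $self = {};
    my $volid = 'local-zfs:vm-100-disk-0';

    # this read-looking access creates $self->{replicated_volumes} = {}
    my $skip = $self->{replicated_volumes}->{$volid};

    print exists $self->{replicated_volumes} ? "vivified\n" : "absent\n";   # vivified
    print "looks replicated\n" if $self->{replicated_volumes};              # {} is true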

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-09 10:08:22 +01:00
Fabian Ebner
68980d6626 Repeat check for replication target in locked section
No need to warn twice, so the warning from the outside check
was removed.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-09 10:08:22 +01:00
Stefan Reiter
8e0c97bbbf fix vm_resume and allow vm_start with QMP status 'shutdown'
When the VM is in status 'shutdown', i.e. after the guest issues a
powerdown while a backup is running, QEMU requires a 'system_reset' to
be issued before 'cont' can boot the guest again.

Additionally, when the VM has been powered down during a backup, the
logically correct call would be a 'vm_start', so automatically vm_resume
from vm_start in case this situation occurs. This also means the GUI can
cope with this almost unchanged.
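
A sketch of the resulting resume path (simplified; uses the mon_cmd helper and real QMP commands, with $vmid as a placeholder, but is not the actual implementation):

    use PVE::QemuServer::Monitor qw(mon_cmd);

    my $status = mon_cmd($vmid, 'query-status')->{status};

    if ($status eq 'shutdown') {
        # the guest powered down while a backup kept QEMU alive:
        # a system_reset is required before 'cont' can boot it again
        mon_cmd($vmid, 'system_reset');
    }
    mon_cmd($vmid, 'cont');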

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-05 11:22:47 +01:00
Stefan Reiter
27b25d037e config_to_command: use -no-shutdown option
Ignore shutdowns triggered from within the guest in favor of detecting
them via qmeventd and stopping the QEMU process that way.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-05 11:22:47 +01:00
Stefan Reiter
962d4d647d vzdump: use dirty bitmap for not running VMs too
Now that VMs can be started during a backup, it makes sense to create a
dirty bitmap in these cases too, since the VM might be resumed and thus
continue running normally even after the backup is done.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-05 11:22:47 +01:00