Commit Graph

1149 Commits

Author SHA1 Message Date
Fabian Ebner
c62d7cf547 test: add tests for restoring config
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-18 18:10:28 +02:00
Fabian Ebner
d0ff75d9b4 filter by content type when using vdisk_list
except for migration, where it would be subtly backwards-incompatible. Since
there is a scan_volids call for migration, we can't default to filtering in
scan_volids just yet.

Also allows to get rid of the existing filtering hack in rescan().

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-18 18:03:44 +02:00
Stefan Reiter
d4be7f31b5 cfg2cmd: fix +pveN machine types with pxe
Pinned machine versions like "pc-i440fx-4.2+pve2.pxe" would otherwise
get a second "+pve0" suffix, which is incorrect.

Also deal with non-pve pinned versions correctly, i.e.
"pc-i440fx-5.2.pxe" becomes "pc-i440fx-5.2+pve0.pxe".

Handle .pxe suffixes in Machine.pm as well, and add two test cases.

Co-developed-by: Luca Berneking <luca@berneking.net>
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-18 17:58:56 +02:00
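A hedged sketch of the suffix handling described above; the helper name and exact rules are illustrative, not the actual Machine.pm code:

    use strict;
    use warnings;

    # Illustrative only: append "+pve0" to a pinned machine version that lacks
    # a pve suffix, while keeping an optional ".pxe" postfix intact.
    sub normalize_pinned_machine {
        my ($machine) = @_;
        my $pxe = '';
        $pxe = '.pxe' if $machine =~ s/\.pxe$//;
        $machine .= '+pve0' if $machine !~ /\+pve\d+$/;
        return "$machine$pxe";
    }

    print normalize_pinned_machine('pc-i440fx-5.2.pxe'), "\n";      # pc-i440fx-5.2+pve0.pxe
    print normalize_pinned_machine('pc-i440fx-4.2+pve2.pxe'), "\n"; # unchanged, no second "+pve0"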
Thomas Lamprecht
67daf6921b drive mirror: stop logging progress for a disk after it got ready
If, for whatever reason, a disk becomes "not-ready" again, we log it again in the next round.

This improves the behavior for multiple disks, especially on migration,
where we mirror the local disks one by one but previously kept reporting
on disks that were already done.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-15 17:52:54 +02:00
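A minimal sketch of the per-disk bookkeeping this implies; the variable and job-status field names are assumptions, not the real qemu_drive_mirror_monitor internals:

    use strict;
    use warnings;

    my %last_ready;    # job id => whether it was ready in the previous round

    # called once per polling round with the current block-job stats
    sub log_mirror_progress {
        my ($job_id, $job) = @_;
        if ($job->{ready}) {
            print "$job_id: ready\n" if !$last_ready{$job_id};
        } else {
            # if a job ever falls back to "not-ready", we simply report again
            print "$job_id: transferred $job->{offset} of $job->{len} bytes\n";
        }
        $last_ready{$job_id} = $job->{ready} ? 1 : 0;
    }

    log_mirror_progress('drive-scsi0', { ready => 0, offset => 1024, len => 4096 });
    log_mirror_progress('drive-scsi0', { ready => 1 });   # logged once
    log_mirror_progress('drive-scsi0', { ready => 1 });   # now silent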
Thomas Lamprecht
b5e9d97bdf image convert: use human-readable units in progress report
similar to what the drive mirror monitor was changed to

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-15 17:51:04 +02:00
Thomas Lamprecht
fd70c84362 indentation line-length cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-15 17:50:13 +02:00
Thomas Lamprecht
bb419195f9 restore PBS: use actual PVE::QemuConfig interface for destroying a config on error
avoid further spaghettification of our code base...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-06 19:43:47 +02:00
Thomas Lamprecht
bfb1267858 pbs_live_restore: code cleanup, avoid prefixing local package
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-06 19:43:03 +02:00
Thomas Lamprecht
3b56383bdb mirror monitor: rework periodic status reporting
Orient on the backup output, which got reworked for PVE 6.2/6.3.

Avoid overwhelming the user with redundant information, and use human
readable units.

before:
> restore-drive-scsi5: transferred: 167772160 bytes remaining: 8422162432 bytes total: 8589934592 bytes progression: 1.95 % busy: 1 ready: 0

after:
> restore-drive-scsi0: transferred 720.0 MiB of 32.0 GiB (2.20%) in 12s

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-06 19:40:21 +02:00
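A rough sketch of the unit formatting behind the new output line; the helper name is illustrative and the real code may reuse a shared formatting function:

    use strict;
    use warnings;

    sub render_human_bytes {
        my ($bytes) = @_;
        my @units = ('B', 'KiB', 'MiB', 'GiB', 'TiB');
        my $i = 0;
        while ($bytes >= 1024 && $i < $#units) {
            $bytes /= 1024;
            $i++;
        }
        return sprintf("%.1f %s", $bytes, $units[$i]);
    }

    my ($transferred, $total, $elapsed) = (754974720, 34359738368, 12);
    printf "restore-drive-scsi0: transferred %s of %s (%.2f%%) in %ds\n",
        render_human_bytes($transferred), render_human_bytes($total),
        $transferred * 100 / $total, $elapsed;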
Thomas Lamprecht
a09b39f163 live restore: slightly more status output
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-06 19:40:08 +02:00
Thomas Lamprecht
1057fc7436 mirror monitor: avoid overlong hash access, use intermediate variable
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-06 17:46:19 +02:00
Thomas Lamprecht
0ea24bf080 mirror monitor: refactoring/code cleanup
mostly s/\$job/$job_id/ and s/foreach/for/ + sort.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-06 16:59:18 +02:00
Thomas Lamprecht
8986e36e85 live restore: start/delete blockdev jobs in deterministic order
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-06 16:51:04 +02:00
Thomas Lamprecht
a183df68a5 print drive: prefix drive-ID on errors
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-06 10:12:08 +02:00
Stefan Reiter
26697640d6 live-restore: register qmeventd handle
Similar to backups, prevent QEMU from being killed by qmeventd during
the live-restore, so a guest can shut itself down without aborting the
restore operation.

Note that the 'close' is only to be explicit; the handle will also be
closed if an operation errors (i.e. when the 'eval' is left).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 10:58:13 +02:00
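A simplified, self-contained sketch of the open/eval/close pattern referred to above; the stand-in helpers below are hypothetical, only the register_qmeventd_handle name comes from this commit series:

    use strict;
    use warnings;

    # hypothetical stand-ins for the real helpers
    sub register_qmeventd_handle {
        my ($vmid) = @_;
        open(my $fh, '<', '/dev/null') or die "open: $!\n";
        return $fh;    # while this stays open, qmeventd leaves the VM alone
    }
    sub do_live_restore { print "restoring...\n"; }

    my $vmid = 100;
    my $fh = register_qmeventd_handle($vmid);
    eval { do_live_restore() };
    my $err = $@;
    # the 'close' is only to be explicit; an error leaving the eval ends up
    # in the same cleanup path in the real code
    close($fh);
    die $err if $err;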
Stefan Reiter
65911545dd extract register_qmeventd_handle to QemuServer.pm
...to be reused by live-restore.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 10:58:13 +02:00
Stefan Reiter
26731a3c15 enable live-restore for PBS
Enables live-restore functionality using the 'alloc-track' QEMU driver.
This allows starting a VM immediately when restoring from a PBS
snapshot. The snapshot is mounted into the VM, so it can boot from that,
while guest reads and a 'block-stream' job handle the restore in the
background.

If an error occurs, the VM is deleted and all data written during the
restore is lost.

The VM remains locked during the restore, which automatically prohibits
any modifications to the config while restoring. Some modifications
might potentially be safe; however, this is experimental enough that I
believe allowing them would cause more bad stuff(tm) than actually
satisfy any use case.

Pool handling is slightly adjusted so the VM can be added to the pool
before the restore starts.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 10:58:13 +02:00
Stefan Reiter
5921764c26 cfg2cmd: allow PBS snapshots as backing files for drives
Uses the custom 'alloc-track' filter node to redirect writes to the
original drive's target, while unwritten blocks will be read from the
specified PBS snapshot.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 10:58:13 +02:00
Stefan Reiter
9e67172296 make qemu_drive_mirror_monitor more generic
...so it works with other block jobs as well. Intended use case is
block-stream, which also requires a new "auto" (wait only) completion
mode, since it finishes automatically anyway.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-04-01 10:58:13 +02:00
Mira Limbeck
988be8d052 fix #2670: cloudinit enable SLAAC
cloud-init's SLAAC option was disabled in 2018 because there was no
support for it. Now that cloud-init 19.4 or newer versions are more
widespread, we can finally reenable it.

Also include the minimum required cloud-init version for SLAAC support
in the format description.

Tested on Ubuntu 20.04 (ci 20.4), CentOS 8 (ci 19.4), Debian 10 (ci
20.2).

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2021-03-30 18:25:06 +02:00
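As a hedged example, SLAAC would then be requested per NIC with a config line along these lines (illustrative NIC index; syntax as per the ipconfigX format):

    ipconfig0: ip=dhcp,ip6=auto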
Stefan Reiter
190c846141 increase timeout for QMP block_resize
In testing this usually completes almost immediately, but in theory this
is a storage/IO operation and as such can take a bit to finish. It's
certainly not unthinkable that it might take longer than the default *3
seconds* we've given it so far. Make it a minute.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-03-30 18:20:44 +02:00
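A hedged sketch of what raising the timeout amounts to; the QMP wrapper below is a hypothetical stand-in, not the project's real monitor helper:

    use strict;
    use warnings;

    # hypothetical QMP wrapper; the real helper and its timeout handling differ
    sub mon_cmd {
        my ($vmid, $cmd, %params) = @_;
        print "vm $vmid: $cmd (timeout: $params{timeout}s)\n";
    }

    # before this change the command ran with the default ~3 second timeout;
    # now the (potentially slow) storage operation gets a full minute
    mon_cmd(100, 'block_resize',
        device  => 'drive-scsi0',
        size    => 64 * 1024 * 1024 * 1024,
        timeout => 60,
    );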
Stefan Reiter
27a5be5376 snapshot: set migration caps before savevm-start
A "savevm" call (both our async variant and the upstream sync one) use
migration code internally. As such, they both expect migration
capabilities to be set.

This is usually not a problem, as the default set of capabilities is OK.
However, it leads to differing snapshot settings if one takes a snapshot
after a machine has been live-migrated (as the capabilities will persist
from that), which could potentially lead to discrepancies between
snapshots (currently it seems to be fine, but it still makes sense to
set them to safeguard against future changes).

Note that we do set the "dirty-bitmaps" capability now (if
query-proxmox-support reports true), which has three effects:

1) PBS dirty-bitmaps are preserved in snapshots, enabling
   fast-incremental backups to work after rollback (as long as no newer
   backups exist), including for hibernate/resume
2) snapshots taken from now on, with a QEMU version supporting bitmap
   migration, *might* lead to incompatibility of these snapshots with
   QEMU versions that don't know about bitmaps at all (i.e. < 5.0 IIRC?)
   - forward compatibility is still given, and all other capabilities we
   set go back to very old versions
3) since we now explicitly disable bitmap saving if the version doesn't
   report support, we avoid crashes even with not-updated QEMU versions

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-03-16 20:44:51 +01:00
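A hedged sketch of setting the capabilities explicitly before savevm-start; 'dirty-bitmaps' is a real QEMU migration capability, but the helper and surrounding details are illustrative:

    use strict;
    use warnings;
    use JSON;

    # hypothetical QMP helper
    sub mon_cmd { my ($vmid, $cmd, %params) = @_; print "vm $vmid: $cmd\n"; }

    my $vmid = 100;
    my $have_bitmap_support = 1;    # e.g. derived from query-proxmox-support

    # set the capabilities explicitly before savevm-start, instead of relying
    # on whatever a previous live migration may have left behind
    mon_cmd($vmid, 'migrate-set-capabilities', capabilities => [
        {
            capability => 'dirty-bitmaps',
            # explicitly disable bitmap saving if QEMU doesn't report support
            state => $have_bitmap_support ? JSON::true : JSON::false,
        },
    ]);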
Fabian Ebner
c89642784d restore vma: fix applying storage-specific bandwidth limit
At this stage, there are no keys in %storage_limits to iterate over. The
refactoring in commit 9f3d73bc35 broke the logic
by accident.

Also explicitly set zero if there is no limit to avoid repeating the
get_bandwidth_limit call for the same storage. When accessing the value later,
zero is already correctly handled as 'no limit'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-03-15 13:22:58 +01:00
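A small sketch of the caching pattern described above; the storage IDs and the limit lookup are illustrative stand-ins:

    use strict;
    use warnings;

    # hypothetical stand-in for the per-storage bandwidth-limit lookup
    sub lookup_bwlimit { my ($storeid) = @_; return $storeid eq 'slow-nfs' ? 10_000 : undef; }

    my %storage_limits;
    for my $storeid (qw(local slow-nfs local)) {
        # store an explicit 0 for "no limit", so the lookup is not repeated
        # for the same storage; 0 is handled as 'no limit' later on anyway
        $storage_limits{$storeid} //= lookup_bwlimit($storeid) // 0;
    }
    print "$_: $storage_limits{$_}\n" for sort keys %storage_limits;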
Thomas Lamprecht
0761e6194a improve windows VM version pinning on VM creation
unify code paths to ensure more consistent behavior, especially on
future changes.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-12 10:00:46 +01:00
Fabian Ebner
98a4b3fbc4 restore: write new config to variable first
and use file_set_contents to really commit it afterwards. Mostly done as a
preparation for the later patch for sanitizing the config on restore, but
shouldn't hurt by itself either.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-03-08 17:10:49 +01:00
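A hedged sketch of the write-to-variable-first pattern; PVE::Tools::file_set_contents is the helper named in the commit, while the path and config contents are illustrative:

    use strict;
    use warnings;
    use PVE::Tools;

    # build the complete new config in memory first ...
    my $new_conf = '';
    $new_conf .= "$_\n" for ('cores: 2', 'memory: 2048');

    # ... so it can still be sanitized/adjusted before anything hits the disk,
    # and only then commit it in one go (illustrative path)
    PVE::Tools::file_set_contents('/etc/pve/qemu-server/100.conf', $new_conf);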
Thomas Lamprecht
4dd1e83c75 always pin windows VMs to a machine version by default
A fix for violating an important standard for booting[0] in the recently
packaged QEMU 5.2 surfaced some issues with Windows-based VMs in our
forum[1], which seem to be quite sensitive to such changes (it seems
they derive lots of their device assignment from ACPI).
A user-visible effect is the loss of any network configuration, because
Windows thinks the NIC was swapped with a new one and starts with a
fresh config - this is mostly problematic for setups with static
address assignment.

There may be lots of other, more subtle, effects, and the PVE admin is
also not always the VM admin, so we really need to avoid such
negative effects. Do this by pinning the version of any Windows-based
VM to either the minimum of (5.1, kvm-version) for existing VMs, or to
the kvm-version at the time of VM creation for new ones.

There are patches in pve-manager that allow the user to change the
pinned version themselves in the web interface, so this can also be
adapted more easily if any other issues (with new or old versions)
surface in the future.

0: https://lists.gnu.org/archive/html/qemu-devel/2021-02/msg08484.html
1: https://forum.proxmox.com/threads/warning-latest-patch-just-broke-all-my-windows-vms-6-3-4-patch-inside.84915/page-2#post-373331

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-05 20:46:46 +01:00
Thomas Lamprecht
1f5828f2de ostype schema: win10 is valid for win 2019 server too
The web interface has shown it like this for quite a while already.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-05 20:45:21 +01:00
Fabian Ebner
949112c350 fix #3301: status: add currently running machine and QEMU version to full status
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
2021-03-04 13:57:17 +01:00
Fabian Ebner
f8d2a1ce99 config: parse: also warn about invalid lines
as we already do for containers.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-03-03 17:51:20 +01:00
Fabian Ebner
fdfdc80ece fix #3324: clone disk: use larger blocksize for EFI disk
Moving to Ceph is very slow when bs=1. Instead, use a larger block size in
combination with the (currently) PVE-specific osize option to specify the
desired output size.

Suggested-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-03-01 13:58:34 +01:00
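A hedged sketch of the resulting copy command; only the use of a larger bs= and the PVE-specific osize= option come from the commit, the remaining flags and paths are assumptions:

    use strict;
    use warnings;
    use PVE::Tools qw(run_command);

    my ($src, $dst, $size) = ('/tmp/efi-src.raw', '/tmp/efi-dst.raw', 540672);  # illustrative

    run_command([
        '/usr/bin/qemu-img', 'dd', '-f', 'raw', '-O', 'raw',
        'bs=64k',           # much faster than bs=1 when the target is Ceph
        "osize=$size",      # PVE-specific: clamp the output to the exact EFI disk size
        "if=$src", "of=$dst",
    ]);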
Alexandre Derumier
e6ec384fa7 cloudinit: remove pending delete on online regenerate image
Currently, only pending changes are applied when we regenerate the
image on a running VM, but pending deletions are not.

Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
2021-02-06 14:44:38 +01:00
Alexandre Derumier
545eec65cd cloudinit: add opennebula config format
This is an alternative cloud-init format used by OpenNebula,
https://cloudinit.readthedocs.io/en/latest/topics/datasources/opennebula.html

but it can also be used by the OpenNebula context scripts:

https://github.com/OpenNebula/addon-context-linux
https://github.com/OpenNebula/addon-context-windows

These context scripts are simple udev-triggered bash scripts
and allow live configuration changes.

Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
2021-02-06 14:44:38 +01:00
Dominik Csapak
b08c37c363 fix #2788: do not resume vms after backup if they were paused before
By checking whether the VM is paused at the beginning and skipping the
resume, we now also skip the QGA freeze/thaw (which cannot work if the
VM is paused).

The 'vm_is_paused' sub was moved from the API to PVE/QemuServer.pm so it
is available everywhere we need it.

Since a suspend-mode backup would pause the VM anyway, we can skip that
step there as well.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
2021-01-26 18:41:11 +01:00
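A condensed sketch of the control flow this describes; the helpers are hypothetical stand-ins for the real backup and QGA code:

    use strict;
    use warnings;

    # hypothetical stand-ins
    sub vm_is_paused { return 0; }
    sub qga_freeze   { print "guest-fsfreeze-freeze\n"; }
    sub qga_thaw     { print "guest-fsfreeze-thaw\n"; }
    sub vm_resume    { print "resuming vm\n"; }

    my $was_paused = vm_is_paused();    # checked once, before the backup starts

    if (!$was_paused) {
        qga_freeze();     # freeze/thaw cannot work on a paused VM anyway
        # ... snapshot/backup work ...
        qga_thaw();
    }

    # only resume if the VM was not already paused before the backup
    vm_resume() if !$was_paused;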
Thomas Lamprecht
3e9d173caa vm destroy: destroy also unusedX config entries
This was previously covered by the "let's destroy every disk which
matches the VMID" feature that we disarmed a bit.

As unused disks are referenced in the config, destroying them comes as
no surprise (and we always did in the past), so fix that regression
again for explicitly referenced but unused disks.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-25 15:49:41 +01:00
Thomas Lamprecht
7585466269 vm destroy: allow opt-out of purging unreferenced disks
Since an old change released with a version bump on 2009-09-07, we
search all enabled storages for VMID-matching volumes on VM removal
and purge those too.

This has multiple pitfalls and may be quite unexpected for some
users.

It can cause problems when:
* on recovery, a VM is created and, before the disks are reattached, the
  admin notices some settings issues and chooses to just recreate the VM;
  but while destroying the dummy VM, all related disks get destroyed
  unconditionally, which may result in data loss. This actually
  happened and is the original reason for the decision to change
  this.

* a storage is shared between PVE instances (between a set of clusters
  and/or single nodes); while this is against our rules, it may still
  come as a surprise if destroying a VM on node A destroys
  unrelated and unreferenced disks on node B without
  asking or allowing a way to avoid that.

As the removal of matching but unreferenced disks can result in
permanent data loss (up to the last backup) and may be too subtle and
unforgiving, allow opting out of it.

In the long run we want to make this opt-in, but that is an API
change and so needs to wait for the next major release. But we can
already adapt the GUI to make it opt-in there, catching most cases.

Side note: CTs do not have this behavior at all.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-25 15:22:46 +01:00
Mira Limbeck
1b485263b3 fix drive-mirror completion with cloudinit
In clone_vm, when cloning the disks while the VM is running, we use
drive-mirror. We skip completion until the last disk, but with a
cloudinit disk there is no drive-mirror and thus no completion is done.
If it is the last disk in the hash, we never complete the drive-mirror
jobs and no further cloning is possible, as there are already active
jobs using the disks.

To fix it we have to call qemu_drive_mirror_monitor directly in the case
of cloudinit when completion is requested and there are jobs defined.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2021-01-25 14:30:35 +01:00
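A hedged outline of the fix; qemu_drive_mirror_monitor is the function named in the commit, everything else below is an illustrative stand-in:

    use strict;
    use warnings;

    sub qemu_drive_mirror         { my ($ds) = @_; print "drive-mirror started for $ds\n"; }
    sub qemu_drive_mirror_monitor { print "completing all pending drive-mirror jobs\n"; }
    sub copy_cloudinit_disk       { my ($ds) = @_; print "plain copy for $ds\n"; }

    my %jobs;
    my @disks = qw(scsi0 ide2);    # ide2 is the cloudinit disk and comes last here
    my $i = 0;
    for my $ds (@disks) {
        my $completion = ++$i == scalar(@disks);    # only complete on the last disk
        if ($ds eq 'ide2') {
            copy_cloudinit_disk($ds);
            # no drive-mirror for cloudinit, so trigger completion ourselves
            qemu_drive_mirror_monitor() if $completion && %jobs;
        } else {
            $jobs{"drive-$ds"} = {};
            qemu_drive_mirror($ds);
            qemu_drive_mirror_monitor() if $completion;
        }
    }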
Gilles Pietri
211785ee50 audio: add the none audio backend
Signed-off-by: Gilles Pietri <contact+dev@gilouweb.com>
2021-01-12 12:30:31 +01:00
Aaron Lauterer
0a4aff09bd improve description of fstrim_cloned_disks
The phrasing left some room for speculation as to when this would be
triggered, e.g. after cloning a full VM?

Currently, the only instances where it is used are when a disk is moved
or a VM is migrated.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-12-18 17:55:29 +01:00
Mira Limbeck
c997e24af5 fix cloning/restoring of cloudinit disks in raw format
We only added the format extension when it was not 'raw'. But on
file-level storages we always require it. To fix this, always add the
format extension if the storage provides the 'path' property.
This is the same logic we use in create_disks for cloudinit disks.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-12-15 16:17:32 +01:00
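A short sketch of the naming rule described above, with the storage config reduced to a plain hash for illustration:

    use strict;
    use warnings;

    sub cloudinit_disk_name {
        my ($vmid, $scfg, $format) = @_;
        # file-level storages (those with a 'path') always need the extension,
        # even for raw; block/RBD-style storages do not
        my $name = "vm-$vmid-cloudinit";
        $name .= ".$format" if $scfg->{path};
        return $name;
    }

    print cloudinit_disk_name(100, { path => '/var/lib/vz' }, 'raw'), "\n";  # vm-100-cloudinit.raw
    print cloudinit_disk_name(100, {}, 'raw'), "\n";                         # vm-100-cloudinit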
Fabian Ebner
b5688f69a0 clone_disk: fix offline clone of efidisk
by partially reverting 4df98f2f14 and fixing the
line-length issue differently. The commit didn't update two later usages of
$size, breaking copying the efidisk. The other usage as a parameter to
qemu_img_convert() is luckily only cosmetic, for progress output.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-12-15 14:50:06 +01:00
Dominik Csapak
fbec3f894a use get_repository from PVE::PBSClient
This fixes the issue that we did not generate the correct repository
URL for PBS storages that contain an IPv6 address or a port.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-12-03 17:25:32 +01:00
Thomas Lamprecht
3bae384f75 clone disk: avoid errors after disk was moved by QEMU
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-25 14:18:23 +01:00
Thomas Lamprecht
a2af1bbe89 add and use get_qga_key
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-25 14:18:23 +01:00
Fabian Grünbichler
e5b18771b8 status: skip query-proxmox-support if VM is offline
Otherwise pvestatd will print lots of warnings.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-25 11:26:37 +01:00
Stefan Reiter
6891fd70ed print query-proxmox-support result in 'full' status
Extends print_recursive_hash for the CLI to handle JSON booleans so the
result will actually show up in 'qm status --verbose'.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-24 17:20:56 +01:00
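A hedged sketch of the boolean handling this needs; print_recursive_hash itself lives in the CLI formatter, the keys and recursion below only show the general idea:

    use strict;
    use warnings;
    use JSON;

    sub print_recursive {
        my ($name, $value) = @_;
        if (ref($value) eq 'HASH') {
            print_recursive($name ? "$name.$_" : $_, $value->{$_}) for sort keys %$value;
        } elsif (JSON::is_bool($value)) {
            # JSON booleans are blessed objects, not plain scalars
            print "$name: ", ($value ? 1 : 0), "\n";
        } else {
            print "$name: $value\n";
        }
    }

    print_recursive('', decode_json('{"pbs-dirty-bitmaps":true,"pbs-library-version":"1.0.3"}'));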
Alexandre Derumier
6cbd3eb82c systemd scope: add CPUWeight for cgroupv2
2020-11-24 12:00:38 +01:00
Alexandre Derumier
5b65b00d04 replace cgroups_write by cgroup change_cpu_shares && change_cpu_quota
2020-11-24 12:00:38 +01:00
Stefan Reiter
8e0c97bbbf fix vm_resume and allow vm_start with QMP status 'shutdown'
When the VM is in status 'shutdown', i.e. after the guest issues a
powerdown while a backup is running, QEMU requires a 'system_reset' to
be issued before 'cont' can boot the guest again.

Additionally, when the VM has been powered down during a backup, the
logically correct call would be a 'vm_start', so automatically call
vm_resume from vm_start in case this situation occurs. This also means
the GUI can cope with this almost unchanged.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-05 11:22:47 +01:00
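A hedged sketch of the resume path; 'query-status', 'system_reset' and 'cont' are real QMP commands, while the monitor helper below is a fake stand-in with a bit of state for illustration:

    use strict;
    use warnings;

    my %fake = (status => 'shutdown');
    sub mon_cmd {
        my ($vmid, $cmd) = @_;
        print "vm $vmid: $cmd\n";
        return { %fake } if $cmd eq 'query-status';
        $fake{status} = 'running' if $cmd eq 'cont';
        return {};
    }

    my $vmid = 100;
    my $status = mon_cmd($vmid, 'query-status')->{status};
    # after a guest-triggered powerdown during a backup, QEMU sits in
    # 'shutdown' and needs a reset before 'cont' will boot the guest again
    mon_cmd($vmid, 'system_reset') if $status eq 'shutdown';
    mon_cmd($vmid, 'cont');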
Stefan Reiter
27b25d037e config_to_command: use -no-shutdown option
Ignore shutdowns triggered from within the guest in favor of detecting
them via qmeventd and stopping the QEMU process that way.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-11-05 11:22:47 +01:00
Fabian Grünbichler
acfc6ef8e0 fix #3113: unbreak drive hotplug
by adding the missing argument (otherwise all the other ones are shifted
one slot to the left, which is of course bogus).

this has been broken since 2018 (d559309), but was only made
visible/caused a failure with the recent changes adding

use strict;
use warnings;

to PVE::QemuServer::PCI

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-05 10:29:21 +01:00
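A minimal illustration of the failure mode; the sub name and signature are generic, not the actual code in PVE::QemuServer::PCI:

    use strict;
    use warnings;

    sub print_device {
        my ($storecfg, $conf, $vmid, $drive, $bridges, $arch, $machine_type) = @_;
        return "arch=" . ($arch // '?') . " machine=" . ($machine_type // '?');
    }

    my ($storecfg, $conf, $vmid, $drive, $bridges) = ({}, {}, 100, {}, {});

    # buggy call: one argument in the middle is missing, so everything after
    # it shifts one slot to the left and $machine_type ends up undefined
    print print_device($storecfg, $conf, $vmid, $drive, 'x86_64', 'pc-i440fx-5.2'), "\n";

    # fixed call: every slot is passed explicitly
    print print_device($storecfg, $conf, $vmid, $drive, $bridges, 'x86_64', 'pc-i440fx-5.2'), "\n";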