Commit Graph

2971 Commits

Author SHA1 Message Date
Thomas Lamprecht
533cde8f33 bump version to 6.3-6
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-05 21:43:05 +01:00
Fabian Ebner
0761ee013f api: create_vm: check serial and usb permissions
The existing check_vm_modify_config_perm doesn't do so anymore; the check
only got re-added to the modify/delete paths. See commits
165be267eb and
e30f75c571 for context.

In the future, it might make sense to generalise check_vm_modify_config_perm
to take not only keys, but both new and old values, and to use that
generalised function everywhere.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-05 21:15:23 +01:00
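
For illustration, a minimal sketch of what such a create-path check amounts to; check_serial_usb_create_perm is a hypothetical placeholder, the real checks are the ones re-added on the modify/delete paths in the commits referenced above.

    # Hypothetical sketch: apply the same per-key permission check used on the
    # modify path to serial/usb keys passed at VM creation time.
    sub check_create_serial_usb_perms {
        my ($rpcenv, $authuser, $vmid, $param) = @_;
        for my $opt (sort keys %$param) {
            next if $opt !~ m/^(?:serial|usb)\d+$/;
            # placeholder for the real per-key check
            check_serial_usb_create_perm($rpcenv, $authuser, $vmid, $opt, $param->{$opt});
        }
    }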
Thomas Lamprecht
4dd1e83c75 always pin windows VMs to a machine version by default
A fix in the recently packaged QEMU 5.2 for a violation of an important
booting standard[0] surfaced some issues with Windows-based VMs in our
forum[1]; Windows seems to be quite sensitive to such changes (it apparently
derives much of its device assignment from ACPI).
The user-visible effect is the loss of any network configuration: Windows
thinks the NIC was swapped with a new one and starts with a fresh config.
This is mostly problematic for setups with static address assignment.

There may be lots of other, more subtle effects, and the PVE admin is also
not always the VM admin, so we really need to avoid such negative effects.
Do this by pinning the machine version of any Windows-based VM: existing VMs
get the minimum of (5.1, kvm-version), new VMs get the kvm-version at the
time of VM creation.

There are patches in pve-manager that allow users to change the pinned
version themselves in the web interface, so this can also be adapted more
easily should any other issues (with new or old versions) surface in the
future.

0: https://lists.gnu.org/archive/html/qemu-devel/2021-02/msg08484.html
1: https://forum.proxmox.com/threads/warning-latest-patch-just-broke-all-my-windows-vms-6-3-4-patch-inside.84915/page-2#post-373331

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-05 20:46:46 +01:00
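
A simplified sketch of the pinning rule described above; the helper name and version handling are illustrative, not the exact code:

    # Decide which machine version to pin a Windows VM to: new VMs get the
    # current KVM version, existing ones get min(5.1, kvm-version).
    sub windows_pin_version {
        my ($kvm_major, $kvm_minor, $is_new_vm) = @_;
        return "$kvm_major.$kvm_minor" if $is_new_vm;
        return "5.1" if $kvm_major > 5 || ($kvm_major == 5 && $kvm_minor >= 1);
        return "$kvm_major.$kvm_minor";
    }
    # The result is then appended to the machine type, e.g. 'pc-q35-5.1'.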
Thomas Lamprecht
1f5828f2de ostype schema: win10 is valid for win 2019 server too
The web interface has shown it like this for quite a while already.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-05 20:45:21 +01:00
Thomas Lamprecht
9edb618257 can_run_pve_machine_version: PVE version can really be optional
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-05 18:49:06 +01:00
Thomas Lamprecht
36b0269724 api: machine list: parse as JSON
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-05 16:33:08 +01:00
Stefan Reiter
304e51d369 api: add Machine module to query machine types
The file is provided by pve-qemu-kvm.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-03-05 16:25:28 +01:00
Fabian Ebner
949112c350 fix #3301: status: add currently running machine and QEMU version to full status
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
2021-03-04 13:57:17 +01:00
Fabian Ebner
ea71be24d6 machine: split out helper for handling query-machines qmp command result
to be re-used in the vmstatus() call.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
2021-03-04 13:57:11 +01:00
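
Roughly, the split-out helper only needs to pick the running machine out of the QMP result list; a hedged sketch, where the 'is-current' field is an assumption about the PVE-patched QEMU rather than upstream QMP:

    # Given the decoded result of the 'query-machines' QMP command (a list of
    # machine-info hashes), return the name of the currently running machine.
    sub current_machine_from_query_machines {
        my ($machines) = @_;
        for my $m (@$machines) {
            return $m->{name} if $m->{'is-current'};
        }
        return undef; # unknown, e.g. field not available
    }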
Fabian Ebner
f8d2a1ce99 config: parse: also warn about invalid lines
as we already do for containers.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-03-03 17:51:20 +01:00
Fabian Ebner
fdfdc80ece fix #3324: clone disk: use larger blocksize for EFI disk
Moving to Ceph is very slow when bs=1. Instead, use a larger block size in
combination with the (currently) PVE-specific osize option to specify the
desired output size.

Suggested-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-03-01 13:58:34 +01:00
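
For illustration, the rough shape of the resulting copy command; the exact option set and values are assumptions, with 'osize' being the PVE-specific qemu-img dd option mentioned above:

    use PVE::Tools;

    # Sketch: copy a small EFI disk with a 64k block size instead of bs=1,
    # fixing the output size via the PVE-specific 'osize' option.
    my ($src_path, $dst_path, $size_bytes) = ('/src/efidisk.raw', '/dst/efidisk.raw', 540672);
    PVE::Tools::run_command([
        'qemu-img', 'dd', '-f', 'raw', '-O', 'raw',
        'bs=64k', "osize=$size_bytes",
        "if=$src_path", "of=$dst_path",
    ]);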
Stefan Reiter
483c9676f8 snapshot-test: mock query-savevm better
Otherwise the new printing functions produce warnings about undefined
numbers. These stats are guaranteed to be returned by real QEMU, so mock
them with some sensible values.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-02-25 21:20:06 +01:00
Aaron Lauterer
ee9255601e API: update_vm_api: check for CDROM on disk delete
Since CD-ROMs and disks share the same config keys, we need to check whether
it actually is a CD-ROM and then check the permissions accordingly.

Otherwise someone without the VM.Config.CDROM permission, but with the
VM.Config.Disk permission, could remove a CD drive while being unable to
create one.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2021-02-22 17:06:10 +01:00
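
A condensed sketch of the distinction, assuming the usual qemu-server helpers (drive_is_cdrom, $rpcenv->check_vm_perm); not the literal patch:

    # When a drive key (e.g. 'ide2') is deleted, check the CD-ROM privilege
    # for CD drives and the disk privilege otherwise, instead of always the
    # latter.
    sub check_drive_delete_perm {
        my ($rpcenv, $authuser, $vmid, $drive) = @_;
        if (PVE::QemuServer::drive_is_cdrom($drive)) {
            $rpcenv->check_vm_perm($authuser, $vmid, undef, ['VM.Config.CDROM']);
        } else {
            $rpcenv->check_vm_perm($authuser, $vmid, undef, ['VM.Config.Disk']);
        }
    }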
Thomas Lamprecht
42edf94804 qmeventd: fix more early broken lines
increases readability, and up to 100 columns per line is just fine

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-12 09:52:51 +01:00
Thomas Lamprecht
c1cac4c94f bump version to 6.3-5
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-11 18:05:24 +01:00
Thomas Lamprecht
654553a973 qmeventd: fix linker flags order
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-11 18:02:18 +01:00
Thomas Lamprecht
a2488e4c22 qmeventd: allow up to 100 columns per line
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-11 17:21:30 +01:00
Thomas Lamprecht
6d4f89b6a4 qmeventd: catch calloc error
Even if it is close to impossible to happen, NULL dereferences are never
nice.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-11 16:48:53 +01:00
Thomas Lamprecht
aedf820870 qmeventd: rework description, mention s.reiter as author
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-11 16:03:31 +01:00
Thomas Lamprecht
649dbf4285 qmeventd: change license to AGPLv3 update copyright
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-11 16:02:44 +01:00
Stefan Reiter
0a279963b6 qmeventd: explicitly close() pidfds
In most circumstances a pidfd gets closed automatically once the child
dies, and that *should* be guaranteed by us sending SIGKILL - however,
it seems that sometimes this doesn't happen, leading to leaked file
descriptors[0].

Also add a small note in verbose mode showing when the late cleanup
actually happens; this helped during debugging.

[0] https://forum.proxmox.com/threads/cannot-shutdown-vm.83911/

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-02-11 14:06:40 +01:00
Thomas Lamprecht
5c3f782554 snapshot: clear up log messages
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-11 14:06:02 +01:00
Thomas Lamprecht
983088730b snapshot: reduce logging rate after one minute
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-11 14:05:20 +01:00
Thomas Lamprecht
f97224b1ef snapshot: log storage where VM state is saved too
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-11 14:04:19 +01:00
Stefan Reiter
8828460b1d savevm: show information about drives during snapshot
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-08 16:35:24 +01:00
Stefan Reiter
969eb0b84d savevm: periodically print progress
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-08 16:35:24 +01:00
Thomas Lamprecht
8bf0fc5350 d/control: bump versioned dependency of libpve-common-perl to 6.3-3
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-08 16:35:24 +01:00
Stefan Reiter
f1aca33dc3 vzdump: use renderers from Tools instead of duplicating code
...taking care not to lose the custom precision for byte conversion.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-08 16:35:24 +01:00
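
In practice this means calling the shared helpers instead of local copies; a hedged sketch, where the optional precision argument to render_bytes is inferred from the note about custom precision above:

    use PVE::Tools;

    # Format backup progress with the common renderers from PVE::Tools rather
    # than re-implementing byte/duration formatting in the vzdump code.
    my $transferred = 123456789; # bytes
    my $elapsed = 95;            # seconds
    my $size_str = PVE::Tools::render_bytes($transferred, 2); # custom precision
    my $time_str = PVE::Tools::render_duration($elapsed);
    print "transferred $size_str in $time_str\n";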
Fabian Ebner
8b70288e7d test: migrate: correctly mock storage module
by fixing a typo. Since cfs_read_file within the storage module was not mocked,
the tests could fail on some setups. Now that get_bandwidth_limit is mocked,
cfs_read_file is not called anymore, but we still mock it too, for good
measure and to make it more future-proof.

Reported-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-02-08 16:25:49 +01:00
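
The mocking pattern looks roughly like this with Test::MockModule; the mocked return values are placeholders:

    use Test::MockModule;

    # Mock the storage module so the tests never read the real cluster config:
    # get_bandwidth_limit is what the migration code actually calls, and
    # cfs_read_file is mocked too for good measure.
    my $storage_module = Test::MockModule->new('PVE::Storage');
    $storage_module->mock(
        get_bandwidth_limit => sub { return undef }, # no limit in tests
        cfs_read_file       => sub { return {} },    # placeholder config
    );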
Alexandre Derumier
e6ec384fa7 cloudinit: remove pending delete on online regenerate image
Currently only pending changes are applied when we regenerate the image on a
running VM, but not the pending deletions.

Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
2021-02-06 14:44:38 +01:00
Alexandre Derumier
545eec65cd cloudinit: add opennebula config format
This is an alternative cloud-init format used by OpenNebula,
https://cloudinit.readthedocs.io/en/latest/topics/datasources/opennebula.html

but it can also be used by the OpenNebula context scripts:

https://github.com/OpenNebula/addon-context-linux
https://github.com/OpenNebula/addon-context-windows

These context scripts are simple udev-triggered bash scripts and allow live
configuration changes.

Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
2021-02-06 14:44:38 +01:00
Thomas Lamprecht
4603e15b69 bump version to 6.3-4
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-28 17:46:09 +01:00
Stefan Reiter
4893f9b970 anchor CPU flag regex to avoid arbitrary flag suffixes
Previously one could specify a CPU flag like 'pcidfoobar' and it would
be accepted, even though we attempt to filter VM-only flags for
security. AFAICT none of the flags we allow can be turned into any
others just by appending text, but better safe than sorry.

Reported-by: Oguz Bektas <o.bektas@proxmox.com>
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2021-01-26 19:27:05 +01:00
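
The difference, shown with a trimmed-down flag list (illustrative, not the exact list or pattern from the code):

    use strict;
    use warnings;

    # Flags in the cpu option look like '+pcid' or '-spec-ctrl'.
    my @supported_flags = qw(pcid spec-ctrl ssbd);
    my $flag_list = join('|', @supported_flags);

    # Unanchored at the end: 'pcidfoobar' would slip through.
    my $loose  = qr/^[+-](?:$flag_list)/;
    # Anchored on both sides: only the exact flag names are accepted.
    my $strict = qr/^[+-](?:$flag_list)$/;

    print "loose accepts +pcidfoobar\n"  if '+pcidfoobar' =~ $loose;
    print "strict rejects +pcidfoobar\n" if '+pcidfoobar' !~ $strict;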
Dominik Csapak
b08c37c363 fix #2788: do not resume vms after backup if they were paused before
By checking whether the VM is paused at the beginning and skipping the
resume, we now also skip the QGA freeze/thaw (which cannot work if the VM is
paused).

The 'vm_is_paused' sub was moved from the API to PVE/QemuServer.pm so it is
available everywhere we need it.

Since a suspend-mode backup would pause the VM anyway, we can skip that step
there as well.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
2021-01-26 18:41:11 +01:00
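
Conceptually the backup path now looks like this; a sketch where vm_is_paused lives in PVE::QemuServer as described above and the freeze/thaw/snapshot helpers are placeholders:

    # Remember the state before the backup and only freeze/thaw and resume
    # when the VM was actually running.
    my $was_paused = PVE::QemuServer::vm_is_paused($vmid);

    qga_fs_freeze($vmid) if $use_qga && !$was_paused; # placeholder helper
    take_backup_snapshot($vmid);                      # placeholder helper
    qga_fs_thaw($vmid) if $use_qga && !$was_paused;   # placeholder helper

    PVE::QemuServer::vm_resume($vmid) if !$was_paused;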
Thomas Lamprecht
99676a6c1a api: destroy VM: fixup parameter description language
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-26 18:40:20 +01:00
Thomas Lamprecht
47f9f50b75 style fix: missing trailing comma
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-25 16:24:52 +01:00
Thomas Lamprecht
3e9d173caa vm destroy: destroy also unusedX config entries
This was previously covered by the "let's destroy every disk which matches
the VMID" feature we disarmed a bit.

As unused disks are referenced in the config, destroying them comes as no
surprise (and we always did so in the past), so fix that regression again
for explicitly referenced but unused disks.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-25 15:49:41 +01:00
Thomas Lamprecht
7585466269 vm destroy: allow opt-out of purging unreferenced disks
Since an old change released with a version bump on 2009-09-07, we search
all enabled storages for VMID-matching volumes on VM removal and purge those
too.

This has multiple pitfalls and may be quite unexpected for some
users.

It can cause problems when:
* on recovery, a VM is created and, before the disks are reattached, the
  admin notices some settings issues and chooses to just recreate the VM;
  but while destroying the dummy VM, all related disks get destroyed
  unconditionally, which may result in data loss. This actually happened
  and is the original reason for the decision to change this.

* a storage is shared between PVE instances (a set of clusters and/or
  single nodes); while this is against our rules, it may still come as a
  surprise if destroying a VM on node A destroys unrelated and unreferenced
  disks on the unrelated node B, without asking or allowing one to avoid
  that.

As the removal of matching but unreferenced disks can result in permanent
data loss (up to the last backup) and may be too subtle and unforgiving,
allow opting out of it.

In the long run we want to make this opt-in, but that is an API change and
so needs to wait for the next major release. We can, however, already adapt
the GUI to make it opt-in there, catching most of the cases.

Side note: CTs do not have this behavior at all.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-25 15:22:46 +01:00
Thomas Lamprecht
4405703d89 qm: import ovf: removed all imported disks on error
While the do_import method cleans up the disk it is currently importing on
any error, the following cases were not handled:
* multiple disks: if the first few succeed and then one fails, only the
  last, failed one was taken care of before this patch
* an error after the disk import loop was not handled

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-25 14:58:57 +01:00
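
The broader cleanup amounts to wrapping the whole import loop and freeing everything allocated so far on failure; a sketch using PVE::Storage::vdisk_free, with import_one_disk standing in for the per-disk import:

    use PVE::Storage;

    # Track every successfully imported volume so that a later failure can
    # clean up all of them, not just the disk currently being imported.
    my @created_volids = ();
    eval {
        for my $disk (@$disks) {
            my $volid = import_one_disk($disk); # placeholder for do_import
            push @created_volids, $volid;
        }
    };
    if (my $err = $@) {
        warn "import failed, cleaning up already imported disks\n";
        eval { PVE::Storage::vdisk_free($storecfg, $_) } for @created_volids;
        die $err;
    }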
Mira Limbeck
1b485263b3 fix drive-mirror completion with cloudinit
In clone_vm, when cloning the disks while the VM is running, we use
drive-mirror. We skip completion until the last disk, but for a cloudinit
disk there is no drive-mirror and thus no completion is done. If it is the
last disk in the hash, we never complete the drive-mirror jobs and no
further cloning is possible, as there are already active jobs using the
disks.

To fix it we have to call qemu_drive_mirror_monitor directly in the case
of cloudinit when completion is requested and there are jobs defined.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2021-01-25 14:30:35 +01:00
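
Schematically, inside the per-disk clone loop; a sketch where the qemu_drive_mirror_monitor argument list is abbreviated and partly assumed:

    # Cloudinit disks are copied without drive-mirror, so when the cloudinit
    # disk happens to be the last one, trigger completion of the already
    # running mirror jobs explicitly instead of waiting for a mirror job that
    # will never be started.
    if (drive_is_cloudinit($drive)) {
        copy_cloudinit_disk($drive); # placeholder helper
        if ($completion && scalar(keys %$jobs)) {
            PVE::QemuServer::qemu_drive_mirror_monitor($vmid, $newvmid, $jobs, $completion, $qga);
        }
        next;
    }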
Gilles Pietri
211785ee50 audio: add the none audio backend
Signed-off-by: Gilles Pietri <contact+dev@gilouweb.com>
2021-01-12 12:30:31 +01:00
Thomas Lamprecht
0435f8798c buildsys: clean: remove migration test runtime files
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-01-12 12:01:41 +01:00
Fabian Ebner
c97a9c6ed8 tests: mock storage locking for migration tests
by doing it in a local directory instead of /var/lock/pve-manager, which is
used by the installed/non-test PVE code. This also covers the shared case,
which will become relevant after fixing #3229 (currently migration doesn't
touch disks on shared storages).

Reported-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-01-12 11:56:55 +01:00
Aaron Lauterer
0a4aff09bd improve description of fstrim_cloned_disks
The phrasing left some room for speculation about when this would be
triggered, e.g. after cloning a full VM?

Currently the only instances where it is used are when a disk is moved or a
VM is migrated.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2020-12-18 17:55:29 +01:00
Fabian Ebner
0494299f57 tests: allow running migration tests in parallel
It's not easily possible to use separate JSON files for the test configuration,
because part of it is generated with Perl code. While this could be encoded too,
it seems cleaner to use the "run a single test by specifying the name"
functionality while adding a way for make to obtain a list of the test names.

Each test has (and needs) its own directory now, meaning the log files do not
need to be renamed anymore.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-12-18 17:47:27 +01:00
Thomas Lamprecht
16782e5fdf bump version to 6.3-3
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-16 08:35:37 +01:00
Mira Limbeck
c997e24af5 fix cloning/restoring of cloudinit disks in raw format
We only added the format extension when it was not 'raw'. But on file-level
storages we always require it. To fix this, always add the format
extension if the storage provides the 'path' property.
This is the same logic we use in create_disks for cloudinit disks.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2020-12-15 16:17:32 +01:00
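
The decision boils down to checking whether the target storage is path based; a sketch with simplified name construction:

    use PVE::Storage;

    # Always include the format suffix for cloudinit volumes on storages that
    # expose a filesystem path (e.g. directory or NFS), even for 'raw'.
    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
    my $name = "vm-$vmid-cloudinit";
    $name .= ".$format" if $scfg->{path}; # file-level storage => keep extension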
Thomas Lamprecht
eabd73492a d/control: bump versioned dependency on libpve-guest-common-perl (>= 3.1-3)
for new move VM config helper

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-15 15:56:49 +01:00
Fabian Ebner
48831384b8 create test environment for migration
and the associated parts for 'qm start'.

Each test will first populate the MigrationTest/run directory
with the relevant configuration files and files keeping track of the
state of everything necessary. Second, the mock-script for migration
is executed, which in turn will execute the 'qm start' mock-script
(if it's an online test that gets far enough). The scripts will simulate
a migration and update the relevant files in the MigrationTest/run directory.
Finally, the main test script will evaluate the state.

The main checks are the volume IDs on the source and target and the VM
configuration itself. Additional checks are the vm_status and expected_calls,
which keep track of whether certain calls have been made.

The rationale behind creating two mock-scripts is two-fold:
1. It removes the need to hard code responses for the tunnel
   and to recycle logic for determining and allocating migration volumes.
   Some of that logic already happens in the API part, so it was necessary
   to mock the whole CLI-Handler.
2. It allows testing the code relevant for migration in 'qm start' as well,
   and it should even be possible to test different versions of the
   mock-scripts against each other. With a bit of extra work and things
   like 'git worktree', it might even be possible to automate this.

The helper get_patched_config is useful for changing pre-defined configuration
files on the fly, avoiding the need to explicitly define whole configurations
to test for something in many cases.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-12-15 15:21:37 +01:00
Fabian Ebner
eb3acec88a migration: sort volumes migrated with storage_migrate
Having a deterministic order here is useful for testing.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-12-15 15:21:37 +01:00