QEMU doesn't know about the tpmstate, so 'do_snapshots_with_qemu' should
never return true in that case. Note that inconsistencies related to
snapshot timing do not matter much, as the actual TPM data is exported
together with other device state by QEMU anyway.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
`properties` is a bit ambiguous, and as we have scope and start-runtime
properties in the same scope, it's good to avoid that ambiguity.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
on vm start, we reserve all pciids that we use, and remove the
reservation again in vm_stop_cleanup.
first with only a time-based reservation, but after the vm is started,
we reserve again, this time with the pid.
for this, we have to move the start_timeout calculation above the
hostpci handling.
also move the pci initialization out of the conf parsing loop,
so that we can reserve all ids before we actually touch any of them.
while touching the lines, fix the indentation.
this way, a vm that is configured with a pci device which is already
in use by a different running vm will not be started, and the user
gets an error that the device is already in use.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
both style and readability are naturally subjective to a certain
degree...
Also, this patch mixes a bit much into one thing, but splitting that
up would mean lots of work I just wanted to avoid, sorry about that.
Among other things:
- avoid a level of indentation in the reserve loop
- rename pciids to reservation_list where it was a better fit
- make reserve set either pid or time to avoid suggesting that we
save both
- rename parameters to requested/dropped IDs for easier understanding
what's going on in the code
- avoid old_pid/pid, use running_pid and reserver_pid instead to
clarify what they actually mean
- drop useless returns to avoid suggesting the return value has any
use and save some lines
- use a hash slice to delete all dropped IDs at once, shorter and
faster (see the sketch below)
- use a 5 second timeout for the reservation lock; nothing intensive
happens while holding it, nor does it wait for anything, so the
critical section should be really short and 5s is plenty to wait
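For reference, the hash slice deletion boils down to this sketch
(hash/list structure assumed):

    # drop all dropped IDs from the reservation hash in one go
    delete @$reservation_list{@$dropped_ids};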
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
lck needs to die, the days of any 8.3 file naming schemes are long
gone (in the server space that is ;)
/var/run is /run so use the shorter, and while /var/lock is an OK
place for the locks, we try to keep lock and lock-object together
nowadays. The qemu-server sub-directory avoids overly cluttering the
already crowded top-level /run dir
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
saves a list of pciid <-> vmid mappings in /var/run
that we can check when we start a vm
if we're not given a pid but a timeout, we save the time when the
reservation will run out (current time + timeout + 5s), since the time
each vm start takes (until we can save the pid) varies from config to config
reserve_pci_usage and remove_pci_reservation always expect a list of ids
so that we can update the reservation for a vm all at once
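A hedged usage sketch (exact signatures assumed):

    # before start: time-based reservation covering the startup window
    PVE::QemuServer::PCI::reserve_pci_usage($ids, $vmid, $start_timeout);
    # after start: re-reserve with the pid of the running VM
    PVE::QemuServer::PCI::reserve_pci_usage($ids, $vmid, undef, $pid);
    # in vm_stop_cleanup: drop the reservation again
    PVE::QemuServer::PCI::remove_pci_reservation($ids);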
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Provide support for secure boot by using the new "4m" and "4m-ms"
variants of the OVMF code/vars templates. This is specified on the
efidisk via the 'efitype' and 'ms-keys' parameters.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
we do not need this group, but want to use the regex where we have
multiple groups, so make it a non-capture group
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Starts an instance of swtpm per VM in its systemd scope; it will
terminate by itself if the VM exits, or be terminated manually if
startup fails.
Before first use, a TPM state is created via swtpm_setup. State is
stored in a 'tpmstate0' volume, treated much the same way as an efidisk.
It is migrated 'offline', the important part here is the creation of the
target volume, the actual data transfer happens via the QEMU device
state migration process.
Move-disk can only work offline, as the disk is not registered with
QEMU, so 'drive-mirror' wouldn't work. swtpm itself has no method of
moving a backing storage at runtime.
For backups, a bit of a workaround is necessary (this may later be
replaced by NBD support in swtpm): During the backup, we attach the
backing file of the TPM as a read-only drive to QEMU, so our backup
code can detect it as a block device and back it up as such, while
ensuring consistency with the rest of disk state ("snapshot" semantic).
The name for the ephemeral drive is specifically chosen as
'drive-tpmstate0-backup', diverging from our usual naming scheme with
the '-backup' suffix, to avoid it ever being treated as a regular drive
by the rest of the stack in case it gets left over after a backup for
some reason (shouldn't happen).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
like for other API calls, repeat the cheap checks done for early abort
before forking and without locks after forking and obtaining the lock,
and only hold the flock in the forked worker instead of across the fork.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
if a volume is only referenced in the pending section of a config it was
previously not removed when removing the VM, unless the non-default
'remove unreferenced disks' option was enabled.
keeping track of volume IDs which we attempt to remove gets rid of false
warnings in case a volume is referenced both in the config and the
pending section, or multiple times in the config for other reasons.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The '--targetstorage' parameter does not apply to shared storages.
Example for a problem solved with the enabled check: Given a VM with
images only on a shared storage 'storeA', not available on the target
node (i.e. restricted by the nodes property). Then using
'--targetstorage storeB' would make offline migration suddenly
"work", but of course the disks would not be accessible and then
trying to migrate back would fail...
Example for a problem solved with the content type check: if a
VM had a shared ISO image, and there was a '--targetstorage storeA'
option, availability of the 'iso' content type is checked for
'storeA', which is wrong as the ISO would not be moved to that
storage.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
the assumption that the index of the controller matches that of the last
removed drive only holds for virtio-scsi-single controller, which makes
the old code print a warning when removing the last drive of a
non-virtio-scsi-single controller except when the indices line up by
chance.
we can simply only call a simplified qemu_iothread_del when removing a
scsi disk of a VM with the virtio-scsi-single controller, and skip the
call for the other controllers which don't support io-threads anyway.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
@bootorder only contains entries for non-legacy bootorder entries,
but the default one contains all cdroms anyway, and if the user
explicitly disabled cdroms, it is ok to not add them back
for the new cdrom drive.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We unconditionally added an entry into the bootorder whenever we
edited the drive, even if it was already in there. Instead we only want to do
that if the bootorder list does not contain it already.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The content of the ISO should be the same on both nodes, so offline
migrate the ISO, but don't regenerate it on VM start on the target node.
This way even with snippets the content will not change during live
migration.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Attaching an ISO image to a VM is usually/often done for two reasons:
* booting an installer image
* supplying additional drivers to an installer (e.g. virtio)
Both of these cases (the latter at least with SeaBIOS and the Windows
installer) require the disk to be marked as bootable.
For this reason, enable the bootable flag for all new CDROM drives
attached to a VM by adding it to the bootorder list. It is appended to
the end, as otherwise it would cause new drives to boot before already
existing boot targets, which would be a more grave (and IMO bad)
behaviour change.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
there may be a kernel issue or a bug in how QEMU uses io_uring, but
we have users that report crashes, which f.ebner could reproduce on
some workloads; not really deterministic though, and it seems that
with newer kernel versions (5.12+) the crash becomes a hang
While we're closing in on the actual issue here (which could be the
same as for RBD) let's disable io_uring for LVM.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
That bit of code seems to be enough here, tested with
qm set VMID --net1 e1000e=EA:93:42:22:10:D8,bridge=vmbr0
on an Alpine Linux and a Windows Server 2016 VM.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In v2 the range is [1, 10000], but the API allows the old limits from
2 to 262144, so clamp the upper bound for v2.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The efidisk never got restored correctly before, since we don't use the
generic print_drive_commandline_full for it, and as such it didn't get a
backing image attached. This not only causes the efidisk data to be lost
on restore, but also an error at the end, since we try to remove a
non-existing PBS blockdev.
Since it is attached differently to a regular drive, adding PBS backing
would be more difficult, but not to worry: an efidisk is small enough
that it doesn't hurt performance to just restore it via the regular
mechanism before starting the VM, and simply excluding it from the live
restore entirely.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
otherwise a user with only VM.Config.CDROM can detach a disk from a VM
by updating it to a cdrom drive
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
otherwise it'll produce a whole lot of checksum errors
and while this would be nice as a storage feature check,
it's hard to be 100% accurate there anyway since a directory
storage can point anywhere, like for instance a btrfs
directory, causing the same issue...
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
this allows effectively setting ALL volumes as read-only, even if the
disk controller does not support it. without it, IDE and SATA disks
with (base) volumes which are marked read-only/immutable on the storage
level prevent the template VM from starting for backup purposes.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
otherwise backups of templates using UEFI fail with storages like LVM
thin, where the volumes are not writable. disk controllers like IDE and
SATA that don't support being read-only are still broken for UEFI.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
[ drop the readonly=off when not required, resolve merge conflict
from Dominik's EFI disk cache mode fix ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
KillMode 'none' is deprecated, and systemd loudly complains about that
in the journal. To avoid the warning, but keep the behaviour the same,
use KillMode 'process'.
This mode does two things differently, which we have to stop it from
doing:
* it sends SIGTERM right when the scope is cancelled (e.g. on shutdown)
-> but only to the "root" process, which in our case is the worker
instance forking QEMU, so it is already dead by the time this happens
* it sends SIGKILL to *all* children after a timeout
-> can be avoided by setting either SendSIGKILL to false, or
TimeoutStopUSec to infinity - for safety, we do both
In my testing, this replicated the previous behaviour exactly, but
without using the deprecated 'none' mode.
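In unit-file terms, the resulting scope settings amount to the
following sketch (we set them as properties via the manager API, where
the timeout property is called TimeoutStopUSec):

    KillMode=process
    SendSIGKILL=no
    TimeoutStopSec=infinity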
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
The 'aio' setting is not visible to the guest, and so can be changed
during migrations or snapshots without issue. It is thus only
dependent on the actual QEMU version being >= 6.0, not the machine
version.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Note that the value in this enum directly represents the value passed to
QEMU, so we need to use the underscore.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
and use it for the vdisk_list call too. This avoids scanning (and picking up
volumes from!) storages that are not even configured to hold images.
Previously, the content type was only enforced when a storage map was present.
Also serves a bit as a preparation to enforce content type on guest startup,
because now migration failure happens early and not only when trying to start
the guest on the remote node.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
storage_check_enabled simply checks for the 'disable' option and then calls
storage_check_node.
While not strictly necessary for a second call where only the storage differs,
e.g. in case of clone, it is more future-proof: if support for a target storage
is added at some point, it might be easy to miss adapting the call.
For the migration checks, the situation is improved by now always catching
disabled (target) storages.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
to avoid potential problems with stringified numbers in Javascript and
elsewhere.
The vmid was not always an integer as the return schema expects, namely
when there was an opt_vmid argument, because the 'ne' comparison then
coerced the vmid to be a string.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reported in the community forum[0].
In QEMU's hw/scsi/vmw_pvscsi.c in the SCSIBusInfo struct, the max_lun property
is set to 0. This means that in our stack, one cannot have multiple disks and
use 'scsihw: pvscsi' currently, as kvm would fail with
bad scsi device lun: 1
Instead of increasing the lun number, increase the scsi-id, as we already do for
lsi.* (in hw/scsi/lsi53c895a.c the max_lun property is also 0).
[0]: https://forum.proxmox.com/threads/kvm-bad-scsi-device-lun-1.84318/
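The generated device string then looks roughly like this (sketch):

    -device scsi-hd,bus=scsihw0.0,scsi-id=1,drive=drive-scsi1,id=scsi1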
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
Tested-by: Stefan Reiter <s.reiter@proxmox.com>
on slower ceph clusters, the write pattern of the ovmf booting process
slows down the boot of the vm, so we turn on caching by default
it seems no other storage (until now) behaves like this. if it does in
the future, we can still add them too, or add a 'cache' property for
the efidisk
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The only caller that didn't use 'images' was removed as part of the migration
refactoring in commit 62a4c963b8, so this is not
even a breaking change as the 'PVE 7' comment might've suggested.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
This reverts commit ff09c795ed. We wanted to wait
until PVE 7.0 for the change to not break migration new -> old until then.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
running outdated VMs without master key support will generate a warning
but proceed with a backup without encrypted key upload.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Users need to reboot at least once for the upgrade to 7.0, so any VM
running is then using a new enough QEMU...
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
To bring it better in line with regular restore, also log the
repository, the snapshot and the target for each drive.
While at it, adjust capitalization of existing log line and clean up
repeated '$1' use.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
It's arguably not likely in practice that only an unused volume is still in use
as a base image, but do it for completeness' sake.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The QMP 'change' command is no longer available since QEMU 6.0, so this
cannot work - instead of replacing it, we can just remove it however.
The 'if' branch would only set the VNC socket path anew and enable
password mode, which is always set and enabled on startup already.
The 'else' branch was intended for certificate login (?), which
according to the FIXME comment is long gone anyway - simply forbid
'vncproxy' without the PVE ticket environment variable set.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
QEMU warns us about this:
kvm: -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait: warning: short-form boolean option 'server' deprecated
Please use server=on instead
kvm: -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait: warning: short-form boolean option 'nowait' deprecated
Please use wait=off instead
kvm: -vnc unix:/var/run/qemu-server/100.vnc,password: warning: short-form boolean option 'password' deprecated
Please use password=on instead
The new syntax is backwards compatible to at least QEMU 4.0.
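For reference, the updated options assembled from the warnings above:

    -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server=on,wait=off
    -vnc unix:/var/run/qemu-server/100.vnc,password=on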
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
for IDE and SATA, setting the whole drive into readonly mode is not
possible. skip the readonly flag for such drives as a workaround until
we find a better solution.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Previously, we ever only had a single boot *disk*, while possibly
having multiple cdroms/nics in the boot order
e.g. the config:
boot: dnc
bootdisk: scsi0
ide0: none,media=cdrom
scsi0: xxx
net0: ...
would return the size of scsi0 even though it would first boot
from cdrom/network.
When editing the bootorder with such a legacy config, we
remove the 'bootdisk' property and replace the legacy notation
with an explicit order, but we only search the first disk
for the size now.
Restore that behaviour by iterating over all disks in the boot
order property string until we get one that is not a cdrom
and has a size.
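A minimal sketch of the restored lookup (helper names as used elsewhere
in the code base, details assumed):

    for my $dev (split(';', $conf->{boot} =~ s/^order=//r)) {
        my $drive = PVE::QemuServer::Drive::parse_drive($dev, $conf->{$dev});
        next if !$drive || PVE::QemuServer::Drive::drive_is_cdrom($drive);
        my $size = PVE::Storage::volume_size_info($storecfg, $drive->{file});
        return $size if $size; # first non-cdrom disk with a size wins
    }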
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The default was changed for 5.2, so while it is not 32 MiB/s anymore,
it is still 128 MiB/s, which I did not notice on my 1 Gbps (or < 125
MiB/s) setup. For users with links faster than one gigabit it now did
some limiting - so set up a very high limit so that even 100G should
not max this out.
This reverts commit a89bd10084.
The variable is only ever used for calculating the average speed of memory
migration, but it was already set before disk mirroring. The disk sizes,
however, are not included in the calculation, resulting in (very) wrong values.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Fixes an issue in which a VM/CT fails to automatically restart after a
failed stop-mode backup.
Also fixes a minor typo in a comment
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Either we're done in a few seconds anyway, or if the VM dirties lots
of pages we need quite a bit of time, and then it does not help to
output roughly the same status 10 times a second...
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
* use render_bytes where possible, to print units that are quick to
read and grasp
* xbzrle is only interesting if pages/bytes are actually sent using
it, so only log in that case
* log if the VM dirties more than we send
* log the current speed we get from QEMU
In general there are fewer lines logged and huge integers are avoided.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the claim that QEMU limits this to 32M otherwise is bogus, at least
with any current QEMU version..
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Use an early die so that the rest can lose an indentation level for
the actual migration status reporting code.
Extract commonly used members of the stat hash for shorter code.
use `git show -w --word-diff=color --word-diff-regex='\w+'` for
getting a better view of actual changes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
avoids the possibility of dying during phase3_cleanup; instead of needing to
duplicate the cleanup ourselves, benefit from phase2_cleanup doing so.
The duplicate cleanup was also very incomplete: it didn't stop the remote kvm
process (leading to 'VM already running' when trying to migrate again
afterwards), but it removed its disks, and it didn't unlock the config, didn't
close the tunnel and didn't cancel the block-dirty bitmaps.
Since migrate_cancel should do nothing after the (non-storage) migrate process
has completed, even that cleanup step is fine here.
Since phase3 is empty at the moment, the order of operations is still the same.
Also add a test that would complain about finish_tunnel not being called
before this patch. That test also checks that local disks are not already
removed before finishing the block jobs.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Namely, those migrated with storage_migrate by using the information from
volume_map. Call cleanup_remotedisks in phase1_cleanup as well, because that's
where we end if sync_offline_local_volumes fails, and some disks might already
have been transferred successfully. Note that the local disks are still here, so
this is fine.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
This also changes the behavior to remove the local copies of offline migrated
volumes only after the migration has finished successfully (this is relevant
for mixed settings, e.g. online migration with unused/vmstate disks).
local_volumes contains both the volumes previously in $self->{volumes}
and the volumes in $self->{online_local_volumes}, and hence is the place
to look for which volumes we need to remove. Of course, replicated
volumes still need to be skipped.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The case with:
1. no generic 'migration' limit from the storage plugin
2. a migrate_speed limit in the VM config
was broken. It would assign 0 to migrate_speed when picking the minimum value
and then default to the default value. Fix it by checking if bwlimit is 0
before picking the minimum.
Also, make it a bit more readable by avoiding the trick of //-assigning bwlimit
before the units match up and relying on getting back the original bwlimit value
as the minimum. Instead, only ||-assign after the units match up and don't rely
on other things.
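A minimal sketch of the fixed logic (variable handling assumed):

    my $migrate_speed = $conf->{migrate_speed} // 0; # MB/s, 0 means unlimited
    $migrate_speed *= 1024; # now KiB/s, matching the unit of $bwlimit
    if ($bwlimit && $migrate_speed) {
        # both limits set: only now is picking the minimum safe
        $migrate_speed = $bwlimit < $migrate_speed ? $bwlimit : $migrate_speed;
    } else {
        # otherwise take whichever one is set (possibly none)
        $migrate_speed ||= $bwlimit // 0;
    }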
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
by using the information obtained in the first scan. This
also makes sure we only scan local storages.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
by making local_volumes class-accessible. One function is for scanning all local
volumes and one is for actually syncing offline volumes via storage_migrate. The
exception is replicated volumes, this still happens during the scan for now.
Also introduce a filter_local_volumes helper, to make life easier.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
except for migration, where it would be subtly backwards-incompatible. Since
there is a scan_volids call for migration, we can't default to filtering in
scan_volids just yet.
Also allows to get rid of the existing filtering hack in rescan().
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Pinned machine versions like "pc-i440fx-4.2+pve2.pxe" would otherwise
get a second "+pve0" suffix, which is incorrect.
Also deal with non-pve pinned versions correctly, i.e.
"pc-i440fx-5.2.pxe" becomes "pc-i440fx-5.2+pve0.pxe".
Handle .pxe suffixes in Machine.pm as well, and add two test cases.
Co-developed-by: Luca Berneking <luca@berneking.net>
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
When checking whether a volume is still referenced by a snapshot, the volid
itself is first checked. When the volid is different, we fall back to comparing
the path.
As the first value to be compared is a volume's path, the second value better be
a volume's path too, and not a snapshot's path.
See also 77019edfe0 for historical context.
The error that led me here:
* had a VM with ZFS over iSCSI storage with an existing snapshot
* add new unused drive
* try to remove the unused drive
* fails, because the ZFS (not Pool!) plugin does not support snapshot paths.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
If, for whatever reason, it got "not-ready" again, we'd log again the
next round.
Improves the behavior for multiple disks, especially on migration,
where we mirrored the local disks one by one, but kept reporting on
previous ones.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
orient on the backup output which got reworked for PVE 6.2/6.3
Avoid overwhelming the user with redundant information, and use human
readable units.
before:
> restore-drive-scsi5: transferred: 167772160 bytes remaining: 8422162432 bytes total: 8589934592 bytes progression: 1.95 % busy: 1 ready: 0
after:
> restore-drive-scsi0: transferred 720.0 MiB of 32.0 GiB (2.20%) in 12s
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Similar to backups, prevent QEMU from being killed by qmeventd during
the live-restore, so a guest can shut itself down without aborting the
restore operation.
Note that the 'close' is only to be explicit, the handle will also be
closed in case an operation errors (i.e. when the 'eval' is left).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Enables live-restore functionality using the 'alloc-track' QEMU driver.
This allows starting a VM immediately when restoring from a PBS
snapshot. The snapshot is mounted into the VM, so it can boot from that,
while guest reads and a 'block-stream' job handle the restore in the
background.
If an error occurs, the VM is deleted and all data written during the
restore is lost.
The VM remains locked during the restore, which automatically prohibits
any modifications to the config while restoring. Some modifications
might potentially be safe, however, this is experimental enough that I
believe this would cause more bad stuff(tm) than actually satisfy any
use cases.
Pool handling is slightly adjusted so the VM can be added to the pool
before the restore starts.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Uses the custom 'alloc-track' filter node to redirect writes to the
original drive's target, while unwritten blocks will be read from the
specified PBS snapshot.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
...so it works with other block jobs as well. Intended use case is
block-stream, which also requires a new "auto" (wait only) completion
mode, since it finishes automatically anyway.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
cloud-init's SLAAC option was disabled in 2018 because there was no
support for it. Now that cloud-init 19.4 or newer versions are more
widespread, we can finally reenable it.
Also include minimum required cloud-init version for SLAAC support in
format description.
Tested on Ubuntu 20.04 (ci 20.4), CentOS 8 (ci 19.4), Debian 10 (ci
20.2).
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
A fix was also provided in bugzilla by user wsapplegate:
https://bugzilla.proxmox.com/show_bug.cgi?id=3314
Tested on Ubuntu 20.04, CentOS 8 and Debian 10.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
In testing this usually completes almost immediately, but in theory this
is a storage/IO operation and as such can take a bit to finish. It's
certainly not unthinkable that it might take longer than the default *3
seconds* we've given it so far. Make it a minute.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Only show "not supported by QEMU version" message if we determine that
to be the actual cause, just print the error otherwise.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Commit abff03211f switched to iterating over the
values instead of the keys, but didn't update the variable name. Use target_sid,
because target is already in use for the target node.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
A "savevm" call (both our async variant and the upstream sync one) use
migration code internally. As such, they both expect migration
capabilities to be set.
This is usually not a problem, as the default set of capabilities is ok,
however, it leads to differing snapshot settings if one does a snapshot
after a machine has been live-migrated (as the capabilities will persist
from that), which could potentially lead to discrepancies in snapshots
(currently it seems to be fine, but it still makes sense to set them to
safeguard against future changes).
Note that we do set the "dirty-bitmaps" capability now (if
query-proxmox-support reports true), which has three effects:
1) PBS dirty-bitmaps are preserved in snapshots, enabling
fast-incremental backups to work after rollback (as long as no newer
backups exist), including for hibernate/resume
2) snapshots taken from now on, with a QEMU version supporting bitmap
migration, *might* lead to incompatibility of these snapshots with
QEMU versions that don't know about bitmaps at all (i.e. < 5.0 IIRC?)
- forward compatibility is still given, and all other capabilities we
set go back to very old versions
3) since we now explicitly disable bitmap saving if the version doesn't
report support, we avoid crashes even with not-updated QEMU versions
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
At this stage, there are no keys in %storage_limits to iterate over. The
refactoring in commit 9f3d73bc35 broke the logic
by accident.
Also explicitly set zero if there is no limit to avoid repeating the
get_bandwidth_limit call for the same storage. When accessing the value later,
zero is already correctly handled as 'no limit'.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
and use file_set_contents to really commit it afterwards. Mostly done as a
preparation for the later patch for sanitizing the config on restore, but
shouldn't hurt by itself either.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Commit "a941bbd0 client: raise HTTP_TIMEOUT to 120s" in proxmox-backup
did the same, however, we would now still fail after 60 seconds since
the QMP call would time out.
Increase the timeout here to the same +5 seconds to give some time to
receive a response, so if the HTTP call in proxmox-backup times out, we
can still get a useful error message instead of timing out the QMP call
too.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
The existing check_vm_modify_config_perm doesn't do so anymore, but
the check only got re-added to the modify/delete paths. See commits
165be267eb and
e30f75c571 for context.
In the future, it might make sense to generalise the
check_vm_modify_config_perm and have it not only take keys, but both
new and old values, and use that generalised function everywhere.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
A fix for violating an important standard for booting[0] in recently
packaged QEMU 5.2 surfaced some issues with Windows based VMs in our
forum[1], which seem to be quite sensitive for such changes (it seems
they derive lots of their device assignment from ACPI).
User visible effects are loss of any network configuration due to
windows thinking it was swapped with a new one, and starts with a
fresh config - this is mostly problematic for setups with static
address assignment.
There may be lots of other, more subtle, effects and the PVE admin is
also not always the VM admin, so we really need to avoid such
negative effects. Do this by pinning the version of any windows based
VMs to either the minimum of (5.1, kvm-version) for existing VMs or
the kvm-version at time of VM creation for new ones.
There are patches in pve-manager for users to be able to change the
pinned version themselves in the web interface, so this can now also
get adapted more easily if any other issues (with new or old
versions) surface in the future.
0: https://lists.gnu.org/archive/html/qemu-devel/2021-02/msg08484.html
1: https://forum.proxmox.com/threads/warning-latest-patch-just-broke-all-my-windows-vms-6-3-4-patch-inside.84915/page-2#post-373331
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Moving to Ceph is very slow when bs=1. Instead, use a larger block size in
combination with the (currently) PVE-specific osize option to specify the
desired output size.
Suggested-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Since CDRoms and disks share the same config keys, we need to check if
it actually is a CDRom and then check the permissions accordingly.
Otherwise it is possible for someone without VM.Config.CDROM
permissions, but with VM.Config.Disk permissions to remove a CD drive
while being unable to create a CDRom drive.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
...taking care not to lose the custom precision for byte conversion.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
currently only pending changes are applied when we regenerate the
image on a running vm, but not the pending deletes.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Previously one could specify a CPU flag like 'pcidfoobar' and it would
be accepted, even though we attempt to filter VM-only flags for
security. AFAICT none of the flags we allow can be turned into any
others just by appending text, but better safe than sorry.
Reported-by: Oguz Bektas <o.bektas@proxmox.com>
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
by checking if the vm is paused at the beginning and skipping the
resume, we now also skip the qga freeze/thaw (which cannot work if
the vm is paused)
moved the 'vm_is_paused' sub from the api to PVE/QemuServer.pm so it
is available everywhere we need it.
since a suspend backup would pause the vm anyway, we can skip that
step also
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
this was previously covered by the "let's destroy every disk which
matches the VMID" feature we disarmed a bit.
As unused disks are referenced in the config, destroying them is no
subtle surprise (and we always did so in the past), so fix that
regression again for explicitly referenced but unused disks.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since an old change released with a version bump on 2009-09-07, we
search all enabled storages for VMID matching volumes on VM removal
and purge those too.
This has multiple pitfalls and may be quite unexpected for some
users.
It can make problems when:
* on recovery a VM is created, before disks are reattached the admin
notices some settings issues and chooses to just recreate the VM;
but during destroying the dummy VM all related disks get destroyed
unconditionally which may result in data loss. This actually
happened and is the original reason for the decision to change
this.
* a storage is shared between PVE instances (between a set of clusters
and/or single nodes), while this is against our rules it may still
come as a surprise if destroying a VM on node A may destroy
unrelated and unreferenced disks on the unrelated node B without
asking or allowing to avoid that.
As the removal of matching but unreferenced disks can result in
permanent data loss (up to the last backup) and may be too subtle and
unforgiving, allow to opt-out of it.
In the long run we want to make this opt-in, but that is an API
change and so needs to wait for next major release. But, we can adapt
the GUI already to make it opt-in there, catching most of the cases.
side-note: CT do not have this behavior at all
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
While the do_import method cleans up the current disk it was
importing on any error the following cases are not handled:
* multiple disks, first few succeed then one fails, only the last
failed one was taken care of before this patch
* error after the import disk loop was not handled
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
On clone_vm when cloning the disks while the VM is running, we use
drive-mirror. We skip completion until the last disk, but with a
cloudinit disk there's no drive-mirror and so no completion done. If it
is the last disk in the hash, we never complete the drive-mirror jobs
and no further cloning is possible as there are already active jobs
using the disks.
To fix it we have to call qemu_drive_mirror_monitor directly in the case
of cloudinit when completion is requested and there are jobs defined.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
The phrasing left some room for speculation when this would be triggered.
E.g. after cloning a full VM?
Currently the only instances where it is used is when a disk is moved or
a VM migrated.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
We only added the format extension when it was not 'raw'. But on file level
storages we always require it. To fix this, always add the format
extension if the storage provides the 'path' property.
This is the same logic we use in create_disks for cloudinit disks.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
by partially reverting 4df98f2f14 and fixing the
line-length issue differently. The commit didn't update two later usages of
$size, breaking copying the efidisk. The other usage as a parameter to
qemu_img_convert() is luckily only cosmetic, for progress output.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
this fixes the issue that we did not generate the correct repository
url for pbs storages that contained an ipv6 address or a port
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
showing off its monstrosity of a method signature, needs to be
cleaned up in a followup commit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Extends print_recursive_hash for the CLI to handle JSON booleans so the
result will actually show up in 'qm status --verbose'.
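A minimal sketch of the added handling (assuming the JSON module's
boolean objects):

    if (JSON::is_bool($value)) {
        $value = $value ? 1 : 0; # print as 0/1 instead of a blessed object
    }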
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Offline migrated volumes are now activated within storage_migrate.
Online migrated volumes can be assumed to be already active.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
while it didn't actually fail, we probably want to avoid the behavior:
With remove_job=full:
* run_replication called during migration causes the replicated volumes to
be removed
* migration continues by fully copying all volumes
With remove_job=local:
* run_replication called during migration causes the job (and local
replication snapshots) to be removed
* migration continues by fully copying all volumes and renaming them to
avoid collision with the still existing remote volumes
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
In some cases $self->{replicated_volumes} will be auto-vivified
to {} by checks like
next if $self->{replicated_volumes}->{$volid}
and then {} would evaluate to true in a boolean context.
Now the replication information is retrieved once in prepare,
and used to decide whether to make the calls or not.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
No need to warn twice, so the warning from the outside check
was removed.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
When the VM is in status 'shutdown', i.e. after the guest issues a
powerdown while a backup is running, QEMU requires a 'system_reset' to
be issued before 'cont' can boot the guest again.
Additionally, when the VM has been powered down during a backup, the
logically correct call would be a 'vm_start', so automatically call
vm_resume from vm_start in case this situation occurs. This also means
the GUI can cope with this almost unchanged.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Ignore shutdowns triggered from within the guest in favor of detecting
them via qmeventd and stopping the QEMU process that way.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Now that VMs can be started during a backup, it makes sense to create a
dirty bitmap in these cases too, since the VM might be resumed and thus
continue running normally even after the backup is done.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Connect and send the vmid of the VM being backed up. This prevents
qmeventd from SIGTERMing the underlying QEMU instance, even if the guest
shuts itself down, until we close the socket connection (in cleanup,
which happens on success and abort, or if we crash the file handle will
be closed as well).
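A hedged sketch of that handshake (socket path and framing assumed):

    use IO::Socket::UNIX;

    my $sock = IO::Socket::UNIX->new(Peer => '/var/run/qmeventd.sock')
        or die "unable to connect to qmeventd: $!\n";
    $sock->print("$vmid\n"); # keep $sock open until cleanup
    $sock->flush();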
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
by adding the missing argument (otherwise all the other ones are shifted
one slot to the left, which is of course bogus).
this has been broken since 2018 (d559309), but was only made
visible/caused a failure with the recent changes adding
use strict;
use warnings;
to PVE::QemuServer::PCI
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
if the 'backup' qmp call itself times out or fails, we still want to
try to cancel the backup, else it can happen that there is still
a backup running even when vzdump thinks it was canceled
the qapi docs say that backup cancel always returns success, even
if no backup is running
since we hold a global and a per vm lock for the backup, this should be
ok, since we should not reach this code without that lock
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We query QEMU if it's safe before enabling it, as on versions without
the necessary patches it not only would be useless, but can actually
lead to hangs.
PBS state is always migrated, as it's a small amount of data anyway, so
we don't need to set a specific flag for it.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Specifying 'boot: order=' was intended to be used for an empty bootorder
(i.e. no boot devices), but as it turns out our format parser doesn't
like empty '-list' properties if they are nested in a subformat.
Fixing this in JSONSchema sounds like a risky move, so instead just
write 'boot: ' (without 'order=') to indicate an empty bootorder. The
rest of the code handles it just fine, as this was valid before too.
Incidentally also fixes a bug where you couldn't create a new VM without
any disks if no explicit 'boot' property was specified (i.e. a simple
'qm create 100' without any parameters would fail).
Reported-by: Dominic Jäger <d.jaeger@proxmox.com>
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
fixes commit 74c17b7a23 which moved
this code here, but forgot to pass the $vga ref; as the module was
using neither warnings nor strict mode, this was not caught..
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
After migration or a rollback the cloudinit disk might not be allocated, so
volume_size_info() fails. As we override the value anyway for cloudinit
and efi disks simply move the volume_size_info() call into the 'else'
branch.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
All volumes contained in $vollist are activated. In this case a snapshot
of the volume. For cloudinit disks no snapshots are created, so don't
add them to the list of volumes to activate, as it otherwise fails
with 'no logical volume found'.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
The API is updated to handle the deprecation correctly, i.e. when
updating the 'order' attribute, the old 'legacy' (default_key) values
are removed (would now be ignored anyway).
When removing a device that is in the bootorder list, it will be removed
from the aforementioned. Note that non-existing devices in the list will
not cause an error - they will simply be ignored - but it's still nice
to not have them in there.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
(also fixes #3011)
Deprecates the old-style 'boot' and 'bootdisk' options by adding a new
'order=' subproperty to 'boot'.
This allows a user to specify more than one disk in the boot order,
helping with newer versions of SeaBIOS/OVMF where disks without a
bootindex won't be initialized at all (breaks soft-raid and some LVM
setups).
This also allows specifying a bootindex for USB and hostpci devices,
which was not possible before. Floppy boot support is not supported in
the new model, but I doubt that will be a problem (AFAICT we can't even
attach floppy disks to a VM?).
Default behaviour is intended to stay the same, i.e. while new VMs will
receive the new 'order' property, it will be set so the VM starts the
same as before (using get_default_bootorder).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
The format is unused in this commit, but will replace the current
string-based format of the 'boot' property. It is included since the
parameter of bootorder_from_legacy follows it.
Two helper methods are introduced:
* bootorder_from_legacy: Parses the legacy format into a hash closer to
what the new format represents
* get_default_bootdevices: Encapsulates the legacy default behaviour if
nothing is specified in the boot order
resolve_first_disk is simplified and gets a new $cdrom parameter to
control the behaviour of excluding CD-ROMs or instead searching for only
them.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
We use verification for something more in-depth on the PBS server, so
avoid that term to avoid misunderstandings.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We already keep hugepages if they are created with the kernel
commandline (hugepagesz=x hugepages=y), but some setups (specifically
hugepages across multiple NUMA nodes) cannot be configured that way.
Since we always clear these hugepages at VM shutdown, rebooting a VM
that uses them might not work, since the requested count might not be
available anymore by the time we want to use them (also, we would then
no longer allocate them correctly on the NUMA nodes).
Add a 'keephugepages' parameter to skip cleanup and simply leave them
untouched.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
until we maybe have a 'pbs-backup' that links Qemu and PBS like
'pbs-restore', we need to do a regular backup for the template case to
support all storage types and image formats.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Otherwise a warning is printed if the bios is not set in the config.
reported via community forum:
https://forum.proxmox.com/threads/warning-in-qemuserver.74683/
reproduced and tested that the patch fixes the issue.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
This still works even if all drives were clean. It then shows the very
magical line:
INFO: backup was done incrementally, reused 34.00 GiB (100%)
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
QEMU handles it just as well as with VMA, so this was probably just
forgotten to implement for PBS.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Edid support was added with Qemu 5. Windows guests seem to not be able
to get all possible resolutions if the default std VGA device is used as
GPU and the VM boots in BIOS mode. The result is that only one of the
following three resolutions can be configured:
800x600
1024x768
1920x1080
It is important to note that just booting a Windows VM with the edid=off
parameter will not make the large list of resolutions available. It
seems that Windows is caching the list of possible resolutions
somewhere [0].
Uninstalling the 'Microsoft Basic Display Adapter' in the device manager
and rebooting the VM is one way I found to force Windows to recreate the
list of possible resolutions.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
[0] https://lists.nongnu.org/archive/html/qemu-devel/2020-07/msg07128.html
There can't be a dirty bitmap when the VM was off, and if it was off we
will also shut it down after the backup, so no point in creating one.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
When $target is 0, that means we don't have to upload any data, in which
case we're immediately done.
Otherwise incremental backups with no changes display a really weird
status: 0% (0.0 B of 0.0 B), duration 0, read: 0 B/s, write: 0 B/s
when they're actually done already.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Previously 'read' and 'write' would always show the same value, which is
of little use. Change it so 'write' excludes reused bytes, thus
displaying the actual upload speed.
$last_reused needs to be initialized to contain reused data from 'clean'
dirty bitmaps to ensure the first output line is correct.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Uses the new 'query-pbs-bitmap-info' QMP call to retrieve additional
information about each drive's dirty bitmap. Returned info is also used
to calculate $target by simply adding all the dirty values (dirty is
equal to size in case the entire drive will be backed up).
"Backup is sparse" message is suppressed for PBS, since it makes little
sense (if zero chunks appear in the clean area of a bitmap, they won't
be counted, and a user is probably more interested in the 'reused' data
anyway).
Also removes the need for the hacky $first_round query-backup handling.
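Roughly, the new calculation is (a sketch; summing the 'dirty' values
as described above):

    my $bitmap_info = mon_cmd($vmid, 'query-pbs-bitmap-info');
    my $target = 0;
    $target += $_->{dirty} for @$bitmap_info;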
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
In config_to_command, '-loadstate' will be added whenever there is a
vmstate in the config. But in vm_start_nolock, the resume parameter
is used to calculate the appropriate timeout and to remove the vmstate
after the start. The resume parameter was only set if there is a
'suspended' lock, but apparently [0] we cannot rely on the lock to be
set if and only if there is a vmstate.
[0]: https://forum.proxmox.com/threads/task-error-start-failed.72450
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
during refactoring, the vmid got lost, but is necessary to get
the correct mdev id
Fixes commit 74c17b7a23
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ reference fixed commit ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
pbs-restore might not stay there like that forever, and if
this code path changes we should remember to also remove the
environment variables
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
If 'query-proxmox-support' is not known to QEMU, assume that no other
features are supported either.
If 'pbs' is not supported at all, error out with a nice message.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Use the new register_format(3) call to use a validator (instead of a
parser) for 'pve-(vm-)?cpu-conf'. This way the $cpu_fmt hash can be used for
generating the documentation, while still applying the same verification
rules as before.
Since the function no longer parses but only verifies, the parsing in
print_cpu_device/get_cpu_options has to go via JSONSchema directly.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
reuse can also come from the current backup - so drop the "from last
backup" as this can be very confusing if one reads it after making
the first backup ever, with no last backup existing.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
happened due to moving the code from another scope which had no $res,
and not noticing as it was still working after all.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
normally this is done centrally in the managers code, but we do not
have the info for PBS there.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Pass new size directly, so the function doesn't need to know about
how some hash is organized. And return a message directly, instead
of both size-strings. Also dropped the wantarray, because both
existing callers use the message anyway.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The $total != $transferred check is changed to a log, as QEMU reports
only actually transferred bytes, and it is indeed correct for
incremental backups to have differing values from $total.
The 'incremental' parameter is always set, QEMU will figure out if it should
re-use an existing bitmap or create a new one on its own.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
This allows setting ciuser, cipassword and all other cloudinit settings that
are not part of the network without VM.Config.Network permissions.
Keep VM.Config.Network still as fallback so custom roles that add
VM.Config.Network but not VM.Config.Cloudinit don't break.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Legacy IGD passthrough requires address 00:1f.0 to not be assigned to
anything on QEMU startup (currently it's assigned to bridge pci.2).
Changing this in general would break live-migration, so introduce a new
hostpci parameter "legacy-igd", which if set to 1 will move that bridge
to be nested under bridge 1.
This is safe because:
* Bridge 1 is unconditionally created on i440fx, so nesting is ok
* Defaults are not changed, i.e. PCI layout only changes when the new
parameter is specified manually
* hostpci forbids migration anyway
Additionally, the PT device has to be assigned address 00:02.0 in the
guest as well, which is usually used for VGA assignment. Luckily, IGD PT
requires vga=none, so that is not an issue either.
See https://git.qemu.org/?p=qemu.git;a=blob;f=docs/igd-assign.txt
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Move the logic deciding which volumes are included in the backup job to
its own method and adapt the VZDump code accordingly. This makes it possible to
develop other features around backup jobs.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
should not really happen on modern systems, but random_bytes just
returns false if it fails to generate random bytes, in which case we
want to die instead of returning an empty 'random' string.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
We used the VNC API $ticket as password for VNC, but QEMU limits the
password to the first 8 chars and ignores the rest[0].
As our tickets start with a static string (e.g., "PVE") the entropy
was a bit limited.
For Proxmox VE this does not matter much, as the noVNC viewer we
provide always has to go over the API call, and so a valid ticket and
correct permissions for the requested VM are enforced anyway.
This patch helps external users, which often use noVNC-Websockify,
circumventing the API and relying solely on the VNC password, to
avoid snooping on VNC sessions.
A 'generate-password' parameter is added, if set a password from good
entropy (using libopenssl) is generated.
For simplicity of mapping random bits to ranges we extract 6 bits of
entropy per character and add the integer value of '!' (first
printable ASCII char) to that. This way we get 64^8 possibilities,
which even with millions of guesses per second one would need years
of guessing and mostly just DDOS the server with websocket upgrade
requests.
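In Perl, the mapping boils down to roughly the following sketch
(assuming Crypt::OpenSSL::Random as the entropy source):

    use Crypt::OpenSSL::Random qw(random_bytes);

    my $raw = random_bytes(8);
    die "failed to generate random VNC password\n" if !$raw;
    # take the lower 6 bits of each byte and shift them into the
    # printable ASCII range starting at '!' (0x21)
    my $password = join('', map { chr((ord($_) & 0x3F) + ord('!')) } split('', $raw));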
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Tested-By: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-By: Dominik Csapak <d.csapak@proxmox.com>
'vga' is a property string, we can't just assume it starts with the default key's value here either.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
'vga' is a property string, we can't just assume it starts with the
default key's value.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
when checking whether a to-be-added drive's and the VM's replication
status are matching. otherwise, we end up in a failing generic
'parse_volume_id' with no mention of the actual reason.
adding 'replicate=0' to the new drive string fixes the underlying issue
with and without this patch, so this is just a cosmetic/usability
improvement.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
As perl hashes have random order, sort them before iterating through.
This makes the output of 'qm cloudinit dump <vmid> network' consistent
between calls if the config has not changed.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
netdev_add is now a proper qmp command, which means that it verifies
the parameter types properly
instead of sending strings, we now have to choose the correct
types for the parameters
bool for vhost
and uint64 for queues
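for example (argument set abbreviated):

    { "execute": "netdev_add",
      "arguments": { "type": "tap", "id": "net0", "vhost": true, "queues": 4 } }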
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the special case was dropped when moving this to pve-storage.
fixes commit c6d517835a
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
More API calls will follow for this path, for now add the 'index' call to
list all custom and default CPU models.
Any user can list the default CPU models, as these are public anyway, but
custom models are restricted to users with Sys.Audit on /nodes.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Explicitly allows changing other properties than the cputype, even if
the currently set cputype is not accessible by the user. This way, an
administrator can assign a custom CPU type to a VM for a less privileged
user without breaking edit functionality for them.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
fixing the following two issues:
- the legacy code path was never converted to the new fork_tunnel
signature (which probably means that nothing triggers it in practice
anymore?)
- the NBD Unix socket got forwarded multiple times if more than one disk
was migrated via NBD (this is harmless, but wrong)
for the second issue I opted to keep the code compatible with the
possibility that Qemu starts supporting multiple NBD servers in the
future (and the target node could thus return multiple UNIX socket
paths). currently we can only start one NBD server on one socket, and
each drive-mirror simply starts a new connection over that single
socket.
I took the liberty of renaming the variables/keys since I found
'tunnel_addr' and 'sock_addr' rather confusing.
Reviewed-By: Mira Limbeck <m.limbeck@proxmox.com>
Tested-By: Mira Limbeck <m.limbeck@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
fixes commit 940e2a3a06
QEMU 4.1 will fail to start a guest with an audio device set with:
> Property '.audiodev' not found
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If /dev/hwrng exists, but no actual generator is connected (or it is
disabled on the host), QEMU will happily start the VM but crash as soon
as the guest accesses the VirtIO RNG device.
To prevent this unfortunate behaviour, check if a usable hwrng is
connected to the host before allowing the VM to be started.
While at it, clean up config_to_command by moving new and existing rng
source checks to a separate sub.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
redirecting to the saved STDOUT in case of a template backup or a VM
without any disks failed because of the erroneous '=':
Backup of VM 123123 failed - command '/usr/bin/vma create -v -c [...]' failed:
Bad filehandle: =5 at /usr/share/perl/5.28/IPC/Open3.pm line 58.
https://forum.proxmox.com/threads/vzdump-to-stdout.69364
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
It's possible to have a VM with OVMF but without an efidisk, so don't
call parse_drive on a potential undef value.
Partial revert of 818c3b8d91 ("cfg2cmd: ovmf: code cleanup")
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
and move the lock call and decision logic closer together
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
This reverts commit b5490d8a98.
When resizing a volume of a running VM, a qmp block_resize command
is issued. This is non-blocking, so the size on the storage immediately
after issuing the command might still be the old one.
This is part of the issue reported in bug #2621.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
we really only want to rescan the disk size of the disks we actually
need, and those are only the local disks (for which we have to allocate
the correct size on the target)
also we want to always skip the efidisk, since we get the wanted
size after the loop, and this produced a confusing log line
(for details why we do not want the 'real' size,
see commit 818ce80ec1)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
by avoiding auto-vivification of $self->{online_local_volumes} via
iteration. most code paths don't care whether it's undef or a reference
to an empty list, but this caused the (already) fixed bug of calling
nbd_stop without having started an NBD server in the first place.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
lock_file is used by PVE::QemuServer::Memory, but that module does
properly 'use PVE::Tools ...' itself, so we can drop the import in the
main module.