Moving to Ceph is very slow when bs=1. Instead, use a larger block size in
combination with the (currently) PVE-specific osize option to specify the
desired output size.
Suggested-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Since CD-ROMs and disks share the same config keys, we need to check if
it actually is a CD-ROM and then check the permissions accordingly.
Otherwise it is possible for someone without VM.Config.CDROM
permissions, but with VM.Config.Disk permissions, to remove a CD drive
while being unable to create a CD-ROM drive.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
...taking care not to lose the custom precision for byte conversion.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
currently only pending changes are applied when we regenerate the
image on a running VM, but not pending deletions.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Previously one could specify a CPU flag like 'pcidfoobar' and it would
be accepted, even though we attempt to filter VM-only flags for
security. AFAICT none of the flags we allow can be turned into any
others just by appending text, but better safe than sorry.
Reported-by: Oguz Bektas <o.bektas@proxmox.com>
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
by checking if the VM is paused at the beginning and skipping the
resume, we now also skip the QGA freeze/thaw (which cannot work if the
VM is paused)
moved the 'vm_is_paused' sub from the api to PVE/QemuServer.pm so it
is available everywhere we need it.
since a suspend backup would pause the vm anyway, we can skip that
step also
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
this was previously covered by the "let's destroy every disk which
matches the VMID" feature we disarmed a bit.
As unused disks are referenced in the config, destroying them is not a
subtle surprise (and we always did so in the past), so fix that
regression again for explicitly referenced but unused disks.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since an old change released with a version bump on 2009-09-07, we
search all enabled storages for VMID-matching volumes on VM removal
and purge those too.
This has multiple pitfalls and may be quite unexpected for some
users.
It can cause problems when:
* on recovery a VM is created, and before disks are reattached the
  admin notices some settings issue and chooses to just recreate the
  VM; but when destroying the dummy VM all related disks get destroyed
  unconditionally, which may result in data loss. This actually
  happened and is the original reason for the decision to change this.
* a storage is shared between PVE instances (between a set of clusters
  and/or single nodes); while this is against our rules, it may still
  come as a surprise if destroying a VM on node A destroys unrelated
  and unreferenced disks on the unrelated node B without asking or
  allowing one to avoid that.
As this removal of matching but unreferenced disks can result in
permanent data loss (up to the last backup) and may be too subtle and
unforgiving, allow opting out of it.
In the long run we want to make this opt-in, but that is an API
change and so needs to wait for the next major release. But we can
already adapt the GUI to make it opt-in there, catching most of the cases.
side-note: CT do not have this behavior at all
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
While the do_import method cleans up the current disk it was
importing on any error, the following cases are not handled:
* multiple disks: the first few succeed, then one fails; only the last
  failed one was taken care of before this patch
* an error after the disk import loop
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
On clone_vm when cloning the disks while the VM is running, we use
drive-mirror. We skip completion until the last disk, but with a
cloudinit disk there's no drive-mirror and so no completion done. If it
is the last disk in the hash, we never complete the drive-mirror jobs
and no further cloning is possible as there are already active jobs
using the disks.
To fix it we have to call qemu_drive_mirror_monitor directly in the case
of cloudinit when completion is requested and there are jobs defined.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
The phrasing left some room for speculation when this would be triggered.
E.g. after cloning a full VM?
Currently the only instances where it is used are when a disk is moved or
a VM is migrated.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
We only added the format extension when it was not 'raw'. But on file level
storages we always require it. To fix this, always add the format
extension if the storage provides the 'path' property.
This is the same logic we use in create_disks for cloudinit disks.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
by partially reverting 4df98f2f14 and fixing the
line-length issue differently. The commit didn't update two later usages of
$size, breaking copying the efidisk. The other usage as a parameter to
qemu_img_convert() is luckily only cosmetic, for progress output.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
this fixes the issue that we did not generate the correct repository
URL for PBS storages that contained an IPv6 address or a port
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
showing off its monstrosity of a method signature; needs to be
cleaned up in a followup commit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Extends print_recursive_hash for the CLI to handle JSON booleans so the
result will actually show up in 'qm status --verbose'.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Offline migrated volumes are now activated within storage_migrate.
Online migrated volumes can be assumed to be already active.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
while it didn't actually fail, we probably want to avoid the behavior:
With remove_job=full:
* run_replication called during migration causes the replicated volumes to
be removed
* migration continues by fully copying all volumes
With remove_job=local:
* run_replication called during migration causes the job (and local
replication snapshots) to be removed
* migration continues by fully copying all volumes and renaming them to
avoid collision with the still existing remote volumes
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
In some cases $self->{replicated_volumes} will be auto-vivified
to {} by checks like
next if $self->{replicated_volumes}->{$volid}
and then {} would evaluate to true in a boolean context.
Now the replication information is retrieved once in prepare,
and used to decide whether to make the calls or not.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
No need to warn twice, so the warning from the outside check
was removed.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
When the VM is in status 'shutdown', i.e. after the guest issues a
powerdown while a backup is running, QEMU requires a 'system_reset' to
be issued before 'cont' can boot the guest again.
Additionally, when the VM has been powered down during a backup, the
logically correct call would be 'vm_start', so vm_start now
automatically calls vm_resume in case this situation occurs. This also
means the GUI can cope with this almost unchanged.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Ignore shutdowns triggered from within the guest in favor of detecting
them via qmeventd and stopping the QEMU process that way.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Now that VMs can be started during a backup, it makes sense to create a
dirty bitmap in these cases too, since the VM might be resumed and thus
continue running normally even after the backup is done.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Connect and send the vmid of the VM being backed up. This prevents
qmeventd from SIGTERMing the underlying QEMU instance, even if the guest
shuts itself down, until we close the socket connection (in cleanup,
which happens on success and abort, or if we crash the file handle will
be closed as well).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
by adding the missing argument (otherwise all the other ones are shifted
one slot to the left, which is of course bogus).
this has been broken since 2018 (d559309), but was only made
visible/caused a failure with the recent changes adding
use strict;
use warnings;
to PVE::QemuServer::PCI
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
if the 'backup' qmp call itself times out or fails, we still want to
try to cancel the backup, else it can happen that there is still
a backup running even when vzdump thinks it was canceled
the qapi docs say that backup cancel always returns success, even
if no backup is running
since we hold a global and a per-VM lock for the backup, this should be
OK, as we should not reach this code without those locks
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We query QEMU if it's safe before enabling it, as on versions without
the necessary patches it not only would be useless, but can actually
lead to hangs.
PBS state is always migrated, as it's a small amount of data anyway, so
we don't need to set a specific flag for it.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Specifying 'boot: order=' was intended to be used for an empty bootorder
(i.e. no boot devices), but as it turns out our format parser doesn't
like empty '-list' properties if they are nested in a subformat.
Fixing this in JSONSchema sounds like a risky move, so instead just
write 'boot: ' (without 'order=') to indicate an empty bootorder. The
rest of the code handles it just fine, as this was valid before too.
Incidentally also fixes a bug where you couldn't create a new VM without
any disks if no explicit 'boot' property was specified (i.e. a simple
'qm create 100' without any parameters would fail).
Reported-by: Dominic Jäger <d.jaeger@proxmox.com>
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
fixes commit 74c17b7a23, which moved this code here but forgot to
pass the $vga ref; as the module was using neither warnings nor strict
mode, this was not caught.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
After migration or a rollback the cloudinit disk might not be allocated, so
volume_size_info() fails. As we override the value anyway for cloudinit
and efi disks simply move the volume_size_info() call into the 'else'
branch.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
All volumes contained in $vollist are activated, in this case a snapshot
of the volume. For cloudinit disks no snapshots are created, so don't add
them to the list of volumes to activate, as it otherwise fails with
'no logical volume found'.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
The API is updated to handle the deprecation correctly, i.e. when
updating the 'order' attribute, the old 'legacy' (default_key) values
are removed (would now be ignored anyway).
When removing a device that is in the bootorder list, it will be removed
from that list. Note that non-existing devices in the list will
not cause an error - they will simply be ignored - but it's still nice
to not have them in there.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
(also fixes #3011)
Deprecates the old-style 'boot' and 'bootdisk' options by adding a new
'order=' subproperty to 'boot'.
This allows a user to specify more than one disk in the boot order,
helping with newer versions of SeaBIOS/OVMF where disks without a
bootindex won't be initialized at all (breaks soft-raid and some LVM
setups).
This also allows specifying a bootindex for USB and hostpci devices,
which was not possible before. Floppy boot is not supported in
the new model, but I doubt that will be a problem (AFAICT we can't even
attach floppy disks to a VM?).
Default behaviour is intended to stay the same, i.e. while new VMs will
receive the new 'order' property, it will be set so the VM starts the
same as before (using get_default_bootorder).
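For illustration, a new-style config line might look like this (device
names are just examples):
    boot: order=scsi0;ide2;net0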
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
The format is unused in this commit, but will replace the current
string-based format of the 'boot' property. It is included since the
parameter of bootorder_from_legacy follows it.
Two helper methods are introduced:
* bootorder_from_legacy: Parses the legacy format into a hash closer to
what the new format represents
* get_default_bootdevices: Encapsulates the legacy default behaviour if
nothing is specified in the boot order
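For a rough illustration of what bootorder_from_legacy produces (device
names assumed, structure simplified): a legacy configuration of
'boot: dcn' together with 'bootdisk: scsi0' would resolve to the ordering
CD-ROM drive(s) first, then scsi0, then network devices.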
resolve_first_disk is simplified and gets a new $cdrom parameter to
control the behaviour of excluding CD-ROMs or instead searching for only
them.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
We use verification for something more in-depth on the PBS server, so
avoid that term to avoid misunderstandings.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We already keep hugepages if they are created with the kernel
commandline (hugepagesz=x hugepages=y), but some setups (specifically
hugepages across multiple NUMA nodes) cannot be configured that way.
Since we always clear these hugepages at VM shutdown, rebooting a VM
that uses them might not work, since the requested count might not be
available anymore by the time we want to use them (also, we would then
no longer allocate them correctly on the NUMA nodes).
Add a 'keephugepages' parameter to skip cleanup and simply leave them
untouched.
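A minimal example of how this might look in a VM config (values are
illustrative, assuming hugepages are already configured for the VM):
    hugepages: 1024
    keephugepages: 1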
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
until we maybe have a 'pbs-backup' that links Qemu and PBS like
'pbs-restore', we need to do a regular backup for the template case to
support all storage types and image formats.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Otherwise a warning is printed if the bios is not set in the config.
reported via community forum:
https://forum.proxmox.com/threads/warning-in-qemuserver.74683/
reproduced and tested that the patch fixes the issue.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
This still works even if all drives were clean. It then shows the very
magical line:
INFO: backup was done incrementally, reused 34.00 GiB (100%)
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
QEMU handles it just as well as with VMA, so implementing this for PBS
was probably just forgotten.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
EDID support was added with QEMU 5. Windows guests seem to be unable
to get all possible resolutions if the default std VGA device is used as
GPU and the VM boots in BIOS mode. The result is that only one of the
following three resolutions can be configured:
800x600
1024x768
1920x1080
It is important to note that just booting a Windows VM with the edid=off
parameter will not make the large list of resolutions available. It
seems that Windows is caching the list of possible resolutions
somewhere [0].
Uninstalling the 'Microsoft Basic Display Adapter' in the device manager
and rebooting the VM is one way I found to force Windows to recreate the
list of possible resolutions.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
[0] https://lists.nongnu.org/archive/html/qemu-devel/2020-07/msg07128.html
There can't be a dirty bitmap when the VM was off, and if it was off we
will also shut it down after the backup, so no point in creating one.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
When $target is 0, that means we don't have to upload any data, in which
case we're immediately done.
Otherwise incremental backups with no changes display a really weird
status: 0% (0.0 B of 0.0 B), duration 0, read: 0 B/s, write: 0 B/s
when they're actually done already.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Previously 'read' and 'write' would always show the same value, which is
of little use. Change it so 'write' excludes reused bytes, thus
displaying the actual upload speed.
$last_reused needs to be initialized to contain reused data from 'clean'
dirty bitmaps to ensure the first output line is correct.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Uses the new 'query-pbs-bitmap-info' QMP call to retrieve additional
information about each drive's dirty bitmap. Returned info is also used
to calculate $target by simply adding all the dirty values (dirty is
equal to size in case the entire drive will be backed up).
"Backup is sparse" message is suppressed for PBS, since it makes little
sense (if zero chunks appear in the clean area of a bitmap, they won't
be counted, and a user is probably more interested in the 'reused' data
anyway).
Also removes the need for the hacky $first_round query-backup handling.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
In config_to_command, '-loadstate' will be added whenever there is a
vmstate in the config. But in vm_start_nolock, the resume parameter
is used to calculate the appropriate timeout and to remove the vmstate
after the start. The resume parameter was only set if there is a
'suspended' lock, but apparently [0] we cannot rely on the lock to be
set if and only if there is a vmstate.
[0]: https://forum.proxmox.com/threads/task-error-start-failed.72450
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
during refactoring, the vmid got lost, but is necessary to get
the correct mdev id
Fixes commit 74c17b7a23
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ reference fixed commit ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
pbs-restore might not stay there like that forever and if
this code path changes we should remember to also remove the
environment variables
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
If 'query-proxmox-support' is not known to QEMU, assume that no other
features are supported either.
If 'pbs' is not supported at all, error out with a nice message.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Use the new register_format(3) call to use a validator (instead of a
parser) for 'pve-(vm-)?cpu-conf'. This way the $cpu_fmt hash can be used for
generating the documentation, while still applying the same verification
rules as before.
Since the function no longer parses but only verifies, the parsing in
print_cpu_device/get_cpu_options has to go via JSONSchema directly.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
reuse can also come from the current backup - so drop the "from last
backup" as this can be very confusing if one reads it after making
the first backup ever, with no last backup existing.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
happened due to moving the code from another scope which had no $res,
and not noticing as it was still working after all.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
normally this is done centrally in the managers code, but we do not
have the info for PBS there.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Pass new size directly, so the function doesn't need to know about
how some hash is organized. And return a message directly, instead
of both size-strings. Also dropped the wantarray, because both
existing callers use the message anyways.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The $total != $transferred check is changed to a log, as QEMU reports
only actually transferred bytes, and it is indeed correct for
incremental backups to have differing values from $total.
The 'incremental' parameter is always set, QEMU will figure out if it should
re-use an existing bitmap or create a new one on its own.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
This allows setting ciuser, cipassword and all other cloudinit settings that
are not part of the network without VM.Config.Network permissions.
Keep VM.Config.Network still as fallback so custom roles that add
VM.Config.Network but not VM.Config.Cloudinit don't break.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Legacy IGD passthrough requires address 00:1f.0 to not be assigned to
anything on QEMU startup (currently it's assigned to bridge pci.2).
Changing this in general would break live-migration, so introduce a new
hostpci parameter "legacy-igd", which if set to 1 will move that bridge
to be nested under bridge 1.
This is safe because:
* Bridge 1 is unconditionally created on i440fx, so nesting is ok
* Defaults are not changed, i.e. PCI layout only changes when the new
parameter is specified manually
* hostpci forbids migration anyway
Additionally, the PT device has to be assigned address 00:02.0 in the
guest as well, which is usually used for VGA assignment. Luckily, IGD PT
requires vga=none, so that is not an issue either.
See https://git.qemu.org/?p=qemu.git;a=blob;f=docs/igd-assign.txt
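A sketch of how this could look in a VM config (PCI address and exact
option syntax assumed for illustration):
    vga: none
    hostpci0: 00:02.0,legacy-igd=1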
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Move the logic deciding which volumes are included in the backup job to
its own method and adapt the VZDump code accordingly. This makes it
possible to develop other features around backup jobs.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
should not really happen on modern systems, but random_bytes just
returns false if it fails to generate random bytes, in which case we
want to die instead of returning an empty 'random' string.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
We used the VNC API $ticket as password for VNC, but QEMU limits the
password to the first 8 chars and ignores the rest[0].
As our tickets start with a static string (e.g., "PVE") the entropy
was a bit limited.
For Proxmox VE this does not matter much, as the noVNC viewer we
provide always has to go over the API call, and so a valid
ticket and correct permissions for the requested VM are enforced
anyway.
This patch helps external users, which often use noVNC-websockify
directly, circumventing the API and relying solely on the VNC password,
to avoid snooping on VNC sessions.
A 'generate-password' parameter is added; if set, a password from good
entropy (using libopenssl) is generated.
For simplicity of mapping random bits to ranges we extract 6 bits of
entropy per character and add the integer value of '!' (first
printable ASCII char) to that. This way we get 64^8 possibilities,
so even with millions of guesses per second one would need years
of guessing and would mostly just DDoS the server with websocket upgrade
requests.
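To put a rough number on that: 8 characters with 6 bits of entropy each
give 64^8 = 2^48, i.e. about 2.8 * 10^14 possible passwords; at 10^6
guesses per second an exhaustive search would take about 2.8 * 10^8
seconds, roughly 9 years, and still about a month at 10^8 guesses per
second.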
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Tested-By: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-By: Dominik Csapak <d.csapak@proxmox.com>
'vga' is a property string, we can't just assume it starts with the default key's value here either.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
'vga' is a property string, we can't just assume it starts with the
default key's value.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
when checking whether a to-be-added drive's and the VM's replication
status are matching. otherwise, we end up in a failing generic
'parse_volume_id' with no mention of the actual reason.
adding 'replicate=0' to the new drive string fixes the underlying issue
with and without this patch, so this is just a cosmetic/usability
improvement.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
As Perl hashes have random order, sort the keys before iterating over them.
This makes the output of 'qm cloudinit dump <vmid> network' consistent
between calls if the config has not changed.
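A minimal sketch of the pattern (variable and helper names illustrative):
    for my $iface (sort keys %$config) {
        # process entries in a stable, deterministic order
        handle_interface($iface, $config->{$iface});   # hypothetical helper
    }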
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
netdev_add is now a proper QMP command, which means that it verifies
the parameter types properly.
Instead of sending strings, we now have to choose the correct
types for the parameters: bool for 'vhost' and uint64 for 'queues'.
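Roughly, this means the call site has to pass typed values, e.g. (a
sketch only, exact call-site details assumed):
    mon_cmd($vmid, 'netdev_add', %$netdev,
        vhost => $vhost ? JSON::true : JSON::false,
        queues => int($queues));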
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the special case was dropped when moving this to pve-storage.
fixes commit c6d517835a
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
More API calls will follow for this path, for now add the 'index' call to
list all custom and default CPU models.
Any user can list the default CPU models, as these are public anyway, but
custom models are restricted to users with Sys.Audit on /nodes.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Explicitly allows changing other properties than the cputype, even if
the currently set cputype is not accessible by the user. This way, an
administrator can assign a custom CPU type to a VM for a less privileged
user without breaking edit functionality for them.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
fixing the following two issues:
- the legacy code path was never converted to the new fork_tunnel
signature (which probably means that nothing triggers it in practice
anymore?)
- the NBD Unix socket got forwarded multiple times if more than one disk
was migrated via NBD (this is harmless, but wrong)
for the second issue I opted to keep the code compatible with the
possibility that Qemu starts supporting multiple NBD servers in the
future (and the target node could thus return multiple UNIX socket
paths). currently we can only start one NBD server on one socket, and
each drive-mirror simply starts a new connection over that single
socket.
I took the liberty of renaming the variables/keys since I found
'tunnel_addr' and 'sock_addr' rather confusing.
Reviewed-By: Mira Limbeck <m.limbeck@proxmox.com>
Tested-By: Mira Limbeck <m.limbeck@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
fixes commit 940e2a3a06
QEMU 4.1 will fail to start a guest with an audio device set with:
> Property '.audiodev' not found
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If /dev/hwrng exists, but no actual generator is connected (or it is
disabled on the host), QEMU will happily start the VM but crash as soon
as the guest accesses the VirtIO RNG device.
To prevent this unfortunate behaviour, check if a usable hwrng is
connected to the host before allowing the VM to be started.
While at it, clean up config_to_command by moving new and existing rng
source checks to a separate sub.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
redirecting to the saved STDOUT in case of a template backup or a VM
without any disks failed because of the erroneous '=':
Backup of VM 123123 failed - command '/usr/bin/vma create -v -c [...]' failed:
Bad filehandle: =5 at /usr/share/perl/5.28/IPC/Open3.pm line 58.
https://forum.proxmox.com/threads/vzdump-to-stdout.69364
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
It's possible to have a VM with OVMF but without an efidisk, so don't
call parse_drive on a potential undef value.
Partial revert of 818c3b8d91 ("cfg2cmd: ovmf: code cleanup")
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
and move the lock call and decision logic closer together
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
This reverts commit b5490d8a98.
When resizing a volume of a running VM, a qmp block_resize command
is issued. This is non-blocking, so the size on the storage immediately
after issuing the command might still be the old one.
This is part of the issue reported in bug #2621.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
we really only want to rescan the disk size of the disks we actually
need, and those are only the local disks (for which we have to allocate
the correct size on the target)
also we want to always skip the efidisk, since we get the wanted
size after the loop, and this produced a confusing log line
(for details why we do not want the 'real' size,
see commit 818ce80ec1)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
by avoiding auto-vivification of $self->{online_local_volumes} via
iteration. most code paths don't care whether it's undef or a reference
to an empty list, but this caused the (already) fixed bug of calling
nbd_stop without having started an NBD server in the first place.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
lock_file is used by PVE::QemuServer::Memory, but it does properly 'use
PVE::Tools ...' itself so we can drop them in the main module.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
regex to reduce the code duplication, as archive_info and
decompressor_info provide the same information as well.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
"VM was running" can be true for a stop mode backup; we cannot check
the "is VM currently running" state, as that doesn't tell us anything
(it could be the backup process), so check the mode as well.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
as the nbd server could have been stopped by something else.
Further, it makes no sense to die and thus mark the migration as
failed just because of an NBD server stop issue.
At this point the migration handover to the target was done already,
so normally we're good; if it fails we have other (followup) problems
anyway.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and refactor the test_volid closure. Like this, get_replicatable_volumes doesn't
need a separate loop for unused volumes anymore. get_vm_volumes, which is used
for activation/deactivation of volumes at migration and deactivation in vm_stop_cleanup,
now includes those volumes as well. For migration it's an improvement, because those volumes
might need to be migrated, and for vm_stop_cleanup it shouldn't hurt. The last user
of foreach_volid is check_vm_disks_local, used by migrate_vm_precondition,
where information about the additional volumes doesn't hurt either.
Note that replicate is (still) set by default, so the behavior for
get_replicatable_volumes for unused volumes should not change.
Hibernation vmstate files are now also included and recognized as 'is_vmstate'.
The 'size' attribute will not be overwritten by subsequent iterations for the
same volid anymore (a volid may appear both in the config and in snapshots),
so the size from the current config is now preferred.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
when a backup includes a cloudinit disk on a non-existent storage,
the restore fails with 'storage' does not exist
this happens because we want to get the format of the disk, by
checking the source storage
we fix this by using the target storage first and only the source as
fallback
this will still fail if neither storage exists
(which is ok, since we cannot restore then anyway)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Some OVF files do not declare 'rasd' as a default namespace (in the
top level Envelope element), but inline in each element (e.g.
<rasd:HostResource xmlns:rasd="foo">...</rasd:HostResource>)
This trips up our relative findvalue with
> XPath error : Undefined namespace prefix
To avoid this, search in the global XPathContext (where we register
those namespaces ourselves) and pass the item_node as context
parameter.
This works then for both cases
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this is only used for migration via 'qm mtunnel', regular users should
never need to resume a VM that does not logically belong to the node it
is running on
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
by counting only local volumes that will be live-migrated via qemu_drive_mirror,
i.e. those listed in $self->{online_local_volumes}.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
With Qemu 4.2 a new `audiodev` property was introduced [0] to explicitly
specify the backend to be used for the audio device. This is accompanied
with a warning that the fallback to the default audio backend is
deprecated.
[0] https://wiki.qemu.org/ChangeLog/4.2#Audio
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
If storage_migrate dies, the error message might not include the
volume ID or the target storage ID, but those might be good to know.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
This makes it possible to migrate a VM with volumes store1:vm-123-disk-0
and store2:vm-123-disk-0 to some targetstorage. Also prevents migration
failure when there is an orphaned disk with the same volid on the target.
To avoid confusion, the name should not change for 'vmstate'-volumes.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
It was necessary to move foreach_volid back to QemuServer.pm
In VZDump/QemuServer.pm and QemuMigrate.pm the dependency on
QemuConfig.pm was already there, just the explicit "use" was missing.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Upstream marks these as having a micro-version of >=90, unfortunately the
machine versions are bumped earlier so testing them is made unnecessarily
difficult, since the version checking code would abort on migrations etc...
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
[ Thomas: do so refactor ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
so that pve-container and qemu-server use the same one, in preparation
for moving it to JSONSchema and having a bridgepair format.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Can be specified for a particular VM or via a custom CPU model (VM takes
precedence).
QEMU's default limit only allows up to 1TB of RAM per VM. Increasing the
physical address bits available to a VM can fix this.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
If a cputype is custom (check via prefix), try to load options from the
custom CPU model config, and set values accordingly.
While at it, extract currently hardcoded values into a separate sub and add
reasoning.
Since the new flag resolving outputs flags in sorted order for
consistency, adapt the test cases to not break. Only the order is
changed, not which flags are present.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Ebner <f.ebner@proxmox.com>
Tested-By: Fabian Ebner <f.ebner@proxmox.com>
To avoid hardcoding even more CPU-flag related things for custom CPU
models, introduce a dynamic approach to resolving flags.
resolve_cpu_flags takes a list of hashes (as documented in the
comment) and resolves them to a valid "-cpu" argument without
duplicates. This also helps by providing a reason why specific CPU flags
have been added, and thus allows for useful warning messages should a
flag be overwritten by another.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-By: Fabian Ebner <f.ebner@proxmox.com>
Tested-By: Fabian Ebner <f.ebner@proxmox.com>
Just like with live-migration, custom CPU models might change after a
snapshot has been taken (or a VM suspended), which would lead to a
different QEMU invocation on rollback/resume.
Save the "-cpu" argument as a new "runningcpu" option into the VM conf
akin to "runningmachine" and use as override during rollback/resume.
No functional change with non-custom CPU types intended.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
This is required to support custom CPU models, since the
"cpu-models.conf" file is not versioned, and can be changed while a VM
using a custom model is running. Changing the file in such a state can
lead to a different "-cpu" argument on the receiving side.
This patch fixes this by passing the entire "-cpu" option (extracted
from /proc/.../cmdline) as a "qm start" parameter. Note that this is
only done if the VM to migrate is using a custom model (which we can
check just fine, since the <vmid>.conf *is* versioned with pending
changes), thus not breaking any live-migration directionality.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
in addition to printing it. preparation for remote cluster migration,
where we want to return this in a structured fashion over the migration
tunnel instead of parsing stdout via SSH.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
into one sub that retrieves the local disks, and the actual NBD
allocation. that way, remote incoming migration can just call the NBD
allocation with a custom list of volume names/storages/..
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
both were previously missing. the existing 'check_storage_access'
helper is not applicable here since it operates on a full set of VM
config options, not just storage IDs.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the syntax is backwards compatible, providing a single storage ID or '1'
works like before. the new helper ensures consistent behaviour at all
call sites.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
485449e37 ("qmp: use migrate-set-parameters in favor of deprecated options")
changed the initial "migrate_set_downtime" QMP call to the more recent
"migrate-set-parameters", but forgot to do so for the auto-increase code
further below.
Since the units of the two calls don't match, this would have caused the
auto-increase to increase the limit to absurd levels as soon as it kicked
in (ms treated as s).
Update the second call to the new version as well, and while at it remove
the unnecessary "defined()" check for $migrate_downtime, which is always
initialized from the defaults anyway.
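For comparison, a sketch of the old and new QMP calls for a 100 ms
downtime limit (the old one takes seconds, the new one milliseconds):
    { "execute": "migrate_set_downtime", "arguments": { "value": 0.1 } }
    { "execute": "migrate-set-parameters", "arguments": { "downtime-limit": 100 } }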
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
as preparation of targetstorage mapping and remote migration. this also
removes re-using of the $local_volumes hash in the original code.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
to start breaking up vm_start before extending parts for new migration
features like storage and network mapping.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
as preparation for refactoring it further. remote migration will add
another 1-2 parameters, and it is already unwieldy enough as it is.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
to also handle cases where disk allocation failed in the remote
vm_start, and we only have a bitmap but no target drive information.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
on storages where the minimum size of images is bigger than the real
OVMF_VARS.fd file, they get padded to their minimum size
when using such an image, qemu maps it fully to the vm, but the efi
does not find the vars region and creates a file on the first efi
partition it finds
this breaks some settings in the ovmf, such as resolution
to fix this, we have to specify the size for the pflash, so that
qemu only maps the first n bytes in the vm (this only works for
raw files, not for qcow2)
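a sketch of what the resulting pflash drive argument might look like
(path and exact option order are illustrative):
    -drive if=pflash,unit=1,format=raw,id=drive-efidisk0,size=131072,file=/dev/zvol/rpool/data/vm-100-disk-1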
we also have to use the correct size when converting between storages
in 'clone_disk' (used for move disk and cloning vms) and when
live migrating to different storages
when we now expect that the source image is always correctly used/created
(e.g. raw with size=x in pflash argument) then we always create the
target correctly
when encountering users which have a non-valid image (e.g. an efidisk
moved from zfs to qcow2 before this patch), we have to tell them to
recreate the efidisk and the settings on it
we have to version_guard it to 4.1+pve2 (since we haven't bumped yet
since the change to pve2)
also add 2 tests, one for the old version and one for the new
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
[ Thomas: rebased to master ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
since bitmaps are set early on, and 'qm start' potentially has allocated
the disks but still failed. we can only clean up what we know about
anyway, so the disk part is still only best effort.
also use replicated_volumes instead of bitmap existence to check for
replicated volumes, since 'qm start' on an old node that does not
understand replicated volumes might have allocated a new volume that we
DO want to clean up, and not skip.
also cleanup disks after stopping target VM, otherwise we might end up
in a situation where the target VM is still running and using the disks,
thus blocking the disk cleanup.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
by only checking for replicatable volumes when a replication job is
defined, and passing only actually replicated volumes to the target node
via STDIN, and back via STDOUT.
otherwise this can pick up theoretically replicatable, but not actually
replicated volumes and treat them wrong.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
fixes commit 0b2f574b4c
enforce_vm_running_for_backup now has no return value; for the PBS
case I forgot to remove a now outdated call to handle_vm_powerstate, drop
that.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
$cpu_fmt is being reused for custom CPUs as well as VM-specific CPU
settings. The "pve-vm-cpu-conf" format is introduced to verify a config
specifically for use as VM-specific settings.
"pve-cpu-conf" is registered for use in custom CPU API calls (where no
additional checks are required).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Turn CPUConfig into a SectionConfig with parsing/writing support for
custom CPU models. IO is handled using cfs.
Namespacing will be provided using "custom-" prefix for custom model
names (in VM config only, cpu-models.conf will contain unprefixed
names).
Includes two overrides to avoid writing redundant information to the
config file, additionally get_custom_model is used to retrieve a custom
model configuration by name.
Resolve custom names in print_cpu_device when a custom cpu is passed.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
There is a need to set $noerr, because otherwise migration for a
VM with a non-replicatable volume fails with:
missing replicate feature on volume 'myfs:107/vm-107-disk-2.raw'
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
as we need at least pve-qemu 4.2 for this to work, the target side
is implicitly checked with the "too old version" check for migrate, or the
mirror will fail anyway.
Just use the simple "qemu binary version check", as we could still
live migrate an older snapshot with older machine versions if both
sides have a recent enough qemu.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
with incremental drive-mirror and dirty-bitmap tracking.
1.) get replicated disks that are currently referenced by running VM
2.) add a block-dirty-bitmap to each of them
3.) replicate ALL replicated disks
4.) pass bitmaps from 2) to drive-mirror for disks from 1)
5.) skip replicated disks when cleaning up volumes on either source or
target
added error handling is just removing the bitmaps if an error occurs at
any point after 2, except when the handover to the target node has
already happened, since the bitmaps are cleaned up together with the
source VM in that case.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Tested-by: Stefan Reiter <s.reiter@proxmox.com>
to make migration logs a bit easier to grasp with a quick glance.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Stefan Reiter <s.reiter@proxmox.com>
by re-using a dirty bitmap that represents changes since the divergence
of source and target volume. requires a qemu that supports incremental
drive-mirroring, and will die otherwise.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Stefan Reiter <s.reiter@proxmox.com>
Moved code so that the initialization of drivedesc_hash stays a single block.
Avoid auto-vivification in parse_drive.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
E.g.: If a feature requires 4.1+pveN and we're using machine version 4.2
we don't need to increase the pve version to N (4.2+pve0 is enough).
We check this by doing a min_version call against a non-existent higher
pve-version for the major/minor tuple we want to test for, which can
only work if the major/minor alone is high enough.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Clarify why a cancel is actually not really canceling here, because
we're already finished with storage migration and the block jobs are
all in ready state and we (source) are going to stop soon to hand
over to target.
> Note that if you issue 'block-job-cancel' after 'drive-mirror' has
> indicated (via the event BLOCK_JOB_READY) that the source and
> destination are synchronized, then the event triggered by this
> command changes to BLOCK_JOB_COMPLETED, to indicate that the
> mirroring has ended and the destination now has a point-in-time
> copy tied to the time of the cancellation
-- qapi/block-core.json (QEMU 4.2)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The change to the prefixed version broke migration from new to old
qemu-server version. This reverts the change and adds a TODO comment for
7.0 to change it to the prefixed version then.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
...instead of booting with an invalid config once and then silently
changing the memory size for consequent VM starts.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Tested-by: Alwin Antreich <a.antreich@proxmox.com>
This cannot work, since we adjust the 'memory' property of the VM config
on hotplugging, but then the user-defined NUMA topology won't match for
the next start attempt.
Check needs to happen here, since it otherwise fails early with "total
memory for NUMA nodes must be equal to vm static memory".
With this change the error message reflects what is actually happening
and doesn't allow VMs with exactly 1GB of RAM either.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Tested-by: Alwin Antreich <a.antreich@proxmox.com>
The reuse of the tunnel, which we're opening to communicate with the target
node and to forward the unix socket for the state migration, for the NBD unix
socket requires adding support for an array of sockets to forward, not just a
single one. We also have to change the $sock_addr variable to an array
for the cleanup of the socket file as SSH does not remove the file.
To communicate to the target node the support of unix sockets for NBD
storage migration, we're specifying an nbd_protocol_version which is set
to 1. This version is then passed to the target node via STDIN. Because
we don't want to be dependent on the order of arguments being passed
via STDIN, we also prefix the spice ticket with 'spice_ticket: '. The
target side handles both the spice ticket and the nbd protocol version
with a fallback for old source nodes passing the spice ticket without a
prefix.
All arguments are line based and require a newline in between.
When the NBD server on the target node is started with a unix socket, we
get a different line containing all the information required to start
the drive-mirror. This contains the unix socket path used on the target node
which we require for forwarding and cleanup.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
For secure live migration with local disks via NBD over a unix socket,
we have to somehow communicate from the source node to the target node
if it supports it. This is because there can only be one NBD server with
exactly one socket bound.
The source node passes that information via STDIN. Support for
'spice_ticket: (...)' is added in addition to 'nbd_protocol_version:
<version>'. As old source nodes send the spice ticket without a prefix,
we still have to have a fallback for this case. New information should
always be passed via a prefix that is matched, otherwise it will be
recognized as spice ticket.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
As the NBD server spawned by qemu can only listen on a single socket,
we're dependent on a version being passed to vm_start that indicates
which protocol can be used, TCP or Unix, by the source node.
The change in socket type (TCP to Unix) comes with a different URI. For
unix sockets it has the form: 'nbd:unix:<path/to/socket>:exportname=<device>'.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
With Qemu 4.2 we encountered a problem with unix sockets and SSH socket
forwarding for drive-mirror. It seems the socket gets reopened again and
again after it closes for some reason. This can be worked around by
specifying 'block-job-cancel' instead of 'block-job-complete' when we're
not interested in swapping the disks again from NBD to their original
protocol. This is always the case when we use drive-mirror for live
migrating a VM.
qemu_drive_mirror is used for migration and for clone_disk. All in all
we have 3 cases to handle. Either the 'skip' case which skips the
completion of the job. The 'wait' case which was the default before and
still is when $completion is undefined. And the new 'wait_noswap' case
which is used for the live migration.
If 'wait_noswap' is specified, we issue a 'block-job-cancel' once the block
job is in 'ready' state. This completes the block job without swapping the
disks.
clone_disk always uses 'block-job-cancel' via the qemu_blockjobs_cancel
sub.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
and make it match with what parse_drive does. Even though the 'real' format
was pve-volume-id, callers already expected that parse_drive returns a hash
with a valid 'file' key (e.g. PVE/API2/Qemu.pm:1147ff).
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Since macOS Mojave, Apple ships AppleQEMUGuestAgent by default.
However, it does not fully adhere to the QGA specs, as it expects each
command to be newline delimited.
This makes each command newline delimited, which is harmless for
all other systems (Windows, Linux), but enables the guest agent by default
without any changes on OSX.
Signed-off-by: Kamil Trzcinski <ayufan@ayufan.eu>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
avoids a genisoimage output like:
> Total translation table size: 0
> Total rockridge attributes bytes: 417
> Total directory bytes: 0
> Path table size(bytes): 10
> Max brk space used 0
> 178 extents written (0 MB)
on every VM start.
Rather than that useless output, tell genisoimage to be quiet, which
still prints errors but nothing else. Additionally print a short
single line noting that we're creating the cloud-init ISO.
Reformat while at it.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
removes safe_string_ne and safe_num_ne code which is now shared in
GuestHelpers. also change all the calls to use the shared definitions.
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
This fixes an issue when migrating a VM with an unused volume with format
qcow2 or vmdk. Since 'snapshots' wasn't set, storage_migrate wanted to
export/import with format raw+size instead. Therefore it used (instead of
just 'dd') 'qemu-img convert', which fails when its output leaves through
a pipe. Upon importing, a second error is present, because the format from
the volume ID doesn't match the format of the stream and there is no
conversion yet.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
LGTM-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
If for whatever reason there is no size in the property string
of a drive, 'qm rescan' would do nothing for that drive and
live migration would also fail.
Also adds a check to avoid potential auto-vivification of volid_hash->{$volid}
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The initialization for the drive keys in $confdesc is changed
to be a single for-loop iterating over the keys of $drivedesc_hash, and
the initialization of the unusedN keys is moved to directly below it.
To avoid the need to change all the call sites, functions with more than
a few callers are exported from the submodule and imported into QemuServer.pm.
For callers of the now imported functions within QemuServer.pm, the prefix
PVE::QemuServer is dropped, because it is unnecessary and now even confusing.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
which contains the full descriptions of the drives, and
make parse_drive not depend on $confdesc anymore.
In preparation to moving drive-related code to its own module.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Allow a user to add a virtio-rng-pci (an emulated hardware random
number generator) to a VM with the rng0 setting. The setting is
version_guard()-ed.
Limit the selection of entropy source to one of three:
/dev/urandom (preferred): Non-blocking kernel entropy source
/dev/random: Blocking kernel source
/dev/hwrng: Hardware RNG on the host for passthrough
QEMU itself defaults to /dev/urandom (or the equivalent getrandom()
call) if no source file is given, but I don't fully trust that
behaviour to stay constant, considering the documentation [0] already
disagrees with the code [1], so let's always specify the file ourselves.
/dev/urandom is preferred, since it prevents host entropy starvation.
The quality of randomness is still good enough to emulate a hwrng, since
a) it's still seeded from the kernel's true entropy pool periodically
and b) it's mixed with true entropy in the guest as well.
Additionally, all sources about entropy prediction attacks I could find
mention that to predict /dev/urandom results, /dev/random has to be
accessed or manipulated in one way or the other - this is not possible
from a VM however, as the entropy we're talking about comes from the
*host's* blocking pool.
More about the entropy and security implications of the non-blocking
interface in [2] and [3].
Note further that only one /dev/hwrng exists at any given time, if
multiple RNGs are available, only the one selected in
'/sys/devices/virtual/misc/hw_random/rng_current' will feed the file.
Selecting this is left as an exercise to the user, if at all required.
We limit the available entropy to 1 KiB/s by default, but allow the user
to override this. Interesting to note is that the limiter does not work
linearly, i.e. max_bytes=1024/period=1000 means that up to 1 KiB of data
becomes available on a 1000 millisecond timer, not that 1 KiB is
streamed to the guest over the course of one second - hence the
configurable period.
The default used here is the same as given in the QEMU documentation [0]
and has been verified to affect entropy availability in a guest by
measuring /dev/random throughput. 1 KiB/s is enough to avoid any
early-boot entropy shortages, and already has a significant impact on
/dev/random availability in the guest.
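For example, the defaults described above would roughly correspond to a
config line like (syntax as assumed here):
    rng0: source=/dev/urandom,max_bytes=1024,period=1000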
[0] https://wiki.qemu.org/Features/VirtIORNG
[1] https://git.qemu.org/?p=qemu.git;a=blob;f=crypto/random-platform.c;h=f92f96987d7d262047c7604b169a7fdf11236107;hb=HEAD
[2] https://lwn.net/Articles/261804/
[3] https://lwn.net/Articles/808575/
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
The http-server has a 64KB payload limit for POST requests, so note
that explicitly, even if it's a theoretical maximum, as the remaining
params also need some space in the request
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
'input-data' can be used to pass arbitrary data to a guest when running
an agent command with 'guest-exec'. Most guest-agent implementations
treat this as STDIN to the command given by "path"/"arg", but some go as
far as relying solely on this parameter, and even fail if "path" or
"arg" are set (e.g. Mikrotik Cloud Hosted Router) - thus "command" needs
to be made optional.
Via the API, an arbitrary string can be passed; on the command line ('qm
guest exec'), an additional '--pass-stdin' flag allows forwarding STDIN
of the qm process to 'input-data', with a size limitation of 1 MiB to
not overwhelm QMP.
Without 'input-data' (API) or '--pass-stdin' (CLI) behaviour is unchanged.
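A hypothetical invocation (argument order and paths purely illustrative):
    cat local.conf | qm guest exec 100 --pass-stdin 1 -- /bin/sh -c 'cat > /tmp/remote.conf'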
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
1. Avoids the error
"VM 111 qmp command 'block_resize' failed - The new size must be a multiple of 512"
for qcow2 disks.
2. Because volume_import expects disk sizes to be a multiple of 1 KiB.
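A sketch of the rounding, assuming the requested size is rounded up to the
next KiB boundary:
    $newsize = 1024 * int(($newsize + 1023) / 1024);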
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Machines running with SeaBIOS don't have the efidisk attached, so QEMU
cannot back it up and fails with "unknown drive".
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Some of the recent QMP changes require at least 2.8.0, but since the
oldest version we officially package for 6.x is 4.0.0 anyway, checking
for at least 3.0 should not break anyone's setup.
Note that this does not affect machine version checks, only the
installed QEMU binary version.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Live-migrating a VM with more than 14 SCSI disks to a node that doesn't
support it yet is broken. Use a bumped pve-version to represent that and
give the user a nice error message instead.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
The previously introduced approach can fail for pinned versions when a
new QEMU release is introduced. The saner approach is to use a mapping
that gives one pve-version for each QEMU release.
Fortunately, the old system has not been bumped yet, so we can still
change it without too much effort.
QEMU versions without a mapping are assumed to be pve0, 4.1 is mapped to
pve1 since that's what we had as our default previously.
Pinned machine versions (i.e. pc-i440fx-4.1) are always assumed to be
pve0, for specific pve-versions they'd have to be pinned as well (i.e.
pc-i440fx-4.1+pve1).
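A minimal sketch of the mapping idea (names simplified, not the exact
code):

    # map a QEMU release to the highest pve-version defined for it;
    # QEMU versions without an entry are assumed to be pve0
    my $PVE_MACHINE_VERSION = {
        '4.1' => 1,
    };

    sub get_pve_version {
        my ($verstr) = @_; # e.g. '4.1' or '4.1.1'
        if ($verstr =~ m/^(\d+\.\d+)/) {
            return $PVE_MACHINE_VERSION->{$1} // 0;
        }
        die "internal error: cannot parse version string '$verstr'\n";
    }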
The new logic also makes the pve-version dynamic, and starts VMs with
the lowest possible 'feature-level', i.e. if a feature is only available
with 4.1+pve2, but the VM isn't using it, we still start it with
4.1+pve0.
We die if we don't support a version that is requested from us. This
allows us to use the pve-version as live-migration blocks (i.e. bumping
the version and then live-migrating a VM which uses the new feature (so
is running with the bumped version) to an outdated node will present the
user with a helpful error message and fail instead of silently modifying
the config and only failing *after* the migration).
$version_guard is introduced in config_to_command to use for features
that need to check pve-version, it automatically handles selecting the
newest necessary pve-version for the VM.
Tests have to be adjusted, since all of them now resolve to pve0 instead
of pve1. EXPECT_ERROR matching is changed to use 'eq' instead of regex
to allow special characters in error messages.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Because of alignment and rounding in the storage backend, the effective
size might not match the 'newsize' parameter we passed along.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
query-cpus has been deprecated since 2.12.0 [0] in favor of
query-cpus-fast, which does not incur a performance penalty on the
guest. The returned information is the same as far as our use case
is concerned.
[0] https://qemu.weilnetz.de/doc/qemu-doc.html#Deprecated-features
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
migrate_set_downtime, migrate_set_speed and migrate-set-cachesize have
all been deprecated since 2.8 or 2.11 [0]. They still work, but there is
no reason not to use their current replacements.
Note that the downtime-limit parameter switched from seconds to
milliseconds, so convert to that. Slightly improve log output with units
while at it.
[0] https://qemu.weilnetz.de/doc/qemu-doc.html#Deprecated-features
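A rough sketch of the conversion (surrounding code simplified):

    # migrate_downtime is configured in seconds, 'migrate-set-parameters'
    # expects 'downtime-limit' in milliseconds
    my $downtime_ms = int($migrate_downtime * 1000);
    mon_cmd($vmid, 'migrate-set-parameters', 'downtime-limit' => $downtime_ms);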
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
'device' is deprecated since 2.8 in favor of 'id' [0], but since we
always consistently set the id on our drives anyway we can substitute it
easily.
[0] see files qapi/block.json and qapi/block-core.json in QEMU source
code, the online documentation doesn't mention it AFAICT
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
...and cleanup surrounding code a bit.
'change' is deprecated, and according to the qapi definition in QEMU it
is 'strongly recommended' to avoid using it.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
The description for vm_config was out of date and from the description
for vm_pending it was hard to tell what the difference to vm_config was.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
regression introduced with commit a85ff91b
previously we set $target to undef if it's localnode or localhost, then
we check if the node exists.
with the regression commit, the behaviour changed: we now do the node
check in the else branch, but $target may be undef. this causes an error:
no such cluster node ''
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
improved readability
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
to achieve this we have to add 3 new scsihw addresses since lsi
controllers can only hold 7 scsi drives
we go up to 31, since this is the limit for virtio-scsi-single devices
we have reserved (we can increase this in the future)
to make it more future proof, we add a new pci bridge under pci
bridge 1, so we have to adapt the bridge adding code (we did not
need this for q35 previously)
impact on live migration:
since on older versions of qemu-server we do not have those config
settings, there is no problem from old -> new
new->old is not supported anyway; with this change the vm crashes and
loses the configs for scsi15-30
(same behaviour as e.g. with audio0 and migration from new->old)
tested with 31 scsi disks on
i440fx + virtio-scsi
i440fx + lsi
q35 + virtio-scsi
q35 + lsi
with ovmf + seabios
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
and adapt the tests
this does not impact live migration, since the order here does not
change the device layout
we want this to consistently have the readconfig first
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
VM.Audit can see the current config and the list of snapshots
already, so there is no real reason to disallow viewing
the config of snapshots
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
from hotplug_pending we go into 'vmconfig_update_disk', where we check the
hotpluggability of options.
add 'ssd' there as a non-hotpluggable option (since we'd have to unplug/plug to
change the drive type)
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
The package will be used for custom CPU models as a SectionConfig, hence
the name. For now we simply move some CPU related helper functions and
declarations over from QemuServer to reduce clutter there.
Exports are to avoid changing all call sites, functions have useful
names on their own.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
As 'qemu_img_format' just matches a regex, this doesn't make much of
a difference, but AFAICT all other calls of 'qemu_img_format' use 'volname'.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
since we handle errors gracefully now, we don't need to write & save
config every time we change a setting.
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
get_basic_machine_info was removed by commit 045749f2fc.
Use get_host_arch to get the default machine type instead, and
optionally allow specifying the architecture as a parameter.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
This is the guarantee that this call operates on its created config.
A VMID cannot be reused after all. So only remove the guarantee at the
last step, just before throwing up the error message about the clone
failure.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We clone the source VM firewall config before forking the "realcmd"
worker, but did not mind cleaning it up again if the clone failed
somewhere in the worker.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
* query_understood_cpu_flags returns all flags that QEMU/KVM knows about
* query_supported_cpu_flags returns all flags that QEMU/KVM can use on
this particular host.
To get supported flags, a temporary VM is started with QEMU, so we can
issue the "query-cpu-model-expansion" QMP command. This is how libvirt
queries supported flags for its "host-passthrough" CPU type.
query_supported_cpu_flags is thus rather slow and shouldn't be called
unnecessarily.
Note that KVM and TCG accelerators provide different expansions for the
"host" CPU type, so we need to query both.
Currently only supports x86_64, because QEMU-aarch64 doesn't provide the
necessary querying functions.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
wrap around code which can possibly fail in evals to handle them
gracefully, and log errors.
note: this results in a change of behavior in the API. since errors
are handled gracefully instead of "die"ing, when there is a pending
change which cannot be applied for some reason, it will get logged in
the tasklog but the vm will continue booting regardless. the
non-applied change will stay in the pending section of the
configuration.
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
instead of writing the config after every change, we can do it once for
all the changes in the end to avoid redundant i/o.
we also don't need to load_config after writing fastplug changes.
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Do the same as for the "create" case, only trigger the "start after
create/restore" task after the locked "realcmd" was done. Else, the
start can never succeed, it also acquires a lock, but restore only
release it once outside of realcmd.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
bump versioned build-dependency, as qemu-server has tests checking
for errors, and we fixed a grammar error in pve-storage, so we need
the newer version to ensure our tests go through
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
run_command only passes defined and chomped strings to the callback,
so no need to do that twice.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
QEMU usually only prints warnings and errors and stays silent otherwise,
so it makes sense to just log all of its output.
Prefix it with '[<target_hostname>]' to indicate that the output is
coming from the remote node, so users know where to search for the
error.
Side effect is that the 'VM start' task created by the migration will
now show the "QEMU:" prefix, but it's still very readable IMHO.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
By default run_command prints the entire commandline executed when an
error occurs, but QEMU and our migrate command are not only
uninteresting to the user[*] but also annoyingly long. Hide them and only
print the exit code.
[*] Especially our migrate command, since it can't be manually executed
anyway. QEMU's commandline *might* contain something interesting, but is
so long that it's tricky to parse anyway, and a user can always call 'qm
showcmd --pretty'.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Split out 'update_disksize' from the renamed 'update_disk_config' to
allow code reuse in QemuMigrate.
Remove dots after messages to keep style consistent for migration log.
After updating in sync_disks (phase1) of migration, write out updated
config. This means that even if migration fails or is aborted in later
stages, we keep the fixed config - this is not an issue, as it would
have been fixed on the next attempt anyway, and it can't hurt to have
the correct size instead of a wrong one either way.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
only VM.PowerMgmt is not enough, since we allocate space on a storage,
so we need VM.Config.Disk on the vm and Datastore.AllocateSpace on the storage
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
if the user set a device as hostpci with the 'shorthand' syntax:
hostpciX: 00:12
we ignored it on starting and showcmd and continued.
Since the user explicitly wanted to passthrough a device, we now check
if there is actually a device with that id
for explicitly configured devices (00:12.1), we did not check if it exists,
but the kvm call failed with a non-obvious error message
now we always call 'lspci' from SysFSTools to check if it actually exists,
and fail if not. With this, we can drop the workaround for adding
'0000' if no domain was given, since lspci does it already for us
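For example (device address made up), both forms now resolve to a fully
qualified address and are verified via lspci before the VM is started:

    hostpci0: 00:12            (shorthand, normalized to 0000:00:12)
    hostpci0: 0000:00:12.1     (explicit function, domain included)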
this fixes #2510, an issue with using mediated devices where the users did not
have the domain in the config, since we forgot to add the default domain there
the only issue with this patch is that it changes the behaviour of
'showcmd' slightly, as in now, we die if the device was explicitly
given but does not exist (we showed the commandline, now we fail)
this also slightly changes the commandline for qemu (adding always
the domain), which is not a problem since we cannot live migrate
or snapshot such vms, but we have to adapt the tests
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
some storage backends have bigger granularity than the default 128k
size from the EFIVARS template file, so we actually need to poll the
real created disk size, as it will be used to create the target
volume for local storage migration on running VMs; if it's too small
the target will be too small and migration will fail.
Just a fix for newly created EFIDISKS, for others we need to rescan
the size after we've got the migrate lock and write the updated info
out, so that the target node has the correct one (protected from
migrate lock).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Sometimes, a user wants to remove the 'suspended' state without
resuming the vm from that state. Since the vm is locked with
'suspended', this was not possible without help from root@pam
This patch allows deleting the vmstate and the suspended lock and the
related config entries with it. The user still has to have the right
privileges and the vm cannot be 'protected' for this to work
Inspired-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we did not actually delete the state if we deleted the 'vmstate' config,
leaving stray vmstates on the disks
actually implement the removal, requiring 'VM.Config.Disk' and
'VM.PowerMgmt' privs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
if a user removed the vmstate from the config for whatever reason,
a vmstart did not remove the 'suspended' lock
so always delete it and delete the vmstate only if it really was there
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
while it's a disk from our storage POV, in QEMU it's a pflash, and
those cannot be hot-plugged
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
With our QEMU 4.1.1 package we can pass an additional internal version
to QEMU's machine, it will be split out there and ignored, but
returned on a QMP 'query-machines' call.
This allows us to use it for increasing the granularity with which we
can roll-out HW layout changes/additions for VMs. Until now we
required a machine version bump, happening normally every major
release of QEMU, with seldom, for us irrelevant, exceptions.
This often delays rolling out a feature, which would break
live-migration, by several months. That can now be avoided, the new
"pve-version" component of the machine can be bumped at will, and
thus we are much more flexible.
That version orders after the ($major, $minor) version components
from a stable release - it can thus also be reset on the next
release.
The implementation extends the qemu-machine REGEX, remembers
"pve-version" when doing a "query-machines" and integrates support
into the min_version and extract_version helpers.
We start out with a version of 1.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
if we don't know which format the source volume/file has, let qemu-img
decide.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
or any other variant of the word 'pending'.
note that we can actually allow this snapshot after PVE 7.0, since
pending section and snapshots will be properly namespaced.
([pve:pending] and [snap:$snapname] or similar).
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
'pve-qm-machine' is auto-registered, but for re-use for a new
runningmachine we added the newer pve-qemu-machine standard option.
Use that one to avoid confusion.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
With the noerr flag set in parse_volume_id we have to check if
$volname is defined before comparing it to 'cloudinit'.
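A sketch of the guarded check (simplified):

    # with the noerr flag set, parse_volume_id returns undef instead of dying
    my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid, 1);
    if (defined($volname) && $volname =~ m/cloudinit/) {
        # handle the cloudinit disk
    }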
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
this is useful as meta information for e.g., provisioning or config
management systems
adding the info also to the 'status' api call to make it easier to show
it in the gui
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
...into:
* PVE::QemuServer::Helpers::min_version: check a major.minor version
string with a given major/minor version (this is equivalent to calling
the old qemu_machine_feature_enabled with only $kvmver)
* PVE::QemuServer::Machine::extract_version: get major.minor version
string from arbitrary machine type (e.g. pc-q35-4.0, ...)
* PVE::QemuServer::Machine::machine_version: helper to call
extract_version automatically before min_version
Includes a cfg2cmd test case with pinned machine version.
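Usage then looks roughly like this (illustrative only):

    # check a plain major.minor version string
    if (PVE::QemuServer::Helpers::min_version($kvmver, 4, 1)) {
        # feature needs QEMU >= 4.1
    }

    # derive the version from a machine type first
    my $v = PVE::QemuServer::Machine::extract_version('pc-q35-4.0'); # '4.0'

    # or combine both steps
    if (PVE::QemuServer::Machine::machine_version($machine_type, 4, 0)) {
        # feature needs machine version >= 4.0
    }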
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
...PVE::QemuServer::Machine.
qemu_machine_feature_enabled is exported since it has a *lot* of users
in PVE::QemuServer and a long enough name as it is.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
QMP and monitor helpers are moved from QemuServer.pm.
By using only vm_running_locally instead of check_running, a cyclic
dependency to QemuConfig is avoided. This also means that the $nocheck
parameter serves no more purpose, and has thus been removed along with
vm_mon_cmd_nocheck.
Care has been taken to avoid errors resulting from this, and
occasionally a manual check for a VM's existence was inserted at the
call site.
Methods have been renamed to avoid redundant naming:
* vm_qmp_command -> qmp_cmd
* vm_mon_cmd -> mon_cmd
* vm_human_monitor_command -> hmp_cmd
mon_cmd is exported since it has many users. This patch also changes all
non-package users of vm_qmp_command to use the mon_cmd helper. Includes
mocking for tests.
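At a call site the rename looks like this (illustrative; mon_cmd is
exported, so the fully qualified name is optional):

    # before
    PVE::QemuServer::vm_mon_cmd($vmid, 'query-status');
    # after
    mon_cmd($vmid, 'query-status');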
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
vm_exists_on_node in PVE::QemuConfig checks if a config file for a vmid
exists
vm_running_locally in PVE::QemuServer::Helpers checks if a VM is running
on the local machine by probing its pidfile and checking /proc/.../cmdline
check_running is left in QemuServer for compatibility, but changed to
simply call the two new helper functions.
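A sketch of the resulting wrapper (simplified, ignoring optional
parameters):

    sub check_running {
        my ($vmid) = @_;

        # die early if there is no config for this vmid on the node
        die "VM $vmid does not exist\n"
            if !PVE::QemuConfig::vm_exists_on_node($vmid);

        return PVE::QemuServer::Helpers::vm_running_locally($vmid);
    }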
Both methods are also correctly mocked for testing snapshots.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
parse_cmdline is required for upcoming changes related to custom CPU
types and live migration, and this way we can re-use existing code.
Provides the necessary infrastructure to parse QEMU /proc/.../cmdline.
Changing the single user (check_running) is trivial too.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Also remove unused $confdir variable in QemuConfig, but leave it and
$lock_dir there, since those paths should only be used with
cfs_config_path anyway.
nodename() is still called in multiple places, but since it's cached by
INotify it doesn't really matter.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
found no use with tree-wide search, so remove:
* nic_models
* os_list_description
Both were introduced before the import to SVN happened.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Our code behaves like either l26 or other when the ostype is
undefined; neither is common, as our webinterface _always_ sets the
ostype.
If one configured QXL as a VM's output device but did not set an
ostype, and that worked without "max_outputs=4", it really should
work without it too.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
with pve-qemu-4.0.1-3 or higher it was not possible in a spice remote
session to enable more displays on the fly in linux guests.
Adding the `max_outputs` parameter to the qxl device manually restores
the functionality.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
While we may not want to copy the cloudinit disk/drive, we still need
to create+allocate the volume, else the next start complains about a
missing CI drive..
fixes commit 7d6c99f0a0.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This adds an extra field to agent_fmt that specifies the type of guest
agent connection to use. Currently there is no choice and it defaults to
virtio-serial. Since qemu-ga also runs over isa-serial, this allows OSes
such as NetBSD and OpenBSD, which do not have support for virtio-serial,
to run a qemu-ga.
This is an optional field, which leaves the default as virtio-serial. As
it doesn't change the default, it will require no change to older
configuration files.
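In config terms this could look like the following (the exact value
names for the type field are defined by agent_fmt; shown here for
illustration only):

    agent: 1                 (default, virtio-serial transport)
    agent: 1,type=isa        (for guests without virtio-serial support)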
Signed-off-by: Matt Dunwoodie <ncon@noconroy.net>
Show storages configured for the target node and not for the current one
because they can be different.
Duplicated the `complete_storage` sub and extended it to extract the
targetnode from the parameters to pass it into the storage_check_enabled
function.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
since PVE::Cluster::get_local_migration_ip does not exist anymore. this
is basically an inlined version, since this is the only remaining caller
that we actually want to keep.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
We are in the QemuServer package not in LXC, so use the correct
package for the Config, namely QemuConfig
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the reboot request is only cleaned in the vm_start path, so if reboot
fails for some reason, the request still exists. this causes an
unintentional reboot when a shutdown/stop/hibernate is called.
to mitigate, we can just clear the reboot request in case of an error.
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
SHA-512 crypted passwords are longer than 64 bytes, and it also does
not make sense to limit passwords to such a short length. Increase
to 1024, that should be enough for a while, but still limits maximal
password payload to avoid DOS or the like.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This brings qemu more in line with containers, and it's nicer to
allow passing the replacement config if we want to keep it, instead
of setting a "memory: 128" config.
Use that to lock it on removal before final deletion, and on legacy
tar archive restore, in between old VM destruction and new
restoration.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
it has some potential semantic change too, i.e., the Storage
vdisk_list call is not wrapped by eval anymore, but as
we did some (unguarded) storage things before that call I'd say that
this does not matter much..
We try to clean all unused disks too, even if one deletion fails
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When calling qmrestore a config file is created and locked with a lock
property. The following destroy_vm has been impossible as skiplock has not
been set.
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
Explicitly close leftover connections in the destructor,
otherwise the IO::Multiplex instance can be leaked causing
the qmp connection to never be closed.
This could occur for instance when cancelling vzdump with
ctrl+c with extremely unlucky timing...
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
as else one has no idea what the imported disk is, especially if
multiple unused disks are already present..
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this mixes indentation changes with a whole lot of other changes, which
normally should not be mixed together too much, but this is all a bit
tangled and I'm not sure if splitting it into two or three parts
would help anybody.. just use "-w" (ignore whitespace changes) when
looking at the diff..
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Previously a VMID conflict was possible when creating a VM on another node
between locking the config with lock_config_full and writing to it for the
first time with write_config.
Using create_and_lock_config eliminates this possibility. This means that now
the "lock" property is set in the config instead of using flock only.
$param was empty when it was assigned the three values "name", "memory" and
"cores" before being assigned to $conf later on. Assigning those values
directly to $conf avoids confusion about what the two variables contain.
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
Functions like qm importovf can now set the "lock" property in a config file
before calling do_import.
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
This function has been used in one place only into which we inlined its
functionality. Removing it avoids confusion between vm_destroy and destroy_vm.
The whole $importfn is executed in a lock_config_full.
As a consequence, for the inlined code:
1. lock_config is redundant
2. it is not possible that the VM has been started (check_running) in the
meanwhile
Additionally, it is not possible that the "lock" property has been written into
the VM's config file (check_lock) in the meanwhile
Add warning after eval so that it does not go unnoticed if it ever comes into
action.
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
The codepath for "any" hugepages did not check if memory size was even,
leading to the code below trying to allocate half a hugepage (e.g. VM
with 2049MiB RAM would lead to 1024.5 2kB hugepages).
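A minimal sketch of the added guard (variable name hypothetical):

    # with 2MB hugepages the memory size in MiB must be even,
    # otherwise we would need half a hugepage
    die "memory size '$memory' MiB is not an even number\n"
        if ($memory % 2) != 0;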
Also improve error message for systems with only 1GB hugepages enabled.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
with qemu 4.0.1, there is now a machine type pc-q35-4.0.1 which does not fit
into our regex
this broke live migration of q35, as we give the machine type (incl version
info) to 'qm start' on the target node, which checks it against the
JSONSchema
to fix this, extend the regex to allow any number of version levels,
for q35, i440fx and virt (to be more future proof)
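A simplified sketch of the relaxed pattern (not the exact schema
string):

    # before: exactly one 'major.minor' suffix, e.g. pc-q35-4.0
    # after:  any number of version levels,    e.g. pc-q35-4.0.1
    my $machine_re = qr/^(pc-(i440fx|q35)|virt)-(\d+)(\.\d+)+$/;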
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
As mentioned on the mailing list [0] disks owned by the VM and unused
disks should be removed before the config file is removed.
[0] https://pve.proxmox.com/pipermail/pve-devel/2019-October/039593.html
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
reverting a nonexistent option did not work with the latest changes
in pve-guest-common, because we do not delete the pending option
in 'add_to_pending_delete' anymore
this had the effect that we had following in the config:
[pending]
option: pendingvalue
delete: option
which would do the deletion code and the pending add code
(e.g. delete the pending cloud init drive and creating it again)
to avoid that situation, we need to remove the option from the pending hash
in the 'delete loop'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
As mentioned in #2408, live-migrating a VM between storages that use
different scsi backends (scsi-hd, scsi-generic, scsi-block) breaks.
To fix, from QEMU 4.1 machine types onward (to not break current
behaviour any more), only use scsi-hd, as in recent versions, there is
almost no difference between the two anyway.
scsi-block (which potentially also breaks) requires a flag to be
manually set on the disk, so we can assume the user knows what they're
doing.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Suggested-by: Daniel Berteaud <daniel@firewall-services.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Each pending option has a hash value which has the 'force'
information encoded as an entry. But this can be { force => 1 } or
{ force => 0 }, so we actually need to check the value and not just
assign the hash to force directly, as otherwise force is always truthy..
fixes a bug where 'detach' caused disks to be destroyed immediately,
because the $force parameter was always true since a hash ref is true.
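In code terms the fix is roughly (variable names hypothetical):

    foreach my $opt (keys %$pending_delete_hash) {
        # wrong: a hash reference is always truthy
        #   my $force = $pending_delete_hash->{$opt};
        # right: look at the actual flag stored in the entry
        my $force = $pending_delete_hash->{$opt}->{force};
        # handle the deletion, honoring $force
    }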
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We cannot activate a path, only volume IDs with activate_volumes
(duh)
fixes commit 5c1d42b7f8
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
With the changes to pve-storage in commit 56362cf the startup hangs for
5 minutes on ZFS if the cloudinit disk does not exist. Instead of
calling activate_volume followed by file_size_info we now call
volume_size_info. This should work reliably on all storages that support
cloudinit disks.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
When adding a cloudinit disk it does not contain media=cdrom until it is
actually created. This means the check in check_replication fails to
detect cloudinit and it is recognized as normal disk. Then parse_volname
fails because it does not match the vm-$vmid-XYZ format. To fix this we
now check explicitly if the volname matches cloudinit and if so, return
early.
Additionally, 2 small cleanups replace cloudinit regexes with the
same check for whether the volname matches cloudinit.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
When destroying a VM, we intentionally did not remove all related
configs such as backup or replication jobs.
The intention of this flag is to allow removing references to the
destroyed VM from such configs on destroy.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we can use the shared conf_table_with_pending guesthelper to produce the
config table with the extra delete and pending columns.
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
most of the pending changes related code has been moved into
AbstractConfig, so we have to call them as class methods from QemuConfig instead.
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
in config GET call, we can now use the new shared methods from
guest-common, namely load_current_config and load_snapshot_config.
the correct method is called depending on the parameters 'current' or
'snapshot'
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
brings us more in line with what we do in pve-container, also it's
good to not use file_set_contents directly if we have all those nice
wrapper interface methods to do things in a safe and guaranteed way.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Between calling vm_destroy and removing the ID from user.cfg (remove_vm_access)
creating a new VM with this ID was possible. VMs could go missing from pools as
a consequence.
Adding a lock solves this for clones from the same node. Additionally,
unlinking must happen at the very end of the deletion process to avoid that
other nodes use the ID in the meanwhile.
Co-developed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
So, while we could just make this a special case before the
config_to_command call and set $conf->{vmstate} to the statefile
for the case where it's a valid volume ID, the special case handling
gets much easier when we do this outside of that method.
So it's basically a trade-off, and after looking far too long at all the
nice revisions Alwin made for me and Fabian's request, and even trying
out different approaches, it was never perfect.
But having slight code duplication over the movement mess I proposed
(as I did not have the full picture then, sorry Alwin) felt like the
slightly nicer trade-off. As all approaches worked, I just use this one
now; it has very clear semantics, is easy to understand, and that three
lines are now duplicated is IMO irrelevant.
Co-developed-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the volume id list is only used to activate before real start and
deactivate later, so use it for the vmstate file too.
This not only makes config_to_command have less side effects, it also
ensures that the vmstate is deactivated again
Co-developed-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and use it also for efidisk creation and importdisk
this way we correctly handle zfs-over-iscsi options for those cases
also write tests for it
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
As reported in bug #2402, a system started with "default_hugepagesz=1G
hugepagesz=1G" does not have a /sys/kernel/mm/hugepages/hugepages-2048kB
directory.
To fix, ignore the missing directory in hugepages_mount (since it might
not be needed anyway), and correctly check if the requested hugepage
size is available in hugepages_size instead.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
In general it matters where a command line option is positioned
inside a QEMU command, so we want to actually also check the order in
the cfg2cmd test; to do so we need to avoid false positives like this
one.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thanks to Gilberto Nunes for finding a bug where the VM would not start
with foldersharing enabled and the qemu agent option disabled [0].
The cause was that the device org.spice-space.webdav.0 would not find a
virtio-serial-bus in this situation.
Since we always create a virtio-serial-bus for the spice vdagent it
seems sensible to use that also for the foldersharing device by moving
it in front of the other spice devices.
[0]: https://pve.proxmox.com/pipermail/pve-devel/2019-October/039441.html
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
This removes the cloudinit disk from the list of drives to clone. As the
cloudinit disk is recreated on every VM start, it's not necessary to
clone it.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
The reason why we did not do this in the first place is that
the "usb3" flag could be set in older qemu-server versions; we
just ignored it but did not filter it out of the config..
That means there can be VMs out there which would now get a
different HW layout, an issue for migration and live-snapshot
restore.
But, actually, while the "usb3" property could be set it allowed to
start the VM in only if an additional USB devices was added to the VM
with USB2, or the VM uses "q35" based machine - as else no "ehci" was
available, and thus the "ignored" USB3 - SPICE could not get attached
anywhere -> QEMU chickened out.
And if a user had a configuration where this could start, we were
still a bit lucky: live-migration was not possible as the "can't
migrate VM which uses local devices:" check still hit, as in
qemu-server older than 6.0-8 we explicitly checked for "spice" when
seeing which usb devices were not local, so a "spice,usb3=X" was always
(luckily) wrongly detected as a local device -> migration was blocked.
So we only have one case left: restoring a live-snapshot. Here, sadly,
there seems to be no way out: it was possible to do with a "spice,usb3=1"
usb device, and thus all snapshots taken on such VMs after they had a
clean restart on PVE 6 (to have a machine version >= 4.0) are broken
- but they can be easily fixed by removing the "usb3=1" from the
problematic snapshot config.
As restoring a snapshot can be repeated more than once even on
failure without rendering the snapshot or VM permanently unusable,
this should be a reasonable compromise.
I strongly believe that the chance is so small that no one is
affected in practice and the property description mentioned that it
was not supported. If anybody is affected on snapshot restore we can
help them on a case-per-case basis.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The fix introduced in commit bf4a933 did not work as intended. We're
iterating over the $oldconf, not over $virtdev_hash. This means
$drive->{is_cloudinit} is always undefined. Instead use the $exclude_cloudinit
parameter from drive_is_cdrom().
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>