both were previously missing. the existing 'check_storage_access'
helper is not applicable here since it operates on a full set of VM
config options, not just storage IDs.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
the syntax is backwards compatible: providing a single storage ID or '1'
works like before. the new helper ensures consistent behaviour at all
call sites.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
as preparation for refactoring it further. remote migration will add
another 1-2 parameters, and it is already unwieldy enough as it is.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
on storages where the minimum size of images is bigger than the real
OVMF_VARS.fd file, they get padded to their minimum size
when using such an image, qemu maps it fully to the vm, but the efi
does not find the vars region and creates a file on the first efi
partition it finds
this breaks some settings in the ovmf, such as resolution
to fix this, we have to specify the size for the pflash, so that
qemu only maps the first n bytes in the vm (this only works for
raw files, not for qcow2)
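for illustration, the resulting pflash drive argument then looks roughly
like this (path and size values here are made up; the real size is taken
from the actual OVMF_VARS.fd file):

    -drive if=pflash,unit=1,format=raw,id=drive-efidisk0,size=131072,file=/dev/zvol/rpool/data/vm-100-disk-1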
we also have to use the correct size when converting between storages
in 'clone_disk' (used for move disk and cloning vms) and when
live migrating to different storages
if we now expect that the source image is always correctly used/created
(e.g. raw with size=x in the pflash argument), then we also always create
the target correctly
when encountering users who have an invalid image (e.g. an efidisk
moved from zfs to qcow2 before this patch), we have to tell them to
recreate the efidisk and the settings on it
we have to version_guard it to 4.1+pve2 (since we haven't bumped yet
since the change to pve2)
also add 2 tests, one for the old version and one for the new
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Stefan Reiter <s.reiter@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
[ Thomas: rebased to master ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
by only checking for replicatable volumes when a replication job is
defined, and passing only actually replicated volumes to the target node
via STDIN, and back via STDOUT.
otherwise this can pick up theoretically replicatable, but not actually
replicated, volumes and treat them incorrectly.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
For secure live migration with local disks via NBD over a unix socket,
we have to somehow communicate from the source node to the target node
if it supports it. This is because there can only be one NBD server with
exactly one socket bound.
The source node passes that information via STDIN. Support for
'spice_ticket: (...)' is added in addition to 'nbd_protocol_version:
<version>'. As old source nodes send the spice ticket without a prefix,
we still need a fallback for this case. New information should always be
passed with a prefix that is matched; otherwise it will be treated as the
spice ticket.
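A rough sketch of how the target node could parse such prefixed STDIN
lines, with the unprefixed fallback for old source nodes (variable names
are illustrative, not the actual migration code):

    my ($spice_ticket, $nbd_protocol_version);
    while (defined(my $line = <STDIN>)) {
        chomp $line;
        if ($line =~ /^spice_ticket: (.+)$/) {
            $spice_ticket = $1;
        } elsif ($line =~ /^nbd_protocol_version: (\d+)$/) {
            $nbd_protocol_version = $1;
        } else {
            # old source node: the bare line is the spice ticket
            $spice_ticket = $line;
        }
    }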
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
With Qemu 4.2 we encountered a problem with unix sockets and SSH socket
forwarding for drive-mirror. It seems the socket gets reopened again and
again after it closes for some reason. This can be worked around by
specifying 'block-job-cancel' instead of 'block-job-complete' when we're
not interested in swapping the disks again from NBD to their original
protocol. This is always the case when we use drive-mirror for live
migrating a VM.
qemu_drive_mirror is used for migration and for clone_disk. All in all
we have 3 cases to handle: the 'skip' case, which skips the completion of
the job; the 'wait' case, which was the default before and still is when
$completion is undefined; and the new 'wait_noswap' case, which is used
for the live migration.
If 'wait_noswap' is specified, we issue a 'block-job-cancel' once the block
job is in 'ready' state. This completes the block job without swapping the
disks.
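Roughly, the completion handling once the job reports 'ready' looks like
this (simplified sketch using the mon_cmd helper for brevity, not the
actual monitoring code):

    # $completion: 'wait' (or undef, the old default), 'wait_noswap' or 'skip'
    if (!defined($completion) || $completion eq 'wait') {
        # swap the disks from NBD back to their original protocol
        mon_cmd($vmid, 'block-job-complete', device => $job_id);
    } elsif ($completion eq 'wait_noswap') {
        # live migration: finish mirroring without swapping the disks back
        mon_cmd($vmid, 'block-job-cancel', device => $job_id);
    } elsif ($completion eq 'skip') {
        # leave the job alone, the caller completes or cancels it later
    }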
clone_disk always uses 'block-job-cancel' via the qemu_blockjobs_cancel
sub.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
The initialization for the drive keys in $confdesc is changed
to be a single for-loop iterating over the keys of $drivedesc_hash and
the initialization of the unusedN keys is moved to directly below it.
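The result is roughly the following (simplified sketch; the names used
for the unusedN part are illustrative):

    # register every drive key (ide0, scsi0, virtio0, ...) in $confdesc
    for my $key (keys %$drivedesc_hash) {
        $confdesc->{$key} = $drivedesc_hash->{$key};
    }

    # initialization of the unusedN keys directly below it
    for (my $i = 0; $i < $max_unused_disks; $i++) {
        $confdesc->{"unused$i"} = $unused_drive_desc;
    }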
To avoid the need to change all the call sites, functions with more than
a few callers are exported from the submodule and imported into QemuServer.pm.
For callers of the now imported functions within QemuServer.pm, the prefix
PVE::QemuServer is dropped, because it is unnecessary and now even confusing.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Because of alignment and rounding in the storage backend, the effective
size might not match the 'newsize' parameter we passed along.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The description for vm_config was out of date, and from the description
for vm_pending it was hard to tell how it differs from vm_config.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
regression introduced with commit a85ff91b
previously we set $target to undef if it was localnode or localhost, and
then checked if the node exists.
with the regression commit, the behaviour changed: we now do the node
check in the else branch, but $target may be undef there. this causes an
error:
no such cluster node ''
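a minimal sketch of the intended behaviour (not the exact code from the
API handler):

    my $nodename = PVE::INotify::nodename();
    if ($target && ($target eq $nodename || $target eq 'localhost')) {
        # target is the local node: drop the explicit target again
        $target = undef;
    } elsif (defined($target)) {
        PVE::Cluster::check_node_exists($target);
    }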
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
improved readability
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
VM.Audit can see the current config and the list of snapshots
already, so there is no real reason to disallow access to the
config of snapshots
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This is the guarantee that this call operates on its created config.
A VMID cannot be reused after all. So only remove the guarantee at the
last step, just before throwing the error message about the clone
failure.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We clone the source VM firewall config before forking the "realcmd"
worker, but did not clean it up again if the clone failed somewhere in
the worker.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
wrap code which can possibly fail in evals to handle errors gracefully
and log them.
note: this results in a change of behavior in the API. since errors
are handled gracefully instead of "die"ing, when there is a pending
change which cannot be applied for some reason, it will get logged in
the tasklog but the vm will continue booting regardless. the
non-applied change will stay in the pending section of the
configuration.
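the pattern is roughly the following (illustrative; apply_pending_option
is a hypothetical helper standing in for the actual per-option code):

    eval {
        # try to apply a single pending change before the VM starts
        apply_pending_option($conf, $opt, $value);
    };
    if (my $err = $@) {
        warn "unable to apply pending change '$opt': $err";
        # the change stays in the [pending] section, the VM boots regardless
    }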
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Do the same as for the "create" case: only trigger the "start after
create/restore" task after the locked "realcmd" was done. Else, the
start can never succeed, as it also acquires a lock, but restore only
releases it once outside of realcmd.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
bump versioned build-dependency, as qemu-server has tests checking
for errors, and we fixed a grammar error in pve-storage, so we need
the newer version to ensure our tests go through
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
VM.PowerMgmt alone is not enough, since we allocate space on a storage,
so we also need VM.Config.Disk on the vm and Datastore.AllocateSpace on the storage
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Sometimes, a user wants to remove the 'suspended' state without
resuming the vm from that state. Since the vm is locked with
'suspended', this was not possible without help from root@pam.
This patch allows deleting the vmstate, the suspended lock and the
related config entries with it. The user still has to have the right
privileges and the vm cannot be 'protected' for this to work
Inspired-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we did not actually delete the state if we deleted the 'vmstate' config,
leaving stray vmstates on the disks
actually implement the removal, requiring 'VM.Config.Disk' and
'VM.PowerMgmt' privs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
'pve-qm-machine' is auto-registered, but for re-use by the new
'runningmachine' option we added the newer pve-qemu-machine standard
option. Use that one to avoid confusion.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
With the noerr flag set in parse_volume_id we have to check if
$volname is defined before comparing it to 'cloudinit'.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
this is useful as meta information for e.g. provisioning or config
management systems.
also add the info to the 'status' api call to make it easier to show
it in the gui.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
QMP and monitor helpers are moved from QemuServer.pm.
By using only vm_running_locally instead of check_running, a cyclic
dependency on QemuConfig is avoided. This also means that the $nocheck
parameter no longer serves a purpose, and has thus been removed along
with vm_mon_cmd_nocheck.
Care has been taken to avoid errors resulting from this, and
occasionally a manual check for a VM's existence was inserted at the
call site.
Methods have been renamed to avoid redundant naming:
* vm_qmp_command -> qmp_cmd
* vm_mon_cmd -> mon_cmd
* vm_human_monitor_command -> hmp_cmd
mon_cmd is exported since it has many users. This patch also changes all
non-package users of vm_qmp_command to use the mon_cmd helper. Includes
mocking for tests.
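A call after the rename then looks like this (illustrative usage):

    use PVE::QemuServer::Monitor qw(mon_cmd);

    my $status = mon_cmd($vmid, 'query-status');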
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
While we may not want to copy the cloudinit disk/drive, we still need
to create+allocate the volume, else the next start complains about a
missing CI drive.
fixes commit 7d6c99f0a0.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Show storages configured for the target node and not for the current one
because they can be different.
Duplicated the `complete_storage` sub and extended it to extract the
targetnode from the parameters to pass it into the storage_check_enabled
function.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
This brings qemu more in line with containers, and it's nicer to
allow passing the replacement config if we want to keep it, instead
of setting a "memory: 128" config.
Use that to lock it on removal before final deletion, and on legacy
tar archive restore, in between old VM destruction and new
restoration.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
reverting a nonexistent option did not work with the latest changes
in pve-guest-common, because we do not delete the pending option
in 'add_to_pending_delete' anymore.
this had the effect that we had the following in the config:
[pending]
option: pendingvalue
delete: option
which would run both the deletion code and the pending-add code
(e.g. delete the pending cloudinit drive and create it again).
to avoid that situation, we need to remove the option from the pending hash
in the 'delete loop'
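sketched, the fix in the 'delete loop' looks like this (the list of
queued deletions is illustrative):

    for my $opt (@pending_deletions) {
        # ... existing removal handling for $opt ...

        # also drop a still-pending value for $opt, so the pending-add code
        # does not run for it as well (e.g. recreate the cloudinit drive)
        delete $conf->{pending}->{$opt};
    }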
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
When adding a cloudinit disk it does not contain media=cdrom until it is
actually created. This means the check in check_replication fails to
detect cloudinit and it is recognized as a normal disk. Then parse_volname
fails because it does not match the vm-$vmid-XYZ format. To fix this we
now check explicitly if the volname matches cloudinit and if so, return
early.
Additionally, there are 2 small cleanups replacing cloudinit regexes with
the same check for whether the volname matches cloudinit.
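The early return is essentially the following (sketch; the surrounding
replication check code is omitted):

    my (undef, $volname) = PVE::Storage::parse_volume_id($volid, 1);
    # cloudinit volumes are not replicated, and parse_volname would fail
    # on them before the drive is actually created
    return if defined($volname) && $volname =~ /cloudinit/;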
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>