this was already possible manually via "qm template", but
doing it automatically when moving a disk of a template
makes more sense.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
As Fabian requested,
add an extra flag "with-local-disks" to enable live storage migration with localdisk.
The default target storage is the same sid as the source; this can be
overridden with the "targetstorage" option.
I will try to improve this later with an optional disk-by-disk mapping.
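For example (vmid, node, and storage names are hypothetical):

    qm migrate 100 targetnode --online --with-local-disks \
        --targetstorage local-lvm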
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
if qga is enabled, we try to freeze the fs before cancelling the block job.
if not, we pause the vm.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
This allows migrating disks on local storage to a remote node's storage.
When the VM starts on the target node, new volumes are created and exposed
through qemu's embedded nbd server.
qemu drive-mirror is launched on the source vm for each disk, with the nbd
server as target.
When drive-mirror reaches 100% on one disk, we do not complete the block job
yet and begin mirroring the next disk.
(The mirror jobs run in parallel, but we start them one by one to avoid
storage and network overload.)
Then we live migrate the vm to the destination node (drive-mirror keeps
running at the same time).
When the vm is live migrated (source vm paused, target vm paused), we
complete the block mirror jobs.
When that is done, we stop the source vm and resume the target vm.
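Per disk, the QMP sequence looks roughly like this (host, port and
device names are hypothetical, and some arguments are omitted):

    # on the target node: start the nbd server and export the new volume
    { "execute": "nbd-server-start",
      "arguments": { "addr": { "type": "inet",
        "data": { "host": "::0", "port": "60000" } } } }
    { "execute": "nbd-server-add",
      "arguments": { "device": "drive-scsi0", "writable": true } }

    # on the source node: mirror into the exported volume
    { "execute": "drive-mirror",
      "arguments": { "device": "drive-scsi0",
        "target": "nbd:targetnode:60000:exportname=drive-scsi0",
        "sync": "full", "format": "raw", "mode": "existing" } }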
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
This will create a new drive for each local drive found,
and start the vm with these new drives.
If targetstorage == 1, we use the same sid as the original vm disk.
A nbd server is started in qemu, exposing the local volumes on a network port.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
because these allow adding arbitrary devices to VMs (and
other potentially dangerous things).
whitelist 'info *' and 'help' as usable with just
VM.Monitor; if more are desired and requested, they can be
added later.
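A user with only VM.Monitor can then still do, e.g.:

    qm monitor 100
    qm> info block
    qm> help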
Without this patch we use the network where the cluster traffic runs
for sending migration traffic. This is not ideal as it may hinder
cluster traffic. Further, some users have a powerful network which
would be perfect for migrations; with this patch they can run the
migration traffic over such a network without having the corosync
traffic on the same network.
The network is configurable through /etc/pve/datacenter.cfg which
got a new property, namely migration. migration has two
subproperties: type (replaces the old migration_unsecure property)
and network.
For the case of a network failure, or when a VM has to be moved over
another network for arbitrary other reasons, I added the
migration_type and migration_network parameters to qm migrate (and
to vm_start respectively, as this gets used on migration).
They allow overriding the datacenter.cfg settings.
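For example (network values are hypothetical):

    # /etc/pve/datacenter.cfg
    migration: type=secure,network=10.1.1.0/24

    # one-off override for a single migration:
    qm migrate 100 targetnode --online --migration_type insecure \
        --migration_network 10.2.2.0/24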
Fixes bug #1177
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
before copying the efidisk image to a storage,
we first have to activate the volume.
also, add the -n flag to qemu-img convert (prevents
creation of the target volume)
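i.e. roughly (source image and target path are hypothetical):

    qemu-img convert -n -O raw OVMF_VARS.fd /dev/zvol/rpool/data/vm-100-disk-1

without -n, qemu-img would try to create the target instead of
writing into the already allocated volume.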
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when we create the efidisk0 over the api,
we discard the given size, create a 128 KB vdisk,
and copy the image there with qemu-img convert
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this API call changes the config quite drastically, and as
such should not be possible while an operation that holds a
lock is ongoing (e.g., migration, backup, snapshot).
we needed root@pam rights to remove an unused disk
from a vm (instead of the correct Storage rights)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
moved numa\d+ permission to VM.Config.CPU,
and serial/parallel to root since they access the host hardware.
also renamed remainingoptions to generaloptions to
reduce confusion, since the real remaining options
(those that we do not specify) require root
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of defaulting to VM.Config.Options for
all options not checked separately, we
now have lists for the different config permissions
and check them accordingly.
for everything not given, we require root access.
this is especially important for usbN and hostpciN,
since they can change the host configuration
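a minimal sketch of the idea (option names and mapping are
illustrative, not the actual lists):

    my %opt_perm = (
        memory => 'VM.Config.Memory',
        cores  => 'VM.Config.CPU',
        onboot => 'VM.Config.Options',
    );

    sub required_permission {
        my ($opt) = @_;
        return $opt_perm{$opt};   # undef means: not listed, require root
    }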
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The volume_size_info() call was what actually failed, but
the error reported to the gui came from afterwards trying to
resize the disk to a garbage size.
Otherwise some move operations will fail to delete the old
disk (e.g. when moving from ceph to local storage).
Note that in order for the deactivation to succeed we need
to make sure qemu has closed its file descriptors, so we
need to wait for the job to disappear the same way we do in
$cancel_job().
Factored the waiting out into $finish_job().
previously, when shutting down a suspended vm,
we successfully sent the shutdown command to it,
but it would not shut down (because it is suspended).
we would then run into the timeout and either
bail out with an error, or kill the process.
when we did not kill the process and later resumed the vm,
it would instantly shut down, because of the previous
command.
this patch checks the status of the vm beforehand,
and either bails out with an error that you cannot
shut down a suspended vm, or stops the vm with the
correct qmp command (depending on forceStop)
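illustration (vmid and the exact error text are hypothetical):

    qm shutdown 100             # vm is suspended -> bails out with an error
    qm shutdown 100 -forceStop  # stops the vm right away, no pointless timeout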
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Drop snapshot_create, snapshot_delete and snapshot_rollback
in favour of PVE::AbstractConfig. Qemu-specific parts are
implemented in __snapshot_XX methods in PVE::QemuConfig.
has_feature is made an implementation of the abstract
has_feature, and thus moves to PVE::QemuConfig.
Note: a new hook method needed to be introduced to be called
before creating a volume snapshot, after creating a volume
snapshot, and after unfreezing the guestfs after creating a
volume snapshot. The base method in PVE::AbstractConfig is a
noop, the implementation in PVE::QemuConfig runs the necessary
Qemu monitor commands.
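A rough sketch of the hook's shape (method name and signature are
illustrative only; the phases are the ones listed above):

    # PVE::AbstractConfig - default implementation is a noop
    sub __snapshot_hook {
        my ($class, $vmid, $snap, $phase) = @_;
        # $phase is one of 'before', 'after', 'after-unfreeze'
        return;
    }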
Drop load_config, write_config, lock_config[_xx],
check_lock, check_protection, is_template and config_file
in favour of implementations in PVE::AbstractConfig.
Implement guest_type, __config_max_unused_disks,
config_file_lock and cfs_config_path from
PVE::AbstractConfig in PVE::QemuConfig.
Previously, foreach_drive iterated over all configuration
keys (in a random order) and checked whether the current key
is a valid drive name. Instead, we now iterate over a list
of valid drive names (with deterministic order) and check
whether a drive with such a name exists in the
configuration.
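In sketch form (simplified, not the exact code, using the new names):

    foreach my $ds (valid_drive_names()) {      # deterministic order
        next if !defined($conf->{$ds});         # skip drives absent from the config
        my $drive = parse_drive($ds, $conf->{$ds});
        $func->($ds, $drive, @param) if $drive;
    }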
Also rename the two involved methods from valid_drive_name
to is_valid_drive_name (for the check) and from disknames
to valid_drive_names (for the list of valid keys), for
consistency. These two were only used in the qemu-server
code base.
qm list and qm status both show suspended VMs as 'running'
while the GUI's status summary shows them as 'paused'.
This patch makes 'qm status' always request the full status
and adds an optional '-full' parameter for 'qm list' to
use a full status query to include the 'paused' state. (This
is optional as it causes qmp requests to all running VMs.)
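e.g. (vmid hypothetical):

    qm list -full    # includes the 'paused' state, at the cost of a qmp
                     # query to every running VM
    qm status 100    # now always requests the full status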
We can clone a running vm online; we don't have to deactivate the
source vm volumes if the source vm is running.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Since write_config was always called with skiplock=1 except
once, it makes sense to drop this parameter like in
PVE::LXC::write_config. If needed in the future, the
caller can use check_lock before write_config anyway.