This patch includes all necessary properties for replication.
It also enables or disables a replication job
when the corresponding flags are set or deleted.
This handles the case where a different target storeid is defined
for the remote node and that storage is not available on the source node.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
otherwise we get lots of 'uninitialized' warnings:
update VM 600: -delete unused7
Use of uninitialized value $data in split at /usr/share/perl5/PVE/JSONSchema.pm line 533.
Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/PVE/API2/Qemu.pm line 1012.
Use of uninitialized value $volid in pattern match (m//) at /usr/share/perl5/PVE/QemuServer.pm line 1824.
Use of uninitialized value $volid in pattern match (m//) at /usr/share/perl5/PVE/Storage/Plugin.pm line 201.
Use of uninitialized value $volid in concatenation (.) or string at /usr/share/perl5/PVE/Storage/Plugin.pm line 205.
vs:
update VM 600: -delete unused7
cannot delete 'unused7' - not set in current configuration!
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Single-quoted text in asciidoc is rendered with underlines in man
pages, which makes the '+' symbol look very similar to '+/-'.
Backticks are rendered as monospace text in HTML, as normal text
in man pages, and are still readable in the raw format.
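for example, the change amounts to turning a line like this
(illustrative snippet, not an actual line from the docs):

    use the '+' prefix for pending changes

into:

    use the `+` prefix for pending changes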
this was already possible manually via "qm template", but
doing it automatically when moving a disk of a template
makes more sense.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
As Fabian requested,
add an extra flag "with-local-disks" to enable live storage migration with local disks.
The default target storage is the same sid as the source; this can be overridden
with the "targetstorage" option.
I will try to improve this later with an optional per-disk mapping.
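for example (vmid, node and storage name are made up):

    qm migrate 100 targetnode --online --with-local-disks --targetstorage local-lvm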
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
if qga is enabled, we try to freeze the filesystems before cancelling the block jobs.
if not, we pause the vm.
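a rough sketch of the idea (helper names are illustrative, not
necessarily the actual code):

    # keep the disks consistent while the block jobs are cancelled:
    # freeze the guest filesystems via the agent if we can,
    # otherwise pause the whole VM
    if ($qga) {
        eval { vm_mon_cmd($vmid, "guest-fsfreeze-freeze"); };
        warn $@ if $@;
    } else {
        vm_suspend($vmid);
    }
    # ... cancel the drive-mirror block jobs here ...
    if ($qga) {
        eval { vm_mon_cmd($vmid, "guest-fsfreeze-thaw"); };
        warn $@ if $@;
    } else {
        vm_resume($vmid);
    }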
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
This allows migrating disks on local storage to a remote node's storage.
When the VM starts on the target node, new volumes are created and exposed through qemu's embedded NBD server.
qemu drive-mirror is launched on the source VM for each disk, with the NBD server as target.
When drive-mirror reaches 100% on one disk, we do not complete the block job, but begin mirroring the next disk.
(The mirrors run in parallel, but we try to start them one by one to avoid storage and network overload.)
Then we live-migrate the VM to the destination node (drive-mirror still runs at the same time).
Once the VM is live-migrated (source VM paused, target VM paused), we complete the block job mirrors.
When that is done, we stop the source VM and resume the target VM.
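roughly, for each local disk the source side issues something like
the following monitor command (a sketch; host, port and drive name
are made up, the real code builds the nbd target from the
allocated volumes):

    vm_mon_cmd($vmid, "drive-mirror",
        device => "drive-scsi0",
        target => "nbd:10.0.0.2:60001:exportname=drive-scsi0",
        sync   => "full",
        mode   => "existing");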
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
This will create a new drive for each local drive found,
and start the vm with these new drives.
if targetstorage == 1, we use the same sid as the original vm disk.
an nbd server is started in qemu and exposes the local volumes on a network port.
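on the target side this corresponds to something like the
following (a sketch; address, port and drive name are made up):

    vm_mon_cmd($vmid, "nbd-server-start",
        addr => { type => "inet",
                  data => { host => "10.0.0.2", port => "60001" } });
    # writable export, one per newly allocated drive
    vm_mon_cmd($vmid, "nbd-server-add",
        device => "drive-scsi0", writable => JSON::true);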
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
because these allow adding arbitrary devices to VMs (and
other potentially dangerous things).
whitelist 'info *' and 'help' as usable with just
VM.Monitor; if more are desired and requested, they can be
added later.
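a sketch of the whitelist check (illustrative, not the actual code):

    # read-only monitor commands are fine with VM.Monitor,
    # everything else needs root
    if ($cmd !~ m/^\s*(info(\s+.*)?|help)\s*$/) {
        die "only root can execute this command\n"
            if $authuser ne 'root@pam';
    }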
Without this patch we use the network where the cluster traffic runs
for sending migration traffic. This is not ideal, as it may hinder
cluster traffic. Further, some users have a powerful network which
would be perfect for migrations, with this patch they can run the
migration traffic over such a network without having the corosync
traffic on the same network.
The network is configurable through /etc/pve/datacenter.cfg which
got a new property, namely migration. migration has two
subproperties: type (replaces the old migration_unsecure property)
and network.
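for example (illustrative values, assuming 'type' is the default
key of the property string):

    # /etc/pve/datacenter.cfg
    migration: secure,network=10.20.30.0/24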
For the case of a network failure, or when a VM has to be moved over
another network for arbitrary other reasons, I added the
migration_type and migration_network parameters to qm migrate (and
respectively to vm_start, as this gets used on migration).
They allow overriding the datacenter.cfg settings.
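for example (vmid, node and network are made up; the parameter
names are the ones added by this patch):

    qm migrate 100 targetnode --online --migration_type insecure --migration_network 10.20.30.0/24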
Fixes bug #1177
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
before copying the efidisk image to a storage,
we first have to activate the volume
also, add the -n flag to qemu-img convert (prevents
creation of the target volume)
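i.e. something like (illustrative paths):

    # -n: do not create the target volume, it already exists
    qemu-img convert -n -O raw efidisk.qcow2 /dev/vg0/vm-100-disk-1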
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when we create the efidisk0 over the API,
we discard the given size, create a 128 kbyte vdisk,
and copy the image there with qemu-img convert
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this API call changes the config quite drastically, and as
such should not be possible while an operation that holds a
lock is ongoing (e.g., migration, backup, snapshot).
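a sketch of the guard (assuming the usual config helpers):

    my $conf = PVE::QemuConfig->load_config($vmid);
    # dies if the config is locked (migration, backup, snapshot, ...)
    PVE::QemuConfig->check_lock($conf);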
we needed root@pam rights to remove an unused disk
from a vm (instead of the correct Storage rights)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
moved numa\d+ permission to VM.Config.CPU
and serial/parallel to root, since these access the host hardware
also renamed remainingoptions to generaloptions to
reduce confusion, since the real remaining options
(those that we do not specify) require root
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of defaulting to VM.Config.Options for
all options not checked separately, we
now have lists for the different config permissions
and check them accordingly.
for everything not given, we require root access.
this is especially important for usbN and hostpciN,
since they can change the host configuration
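a rough sketch of the structure (option lists abbreviated and
illustrative, not the full mapping):

    my $cpuoptions     = { map { $_ => 1 } qw(cores cpu cpulimit cpuunits sockets vcpus) };
    my $generaloptions = { map { $_ => 1 } qw(agent autostart name onboot ostype) };

    sub check_opt_perm {
        my ($rpcenv, $authuser, $vmid, $opt) = @_;
        if ($cpuoptions->{$opt} || $opt =~ m/^numa\d+$/) {
            $rpcenv->check_vm_perm($authuser, $vmid, undef, ['VM.Config.CPU']);
        } elsif ($generaloptions->{$opt}) {
            $rpcenv->check_vm_perm($authuser, $vmid, undef, ['VM.Config.Options']);
        } else {
            # everything not explicitly listed (usbN, hostpciN, ...) is root only
            die "only root can set '$opt' config\n" if $authuser ne 'root@pam';
        }
    }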
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The volume_size_info() call was what actually failed, but
the error reported to the GUI came from afterwards trying to
resize the disk to a garbage size.
Otherwise some move operations will fail to delete the old
disk (e.g. when moving from ceph to local storage).
Note that in order for the deactivation to succeed we need
to make sure qemu has closed its file descriptors, so we
need to wait for the job to disappear the same way we do in
$cancel_job().
Factored the waiting out into $finish_job().
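the waiting loop boils down to something like this (a sketch):

    # poll until the job has vanished from query-block-jobs,
    # at which point qemu has closed its file descriptors
    for (;;) {
        my $jobs = vm_mon_cmd($vmid, "query-block-jobs");
        last if !grep { $_->{device} eq $device } @$jobs;
        sleep 1;
    }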
previously, when shutting down a suspended vm,
we successfully sent the shutdown command to it,
but it would not shut down (because it is suspended),
so we ran into the timeout and either
bailed out with an error, or killed the process.
if we did not kill the process and later resumed the vm,
it would instantly shut down, because of the previous
command.
this patch checks the status of the vm beforehand,
and either bails out with an error that you cannot
shut down a suspended vm, or stops the vm with the
correct qmp command (depending on forceStop)
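a sketch of the new check (illustrative; the real vm_stop call
takes more arguments):

    my $stat = vm_mon_cmd($vmid, "query-status");
    if ($stat->{status} eq 'paused') {
        if ($param->{forceStop}) {
            vm_stop($storecfg, $vmid);   # hard stop instead of guest shutdown
        } else {
            die "VM is paused - cannot shutdown\n";
        }
    }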
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Drop snapshot_create, snapshot_delete and snapshot_rollback
in favour of PVE::AbstractConfig. Qemu-specific parts are
implemented in __snapshot_XX methods in PVE::QemuConfig.
has_feature is made an implementation of the abstract
has_feature, and thus moves to PVE::QemuConfig.
Note: a new hook method needed to be introduced to be called
before creating a volume snapshot, after creating a volume
snapshot, and after unfreezing the guestfs after creating a
volume snapshot. The base method in PVE::AbstractConfig is a
noop; the implementation in PVE::QemuConfig runs the necessary
Qemu monitor commands.
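the hook roughly has this shape (parameter names illustrative;
the Qemu override is only sketched in the comment):

    # PVE::AbstractConfig - base method, a noop by default
    sub __snapshot_create_vol_snapshots_hook {
        my ($class, $vmid, $snap, $running, $hook) = @_;
        # $hook marks the phase: before a volume snapshot, after
        # it, or after unfreezing the guestfs
        return;
    }

    # PVE::QemuConfig overrides this and runs the necessary Qemu
    # monitor commands for the respective phase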