This will create a new drive for each local drive found
and start the VM with these new drives.
If targetstorage == 1, we use the same storage ID as the original VM disk.
An NBD server is started in qemu and exposes the local volumes on a network port.
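For illustration, a minimal sketch of the QMP side of this (the port choice and drive id are made up; vm_mon_cmd is assumed to be the usual monitor helper):

use JSON;
use PVE::QemuServer;

# sketch: start an NBD server inside the target VM's qemu process and
# export one of the newly allocated local drives (names are illustrative)
my $vmid = 100;                      # example VMID
my $nbd_port = 10000 + $vmid;        # hypothetical port choice
PVE::QemuServer::vm_mon_cmd($vmid, 'nbd-server-start',
    addr => { type => 'inet', data => { host => '0.0.0.0', port => "$nbd_port" } });
PVE::QemuServer::vm_mon_cmd($vmid, 'nbd-server-add',
    device => 'drive-scsi0', writable => JSON::true);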
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
We can run multiple drive_mirror jobs in parallel.
block-job-complete can be skipped if we want to add more mirror jobs later.
Also add support for NBD URIs to qemu_drive_mirror.
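A minimal sketch of what one such mirror job looks like at the QMP level (drive id and NBD URI are illustrative):

use PVE::QemuServer;

# sketch: mirror a running local drive to an NBD export on the target node;
# mode 'existing' because the target was already allocated there
PVE::QemuServer::vm_mon_cmd($vmid, 'drive-mirror',
    device => 'drive-scsi0',
    target => 'nbd:targetnode:10100:exportname=drive-scsi0',
    sync   => 'full',
    mode   => 'existing',
    format => 'nbd');
# later, once every job is ready: block-job-complete - or keep the jobs
# running to add more mirrors first, as described above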
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
otherwise we end up with undeletable VM configs in case
vdisk_free fails (which could happen because of cluster-wide
lock contention, storage problems, ..).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
When trying to migrate a VM from a node with qemu-server <= 4.0-92 to
a node with qemu-server >= 4.0-93, we failed because the remote qemu-server
got no explicit migration_type from the older qemu-server on the
source.
Check whether migration_type is defined on an incoming migration start,
and set it if not.
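Roughly, the fallback amounts to (sketch, not the exact code; 'secure' matches the previous default behaviour):

# sketch: older source nodes don't send migration_type, so default it here
$migration_type = 'secure' if !defined($migration_type);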
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
All special flags for Windows 8 and Windows 2012 (win8 type)
are kept the same, since we set flags based on checking whether the
version captured by /^win(\d+)$/ is greater than 6 or 7.
Like for snapshot creation, we need to check whether krbd is enabled, to know
whether to use the QMP delete-drive-snapshot command or the storage command directly.
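A hedged sketch of the resulting branch (variable names are illustrative):

use PVE::Storage;

# sketch: with krbd the image is mapped by the kernel, so qemu cannot drop
# the snapshot itself and we call the storage layer directly instead
if ($running && !$scfg->{krbd}) {
    # qemu accesses the rbd image itself -> use the qmp delete-drive-snapshot path
} else {
    PVE::Storage::volume_snapshot_delete($storecfg, $volid, $snapname);
}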
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
This cleans up Windows guest OS version handling
by normalizing ostype to a version number in a new windows_version sub:
sub windows_version {
    my ($ostype) = @_;
    my $winversion = 0;
    if ($ostype eq 'wxp' || $ostype eq 'w2k3' || $ostype eq 'w2k') {
        $winversion = 5;
    } elsif ($ostype eq 'w2k8' || $ostype eq 'wvista') {
        $winversion = 6;
    } elsif ($ostype =~ m/^win(\d+)$/) {
        $winversion = $1;
    }
    return $winversion;
}
so we can simply test the Windows version with lower/greater-than comparisons.
Hyper-V enlightenments configuration is centralized
in a new add_hyperv_enlighments sub.
Also disable Hyper-V for win < 8 combined with OVMF.
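For illustration, the kind of check this enables (the OVMF condition is only a sketch of the restriction mentioned above; $ostype and $bios come from the VM config):

# sketch: numeric comparisons instead of long lists of ostype strings
my $winversion = windows_version($ostype);
if ($winversion >= 6) {
    # Vista/2008 or newer
}
# and skip the enlightenments for pre-Windows-8 guests booted via OVMF
my $add_hyperv = $winversion >= 6 && !($winversion < 8 && $bios && $bios eq 'ovmf');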
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Without this patch we use the network where the cluster traffic runs
for sending migration traffic. This is not ideal, as it may hinder
cluster traffic. Further, some users have a powerful network which
would be perfect for migrations; with this patch they can run the
migration traffic over such a network without having the corosync
traffic on the same network.
The network is configurable through /etc/pve/datacenter.cfg which
got a new property, namely migration. migration has two
subproperties: type (replaces the old migration_unsecure property)
and network.
For the case of a network failure, or when a VM has to be moved over
another network for arbitrary other reasons, I added the
migration_type and migration_network parameters to qm migrate (and
to vm_start, respectively, as that gets used on migration).
They allow overriding the datacenter.cfg settings.
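For example (values are illustrative):

/etc/pve/datacenter.cfg:
  migration: secure,network=10.20.30.0/24

per-migration override:
  qm migrate <vmid> <target> --online --migration_type insecure --migration_network 10.20.30.0/24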
Fixes bug #1177
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Let 'cdrom' use the pve-qm-ide format, as it's supposed to
be an alias to ide2.
We're not using the 'alias' schema property since the qemu
configs still use a custom parser (due to the
pending-changes system and the filename-to-volume-id
conversion for legacy support) which does not deal with
schema aliases.
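Roughly, the schema entry then looks like this (sketch only, fields abbreviated):

# sketch: cdrom reuses the ide drive format instead of a schema-level alias
$confdesc->{cdrom} = {
    optional => 1,
    type => 'string',
    format => 'pve-qm-ide',
    description => "This is an alias for option -ide2",
};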
When restoring into an existing VM, we don't want to die
half-way through because we can't delete one of the existing
volumes. Instead, warn about the deletion failure, but
continue anyway. The disk that was not deleted is then automatically
added as unused.
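The pattern is roughly (sketch, not the exact code):

use PVE::Storage;

# sketch: tolerate vdisk_free failures when restoring over an existing VM
foreach my $volid (@volids_to_free) {              # list of old volumes (illustrative)
    eval { PVE::Storage::vdisk_free($storecfg, $volid); };
    warn "unable to delete '$volid' - $@" if $@;   # continue; the disk shows up as unusedX
}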
This adds a bootsplash image in /usr/share/qemu-server;
if this file exists, it is used for SeaBIOS.
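A sketch of the command-line handling (the filename and exact -boot options are assumptions):

# sketch: only hand the splash file to seabios if it is actually installed
my $bootsplash = '/usr/share/qemu-server/bootsplash.jpg';   # filename is an assumption
push @$cmd, '-boot', "menu=on,strict=on,reboot-timeout=1000,splash=$bootsplash"
    if -f $bootsplash;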
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
If efidisk0 is defined, use it as an EFI vars disk
to permanently store EFI variables (such as boot options).
We check whether the files exist and act accordingly.
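For illustration, the resulting qemu arguments look roughly like this (the OVMF code path, $cmd, $format and $efidisk_path are assumptions):

# sketch: OVMF firmware code read-only, EFI vars persisted on the efidisk0 volume
push @$cmd, '-drive', "if=pflash,unit=0,format=raw,readonly,file=/usr/share/kvm/OVMF_CODE-pure-efi.fd";
push @$cmd, '-drive', "if=pflash,unit=1,format=$format,file=$efidisk_path";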
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Just a simple disk (only size, format and volid) for the
EFI vars disk.
Also do not add it to the command line in foreach_drive.
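A hedged sketch of the property (field set illustrative):

# sketch: a reduced drive description - only volume, format and size -
# and foreach_drive skips it so it never hits the regular -drive handling
$confdesc->{efidisk0} = {
    optional => 1,
    type => 'string',
    format => $efidisk_fmt,   # illustrative name for the reduced format
    description => "Configure a disk for storing EFI vars.",
};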
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
drive-mirror is not working with qemu 2.6 when iothread is enabled:
with virtio-blk, the mirror works, but block-job-complete crashes the VM;
with virtio-scsi, the mirror hangs at start.
This should be fixed in qemu 2.7.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
We have a few problems with hotplug at the moment:
qemu may add USB hubs when adding USB devices, but fails to remove them
when removing the USB device (this is a qemu bug).
Also, when starting a guest with a USB device, we add EHCI and UHCI
controllers, which we cannot hot-unplug.
With those devices, it is impossible to live migrate the guest
to another host, meaning that even if you remove all USB devices,
the migration fails.
So we deactivate USB hotplugging for now.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This patch introduces working USB hotplugging:
you can now add a USB device while a VM is running.
This does not work with SPICE at the moment, only
with USB passthrough.
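Usage example (vendor/product id are placeholders):

qm set <vmid> -usb1 host=1234:5678    # hot-adds the device while the VM is running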
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Since USB devices do not have their own
"query" command in QMP, we have to use
qom-list /machine/peripheral,
which essentially gets the list of the VM's peripheral devices;
from that, we take only the USB devices.
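The query then looks roughly like this (the filtering detail is a sketch):

use PVE::QemuServer;

# sketch: list the guest's peripheral devices and keep only the usbN entries
my $res = PVE::QemuServer::vm_mon_cmd($vmid, 'qom-list', path => '/machine/peripheral');
my @usb = grep { $_->{name} =~ m/^usb\d+$/ } @$res;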
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
vm configuration
----------------
hugepages: (any|2|1024)
any: try to allocate 1GB hugepages if possible, otherwise fall back to 2MB hugepages
2: use 2MB hugepages
1024: use 1GB hugepages (memory needs to be a multiple of 1GB in this case)
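Example VM config snippet (values are illustrative; with hugepages: 1024 the memory size must be a multiple of 1024):

memory: 4096
hugepages: 1024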
optional host configuration for 1GB hugepages
---------------------------------------------
1GB hugepages can be allocated at boot if the user wants it.
Hugepages need to be contiguous, so sometimes it is not possible to reserve them on the fly:
/etc/default/grub : GRUB_CMDLINE_LINUX_DEFAULT="quiet hugepagesz=1G hugepages=x"
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Add them by default for qemu 2.6
(support is already present in qemu 2.5, but we don't want to break live migration for currently running VMs).
vpindex && runtime need host kernel 4.4.
These 3 enlightenments are needed by Windows to use VMBus:
http://searchwindowsserver.techtarget.com/definition/Microsoft-Virtual-Machine-Bus-VMBus
Details:
- When the Hyper-V "vpindex" enlightenment is on, the guest can use MSR HV_X64_MSR_VP_INDEX
  to get the virtual processor ID.
- The Hyper-V "runtime" enlightenment allows using MSR
  HV_X64_MSR_VP_RUNTIME to get the time the virtual processor consumes
  running guest code, as well as the time the hypervisor spends running
  code on behalf of that guest.
- Hyper-V "reset" allows the guest to reset the VM.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>