commit 1c0c1c17b0
Author: Wolfgang Link <wolfgang@linksystems.org>
Date: Wed Nov 26 11:11:40 2014 +0100
shutdown by Qemu Guest Agent if the agent flag in the config is set
Important: "guest-shutdown" only returns a message on error.
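For illustration, the request sent to the guest agent is a QMP-style command roughly like this (the exact invocation in qemu-server may differ):
example
-------
{ "execute": "guest-shutdown" }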
Signed-off-by: Wolfgang Link <wolfgang@linksystems.org>
This breaks live migration, as it always tries to load the VM config - even in the case of $nocheck. It also loads the config twice ($conf && $config).
Signed-off-by: Stefan Priebe <s.priebe@profihost.ag>
This enables NUMA support inside the guest, sharing the memory and cores across the sockets' NUMA nodes.
numa: 0|1
example:
-------
sockets: 2
cores: 2
memory: 4096
numa: 1
qemu command line
-----------------
-object memory-backend-ram,size=2048,id=ram-node0
-numa node,nodeid=0,cpus=0-1,memdev=ram-node0
-object memory-backend-ram,size=2048,id=ram-node1
-numa node,nodeid=1,cpus=2-3,memdev=ram-node1
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Remove the freezefs flag.
If the Qemu Guest Agent flag is set in the config, the VM filesystem will always be frozen,
unless we save RAM.
Also remove the freezefs param in PVE::API2 snapshot,
because there is no use for it.
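For illustration, the guest agent calls issued around the snapshot look roughly like this (the exact invocation in qemu-server may differ):
example
-------
{ "execute": "guest-fsfreeze-freeze" }
  ... take the snapshot ...
{ "execute": "guest-fsfreeze-thaw" }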
Signed-off-by: Wolfgang Link <wolfgang@linksystems.org>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Even if we check the busy flag, we can sometimes hit a race condition if new writes
come in between query-block-job and block-job-complete.
block-job-complete then throws the error "The active block job for device '%(name)' cannot be completed".
We just need to retry in this case.
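For illustration, the QMP exchange that makes us retry looks roughly like this (the device name is just an example):
example
-------
{ "execute": "block-job-complete", "arguments": { "device": "drive-virtio0" } }
{ "error": { "class": "GenericError", "desc": "The active block job for device 'drive-virtio0' cannot be completed" } }
  ... retry block-job-complete until it succeeds ...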
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
block-job-cancel is async, so we need to check that the job is really finished
before trying to free the volume.
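For illustration, after the cancel we keep polling the job list and only free the volume once the job is gone (the device name is just an example):
example
-------
{ "execute": "block-job-cancel", "arguments": { "device": "drive-virtio0" } }
{ "execute": "query-block-jobs" }
  ... repeat query-block-jobs until the job no longer appears, then free the volume ...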
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
new config option:
iothread: 1|0
This enables iothread/dataplane support, to improve io performance on fast storages.
Currently block jobs don't work with it yet; support is planned for qemu 2.2,
so it's better not to expose this option in the gui yet.
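For illustration, with iothread: 1 the generated qemu command line gains something roughly like this (ids and drive names are just examples):
qemu command line
-----------------
-object iothread,id=iothread-virtio0
-device virtio-blk-pci,drive=drive-virtio0,id=virtio0,iothread=iothread-virtio0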
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
the machine option is written to the snapshot (ok), but also to the running config (bad)
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
We should push to the $devices array instead of the $cmd array,
because pci bridges need to be created before the spice devices.
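For illustration, the resulting ordering on the qemu command line should then be roughly (device ids and addresses are just examples):
qemu command line
-----------------
-device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e
...
-device virtio-serial,id=spice,bus=pci.0,addr=0x9
-chardev spicevmc,id=vdagent,name=vdagent
-device virtserialport,chardev=vdagent,name=com.redhat.spice.0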
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
multifunction devices should be defined without the .function suffix:
hostpci0: 00:00
example
-------
if 00:00.0
00:00.1
00:00.2
exist,
then we generate the multifunction devices
-device (pci-assign|vfio-pci),host=00:00.0,id=hostpci0.0,bus=...,addr=0x0.0,multifunction=on
-device (pci-assign|vfio-pci),host=00:00.1,id=hostpci0.1,bus=...,addr=0x0.1
-device (pci-assign|vfio-pci),host=00:00.2,id=hostpci0.2,bus=...,addr=0x0.2
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
hostpci0: .....,x-vga=on,pcie=1
x-vga requires kernel 3.10 with vfio-vga support enabled.
If x-vga=on, we force the vfio-pci device.
pcie=1 chooses the pciexpress bus (needs the q35 machine model).
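For illustration, such an entry would generate a device roughly like this (host address, ids and bus are just examples):
qemu command line
-----------------
-device vfio-pci,host=01:00.0,id=hostpci0,bus=pcie.0,addr=0x10,x-vga=on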
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
q35 uses a pcie.0 root by default, so currently we can't start the q35 machine model.
We need to add 3 pci-bridges pci.0, pci.1, pci.2 to handle our devices.
pcie.0 does not support hotplug, so the pci-bridges are defined at startup.
I use a pve-q35.cfg (mostly the same as q35-chipset.cfg from the qemu docs).
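For illustration, the bridges can be declared in that cfg file roughly like this, following q35-chipset.cfg from the qemu docs (names and addresses are just examples), and the file is passed to qemu with -readconfig:
example
-------
[device "pcidmi"]
  driver = "i82801b11-bridge"
  bus = "pcie.0"
  addr = "1e.0"

[device "pci.0"]
  driver = "pci-bridge"
  bus = "pcidmi"
  chassis_nr = "1"
  addr = "1.0"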
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
This adds a new option queue=(\d+) to the net interface.
It allows using more than 1 cpu for the network stream, which can improve network bandwidth
when the vhost-net cpu is the bottleneck.
http://www.linux-kvm.org/page/Multiqueue#Enable_MQ_feature
-netdev tap,vhost=on,queues=N -device virtio-net-pci,mq=on,vectors=2N+2
host requirement
----------------
this requires a host kernel >= 3.8 (or qemu dies at start)
linux guest requirement
-----------------------
kernel >= 3.8
manual enabling of multiqueue (see the example below)
windows guest requirement
-------------------------
recent virtio-net driver
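For illustration, inside a linux guest the queues can be enabled manually roughly like this (interface name and queue count are just examples):
example
-------
ethtool -L eth0 combined 4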
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
We simply add the iscsi option if we have an initiator name, so we
never add this option multiple times, and it works with hotplug
in case someone plugs an 'iscsi:' drive later.
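For illustration, the generated option looks roughly like this (the initiator name is just an example):
qemu command line
-----------------
-iscsi initiator-name=iqn.1993-08.org.debian:01:abcdef12345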
Enable a check whether the host supports all cpu flags configured for the guests.
This avoids some bad setups, like an Opteron vcpu on an Intel host for example,
and avoids some bad live migrations.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
This reduces guest cpu speed if the dirtied bytes are 50% more than the approx. amount of bytes that just got transferred since the last time we were in this routine.
qemu commit:
http://git.qemu.org/?p=qemu.git;a=commit;h=bde1e2ec2176c363c1783bf8887b6b1beb08dfee
tested with "stress -m 2 -c 2" under debian
without autoconvergence: downtime 12s - duration 12min
with autoconvergence: downtime 2s - duration 4min
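For illustration, the capability is enabled through QMP roughly like this (the exact invocation in qemu-server may differ):
example
-------
{ "execute": "migrate-set-capabilities", "arguments": { "capabilities": [ { "capability": "auto-converge", "state": true } ] } }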
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
add qxl2 (2 monitors), qxl3 (3 monitors), qxl4 (4 monitors) vga types.
For linux, we only need 1 qxl card with more memory.
For windows, we need 1 qxl card per monitor.
Original information from the spice mailing list:
"
You need to specify multiple devices for Windows VMs. This is what
libvirt gives me (via 'virsh domxml-to-native qemu argv DOMAIN_XML'):
<...> -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=33554432 -device qxl,id=video1,ram_size=67108864,vram_size=33554432 -device qxl,id=video2,ram_size=67108864,vram_size=33554432 -device qxl,id=video3,ram_size=67108864,vram_size=33554432
For Linux VM, just one qxl device is OK but then it's advisable to
increase the available RAM:
<...> -vga qxl -global qxl-vga.ram_size=134217728 -global qxl-vga.vram_size=33554432
If you don't turn off surfaces, then you should increase vram size to
say 64 MB from current default of 32 MB.
"
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>