We should push to the $devices array instead of the $cmd array,
because pci bridges need to be created before the spice devices.
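A minimal sketch of the idea (variable names and the spice device string are illustrative, not the exact qemu-server code):

    my $cmd     = [];   # main qemu argument list, built first
    my $devices = [];   # device arguments, appended to $cmd after the pci bridges

    my $spice_dev = 'virtio-serial,id=spice,bus=pci.0,addr=0x9';   # illustrative
    # before: push @$cmd, '-device', $spice_dev;   # bridge pci.0 may not exist yet
    push @$devices, '-device', $spice_dev;         # bridge is guaranteed to exist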
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
A multifunction device should be defined without the .function suffix:
hostpci0: 00:00
example
-------
if 00:00.0, 00:00.1 and 00:00.2 exist,
then we generate the multifunction devices:
-device (pci-assign|vfio-pci),host=00:00.0,id=hostpci0.0,bus=...,addr=0x0.0,multifunction=on
-device (pci-assign|vfio-pci),host=00:00.1,id=hostpci0.1,bus=...,addr=0x0.1
-device (pci-assign|vfio-pci),host=00:00.2,id=hostpci0.2,bus=...,addr=0x0.2
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
hostpci0: .....,x-vga=on,pcie=1
x-vga requires kernel 3.10 with vfio-vga support enabled.
If x-vga=on, we force the vfio-pci device.
pcie=1 chooses the PCI Express bus (needs the q35 machine model).
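For illustration, the generated device argument could look like this (the host address, id, bus and addr values are assumptions for the example, not taken from the actual code):

-device vfio-pci,host=01:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0,x-vga=on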
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
q35 uses pcie.0 as root bus by default, so currently we can't start the q35 machine model.
We need to add 3 pci bridges pci.0, pci.1, pci.2 to handle our devices.
pcie.0 does not support hotplug, so the pci bridges are defined at startup.
I use a pve-q35.cfg (mostly the same as q35-chipset.cfg from the qemu docs).
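A hedged sketch of what such a -readconfig file can contain, following the q35-chipset.cfg layout (the bridge names, addresses and chassis numbers here are illustrative assumptions):

[device "pcidmi"]
  driver = "i82801b11-bridge"
  bus = "pcie.0"
  addr = "1e.0"

[device "pci.0"]
  driver = "pci-bridge"
  bus = "pcidmi"
  addr = "1.0"
  chassis_nr = "1"

[device "pci.1"]
  driver = "pci-bridge"
  bus = "pcidmi"
  addr = "2.0"
  chassis_nr = "2"

[device "pci.2"]
  driver = "pci-bridge"
  bus = "pcidmi"
  addr = "3.0"
  chassis_nr = "3"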
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
This adds a new option queue=(\d+) to the net interface.
It allows using more than 1 cpu for the network stream, which can improve network bandwidth
when the vhost-net cpu is the bottleneck.
http://www.linux-kvm.org/page/Multiqueue#Enable_MQ_feature
-netdev tap,vhost=on,queues=N -device virtio-net-pci,mq=on,vectors=2N+2
host requirement
----------------
this requires a host kernel >= 3.8 (otherwise qemu dies at start)
linux guest requirement
-----------------------
kernel >= 3.8
multiqueue must be enabled manually in the guest (see the example after these requirements)
windows guest requirement
-------------------------
recent virtio-net driver
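For the manual enabling in a Linux guest, ethtool can be used; for example (interface name and queue count are illustrative):

ethtool -L eth0 combined 4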
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
We simply add the iscsi option if we have an initiator name. That way we
never add this option multiple times, and it works with hotplug
in case someone plugs an 'iscsi:' drive later.
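A hedged example of the resulting argument (the initiator name itself is illustrative):

-iscsi initiator-name=iqn.1993-08.org.debian:01:0123456789ab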
Enable a check that the host supports all cpu flags configured for the guests.
This avoids bad setups like an Opteron vcpu on an Intel host, for example,
and avoids some bad live migrations.
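A minimal sketch of such a check against /proc/cpuinfo (the variable names and example flags are assumptions, not the actual implementation):

open(my $fh, '<', '/proc/cpuinfo') or die "can't read /proc/cpuinfo: $!\n";
my %hostflags;
while (defined(my $line = <$fh>)) {
    if ($line =~ /^flags\s*:\s*(.+)$/) {
        $hostflags{$_} = 1 for split /\s+/, $1;   # collect the host cpu flags
        last;
    }
}
close($fh);

my @guestflags = ('sse4_2', 'avx');   # flags requested in the vm config (example)
foreach my $flag (@guestflags) {
    die "host cpu does not support configured flag '$flag'\n" if !$hostflags{$flag};
}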
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
This reduces the guest cpu speed if the dirtied bytes are 50% more than the approx. amount of bytes that just got transferred since the last time we were in this routine.
qemu commit :
http://git.qemu.org/?p=qemu.git;a=commit;h=bde1e2ec2176c363c1783bf8887b6b1beb08dfee
tested with "stress -m 2 -c 2" under debian
without autoconvergence : downtime 12s - duration 12min
with autoconvergence : downtime 2s - duration 4min
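For reference, the capability can be toggled from the qemu monitor before the migration starts, e.g.:

migrate_set_capability auto-converge on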
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
add qxl2 (2 monitors), qxl3 (3 monitors), qxl4 (4 monitors) vga types.
For Linux, we only need 1 qxl card with more memory.
For Windows, we need 1 qxl card per monitor.
Original information from the spice mailing list:
"
You need to specify multiple devices for Windows VMs. This is what
libvirt gives me (via 'virsh domxml-to-native qemu argv DOMAIN_XML'):
<...> -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=33554432 -device qxl,id=video1,ram_size=67108864,vram_size=33554432 -device qxl,id=video2,ram_size=67108864,vram_size=33554432 -device qxl,id=video3,ram_size=67108864,vram_size=33554432
For Linux VM, just one qxl device is OK but then it's advisable to
increase the available RAM:
<...> -vga qxl -global qxl-vga.ram_size=134217728 -global qxl-vga.vram_size=33554432
If you don't turn off surfaces, then you should increase vram size to
say 64 MB from current default of 32 MB.
"
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>