This enables NUMA support inside the guest and shares the memory and cores across the socket NUMA nodes.
numa: 0|1
example:
-------
sockets: 2
cores: 2
memory: 4096
numa: 1
qemu command line
-----------------
-object memory-backend-ram,size=2048,id=ram-node0
-numa node,nodeid=0,cpus=0-1,memdev=ram-node0
-object memory-backend-ram,size=2048,id=ram-node1
-numa node,nodeid=1,cpus=2-3,memdev=ram-node1
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
remove the freezefs flag.
If the QEMU Guest Agent flag is set in the config, the VM filesystems will always be frozen
when taking a snapshot, unless we also save RAM.
Also remove the freezefs parameter from the PVE::API2 snapshot API,
because there is no longer any use for it.
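For reference, "frozen" means the standard guest agent fs-freeze commands; roughly, the
snapshot code issues something like (sketch):
{ "execute": "guest-fsfreeze-freeze" }
    ... take the snapshot ...
{ "execute": "guest-fsfreeze-thaw" }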
Signed-off-by: Wolfgang Link <wolfgang@linksystems.org>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Even if we check the busy flag, we can sometimes hit a race condition if new writes
come in between the query-block-jobs and the block-job-complete.
block-job-complete then throws the error "The active block job for device '%(name)' cannot be completed";
we just need to retry in this case.
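Schematically, the completion sequence then looks roughly like this (QMP sketch; the device
name is illustrative):
{ "execute": "query-block-jobs" }
    -> wait until the job reports busy = false
{ "execute": "block-job-complete", "arguments": { "device": "drive-virtio0" } }
    -> on "... cannot be completed", re-check query-block-jobs and retry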
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
block-job-cancel is async, so we need to check that the job has really finished
before trying to free the volume
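Schematically (QMP sketch; the device name is illustrative):
{ "execute": "block-job-cancel", "arguments": { "device": "drive-virtio0" } }
then poll query-block-jobs until the job for that device is no longer listed
(or wait for the BLOCK_JOB_CANCELLED event) before freeing the volume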
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
new config option:
iothread: 1|0
This enables iothread/dataplane support, to improve I/O performance on fast storage.
Block jobs don't work with it yet; support is planned for qemu 2.2,
so it's better not to expose this option in the GUI yet.
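For a virtio disk this translates roughly to the following (sketch; the iothread object id
and drive name are illustrative):
-object iothread,id=iothread0
-device virtio-blk-pci,drive=drive-virtio0,iothread=iothread0,...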
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
the machine option is written to the snapshot config (ok), but also to the running config (bad)
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
we should push to the $devices array instead of the $cmd array,
because pci bridges need to be created before the spice devices
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
a multifunction device should be defined without the .function suffix
hostpci0: 00:00
example
-------
if 00:00.0, 00:00.1 and 00:00.2 exist,
then we generate the multifunction devices
-device (pci-assign|vfio-pci),host=00:00.0,id=hostpci0.0,bus=...,addr=0x0.0,multifunction=on
-device (pci-assign|vfio-pci),host=00:00.1,id=hostpci0.1,bus=...,addr=0x0.1
-device (pci-assign|vfio-pci),host=00:00.2,id=hostpci0.2,bus=...,addr=0x0.2
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
hostpci0: .....,x-vga=on,pcie=1
x-vga requires kernel 3.10 with vfio-vga support enabled
if x-vga=on, we force the vfio-pci device
pcie=1 chooses the PCI Express bus (needs the q35 machine model)
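The generated device line then looks roughly like this (host address illustrative):
-device vfio-pci,host=01:00.0,id=hostpci0,bus=...,addr=0x0,x-vga=on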
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
q35 uses a pcie.0 root by default, so currently we can't start the q35 machine model.
We need to add 3 pci-bridges, pci.0, pci.1 and pci.2, to handle our devices.
pcie.0 does not support hotplug, so the pci-bridges are defined at startup.
I use a pve-q35.cfg (mostly the same as q35-chipset.cfg from the qemu docs).
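A sketch of the relevant bridge section of such a config (-readconfig syntax; the exact
addresses here are illustrative):
[device "pcidmi"]
  driver = "i82801b11-bridge"
  bus = "pcie.0"
  addr = "1e.0"
[device "pci.0"]
  driver = "pci-bridge"
  bus = "pcidmi"
  addr = "1.0"
  chassis_nr = "1"
(pci.1 and pci.2 are defined the same way on the following addresses)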
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
This adds a new option queues=(\d+) to the net interfaces.
It allows using more than 1 cpu for the network stream, which can improve network bandwidth
when the vhost-net cpu is the bottleneck.
http://www.linux-kvm.org/page/Multiqueue#Enable_MQ_feature
-netdev tap,vhost=on,queues=N -device virtio-net-pci,mq=on,vectors=2N+2
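On the PVE side this would be set in the VM config roughly like (MAC address and bridge
illustrative):
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=4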
host requirement
----------------
this requires host kernel >= 3.8 (or qemu dies at start)
linux guest requirement
-----------------------
kernel >= 3.8
multiqueue must be enabled manually inside the guest (see below)
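Inside the Linux guest, multiqueue can be enabled with ethtool, e.g. (interface name and
queue count illustrative):
ethtool -L eth0 combined 4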
windows guest requirement
-------------------------
recent virtio-net driver
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
We simply add the -iscsi option if we have an initiator name. So we
never add this option multiple times, and it works with hotplug
in case someone plugs an 'iscsi:' drive later.
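The generated option then looks roughly like this (initiator name illustrative):
-iscsi initiator-name=iqn.1993-08.org.debian:01:123456789ab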