we have a few problems with hotplug at the moment:
qemu may add usb hubs when adding usb devices, but fails to remove them
again when the usb device is removed (this is a qemu bug)
also, when starting a guest with a usb device, we add ehci and uhci
controllers, which we cannot hot-unplug
with those controllers present, it is impossible to live migrate the guest
to another host, meaning that even if you remove all usb devices,
the migration fails
so we deactivate usb hotplugging for now
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this patch introduces working usb hotplugging
you can now add a usb device while a vm is running
this does not work with spice at the moment, only
with usb passthrough
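for illustration only (the vmid and the vendor/product id below are
made up), hotplugging a passed-through device while the vm is running
could look like:

    qm set 100 -usb0 host=046d:c52b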
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since usb devices do not have their own
"query" command in qmp, we have to use
qom-list /machine/peripheral
which essentially returns a list of the vm's peripheral devices,
from which we then pick out only the usb devices
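as a rough sketch, the qmp exchange could look like this (the device
name and type depend on the actual configuration, output shortened):

    -> { "execute": "qom-list", "arguments": { "path": "/machine/peripheral" } }
    <- { "return": [ { "name": "usb0", "type": "child<usb-host>" } ] }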
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
vm configuration
----------------
hugepages: (any|2|1024)
any: we try to allocate 1GB hugepages if possible, and fall back to 2MB hugepages otherwise
2: we want to use 2MB hugepages
1024: we want to use 1GB hugepages (memory needs to be a multiple of 1GB in this case)
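for example, a vm config using 1GB hugepages could then contain (the
memory size is only an example, it just needs to be a multiple of 1GB):

    memory: 4096
    hugepages: 1024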
optional host configuration for 1GB hugepages
---------------------------------------------
1GB hugepages can be allocated at boot if the user wants it.
hugepages need to be physically contiguous, so it is sometimes not possible to reserve them on the fly
/etc/default/grub : GRUB_CMDLINE_LINUX_DEFAULT="quiet hugepagesz=1G hugepages=x"
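after editing /etc/default/grub, run update-grub and reboot; the number
of reserved 1GB hugepages can then be checked with:

    cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages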
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
foreach_volid recurses over snapshots as well, resulting in
lots of repeated checks (especially for VMs with lots of
snapshots and disks).
a potential vmstate volume must be checked explicitly,
because foreach_drive does not care about those.
we needed root@pam rights to remove an unused disk
from a vm (instead of the correct Storage rights)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
add them by default for qemu 2.6
(support is already present in qemu 2.5, but we don't want to break live migration for currently running vms)
vpindex && runtime need host kernel 4.4
These 3 enlightenments are needed by windows to use vmbus
http://searchwindowsserver.techtarget.com/definition/Microsoft-Virtual-Machine-Bus-VMBus
details:
- When Hyper-V "vpindex" is on, the guest can use the MSR HV_X64_MSR_VP_INDEX
to get the virtual processor ID.
- The Hyper-V "runtime" enlightenment allows the guest to use the MSR
HV_X64_MSR_VP_RUNTIME to get the time the virtual processor consumes
running guest code, as well as the time the hypervisor spends running
code on behalf of that guest.
- Hyper-V "reset" allows the guest to reset the VM.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Restore previous behaviour and do not request a forward tunnel on
insecure migrations.
For all kinds of migration this has no direct impact, they all
worked, but requesting one port too many from a limited pool is still
not ideal, as is keeping a tunnel open that is not needed.
This is a slight regression introduced by commit 1c9d54b.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Output all errors - if any - and add some log output about which qmp
commands we execute with which parameters; this may be helpful when
debugging or analyzing a user's problem.
Also check if the queried status is defined, as on an error it may
not be.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
On error, let phase2_cleanup close the tunnel, as it first stops the VM
on the destination that is waiting for the incoming migration, to be safe.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We cannot guarantee when the SSH forward tunnel really becomes
ready. The check with the mtunnel API call did not help with this
problem, as it only checked that the SSH connection itself works and
that the destination node has quorum; the forwarded tunnel itself
was not checked.
The forward tunnel is a different channel in the SSH connection,
independent of the SSH `qm mtunnel` channel, so the fact that the
latter works does not guarantee that our migration tunnel is up and
ready.
When the node(s) were under load, or when we did parallel
migrations (migrateall), the migrate command was often started
before a tunnel was open and ready to receive data. This led to
an immediate abort of the migration and is the main reason why
parallel migrations often leave two thirds or more of the VMs on the
source node.
The issue was tracked down to SSH after debugging the QEMU
process; enabling debug logging showed that the tunnel often became
available and ready too late, or not at all.
Fixing the TCP forward tunnel is quirky and not straightforward; the
only possibility SSH offers is to use -N (no command),
-f (background) and -o "ExitOnForwardFailure=yes", so that it
waits in the foreground until the tunnel is ready and only then
backgrounds itself. This is not a nice fit for our special
use case and our code base.
Waiting for the local port to become open and ready (through
/proc/net/tcp[6]) as a proof of concept is not enough: even if the
port is in the listening state and should theoretically accept
connections, this still failed often, as the tunnel was not yet fully
ready.
Furthermore, another problem would remain open if we tried to patch the
SSH forward method we currently use - a problem which we solve for free
with the approach of this patch - namely that the method used to get an
available port (next_migration_port) has a serious race condition which
could lead to multiple uses of the same port in a parallel migration
(I observed this in my many tests; it happens seldom, but when it does
it is really bad).
So let's now use UNIX sockets, which ssh supports for forwarding since OpenSSH 6.7.
The endpoints are UNIX sockets bound to the VMID - thus no port, so
no race, and also no limitation of available ports (we reserved 50 for
migration).
The endpoints get created in /run/qemu-server/VMID.migrate and, as
current versions of KVM/QEMU can use UNIX sockets just as well as TCP,
we do not have to change much in the interaction with QEMU.
QEMU is started with the migrate_incoming url pointing at the local
destination endpoint and creates the socket file; we then create
a listening socket on the source side and connect it over SSH to the
destination.
Now the migration can be started by issuing the migrate qmp command
with an updated uri.
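As a rough sketch (VMID 100 and the target host name are only examples,
the real commands are generated by our code), the resulting flow looks
like:

    # destination: QEMU waits for the incoming migration on
    #   unix:/run/qemu-server/100.migrate
    # source: forward a local UNIX socket to the destination socket over SSH
    ssh -o ExitOnForwardFailure=yes \
        -L /run/qemu-server/100.migrate:/run/qemu-server/100.migrate root@target
    # source: start the migration over the forwarded socket via QMP
    { "execute": "migrate", "arguments": { "uri": "unix:/run/qemu-server/100.migrate" } }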
This breaks live migration from new to old, but *not* from old to
new, so there is an upgrade path.
If a live migration from new to old must be made (for whatever
reason), use the unsecure_migration setting (man datacenter.conf)
to allow this, although that should only be done in a trusted network.
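For example, in /etc/pve/datacenter.cfg (assuming the option key is
migration_unsecure; check the man page for the exact name):

    migration_unsecure: 1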
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>