we need to handle OVS setups differently, so for now just ignore it
there (behavior as it was in 7.2)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
remote migration uses a websocket connection to a task worker running on
the target node instead of commands via SSH to control the migration.
this websocket tunnel is started earlier than the SSH tunnel, and allows
adding UNIX-socket forwarding over additional websocket connections
on-demand.
the main differences to regular intra-cluster migration are:
- source VM config and disks are only removed upon request via --delete
- shared storages are treated like local storages, since we can't
  assume they are shared across clusters (with potential to extend this
by marking storages as shared)
- NBD migrated disks are explicitly pre-allocated on the target node via
tunnel command before starting the target VM instance
- in addition to storages, network bridges and the VMID itself are
  transformed via a user-defined mapping (see the sketch after this list)
- all commands and migration data streams are sent via a WS tunnel proxy
- pending changes and snapshots are discarded on the target side (for
the time being)
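To illustrate the mapping, a sketch only (helper and variable names
here are made up, not the actual implementation):

    # fall back to the source-side value when no mapping entry exists
    sub map_id {
        my ($mapping, $id) = @_;
        return $mapping->{$id} // $id;
    }

    my $target_bridge  = map_id($bridge_map, $source_bridge);   # e.g. vmbr0 -> vmbr1
    my $target_storage = map_id($storage_map, $source_storage); # e.g. local-lvm -> other-lvm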
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
no semantic changes intended, except for:
- no longer passing the main migration UNIX socket to SSH twice for
forwarding
- dropping the 'unix:' prefix in start_remote_tunnel's timeout error message
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
in case of remote migration, we use the `update_vm_api` helper for
checking permissions on the incoming config. this would also cause an
incoming cloud-init image to be overwritten, since the VM is not running
yet at this point.
provide a parameter which can be set by an incoming *remote* migration
to avoid having inconsistent cloud init images on the source and target
side.
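a rough sketch of the intended guard (parameter and helper names are
assumptions, the actual code may differ):

    # only (re-)generate the cloud-init image when not called from an
    # incoming remote migration
    if (!$param->{'skip-cloud-init'}) {
        PVE::QemuServer::Cloudinit::generate_cloudinitconfig($conf, $vmid);
    }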
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
cloudinit generation needs to see the cloudinit drive, so we
need to pass a config with it already updated
Fixes: 4b785da1a9 ("delay cloudinit generation in hotplug")
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
It performs schema validation (and normalization).
We only ever write values into it which came from an
already validated config, and we also add an additional
"added" key which is not covered by the schema, so this
would fail.
Simply skip it.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
introducing an 'added' value in the cloudinit section for
values which were not present when the cloudinit image
was generated
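for illustration, the special section could then look like this
(values made up):

    [special:cloudinit]
    added: net0
    name: myvm
    ide2: local:100/vm-100-cloudinit.qcow2,media=cdrom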
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Hotplugging generated a cloudinit image based on old values
in order to attach the device, and later updated it again, but
the update was only done if cloudinit hotplug was enabled.
This is weird, let's not.
Also introduce 'apply_cloudinit_config', which also writes the
config; as it turns out, that is currently the only thing we
actually need anyway.
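a minimal sketch of the new helper, assuming this signature (the
actual implementation may differ):

    sub apply_cloudinit_config {
        my ($vmid, $conf) = @_;
        # generate the image from the current config ...
        PVE::QemuServer::Cloudinit::generate_cloudinitconfig($conf, $vmid);
        # ... and persist the config in the same step
        PVE::QemuConfig->write_config($vmid, $conf);
    }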
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Changing the read-only status of a disk is not possible through QMP, so
it needs to be excluded from the hotpluggable values in order to notify
the user.
Signed-off-by: Leo Nunner <l.nunner@proxmox.com>
max supported queues tx + rx = 256, so 128 for combined
https://lists.gnu.org/archive/html/qemu-devel/2015-03/msg03917.html
But from the above link it also seems that x86 only supports 80 pairs in
practice, so for now "only" quadruple the limit to 64 and see if we
get user feedback requesting more.
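a sketch of what the schema change amounts to (property layout
abbreviated):

    queues => {
        type => 'integer',
        minimum => 0,
        maximum => 64, # raised from 16, see rationale above
        optional => 1,
    },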
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[ T: reduce from 128 to 64 and add short rationale for that ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
there's no guarantee that we're locked here, and it also produces
unnecessary extra IO in most cases.
While at it, also avoid adding a special:cloudinit section on start to
*every* VM, which caused another bug to trigger (see the previous
commit) and is just odd for users that aren't using cloudinit.
Note in two call sites that we may indeed need to write out the config
there on the caller side.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we now always write out a new cloudinit special section on start (to
be fixed), independent of whether any cloudinit drive/config is
configured, and thus always run into that section after a VM started
with the new qemu-server installed, which in turn always set the
description to undef.
Fixes: 95a5135 ("cloudinit: add cloudinit section for current generated config.")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
that function also caches the value, and it was recently changed to
be importable, so we can just import it and drop this once a new enough
pve-common is available.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This reduces packet drops under high pps and is also needed for dpdk.
Red Hat has already used it by default in RHV and its OpenStack
platform since 2019.
I have been using it in production for 6 months and have not seen any
performance regression.
Fixes (which ask for a custom option, but setting it by default seems
fine to me):
https://bugzilla.proxmox.com/show_bug.cgi?id=1546
https://bugzilla.proxmox.com/show_bug.cgi?id=2349
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
at the end of a live migration, we need to remove the old MAC entries
on the source host (the VM is not yet stopped there) before resuming
the VM on the target host
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[T: resolve conflicts and rework on apply ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In theory we can have a config with netX records that do not specify
a `macaddr` property; we just auto-generate one in config2cmd for
startup, transiently, but don't save it explicitly back to the
config. So while we could parse /proc/$pid/cmdline or try to get
the info from QMP (not fully straightforward), it seems rather a
hassle, especially if one has in mind that this cannot happen via the
API FWICT: there, a "deletion" *saves* a newly auto-generated value
out to the config, and the same goes for cloning a VM or restoring a
backup.
So, in basically all reasonable cases we have the `macaddr` available,
but if we don't, it makes no sense to add an FDB entry for a MAC
*newly* generated by the parse_net call, as the VM won't use it (well,
at least if one doesn't get "lucky" and it randomly re-generates the
same one as on startup). So allow telling parse_net to skip
auto-generating MACs, and use that in the add-fdb-entries helper.
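a sketch of the intended use in the helper (flag and helper names are
illustrative):

    # second argument tells parse_net to not auto-generate a missing macaddr
    my $net = parse_net($conf->{$opt}, 1);
    # without a real MAC there is nothing sensible to put into the FDB
    return if !defined($net->{macaddr});
    add_fdb_entry($vmid, $iface, $net->{macaddr});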
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
On plain VM start (no live migration), we can simply add the MAC
address to the FDB. In case of a live migration, we add the MAC address
just before the resume.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
same as with the extended support for more usb devices, allow
hotplugging for guests that can use the qemu-xhci controller, which
requires a machine type >= 7.1 and an ostype of l26 or windows > 7
if no usb device was passed through on startup, dynamically add
the xhci controller (and remove it when the last usb device is
unplugged) so that live migration is still possible
much of the usb hotplug code was already there, but it still needed
a few adaptations; for example, we have to add a chardev when adding
a spice redir port (it gets automatically removed when the
usb-redir device gets removed)
since the spice devices use the id 'usbredirdevX' instead of 'usbX', we
have to map between the two manually (see the sketch below)
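a rough sketch of the hotplug sequence for a spice redir port (QMP via
mon_cmd; the id scheme follows the description above, the rest is
illustrative):

    mon_cmd($vmid, 'chardev-add',
        id => "usbredirchardev$i",
        backend => { type => 'spicevmc', data => { type => 'usbredir' } });
    mon_cmd($vmid, 'device_add',
        driver => 'usb-redir',
        chardev => "usbredirchardev$i",
        id => "usbredirdev$i");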
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
for machine versions >= 7.1 and ostype linux or windows > 7, we use the
qemu-xhci controller where we have up to 14 usable ports, so make them
available to the user
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
going by reports in the forum (e.g. [0]) and semi-official qemu
information [1], we should prefer qemu-xhci over nec-usb-xhci.
for compatibility purposes, we guard that behind the machine version,
so that guests with a fixed machine version don't suddenly have a
different usb controller after a reboot (which could potentially break
some hardcoded guest configs)
0: https://forum.proxmox.com/threads/proxmox-usb-connect-disconnect-loop.117063/
1: https://www.kraxel.org/blog/2018/08/qemu-usb-tips/
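a sketch of the version guard (usage illustrative, helper name as in
qemu-server's min_version):

    my $xhci = min_version($machine_version, 7, 1) ? 'qemu-xhci' : 'nec-usb-xhci';
    push @$devices, '-device', "$xhci,id=xhci";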
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
just outside of context, we already save the result from
machine_type_is_q35 into the $q35 variable, but never use it.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reuse PVE::CpuSet to validate the cpuset formatting.
Add a new qemu-server property called 'affinity' to store the cpuset.
Prepend a taskset command to kvm if 'affinity' is set.
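a sketch of the start-up integration (the exact PVE::CpuSet validation
call is an assumption):

    if (defined(my $affinity = $conf->{affinity})) {
        # let PVE::CpuSet validate the formatting, e.g. '0-3,7'; dies on error
        PVE::CpuSet->new_from_string($affinity);
        unshift @$cmd, 'taskset', '--cpu-list', '--all-tasks', $affinity;
    }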
Signed-off-by: Daniel Bowder <daniel@bowdernet.com>
if the preparing of PCI devices or the start of the VM fails, we need
to clean up the PCI devices (reservations *and* mdevs), or else
leftovers might remain that must be removed manually.
to also include mdevs now, refactor the cleanup code from
'vm_stop_cleanup' into its own function, and call that instead of
only 'remove_pci_reservation'
this also simplifies the code, such that it now removes all PCI ids
reserved for that VMID, since we cannot have multiple VMs with the
same VMID anyway
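a sketch of the refactored helper (function name assumed from the
description above):

    sub cleanup_pci_devices {
        my ($vmid, $conf) = @_;
        # remove any mediated devices created for this VM here ...
        # ... then drop *all* PCI ids reserved for this VMID
        PVE::QemuServer::PCI::remove_pci_reservation($vmid);
    }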
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Previously, only a plaintext line in the task log showed something was off.
Now, the GUI will show it as a warning.
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
This allows regenerating the config drive if pending values exist
when we change VM options.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
This allows regenerating the config drive with a single API call.
It also avoids having to delete the drive first and recreate it again.
As it's a read-only drive, we can simply live-update it
and eject/replace it via the qemu monitor (see the sketch below).
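a sketch of the live update via the monitor (drive id and path are
illustrative, the QMP commands are standard QEMU ones):

    # force-eject the old medium and insert the freshly generated one
    mon_cmd($vmid, 'eject', id => $drive_id, force => JSON::true);
    mon_cmd($vmid, 'blockdev-change-medium',
        id => $drive_id, filename => $iso_path, format => 'raw');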
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Instead of using VM pending options for the pending cloudinit-generated
config, write the currently generated cloudinit config into a new
[special:cloudinit] SECTION.
Currently, some options like the VM name or a NIC's MAC address can be
hotplugged, so otherwise there is no way to know whether the cloud-init
disk is already up to date.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
while making it take the value directly instead of the config.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since kernel 5.15, there is an issue with io_uring when used in
combination with CIFS [0]. Unfortunately, the kernel developers did
not suggest any way to resolve the issue and didn't comment on my
proposed one. So for now, just disable io_uring when the storage is
CIFS, as is done for other storage types that had problematic
interactions.
It is rather easy to reproduce when writing large amounts of data
within the VM. I used
dd if=/dev/urandom of=file bs=1M count=1000
to reproduce it consistently, but your mileage may vary.
Some forum reports about users running into the issue [1][2][3].
[0]: https://www.spinics.net/lists/linux-cifs/msg26734.html
[1]: https://forum.proxmox.com/threads/109848/
[2]: https://forum.proxmox.com/threads/110464/
[3]: https://forum.proxmox.com/threads/111382/
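a sketch of the resulting check (variable names illustrative,
surrounding logic simplified):

    # CIFS joins the storage types where io_uring is known to misbehave
    my $io_uring_ok = !($scfg && $scfg->{type} eq 'cifs');
    push @$drive_opts, $io_uring_ok ? 'aio=io_uring' : 'aio=threads';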
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>