Required because there's only a single EFI image for ARM, and the
2m/4m difference doesn't seem to apply there.
Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
[ T: move description to format and reword subject ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
AFAICT, previously, errors from swtpm would not show up in any logs,
because they were just printed to the stderr of the daemonized
invocation here.
The 'truncate' option is not used, so that the log is not immediately
lost when a new instance is started. This increases the chance that
the relevant errors are still present when requesting the log from a
user.
Log level 1 contains the most relevant errors and seems to be quiet
for working-as-expected invocations. Log level 2 already includes
logging full TPM commands, some of which are 1024 bytes long. Thus,
log level 1 was chosen.
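As a rough sketch of what the resulting invocation could look like
(the log path and surrounding command assembly here are assumptions,
not the actual code):

    # sketch only: append a log option to an swtpm command array; the
    # log path chosen here is an assumption
    my $vmid = 100;
    my $log_path = "/run/qemu-server/$vmid-swtpm.log";
    my @swtpm_cmd = ('swtpm', 'socket', '--daemon');
    # level=1: relevant errors only; no 'truncate', so errors from the
    # previous instance survive a restart
    push @swtpm_cmd, '--log', "file=$log_path,level=1";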
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The guest will be running, so it's misleading to fail the start task
here. Also ensures that we clean up the hibernation state upon resume
even if there is an error here, which did not happen previously[0].
[0]: https://forum.proxmox.com/threads/123159/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The target of the drive-mirror operation is opened with (essentially)
the same flags as the source in QEMU, in particular whether io_uring
should be used is inherited.
But io_uring currently causes problems in combination with certain
storage types, sometimes even leading to crashes (LVM with Linux 6.1).
Just disallow live cloning of drives when the source uses io_uring and
the target storage is not ready for it. There is one exception, namely
when source and target storage are the same. In that case, just assume
it will keep working for the target.
Migration does not seem to be affected, because there, the target VM
opens the images with the checked aio setting and then NBD exports of
those are used as the targets for mirroring.
The default determined for the source might not be what's actually in
use, because after a drive-mirror to a storage with a different
default, the drive will still use the default from the old storage.
Unfortunately, aio doesn't seem to be part of the 'query-block' QMP
command's result, so just tolerate this edge case.
The check can be removed if either
1. drive-mirror learns to open the target with a different aio setting
or more ideally
2. there are no more bad storages for io_uring.
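A sketch of the intended check, with hypothetical arguments standing
in for what the real code derives from the storage config and QEMU:

    # sketch only; argument names here are made up for illustration
    sub assert_io_uring_mirror_ok {
        my ($source_aio, $source_storage, $target_storage, $target_ready) = @_;
        return if ($source_aio // '') ne 'io_uring';
        return if $source_storage eq $target_storage; # assume it keeps working
        die "drive-mirror: target storage is not ready for io_uring\n"
            if !$target_ready;
    }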
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Previously, changing aio would be applied to the configuration, but
the drive would still be using the old setting.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
In the web UI, this was fixed years ago by pve-manager commit c11c4a40
("fix #1631: change units to binary prefix").
Quickly checked with the 'query-memory-size-summary' QMP command that
this is actually the case.
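For reference, a sketch of such a check via the monitor helper
(assuming the usual mon_cmd wrapper and a running VM 100):

    # sketch: query the guest memory size (in bytes) via QMP
    use PVE::QemuServer::Monitor qw(mon_cmd);
    my $vmid = 100;
    my $res = mon_cmd($vmid, 'query-memory-size-summary');
    print "base-memory: $res->{'base-memory'} bytes\n";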
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The error messages for missing OVMF_CODE and OVMF_VARS files were
inconsistent, and the error for a missing base var file did not tell
you the expected path.
Signed-off-by: Noel Ullreich <n.ullreich@proxmox.com>
The 'nbd-server-add' QMP command has been deprecated since QEMU 5.2 in
favor of a more general 'block-export-add'.
When using 'nbd-server-add', QEMU internally converts the parameters
and calls blk_exp_add() which is also used by 'block-export-add'. It
does one more thing, namely calling nbd_export_set_on_eject_blk() to
auto-remove the export from the server when the backing drive goes
away. But that behavior is not needed in our case; stopping the NBD
server removes the exports anyway.
It was checked with a debugger that the parameters to blk_exp_add()
are still the same after this change, except that the block node names
are auto-generated and thus not consistent across invocations.
The alternative to using 'query-block' would be specifying a
predictable 'node-name' for our '-drive' commandline. It's not that
difficult for this use case, but in general one needs to be careful
(e.g. it can't be specified for an empty CD drive, but would need to
be set when inserting a CD later). Querying the actual 'node-name'
seemed a bit more future-proof.
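Roughly, the new flow then looks like this (a sketch; the drive id,
export options and helper usage are illustrative):

    # sketch: look up the drive's auto-generated node name, then export it
    use PVE::QemuServer::Monitor qw(mon_cmd);
    use JSON;
    my ($vmid, $drive_id) = (100, 'drive-scsi0');
    my $blocks = mon_cmd($vmid, 'query-block');
    my ($node_name) = map { $_->{inserted}->{'node-name'} }
        grep { $_->{device} eq $drive_id } @$blocks;
    mon_cmd($vmid, 'block-export-add',
        id => $drive_id, type => 'nbd', 'node-name' => $node_name,
        writable => JSON::true);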
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
when a vm is configured to use a physical cd-rom drive but no such
drive is present, a cryptic "uninitialized value" error is thrown.
this is due to `$path` being undefined in
`sub print_drive_commandline_full`. warn that no cd-rom drive is
available instead.
note that the error was cosmetic, as the vm would start just fine.
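the gist of the change, as a sketch (the variable setup is only for
illustration, not the actual code):

    # sketch only, not the actual fix in print_drive_commandline_full
    my ($path, $drive) = (undef, { media => 'cdrom' });
    if (!defined($path) && ($drive->{media} // '') eq 'cdrom') {
        warn "no physical CD-ROM drive available\n";
        $path = ''; # avoid the "uninitialized value" warning later on
    }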
forum thread: https://forum.proxmox.com/threads/119592/
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
Since we can now differentiate between 'suspended' and 'suspending',
it is possible to ignore the 'suspended' lock when destroying a VM.
It shouldn't matter whether the VM is locked because of hibernation
when you want to remove it. Therefore we can safely ignore the lock.
When $d->{'pci_bridge'}->{devices} is undef, @-dereferencing it will
die with:
> Can't use an undefined value as an ARRAY reference
This can happen (at least) when the VM is in 'prelaunch' state. The
QAPI definition for '@PciBridgeInfo' also declares the 'devices'
member as optional.
Before commit f721624b ("collect device list for nested pci-bridges"),
there was no issue, because $d->{'pci_bridge'}->{devices} was only
used in a foreach, so it was auto-vivified if undef.
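The straightforward guard, as a sketch:

    # sketch: fall back to an empty array ref before dereferencing
    my $d = { 'pci_bridge' => {} };
    my $devices = $d->{'pci_bridge'}->{devices} // [];
    for my $pci_dev (@$devices) {
        # collect nested devices here
    }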
Fixes: f721624b ("collect device list for nested pci-bridges")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
We can now do a few things that would not really have been possible,
or would at least have hurt readability, while this was still mostly
inline in config2command; it also shaves off quite a few lines of
code.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
in preparation for reworking the new separate method for OVMF cmd
assembly, do this in a separate, very targeted commit to make it
clearer that the next rework commit doesn't mess with our tests at
all.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the fix for the recently introduced requirement of loading the VM config while
migrating was incomplete, since the vmlist node value could already be out of
date by the time load_config is called.
extend the fallback behaviour even further, by doing the following sequence:
- try regular load_config (likely case, rename already fully processed)
- if it fails, get the node from the vmlist, and load_config using that
- if that fails, invalidate the PVE::Cluster cache and retry regular load_config
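as a sketch, the fallback chain could look roughly like this (error
handling trimmed, helper usage based on my reading of the modules and
possibly not matching the actual code):

    # sketch of the retry sequence, not the exact code
    use PVE::QemuConfig;
    use PVE::Cluster;
    my $vmid = 100;
    my $conf = eval { PVE::QemuConfig->load_config($vmid) };
    if (!$conf) {
        my $node = PVE::Cluster::get_vmlist()->{ids}->{$vmid}->{node};
        $conf = eval { PVE::QemuConfig->load_config($vmid, $node) } if $node;
    }
    if (!$conf) {
        PVE::Cluster::cfs_update(); # refresh the PVE::Cluster cache
        $conf = PVE::QemuConfig->load_config($vmid);
    }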
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
was only explained in the git history and in vm_stop; add comments in
other relevant places to avoid future breakage.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
it's not deterministic whether the rename/move of the VM config
triggered on the source side of a migration is already visible on the
target side when vm_resume is executed. check the vmlist for the node
where the config is currently located if $nocheck is set - the config
is now needed to add the forwarding DB entries to the bridge.
this fixes an issue on busier or slower clusters, where pmxcfs hasn't
yet processed the rename, and resuming would fail with an error about
the config not existing.
Reported-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
we need to handle OVS setups differently, so for now just ignore it
there (behavior as it was in 7.2)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
remote migration uses a websocket connection to a task worker running on
the target node instead of commands via SSH to control the migration.
this websocket tunnel is started earlier than the SSH tunnel, and allows
adding UNIX-socket forwarding over additional websocket connections
on-demand.
the main differences to regular intra-cluster migration are:
- source VM config and disks are only removed upon request via --delete
- shared storages are treated like local storages, since we can't
assume they are shared across clusters (with potential to extend this
by marking storages as shared)
- NBD migrated disks are explicitly pre-allocated on the target node via
tunnel command before starting the target VM instance
- in addition to storages, network bridges and the VMID itself are
translated via a user-defined mapping (see the sketch after this list)
- all commands and migration data streams are sent via a WS tunnel proxy
- pending changes and snapshots are discarded on the target side (for
the time being)
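for illustration only, the kind of user-defined mapping meant above
could look roughly like this (the actual parameter format of the API
may differ):

    # purely illustrative example of such a mapping
    my $mapping = {
        vmid => { 100 => 1100 },
        storage => { 'local-lvm' => 'target-lvm' },
        bridge => { vmbr0 => 'vmbr1' },
    };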
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
no semantic changes intended, except for:
- no longer passing the main migration UNIX socket to SSH twice for
forwarding
- dropping the 'unix:' prefix in start_remote_tunnel's timeout error message
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
in case of remote migration, we use the `update_vm_api` helper for
checking permissions on the incoming config. this would also cause an
incoming cloud-init image to be overwritten, since the VM is not running
yet at this point.
provide a parameter which can be set by an incoming *remote* migration
to avoid having inconsistent cloud-init images on the source and target
side.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
cloudinit generation needs to see the cloudinit drive, so we need to
pass a config with it already updated
Fixes: 4b785da1a9 ("delay cloudinit generation in hotplug")
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
It performs schema validation (and normalization).
We only ever write values into it which came from an
already validated config, and we also add an additional
"added" key which is not covered by the schema, so this
would fail.
Simply skip it.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
introducing an 'added' value in the cloudinit section for values
which were not yet present when the cloudinit image was generated
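a sketch of the idea (how the keys are actually encoded is an
assumption):

    # sketch only; the exact encoding of the keys is an assumption
    my $conf = { cloudinit => {} };
    my @not_yet_generated = qw(sshkeys ipconfig0);
    $conf->{cloudinit}->{added} = join(',', @not_yet_generated);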
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Hotplugging generated a cloudinit image based on old values in order
to attach the device and later update it again, but the update was
only done if cloudinit hotplug was enabled.
This is weird, let's not do that.
Also introduce 'apply_cloudinit_config', which also writes the
config and which, as it turns out, is currently the only thing we
actually need anyway.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Changing the read-only status of a disk is not possible through QMP,
so it needs to be exempt from the hotpluggable values in order to
notify the user.
Signed-off-by: Leo Nunner <l.nunner@proxmox.com>
max supported queues tx + rx = 256, so 128 for combined
https://lists.gnu.org/archive/html/qemu-devel/2015-03/msg03917.html
But from the above link it also seems that x86 only supports 80 queue
pairs in practice, so for now "only" quadruple the limit to 64 and see
if we get user feedback requesting more.
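As an aside, a sketch of what such a multiqueue setup implies for the
device (the command-line fragments are examples, not qemu-server's
actual assembly):

    # sketch: N queue pairs need 2*N+2 MSI-X vectors (tx+rx per pair,
    # plus config and control)
    my $queues = 64;
    my $vectors = $queues * 2 + 2;
    my @devices;
    push @devices, '-netdev', "tap,id=net0,queues=$queues";
    push @devices, '-device', "virtio-net-pci,mq=on,vectors=$vectors,netdev=net0";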
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[ T: reduce from 128 to 64 and add short rationale for that ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
there's no guarantee that we're locked here and it also produces
unnecessary extra IO in most cases.
While at it, also avoid adding a special:cloudinit section on start
to *every* VM, which caused another bug to trigger (see prev. commit)
and is just odd for users that aren't using cloudinit.
Note at two call sites that we may indeed need to write the config out
there on the caller side.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we now always write out a new cloudinit special section on start (to
be fixed), independent of whether any cloudinit drive/config is
configured, and thus always run into that section after a VM was
started with the new qemu-server installed, which in turn always set
the description to undef.
Fixes: 95a5135 ("cloudinit: add cloudinit section for current generated config.")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>