Generalizes fd95d780 ("migrate: send updated TPM state volid to target
node") to also handle other offline migrated disks appearing in the
VM config, which currently should only be cloud-init.
This breaks migration new -> old under similar (edge-case) conditions
as fd95d780 did.
Keep sending the 'tpmstate0' STDIN parameter to avoid breaking new ->
old in the scenario fd95d780 fixed.
Keep parsing the vm_start 'tpmstate0' STDIN parameter to avoid
breaking old -> new, and to be able to keep sending it.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
To be consistent with PBS's implementation of multi-line comments,
remove "\s*" here too. Since the regex isn't lazy, ".*" already matches
everything "\s*" would. (Note that the newline occurs after the "$".)
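A tiny Perl demonstration of why the "\s*" is redundant (the regex
shape is assumed for illustration, not copied from the actual code):

    my $line = "#comment with trailing spaces   \n";
    $line =~ m/^#(.*)$/;     # $1 is "comment with trailing spaces   "
    $line =~ m/^#(.*)\s*$/;  # $1 is identical; "\s*" can only match the
                             # empty string, because the greedy ".*"
                             # already ate the trailing blanks and "$"
                             # matches right before the final newline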
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
by re-using the same hash that's used when allocating/activating the
disks in the helpers doing the opposite.
This is also in preparation for allowing certain disks to be skipped
upon restore.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
It's only available since QEMU 6.2, and doing a check here rather than
bumping the package dependency allows for easy downgrades.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
via the special syntax <storeid>:<size>.
Not worth it by itself, but this is anticipating a new 'import-from'
parameter, which is only used upon import/allocation and shouldn't be
part of the schema for the config or other API endpoints.
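For illustration, a minimal Perl sketch (a hypothetical helper, not the
actual qemu-server parser) of how such a value can be told apart from a
regular volume ID:

    # "<storeid>:<size>": the part after the colon is a plain number
    # (size in GiB), so this requests a new volume, unlike an existing
    # volume ID such as "local-lvm:vm-100-disk-0".
    sub parse_new_disk_request {
        my ($value) = @_;
        return undef if $value !~ m/^([^:\s]+):(\d+(?:\.\d+)?)$/;
        return { storeid => $1, size_gb => $2 };
    }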
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Co-developed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
[split into its own patch + minor improvements/style fixes]
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
[renamed API handler, since it's not an index]
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
because when the VM ID of target and source are the same,
qemu_drive_mirror_monitor() switches the QEMU device node over to the
new backing image. The planned import-from functionality makes it
possible to run into this, albeit for a somewhat unusual use case.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Necessary to import from an existing storage using block-device
volumes like ZFS.
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
[split into its own patch]
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
For disk import, it should be based on the disk properties that are
passed in rather than on those of a possibly pre-existing disk in the
config.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
and also when source and target drive name are different. In those
cases, it is done via qemu-img convert/dd.
This is in preparation for allowing imports from existing PVE-managed
disks.
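Roughly, such an offline copy boils down to the following sketch (the
formats and paths are placeholders, not taken from the actual code):

    use PVE::Tools;

    # placeholders for illustration only
    my ($src_format, $dst_format) = ('raw', 'qcow2');
    my ($src_path, $dst_path) = ('/path/to/source.raw', '/path/to/target.qcow2');

    PVE::Tools::run_command([
        'qemu-img', 'convert', '-p',  # -p: print progress
        '-f', $src_format,            # source format
        '-O', $dst_format,            # output format
        $src_path, $dst_path,
    ]);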
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
It's confusing that the config associated with the destination is
actually a reference to the source config for both existing callers.
Also, disk import will need to base the calculation on the passed-in
drive parameters and not just the current config, so this change is in
preparation for that too.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Avoids the error
    adding drive failed: Duplicate ID 'drive-scsi1' for drive
that could happen when switching over to a new disk (e.g. via qm set),
if unplugging wasn't fast enough.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The refactoring in 36d4bdcb86 missed this. The check is already done
as part of the following check_storage call.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
When restoring a backup and the storage the disks would be created on
doesn't allow 'images' content, the process errors out without cleanup.
This is the same behaviour we currently have when the storage is
disabled.
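As an illustration, an early check along these lines (a sketch only;
the exact helpers and error wording in the patch may differ):

    # $storecfg and $storeid as available in the restore code path
    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
    die "storage '$storeid' does not allow content type 'images'\n"
        if !$scfg->{content}->{images};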
Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
preparation for also clamping on hotplug, and lower the minimum in the
schema so that the full cgroup v2 range can be used.
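The clamping itself could look roughly like this (a hypothetical
helper; the limits are the kernel's documented cpu.shares and
cpu.weight bounds):

    # cgroup v1 cpu.shares accepts 2..262144, cgroup v2 cpu.weight 1..10000
    sub clamp_cpuunits {
        my ($cpuunits, $is_cgroupv2) = @_;
        my ($min, $max) = $is_cgroupv2 ? (1, 10000) : (2, 262144);
        return $min if $cpuunits < $min;
        return $max if $cpuunits > $max;
        return $cpuunits;
    }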
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
when passing a config from one cluster to another, we want to be strict
when parsing - it's better to fail the migration early and upgrade the
target node instead of failing the migration later (when significant
work for transferring disks and/or state has already been done), or not
failing at all but silently losing config settings that the target
doesn't understand.
this might also be helpful in other cases - e.g. when restoring from a
backup.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
since we are going to reuse the same mechanism/code for network bridge
mapping and pve-container.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
While existing callers do not use the parameter after the call,
modifying it is rather unexpected and could quickly lead to bugs.
Also, avoid setting an undef value in the hash; use delete instead.
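For context, the difference in plain Perl (an illustrative snippet, not
taken from the patch):

    my %drive = (format => 'qcow2');
    $drive{format} = undef;  # key stays: exists() is true, keys() lists it
    delete $drive{format};   # key is gone: exists() is false, keys() skips it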
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
We drop properties which we do not understand, and we call
`vmconfig_apply_pending` on stop and before start. So if a user tried
to edit the config or downgraded qemu-server, they might get settings
dropped from the config just by doing a stop/start, which may be a bit
too confusing; the write is also simply unnecessary in that case.
We already have the same skipping logic when starting VMs; this way we
avoid calling 'write_config' when there are no changes present to
commit.
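A minimal sketch of the skip, assuming the pending changes live in the
config's 'pending' section (the surrounding variables are illustrative):

    # only persist the config if there is actually something to commit
    if (scalar(keys %{$conf->{pending} // {}})) {
        PVE::QemuConfig->write_config($vmid, $conf);
    }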
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
The volid may change if local-storage migration is involved; we need
to tell the target node the new one and update the in-memory config
used for starting the target VM accordingly.
Reported here: https://forum.proxmox.com/threads/99906/#post-431345
this possibly breaks migration new -> old iff
- spice is not used (else the explicit ticket wins because it comes
later)
- a local TPM state volume is used
- that local TPM state volume has a different volume id on the target
node (switched storage, volname already taken, ..)
because the target node will then misinterpret the tpmstate0 line as a
spice ticket and set it accordingly. if the old TPM state volume ID does
not exist on the target node, migration will fail. if it exists by
chance, it might work, albeit with a wrong spice ticket (new because of
this patch) and a wrong TPM state volume (pre-existing breakage).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
This patch fixes the erroneous attempt to set up an NBD server for the
replicated TPM state volume: in contrast to the other volumes, the TPM
state is managed by swtpm and isn't available to QEMU for
block-migration/bitmap tracking.
Note that we do migrate the state volume via a storage migration anyway
if necessary.
This code path was only triggered for replicated VMs with TPM.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
else a user cannot use more than one mdev per card per host.
We do not need to reserve them at all, since sysfs will error out on
creation/reuse anyway.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Issue reported in the community forum [0][1]: like for the "serial[n]"
display, we also need to set this option for "none", otherwise we get a
boot loop.
[0]: https://forum.proxmox.com/threads/99508
[1]: https://forum.proxmox.com/threads/97310/post-427129
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>