For example, the build of qemu-server would otherwise fail with:
> unknown file 'mapping/directory.cfg' at /usr/share/perl5/PVE/Cluster.pm
if libpve-cluster-perl is not recent enough, and there would most likely
be runtime issues too.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250408081233.20843-1-f.ebner@proxmox.com
Adds a config file for directories by using a 'map' property string for
each node mapping.
example config:
```
some-dir-id
map node=node1,path=/path/to/share/
map node=node2,path=/different/location/
```
Signed-off-by: Markus Frank <m.frank@proxmox.com>
[TL: adapt config path to directory.cfg like in pve-cluster]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Remote migration already requires elevated privileges and can thus only
be triggered by trusted sources, but an additional safeguard of checking
the image for external references doesn't hurt.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
so that we can decide in qemu-server whether to allow live-migration.
The driver and QEMU must be capable of that, and it's the
admin's responsibility to know and configure it accordingly.
Mark the option as experimental in the description.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
but that lives in the 'global' part of the mapping config, not in a
specific mapping. To check that, add it to the $configured_props from
there.
This requires all call sites to be adapted, otherwise the check will
always fail for devices that are capable of mediated devices.
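A rough sketch of the idea; $configured_props is taken from the text
above, while the mapping config variable name is an illustrative
assumption:
```
# pull the cluster-wide 'mdev' flag from the global section of the
# mapping config into the per-node configured properties (sketch only)
$configured_props->{mdev} = $mapping_config->{mdev} // 0;
```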
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Remote migration via API will be invoked under Perl's '-T' switch to
detect tainted input used in commands. For remote migration, the
bandwidth limit from the remote side would be such tainted input. This
would lead to failure for offline disk migration when the target node's
bandwidth limit is stricter, because that limit is then used when
invoking the 'pvesm export' command:
> command 'set -o pipefail && pvesm export rbd:vm-400-disk-0 \
> raw+size - -with-snapshots 0 | /usr/bin/cstream -t 307232768' \
> failed: Insecure dependency in exec while running with -T switch
Untaint the value to fix the issue. Note that the schema for the
bandwidth limits in datacenter.cfg and storage.cfg allows fractional
values.
Avoid re-using the same variable for both the reply from the remote
(which is a hash) and the actual remote bandwidth limit. This also
makes it possible to use the "assign regex match or die" pattern while
accessing the original value in the error message.
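A minimal sketch of that untainting, with illustrative variable names
(not the actual qemu-server code):
```
my $tainted = $remote_reply->{bwlimit};
# "assign regex match or die"; the pattern also accepts fractional values,
# as allowed by the datacenter.cfg/storage.cfg bandwidth limit schema
my ($bwlimit) = ($tainted // '') =~ m/^(\d+(?:\.\d+)?)$/
    or die "invalid bandwidth limit '$tainted' from remote side\n";
# $bwlimit is now untainted and safe to use for the 'pvesm export' pipeline
```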
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
'job-id' is passed when a backup is started as a job and will be
passed to the notification system as matchable metadata. It
can be considered 'internal'.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Max Carrara <m.carrara@proxmox.com>
by placing all expected properties from the hardware into an 'expected_props'
hash and those from the config into 'configured_props'.
The names make it clearer what's what, and we can easily extend it, even
if the data does not come from the mapping (like we'll do with 'mdev')
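Illustrative shape of the two hashes and the comparison; the property
names and values are assumptions, not the actual code:
```
my $expected_props = { id => '10de:1234', 'subsystem-id' => '10de:5678' };
my $configured_props = { id => '10de:1234', 'subsystem-id' => '10de:9999' };

my @errors;
for my $prop (sort keys %$expected_props) {
    next if !defined($configured_props->{$prop});
    push @errors, "configured value for '$prop' does not match the device"
        if $configured_props->{$prop} ne $expected_props->{$prop};
}
```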
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to make it clearer what it actually is. Also, we want to add the
'real' config as a parameter too, and this way it's less confusing.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
according to the schema, else some combinations of migration / guest /
storage settings will fail validation:
2024-05-15 11:48:51 ERROR: migration_snapshot: type check ('boolean') failed - got ''
Since this is the client / source side, remote migrations to a remote
node with validation enabled can fail without this change.
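A hedged sketch of the kind of fix meant here; the surrounding variable
names are illustrative:
```
# pass a schema-conformant boolean (1/0) instead of a possibly-empty string
$storage_migrate_opts->{migration_snapshot} = $migration_snapshot ? 1 : 0;
```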
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to avoid breakage with schema validation turned on.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since commit a6f5b35 ("replication: prepare: include volumes without
snapshots in the result"), attempts would be made to remove previous
replication snapshots from volumes on which they didn't exist. This
was noticed by Thomas since the output of a replication test in
pve-manager changed.
The issue is not completely new, i.e. there was no check that the
(previous) replication snapshot actually exists before attempting
removal during the cleanup phase. Fix the issue by adding such a
check.
The $replicate_snapshots hash is only used for this, so the change
there is fine.
Fixes: a6f5b35 ("replication: prepare: include volumes without snapshots in the result")
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
If the user can already stop all tasks there is no point in spending
some work on every task to check if the user could also stop it
without those powerful permissions.
To avoid too much indentation, rework the filter to an early-next style.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Given a `(type, user, vmid)` tuple, the helper aborts all tasks of the
given `type` for guest `vmid` that `user` is allowed to abort:
- If `user` has `Sys.Modify` on the node, they can abort any task
- If `user` is an API token, it can abort any task it started itself
- If `user` is a user, they can abort any task started by themselves
or one of their API tokens.
The helper is used to overrule any active qmshutdown/vzshutdown tasks
when attempting to stop a VM/CT (if requested).
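A rough sketch of that permission logic; the helper name, task hash
fields and the RPC environment usage are assumptions, not the actual
implementation:
```
sub may_abort_task {
    my ($rpcenv, $user, $task) = @_;

    # Sys.Modify on the node allows aborting any task
    return 1 if $rpcenv->check($user, "/nodes/$task->{node}", ['Sys.Modify'], 1);

    # the exact entity (user or API token) that started the task
    return 1 if $task->{user} eq $user;

    # a full user may also abort tasks started by one of their own API
    # tokens; token IDs have the form 'user@realm!tokenname'
    return 1 if $user !~ /!/ && $task->{user} =~ /^\Q$user\E!/;

    return 0;
}
```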
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
so it doesn't need to be set when explicitly disabling fleecing. Needs
a custom verifier to enforce it being set when enabled.
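As a sketch, such a verifier could look like this (not the actual code):
```
# reject a fleecing property string that enables fleecing without a storage
sub verify_fleecing_options {
    my ($fleecing) = @_;
    die "fleecing storage must be configured when fleecing is enabled\n"
        if $fleecing->{enabled} && !defined($fleecing->{storage});
    return $fleecing;
}
```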
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It's a property string, because that avoids having an implicit
"enabled" as part of a 'fleecing-storage' property. And there likely
will be more options in the future, e.g. threshold/limit for the
fleecing image size.
Storage is non-optional, so the storage choice needs to be a conscious
decision. Can allow for a default later, when a good choice can be
made further down the stack. The original idea with "same storage as
VM disk" is not great, because e.g. for LVM, it would require the same
size as the disk up front.
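A hedged sketch of how such a property string format could look in the
schema; option names, defaults and descriptions are assumptions:
```
my $fleecing_fmt = {
    enabled => {
        type => 'boolean',
        description => 'Use a fleecing image for the backup of this guest.',
        default => 0,
    },
    storage => {
        type => 'string',
        format => 'pve-storage-id',
        description => 'Storage to use for the fleecing images.',
        optional => 1, # only required when enabled, enforced by the verifier
    },
};
```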
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[ TL: style fix for whitespace placement in multi-line strings ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Do not pass the cleanup flag to get_replicatable_volumes(), which leads
to replicatable volumes that have the replicate setting turned off
being part of the result.
Instead pass the noerr flag, because things like missing the
storage-level replicate feature should not lead to an error here.
Reported in the community forum:
https://forum.proxmox.com/threads/120910/post-605574
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Suggest an alternative solution by removing the problematic volumes
from the replication target rather than the whole job.
This is helpful if there are multiple replicated volumes, as it avoids
the need to fully re-sync all volumes in many cases.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Note that PVE::Storage::volume_snapshot_info() will fail when a volume
does not exist, so no non-existing volume will end up in the result
(prepare() is only called with volumes that should exist).
This makes it possible to detect a volume without snapshots in the
result of prepare(), and as a consequence, replication will now also
fail early in a situation where source and remote volume both exist,
but (at least) one of them doesn't have any snapshots.
Such a situation can happen, for example, by deleting and re-creating
a volume with the same name on the source side without running
replication after deletion.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
'legacy-sendmail': Use mailto/mailnotification parameters and send
emails directly.
'notification-system': Always notify via the notification system.
'auto': Notify via mail if mailto is set, otherwise use notification
system.
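As a sketch, such a mode selector could be declared like this in the
schema; the option name and wording are assumptions based on the
description above:
```
'notification-mode' => {
    type => 'string',
    enum => ['auto', 'legacy-sendmail', 'notification-system'],
    default => 'auto',
    optional => 1,
    description => 'Determine which notification system to use.',
},
```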
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
The first two will be migrated to the notification system, the latter
were part of the first attempt at the new notification system.
The first attempt only ever hit pvetest, so we simply tell the user
not to use the two params.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Configuring pbs-entries-max can avoid failing backups due to a high
number of files in folders where a folder exclusion is not possible.
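For example, it could be set node-wide in /etc/vzdump.conf; the value
below is arbitrary and only for illustration:
```
performance: pbs-entries-max=2097152
```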
Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
After removing a storage, replication states can still contain
references to it, even if no volume references it anymore.
If a storage does not exist in the storage configuration, the
replication target runs into an error when preparing the job locally.
This error prevents both running and removing the replication job. Fix
it by not passing the invalid storage ID in the first place.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>