Commit d8cd8e8cf9 introduced a
regression where only stale replicated volumes with an older
timestamp would be cleaned up. This meant that after removing a volume
from the guest config, it would only be cleaned up the second time the
replication ran afterwards. The volume could even become completely
orphaned if the relevant storage was no longer used by the job.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
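A minimal sketch of the cleanup rule the fix restores, assuming a simple
volid => 1 hash layout; the helper name find_stale_volumes is hypothetical
and not the actual PVE::Replication code. A replicated volume that is no
longer part of the guest config counts as stale regardless of how its
snapshot timestamp compares to the current run:

    use strict;
    use warnings;

    sub find_stale_volumes {
        my ($replicated, $wanted) = @_;    # hash refs: volid => 1

        # stale = replicated before, but no longer wanted by the config
        return [ sort grep { !$wanted->{$_} } keys %$replicated ];
    }

    my $stale = find_stale_volumes(
        { 'local-zfs:vm-100-disk-0' => 1, 'local-zfs:vm-100-disk-1' => 1 },
        { 'local-zfs:vm-100-disk-0' => 1 },
    );
    print "stale: @$stale\n";    # stale: local-zfs:vm-100-disk-1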
while dropping the instance where the local variable was unused.
prepare() was changed a while ago to return all local snapshots.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
An email notification will be sent for each job when the job fails.
This message is only sent when an error occurs and the fail count is 1.
Reviewed-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
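A minimal sketch of the send condition described above, assuming a per-job
state hash with a 'fail_count' counter; the helper name should_notify is
hypothetical, and the real code uses the PVE mail infrastructure instead of
returning a flag:

    use strict;
    use warnings;

    sub should_notify {
        my ($error, $state) = @_;
        return 0 if !defined($error);              # run succeeded, nothing to send
        return $state->{fail_count} == 1 ? 1 : 0;  # only on the first failure
    }

    print should_notify('storage offline', { fail_count => 1 }), "\n";  # 1
    print should_notify('storage offline', { fail_count => 3 }), "\n";  # 0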
As the node's replication status call now also returns disabled jobs,
we need to handle them here too.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
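A hedged sketch of one way the consumer side can handle this, assuming each
returned status entry may now carry a 'disabled' flag (the field name and the
split_jobs helper are assumptions):

    use strict;
    use warnings;

    sub split_jobs {
        my ($jobs) = @_;
        my (@enabled, @disabled);
        for my $job (@$jobs) {
            if ($job->{disabled}) {
                push @disabled, $job;   # show it, but don't expect a next sync time
            } else {
                push @enabled, $job;
            }
        }
        return (\@enabled, \@disabled);
    }

    my ($enabled, $disabled) = split_jobs([
        { id => '100-0' },
        { id => '100-1', disabled => 1 },
    ]);
    printf "enabled: %d, disabled: %d\n", scalar(@$enabled), scalar(@$disabled);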
to reduce code duplication. This slightly changes behaviour
compared to the previous version:
only disks with the correct prefix are cleaned up, not all
disks with __replication* snapshots.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
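An illustrative sketch of the stricter selection; the regex and the helper
name volumes_owned_by_guest are assumptions, not the actual helper. Instead
of picking every disk that happens to carry a replication snapshot, only
volumes whose name matches this guest's prefix are considered:

    use strict;
    use warnings;

    sub volumes_owned_by_guest {
        my ($vmid, $volumes) = @_;    # array ref of volume IDs

        # e.g. 'local-zfs:vm-100-disk-0' belongs to guest 100
        return [ grep { m/:(?:vm|base|subvol)-\Q$vmid\E-/ } @$volumes ];
    }

    my $owned = volumes_owned_by_guest(100, [
        'local-zfs:vm-100-disk-0',
        'local-zfs:vm-101-disk-0',   # may have a replication snapshot, but wrong guest
    ]);
    print "@$owned\n";    # local-zfs:vm-100-disk-0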
We pass a list of storages to scan for stale volumes to prepare_local_job().
This makes sure that we only activate/scan the related storages.
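A sketch of the call pattern with an assumed argument layout; the real
prepare_local_job() signature may differ. The caller derives the storage IDs
from the job's volumes and hands only those to the target side:

    use strict;
    use warnings;

    my $volumes = [ 'local-zfs:vm-100-disk-0', 'tank:vm-100-disk-1' ];

    # collect the storage ID part of each volume ID
    my %storeids;
    $storeids{ (split(/:/, $_, 2))[0] } = 1 for @$volumes;
    my $storage_list = [ sort keys %storeids ];

    print "scan only: @$storage_list\n";    # local-zfs tank

    # The list is then passed along, roughly like (argument order assumed):
    # prepare_local_job($ssh_info, $jobid, $vmid, $volumes, $storage_list, $last_sync, ...);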
Snapshot rollback may remove local replication snapshots. In that case
we still have the $conf->{parent} snapshot on both sides, so we
can use that as the base snapshot.
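A hedged sketch of the base-snapshot selection, with assumed data shapes and
an assumed replication snapshot prefix; the choose_base_snapshot helper is
hypothetical, the real logic lives in PVE::Replication:

    use strict;
    use warnings;

    sub choose_base_snapshot {
        my ($conf, $local_snaps, $remote_snaps) = @_;    # hash refs: name => 1

        # prefer a replication snapshot that exists on both sides
        for my $snap (sort keys %$local_snaps) {
            return $snap if $snap =~ m/^__replicate_/ && $remote_snaps->{$snap};
        }

        # fall back to the guest's current parent snapshot if both sides have it
        my $parent = $conf->{parent};
        return $parent
            if defined($parent) && $local_snaps->{$parent} && $remote_snaps->{$parent};

        return undef;    # no common base, a full sync is required
    }

    my $base = choose_base_snapshot(
        { parent => 'before-upgrade' },
        { 'before-upgrade' => 1 },
        { 'before-upgrade' => 1 },
    );
    print "base: ", $base // 'none', "\n";    # base: before-upgrade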
Prepare for starting a replication job. This is called on the target
node before replication starts. This call is for internal use and
returns a JSON object on stdout. The method first tests whether VM <vmid>
resides on the local node. If so, it stops immediately. After that, the
method scans all volume IDs for snapshots and removes all replication
snapshots with a timestamp different from <last_sync>. It also removes
any unused volumes.
Returns a hash with boolean markers for all volumes with existing
replication snapshots.
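A pure-data sketch of the snapshot cleanup decision described above; the data
layout and the helper name select_replication_snapshots are assumptions, and
the real implementation talks to the storage layer and prints its result as
JSON on stdout:

    use strict;
    use warnings;

    sub select_replication_snapshots {
        my ($volume_snapshots, $last_sync) = @_;    # { volid => { snapname => timestamp } }

        my $have_base = {};    # volid => 1, reported back to the source node
        my @to_remove;         # [volid, snapname] pairs scheduled for removal

        for my $volid (sort keys %$volume_snapshots) {
            my $snaps = $volume_snapshots->{$volid};
            for my $snap (sort keys %$snaps) {
                if ($snaps->{$snap} == $last_sync) {
                    $have_base->{$volid} = 1;              # matches <last_sync>, keep it
                } else {
                    push @to_remove, [ $volid, $snap ];    # stale, different timestamp
                }
            }
        }
        return ($have_base, \@to_remove);
    }

    my ($have_base, $to_remove) = select_replication_snapshots(
        { 'local-zfs:vm-100-disk-0' => {
              'repl_1500000000' => 1500000000,
              'repl_1400000000' => 1400000000,
        } },
        1500000000,
    );
    print "base on: ", join(',', sort keys %$have_base), "\n";
    print "remove:  ", scalar(@$to_remove), " snapshot(s)\n";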