which makes the 'continue' not behave as intended.
Reported by shellcheck: SC2106: This [i.e. continue] only exits the subshell
caused by the (..) group
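For illustration, a minimal sketch (hypothetical, not the affected script) of
the pattern shellcheck warns about: the 'continue' only leaves the (..)
subshell, so the rest of the loop body still runs:

for i in 1 2 3; do
    (
        # meant to skip the rest of the iteration for i=2, but 'continue'
        # only exits this subshell, not the enclosing for loop's body ...
        if [ "$i" = 2 ]; then continue; fi
        echo "subshell work for $i"
    )
    # ... so this line still runs for i=2 as well
    echo "post-subshell work for $i"
done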
Also factor out the long message for readability.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Commit d9ece228fb introduced the workaround of using systemd units and
25e222ca0d re-used that functionality for fuse mounts too.
The latter commit suggests switching to mount.fuse.ceph for the '_netdev'
option, but that doesn't seem to work:
root@pve701 / # mount -t fuse.ceph 10.10.10.11,10.10.10.12,10.10.10.13:/ /mnttest/fuse -o 'ceph.id=admin,ceph.keyfile=/etc/pve/priv/ceph/cephfs.secret,ceph.conf=/etc/pve/ceph.conf,_netdev'
ceph-fuse[20729]: starting ceph client
2021-06-15T14:22:00.631+0200 7f995f878080 -1 init, newargv = 0x55e09fc11a40 newargc=11
ceph-fuse[20729]: starting fuse
root@pve701 / # mount -t ceph 10.10.10.11,10.10.10.12,10.10.10.13:/ /mnttest/normal -o 'name=admin,secretfile=/etc/pve/priv/ceph/cephfs.secret,conf=/etc/pve/ceph.conf,_netdev'
root@pve701 / # mount | grep mnttest
ceph-fuse on /mnttest/fuse type fuse.ceph-fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
10.10.10.11,10.10.10.12,10.10.10.13:/ on /mnttest/normal type ceph (rw,relatime,name=admin,secret=<hidden>,acl,_netdev)
Also, the return value is not propagated by mount.fuse.ceph, meaning the output
would need to be parsed...
root@pve701 ~ # mount -t fuse.ceph 10.10.10.11,10.10.10.12,10.10.10.13:/ /mnttest/fuse -o 'ceph.id=admin,ceph.keyfile=/etc/pve/priv/ceph/cephfs.secret,ceph.conf=/etc/pve/ceph.conf,_netdev'
2021-06-15T14:42:56.326+0200 7f634edae080 -1 init, newargv = 0x560cdb5e0a40 newargc=11
ceph-fuse[34480]: starting ceph client
fuse: mountpoint is not empty
fuse: if you are sure this is safe, use the 'nonempty' mount option
ceph-fuse[34480]: fuse failed to start
2021-06-15T14:42:56.338+0200 7f634edae080 -1
fuse_mount(mountpoint=/mnttest/fuse) failed.
Mount failed with status code: 5
root@pve701 ~ # echo $?
0
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
It's necessary to be on Nautilus before upgrading to 7.x, so the check is no
longer needed. See commit e54c3e3347. That commit didn't revert cleanly,
because there were cleanups made afterwards.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
which is used if there is no ('dir'-type) 'local' entry. Storage configurations
made by the installer also support backups for the 'local' storage, and the
'prune-backups' parameter is not really useful otherwise.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Don't add an explicit deprecation warning on parsing (yet); this is already done
in the pve6to7 script. Also, automatic conversion to 'prune-backups' happens when
the section config is read, so over time fewer users should be affected.
Postpone explicit warning/dropping the parameter to a future major release.
Also switch the setting for the default 'local' storage to 'prune-backups'.
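As a hedged illustration (values made up, not the actual shipped defaults), an
old-style entry in /etc/pve/storage.cfg such as

dir: local
        path /var/lib/vz
        content backup,iso,vztmpl
        maxfiles 3

would, assuming the usual keep-last mapping, correspond to

dir: local
        path /var/lib/vz
        content backup,iso,vztmpl
        prune-backups keep-last=3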
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
which also checks whether the storage is even enabled. VZDump jobs already
activate the storage, but more direct calls via API/CLI didn't do so yet.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Try to detect active mounts and holders early, because it's cheap. The wipefs
command in the worker will detect even more situations where wiping alone is
not enough for the device to show up as unused, or could otherwise be
problematic.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
based on the wipe_disks method from pve-manager's Ceph/Tools.pm with the
following main differences:
* use wipefs to wipe labels first (to avoid sgdisk complaining about the
backed up GPT structure on a subsequent GPT initialization)
* only take one device as an argument
* do not use an absolute path for 'dd'
* die if one of the commands fails
The wipefs command makes checks and complains about e.g. mounted or active
devices. One could supply --force to wipefs, but in many such situations it
does not work as expected, because the device would still be detected as in-use
afterwards, and further manual steps would be needed.
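Roughly, and only as a sketch with a made-up device name and illustrative dd
parameters (the actual method builds the commands itself and dies if one of
them fails), the wipe amounts to:

DEV=/dev/sdX    # hypothetical device

# wipe filesystem/partition-table signatures first, so a later GPT
# initialization via sgdisk does not stumble over the old backup GPT
wipefs --all "$DEV"

# then zero out the start of the device
dd if=/dev/zero of="$DEV" bs=1M count=200 conv=fdatasync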
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
This was never marked stable and the recommended one is the external
version, which is maintained by linbit themselves.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and avoid a warning. Auto-detecting the format of the base volume is deprecated.
See commit d9f059aa6cfccefaffa3532556e966df4a99ece2 in qemu for more
information.
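For reference, a hedged command-line example (file names made up) of the
difference:

# qemu-img has to probe the format of the base volume here, which is the
# deprecated behavior that triggers the warning
qemu-img create -f qcow2 -b base.qcow2 overlay.qcow2

# with an explicit backing format there is nothing to auto-detect
qemu-img create -f qcow2 -F qcow2 -b base.qcow2 overlay.qcow2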
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
instead of just the snapshot for consistency with other API endpoints,
and possible future extension to VMA backups (where 'snapshot' would be
rather strange terminology).
add some additional checks (pbs storage type, backup volume type),
completion and magic (allow passing in either a full volume ID with
correct storage, or just the volume name, or just the snapshot for
easier API/CLI usage/convenience).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Includes list and restore calls.
Requires VM.Backup and Datastore.Audit permissions, for the accessed
VM/CT and containing datastore respectively.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
similar to the existing encryption key handling, but without
auto-generation since we only have the public part here.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Not replacing it with return, because the current behavior is dying:
Can't "next" outside a loop block
and the single existing caller in pve-manager's API2/Ceph/OSD.pm does not check
the return value.
Also check for $st, which can be undefined in case a non-existing path was
provided. This also led to dying previously:
Can't call method "mode" on an undefined value
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
in case a disk with partitions also has an fstype set, which happens for our ZFS
boot disks. Do not change the behavior without include-partitions, as we
prefer(red) to be more specific than simply 'partitions' then.
Reported in the enterprise support channel.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Bug reported in the community forum[0].
Currently, it's possible to break replication by:
1. have an existing snapshot whose name contains an uppercase letter
2. set up a replication job and run it
3. roll back to the existing snapshot
4. replicate again -> fails
The failure occurs because, after step 3, the most recent common snapshot is the
previously existing one and currently no uppercase letters are allowed for
export/import.
The pve-snapshot-name option uses the CONFIGID_RE
qr/[a-z][a-z0-9_-]+/i
so it cannot be used here, because that pattern requires a leading letter and
would thus not allow for e.g. '__migrate__'.
Simply allow uppercase letters, to be backwards compatible and allow all
possible pve-snapshot-name values.
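A quick, purely illustrative check with an anchored, case-insensitive version
of that pattern shows the problem:

# '__migrate__' starts with an underscore rather than a letter
printf '%s\n' '__migrate__' | grep -qiE '^[a-z][a-z0-9_-]+$' \
    && echo accepted || echo rejected
# prints: rejected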
There is still an issue if there also was a state volume, but that's a different
bug[1].
[0]: https://forum.proxmox.com/threads/solved-migration-error-base-value-does-not-match-the-regex-pattern.85946/
[1]: https://bugzilla.proxmox.com/show_bug.cgi?id=3111
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>