Commit Graph

696 Commits

Author SHA1 Message Date
Thomas Lamprecht
0b08583f7f file size info: return early if we cannot parse json
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 339a4eb3c0)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 08:41:06 +02:00
Thomas Lamprecht
81a90b7991 file/volume size info: add actual errors to untaint messages
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit d4e00f2bd5)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 08:41:06 +02:00
Stoiko Ivanov
545288638b plugins: untaint volume_size_info returns
the size returned by volume_size_info is used for creating the new
destination image in PVE::QemuServer::clone_disk (and probably
elsewhere). In certain cases the return values are tainted - they are
obtained by a run_command call and, depending on the format and length
of the parsed output, can still carry the taint attribute.

One example of a tainted return has been reported in our
community-forum:
https://forum.proxmox.com/threads/cannot-clone-vm-or-move-disk-with-more-than-13-snapshots.89628/

A qcow2 image with 13 snapshots generates output of more than 4k in
length from `qemu-img info --output=json`, which in turn causes the
output to be considered tainted.

This patch untaints the returns where applicable (a sketch of the
untaint pattern follows below). The other storage plugins are not
affected:
* LVMPlugin returns a single number and a newline (thus gets untainted
  by run_command)
* RBDPlugin untaints the complete json before decoding
* ZFSPoolPlugin and ISCSIDirectPlugin explicitly untaint their
  returns.
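
A minimal Perl sketch of the untaint pattern (illustrative only, not the
plugin's actual code; it assumes a numeric size string parsed from the
json output):

    use strict;
    use warnings;

    sub untaint_size {
        my ($size) = @_;
        # taint is only removed by extracting the value via a regex capture
        my ($untainted) = ($size =~ m/^(\d+)$/)
            or die "size '$size' is not a plain non-negative integer\n";
        return $untainted;
    }

    print untaint_size('1048576'), "\n";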

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
(cherry picked from commit ac598d851e)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 08:41:06 +02:00
Thomas Lamprecht
345243fa7e tree-wide: fix typos with codespell
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit ffc31266da)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 08:41:05 +02:00
Fabian Grünbichler
71a531b63c pbs: fix typo
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
(cherry picked from commit 5b955999b9)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-06-23 08:39:38 +02:00
Thomas Lamprecht
a1e09e496e iscsi: code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-05-04 12:02:47 +02:00
Fabian Ebner
9177cc2eda clone image: specify base format option with qemu-img
and avoid a warning - auto-detecting the format of the base volume is
deprecated. See commit d9f059aa6cfccefaffa3532556e966df4a99ece2 in qemu
for more information.
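
A hedged sketch of passing the base format explicitly when creating a
linked clone with qemu-img; paths and the qcow2 format here are just
placeholders, not taken from clone_image itself:

    use strict;
    use warnings;

    my ($base_path, $base_format, $dest) = ('base.qcow2', 'qcow2', 'clone.qcow2');
    my @cmd = (
        'qemu-img', 'create',
        '-f', 'qcow2',
        '-b', $base_path,
        '-F', $base_format,  # explicit base format avoids the deprecation warning
        $dest,
    );
    print join(' ', @cmd), "\n";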

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-05-03 13:07:02 +02:00
Fabian Grünbichler
c56f7a71af pbs: allow setting up a master key
similar to the existing encryption key handling, but without
auto-generation since we only have the public part here.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-04-22 21:56:31 +02:00
Fabian Ebner
3c93115570 rbd: fix typo in error message
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-20 18:21:31 +02:00
Aaron Lauterer
8c858f7eeb fix #3345: zfs: restore container volume to ZFS with size 0
A restore to ZFS for a container which has a volume (rootfs / mount
point) of size 0 failed because the refquota property does not accept
'0k' but wants 'none' in that situation.
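
A minimal sketch of the mapping (not the plugin code; the helper name is
made up for illustration):

    use strict;
    use warnings;

    sub zfs_refquota_value {
        my ($size_kib) = @_;
        # refquota rejects '0k', so a size of 0 has to become 'none'
        return $size_kib ? "${size_kib}k" : 'none';
    }

    print zfs_refquota_value(0), "\n";        # none
    print zfs_refquota_value(1048576), "\n";  # 1048576k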

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-12 14:37:50 +02:00
Thomas Lamprecht
c27fe64810 rbd: make volume param for get_rbd_path to allow further use
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-09 14:19:48 +02:00
Thomas Lamprecht
ed7ea5a352 rbd: list images: early return to avoid indentation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-09 13:51:15 +02:00
Thomas Lamprecht
a3cad0b50d rbd: list images: sort by keys when pushing on result array
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-09 13:49:56 +02:00
Thomas Lamprecht
6d0d0a977d rbd: indentation and whitespace cleanups
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-09 13:48:27 +02:00
Thomas Lamprecht
22265bd990 rbd: get kernel device sub returns a path, not a name
also transform it into a private sub instead of a local closure variable.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-09 13:45:21 +02:00
Thomas Lamprecht
72bbd8a6f7 rbd: consistent closure call style
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-09 13:43:33 +02:00
Thomas Lamprecht
92a7826f88 rbd: build cmd: allow "falsy" namespace value also here
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-09 13:43:33 +02:00
Thomas Lamprecht
4cf696f66e rbd: use private sub for get_rbd_path
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-09 13:43:33 +02:00
Aaron Lauterer
e9bc993138 rbd: fix #3286 add namespace support
This patch introduces support for Ceph's RBD namespaces.

A new storage config parameter 'namespace' defines the namespace to be
used for the RBD storage.

The namespace must already exist in the Ceph cluster as it is not
automatically created.

The main intention is to use this for external Ceph clusters. With
namespaces, each PVE cluster can get its own namespace and will not
conflict with other PVE clusters.
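
A hedged sketch of how the image path could look with a namespace
configured (config keys and the helper are illustrative; see the
get_rbd_path commit below for the actual centralization):

    use strict;
    use warnings;

    sub rbd_path {
        my ($scfg, $volume) = @_;
        # with a namespace the image is addressed as <pool>/<namespace>/<image>
        return $scfg->{namespace}
            ? "$scfg->{pool}/$scfg->{namespace}/$volume"
            : "$scfg->{pool}/$volume";
    }

    print rbd_path({ pool => 'rbd', namespace => 'pve-cluster-a' }, 'vm-100-disk-0'), "\n";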

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2021-04-09 12:56:21 +02:00
Aaron Lauterer
aeb007cb76 rbd: centralize rbd path concatenation
The <pool>/<image> paths are needed in quite a lot of places. Having one
single place where they are created helps to reduce duplicate code and
makes it easier to introduce new features.

The 'add_pool_to_disk' sub was already doing that but the name was not
really fitting. This commit renames it to the more general
'get_rbd_path' and changes the second parameter to the more widely used
$volume instead of $disk.

Furthermore, all occurrences where "$pool/$volume" has been concatenated
have been replaced with a call to get_rbd_path.

Plus some minor code style cleanups for long function calls that were
touched.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2021-04-09 12:56:21 +02:00
Thomas Lamprecht
cba2b7c1d4 prune backups: improve internal errors messages slightly
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-09 12:49:55 +02:00
Thomas Lamprecht
4b702ac361 plugin: get_subdir_files: add comment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-04-09 12:46:46 +02:00
Fabian Ebner
68ce0b31e8 prune backups: make vmid filtering more robust
by relying on archive_info's vmid first. archive_info is already used to
determine if it's a standard name, and in that case the vmid is certainly set.

Also add asserts to make sure we got what we expected.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-09 12:41:34 +02:00
Fabian Ebner
edb92f707a plugin: subdir files: backup: don't match for vmid against the full path
Only match against the file name to avoid false positives with
directory names containing "-$vmid-".

Found while trying to debug/reproduce a forum thread[0], but the path
there should not be affected by this...

[0]: https://forum.proxmox.com/threads/vzdump-removing-too-many-backups.87072/
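
A small sketch of the idea - match the vmid in the file name only, so a
directory containing "-$vmid-" cannot cause a false positive (regex and
paths are illustrative, not the plugin's exact code):

    use strict;
    use warnings;
    use File::Basename qw(basename);

    my $vmid = 200;
    my $path = '/mnt/backups/dir-with-200-in-name/vzdump-qemu-100-2021_04_09-12_00_00.vma.zst';

    my $fn = basename($path);
    if ($fn =~ m/-\Q$vmid\E-/) {
        print "backup belongs to VM $vmid\n";
    } else {
        print "no match for VM $vmid\n";  # matching the full path would wrongly match here
    }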

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-04-09 12:41:10 +02:00
Fabian Ebner
342a56805c fix #3348: NFS: select correct transport to check for service
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-03-31 10:22:52 +02:00
Fabian Grünbichler
dfa374d320 fix #3363: avoid undef-warning for PBS crypt-mode
it is optional after all, and missing (/None) for files stored in the
snapshot dir but not referenced in the manifest for whatever reason.
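
A minimal sketch of the fix's effect, assuming the file entry is a hash
and a missing crypt-mode should simply read as 'none' (names are
illustrative):

    use strict;
    use warnings;

    my $file = { filename => 'qemu-server.conf.blob' };  # no crypt-mode in the manifest
    my $crypt_mode = $file->{'crypt-mode'} // 'none';    # avoids the undef warning
    print "$file->{filename}: $crypt_mode\n";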

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2021-03-31 10:22:52 +02:00
Dylan Whyte
2bce96c513 fix #3354: support notes on ceph backups
use DirPlugin's get/update_volume_notes implementation (which all the
other supported file systems use)

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2021-03-31 10:22:52 +02:00
Thomas Lamprecht
26de022b56 zpool: activate: move mount check out and make program flow easier
Early return when the mounted heuristic returns true; that allows
getting rid of an indentation level.

Moving the heuristic out makes the activate method smaller and easier
to grasp.

Best viewed with ignoring whitespace changes (`git show -w`).

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-19 15:21:21 +01:00
Thomas Lamprecht
9440330aba zpool: activate: don't eval procfs read, if it fails it should be fatal
highly unlikely to fail in our setups; the most realistic case is
procfs not being mounted at /proc, which breaks much else anyway and is
a requirement.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-19 15:06:22 +01:00
Thomas Lamprecht
5b715fd984 zpool: activate: drop intermediate state variable, return directly
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-19 15:04:53 +01:00
Thomas Lamprecht
c31aee36d8 zpool: avoid wrong mount-decoding of dataset
this was mistakenly done as the procfs code uses it, and it was
assumed we need to decode this too to get both into the same
encoding-space and thus a correct comparison.

But only procfs has that encoding; we don't have it for pool values
in the storage config, so we must not decode that value, as doing so
could potentially break things.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-19 15:01:39 +01:00
Stoiko Ivanov
e74b9dd79d zfspoolplugin: check if imported before importing
This commit is a small performance optimization on top of the previous
one: `zpool list` is cheaper than `zpool import -d /dev..` (the latter
unconditionally scans the disks in the provided directory for zfs
signatures)

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2021-02-19 14:22:43 +01:00
Stoiko Ivanov
56a7637adb zfspoolplugin: check if mounted instead of imported
This patch addresses an issue we recently saw on a production machine:
* after booting a ZFS pool failed to get imported (due to an empty
  /etc/zfs/zpool.cache)
* pvestatd/guest-startall eventually tried to import the pool
* the pool was imported, yet the datasets of the pool remained
  not mounted

A bit of debugging showed that `zpool import <poolname>` is not
atomic; in fact it does fork+exec `mount` with appropriate parameters.
If an import ran longer than the hardcoded timeout of 15s, it could
happen that the pool got imported, but the zpool command (and its
forks) got terminated due to timing out.

Reproducing this is straightforward by setting (drastic) bw+iops
limits on a guest's disk (which contains a zpool) - e.g.:
`qm set 100 -scsi1 wd:vm-100-disk-1,iops_rd=10,iops_rd_max=20,\
iops_wr=15,iops_wr_max=20,mbps_rd=10,mbps_rd_max=15,mbps_wr=10,\
mbps_wr_max=15`
Afterwards, running `timeout 15 zpool import <poolname>` resulted in
that situation in the guest on my machine.

The patch changes the check in activate_storage for the ZFSPoolPlugin
to check whether any dataset below the 'pool' (which can also be a
sub-dataset) is mounted, by parsing /proc/mounts (see the sketch below):
* this is cheaper than running `zfs get` or `zpool list`
* it catches a properly imported and mounted pool in case the
  root-dataset has 'canmount' set to off (or noauto), as long
  as any dataset below is mounted

After trying to import the pool, we also run `zfs mount -a` (in case
another check of /proc/mounts fails).

Potential for regression:
* running `zfs mount -a` is problematic if a dataset is manually
  unmounted after booting (without setting 'canmount')
* a pool without any mounted dataset (no mountpoint property set and
  only zvols) will result in repeated calls to `zfs mount -a`

Both of the above seem unlikely and should not occur when using our
tooling.
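
A hedged sketch of such a /proc/mounts based check (illustrative only;
the plugin additionally has to handle octal-escaped mount sources):

    use strict;
    use warnings;

    sub zfs_dataset_mounted {
        my ($pool) = @_;
        open(my $fh, '<', '/proc/mounts')
            or die "unable to read /proc/mounts - $!\n";
        while (defined(my $line = <$fh>)) {
            my ($src, undef, $fstype) = split(/\s+/, $line);
            next if !defined($fstype) || $fstype ne 'zfs';
            # the pool itself or any dataset below it counts as mounted
            return 1 if $src eq $pool || $src =~ m!^\Q$pool\E/!;
        }
        return 0;
    }

    print zfs_dataset_mounted('rpool/data') ? "mounted\n" : "not mounted\n";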

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2021-02-19 14:18:36 +01:00
Stoiko Ivanov
a2d747a118 zfspoolplugin: activate_storage: minor cleanup
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2021-02-19 14:18:06 +01:00
Fabian Ebner
816dadb17f NFS: avoid using obsolete rpcinfo option
as suggested in the man page.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-02-09 12:24:35 +01:00
Fabian Ebner
68f1fc2783 mark PBS storages as shared
Like this, the property will get added when parsing the storage configuration
and PBS storages will correctly show up as shared storages in API results.

AFAICT the only affected PBS operation is free_image via vdisk_free, which will
now be protected by a cluster-wide lock, and that shouldn't hurt.

Another issue this fixes, which is the reason this patch exists, was reported
in the forum[0]. The free space from PBS storages was counted once for each node
that had access to the storage.

[0]: https://forum.proxmox.com/threads/pve-6-3-the-storage-size-was-displayed-incorrectly.83136/

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2021-01-28 16:43:20 +01:00
Dominic Jäger
d854a71803 lvm: Fix #3159: Show RAID LVs as storage content
LVM RAID logical volumes (including mirrors) can be valid disk images, so they
should show up in storage content listings (for example pvesm list).

Explicitly including LV types is safer than excluding them, especially
because of possible additional types in the future.
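
A hedged sketch of the whitelist idea - only LVs whose lv_attr type
character marks them as linear, mirrored or RAID are treated as possible
images; the exact character set used by the plugin may differ:

    use strict;
    use warnings;

    my %image_lv_types = map { $_ => 1 } ('-', 'm', 'M', 'r', 'R');

    sub lv_is_image_candidate {
        my ($lv_attr) = @_;
        my $type = substr($lv_attr, 0, 1);  # first lv_attr character is the volume type
        return $image_lv_types{$type} ? 1 : 0;
    }

    print lv_is_image_candidate('rwi-aor---') ? "list it\n" : "skip it\n";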

Co-developed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
2021-01-26 19:19:02 +01:00
Alwin Antreich
89b9ac961a fix: check connection for nfs v4 only server
the check_connection is done by querying the exports of the nfs server
in question. With nfs v4 those exports aren't listed anymore, since nfs
v4 employs a pseudo-filesystem starting from root (/).

rpcinfo allows querying for the existence of an nfs v4 service.
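
A hedged sketch of such a check - ask rpcinfo whether the server offers
an NFS version 4 service over TCP (flags follow rpcinfo(8); the plugin's
exact invocation may differ):

    use strict;
    use warnings;

    sub check_nfs_v4 {
        my ($server) = @_;
        # rpcinfo -T <transport> <host> <program> <version>
        return system('rpcinfo', '-T', 'tcp', $server, 'nfs', '4') == 0 ? 1 : 0;
    }

    print check_nfs_v4('192.0.2.10') ? "nfs v4 service found\n" : "no nfs v4 service\n";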

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
2021-01-26 19:15:17 +01:00
Dominik Csapak
26a8f21ab2 add workaround for zfs rollback bug
as described in the zfs bug https://github.com/openzfs/zfs/issues/10931,
the kernel keeps around cached data from mmaps after a rollback, thus
leaving invalid data in files that were allegedly rolled back.

To work around this (until a real fix comes along), we unmount the
subvol, invalidating the kernel cache anyway.

The dataset gets mounted again on the next 'activate_volume'.
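
A minimal sketch of the workaround flow (dataset and snapshot names are
placeholders, and error handling is simplified):

    use strict;
    use warnings;

    my $dataset = 'rpool/data/subvol-100-disk-0';

    system('zfs', 'rollback', "$dataset\@snap1") == 0
        or die "zfs rollback failed\n";

    # unmount to invalidate cached mmap data; activate_volume mounts it again
    system('zfs', 'unmount', $dataset) == 0
        or warn "zfs unmount failed (dataset may not have been mounted)\n";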

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-01-26 18:32:47 +01:00
Thomas Lamprecht
32dbc619a5 drbd: comment that the builtin plugin is deprecated
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-15 12:58:20 +01:00
Thomas Lamprecht
44fdfb2af6 nfs and cifs: implement backup notes helper
reuse the one from DirPlugin by directing the call to it, but with
the actual $class. This should stay stable, as we provide an ABI and
try to always use $class->helpers.
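
A hedged sketch of that delegation pattern (the package name is made up;
it assumes PVE::Storage::DirPlugin is available, as on a PVE host):

    package PVE::Storage::Custom::ExampleSharedDirPlugin;

    use strict;
    use warnings;

    use PVE::Storage::DirPlugin;

    sub get_volume_notes {
        my $class = shift;
        # forward to DirPlugin, but keep the actual $class
        return PVE::Storage::DirPlugin::get_volume_notes($class, @_);
    }

    sub update_volume_notes {
        my $class = shift;
        return PVE::Storage::DirPlugin::update_volume_notes($class, @_);
    }

    1;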

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-07 16:13:09 +01:00
Thomas Lamprecht
f244e2aa7f api: content/backup: handle deletion of notes
Previous to this we did not call the plugin's update_volume_notes at
all in the case where a user deleted the textarea, which results in
passing a falsy value ('').

Also adapt the currently sole implementation to delete the notes field
in the undef or '' value case. This can be done safely, as we default
to returning an empty string if no notes file exists.
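
A minimal sketch of that behaviour with plain files (paths and helper
names are illustrative, not the storage plugin's code):

    use strict;
    use warnings;

    sub update_notes {
        my ($notes_file, $notes) = @_;
        if (defined($notes) && $notes ne '') {
            open(my $fh, '>', $notes_file) or die "unable to write $notes_file - $!\n";
            print {$fh} $notes;
            close($fh);
        } else {
            unlink($notes_file);  # deleting the notes means removing the file
        }
        return;
    }

    sub get_notes {
        my ($notes_file) = @_;
        return '' if !-e $notes_file;  # no notes file -> empty string
        open(my $fh, '<', $notes_file) or die "unable to read $notes_file - $!\n";
        local $/;
        return scalar <$fh>;
    }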

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-07 16:10:09 +01:00
Thomas Lamprecht
20471dfd95 dir plugin: code cleanup
mostly re-ordering to improve statement grouping and avoid the need
for an intermediate variable

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-07 16:09:26 +01:00
Dominik Csapak
53003cb5ea PBSPlugin: use get_repository from PVE::PBSClient
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-12-03 16:53:53 +01:00
Wolfgang Bumiller
ab90c3b1f1 pbs: fix token auth with PVE::APIClient
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-12-03 16:53:43 +01:00
Thomas Lamprecht
2cd10f58be pbs: activate storage: fully validate if storage config works
improves UX of on_update and on_add hooks *a lot*.

This is a bit more expensive than the TCP ping, or even just an
unauthenticated ping, but not as bad as a full datastore status - as
this only reads the datastore config file (which is normally in page
cache anyway).

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-01 19:22:47 +01:00
Thomas Lamprecht
8b62ac6a0c pbs: add scan datastore helper
for use in both the scan API and the on_add/on_update hooks

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-01 19:22:47 +01:00
Thomas Lamprecht
2f9eb6dc4c pbs: reuse pve apiclient for api connect helper
it is flexible enough to easily do so, and should do well until we
actually have cheap native bindings (e.g., through Wolfgang's Rust
perlmod magic).

Make it a private helper, we do *not* want to expose it directly for
now.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-12-01 19:22:47 +01:00
Fabian Ebner
f3ccd0ef3d plugin: hooks: add explicit returns
to avoid returning something unexpected. Finish what
afeda18256 already started for all the other
plugins. At least for ZFS's on_add_hook this is necessary (adding a ZFS storage
currently fails as reported here [0]), but it cannot hurt
in the other places either as the only hooks we expect to return something
currently are PBS's on_add_hook and on_update_hook.

[0]: https://forum.proxmox.com/threads/gui-add-zfs-storage-verification-failed-400-config-type-check-object-failed.79734/
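
A minimal sketch of the pattern - a hook that is not meant to return
anything ends in a bare `return;` (the signature is sketched from the
plugin API and the body is a placeholder):

    package PVE::Storage::Custom::ExamplePlugin;

    use strict;
    use warnings;

    sub on_add_hook {
        my ($class, $storeid, $scfg, %param) = @_;
        # ... setup work would go here ...
        return;  # explicit empty return instead of the last expression's value
    }

    1;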

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-11-27 10:45:42 +01:00
Thomas Lamprecht
74dcca3a48 nfs: code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-27 10:45:06 +01:00