improves UX of on_update and on_add hooks *a lot*.
This is a bit more expensive than the TCP ping, or even just an
unauthenticated ping, but not as bad as a full datastore status - as
this only reads the datastore config file (which is normally in page
cache anyway).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
it is flexible enough to easily do so, and should do well until we
actually have cheap native bindings (e.g., through Wolfgang's Rust
perlmod magic).
Make it a private helper; we do *not* want to expose it directly for
now.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to avoid returning something unexpected. Finish what
afeda18256 already started for all the other
plugins. At least for ZFS's on_add_hook this is necessary (adding a ZFS
storage currently fails as reported here [0]), but it cannot hurt in the
other places either, as the only hooks we currently expect to return
something are PBS's on_add_hook and on_update_hook.
[0]: https://forum.proxmox.com/threads/gui-add-zfs-storage-verification-failed-400-config-type-check-object-failed.79734/
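For illustration, the change boils down to ending each hook with an explicit,
empty return (sketch only, the actual hook bodies differ per plugin):

    sub on_add_hook {
        my ($class, $storeid, $scfg, %param) = @_;

        # ... plugin-specific setup ...

        # explicit empty return, so the value of the last statement above
        # does not leak to the caller
        return;
    }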
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
it could be argued that this has some security implications and that
deletion would be safer, but key deletion is a pretty hairy thing.
This should be documented, and people should just use delete instead of
autogen if they want to "destroy" a key.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and add the appropriate API calls to set and get the comment.
We need to bump APIVER for this and can bump APIAGE, since
we only use it in this new call, which can work with the default
implementation.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
'file_size_info' only works for directory-based storages, while
'volume_size_info' should work for all of them.
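For illustration, the generic call looks roughly like this (the timeout value
is just an example):

    my ($size, $format, $used, $parent) = PVE::Storage::volume_size_info($cfg, $volid, 10);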
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
If there are already prune options configured, simply delete the maxfiles
setting. Having both set is invalid from vzdump's perspective anyway, and any
backup job on such a storage failed, meaning a user would've noticed.
If there are no prune options, translate the maxfiles value to keep-last,
except for maxfiles being zero (=unlimited), in which case we use keep-all
(see the sketch below).
If both are not set, don't set anything, so:
1. Storages don't suddenly have retention options set.
2. People relying on vzdump defaults can still use those.
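A rough sketch of that translation (hypothetical helper, not the actual patch):

    sub translate_maxfiles {
        my ($scfg) = @_;

        if (defined($scfg->{'prune-backups'})) {
            delete $scfg->{maxfiles}; # both set is invalid for vzdump anyway
        } elsif (defined(my $maxfiles = delete $scfg->{maxfiles})) {
            $scfg->{'prune-backups'} = $maxfiles > 0
                ? { 'keep-last' => $maxfiles }
                : { 'keep-all' => 1 };
        }
        # neither set: leave the storage untouched, vzdump defaults still apply
    }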
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
useful to have an alternative to the old maxfiles = 0. There has to
be a way for vzdump to distinguish between:
1. use the /etc/vzdump.conf default (when no options are configured for the storage)
2. use no limit (when keep-all=1)
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
When using the path to request properties, and no ZFS file system is mounted
at that path, ZFS will fall back to the parent filesystem:
> # zfs unmount myzpool/subvol-172-disk-0
> # zfs get mounted /myzpool/subvol-172-disk-0
> NAME                       PROPERTY  VALUE  SOURCE
> myzpool                    mounted   yes    -
> # zfs get mounted myzpool/subvol-172-disk-0
> NAME                       PROPERTY  VALUE  SOURCE
> myzpool/subvol-172-disk-0  mounted   no     -
Thus, we cannot use the path and need to use the dataset directly.
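Illustrative sketch of querying the dataset rather than the path (variable
names assumed, not the actual patch):

    use PVE::Tools qw(run_command);

    my $dataset = "$scfg->{pool}/$volname"; # e.g. myzpool/subvol-172-disk-0
    my $mounted = 0;
    run_command(
        ['zfs', 'get', '-H', '-o', 'value', 'mounted', $dataset],
        outfunc => sub { my ($line) = @_; $mounted = 1 if $line =~ /^yes$/; },
    );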
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
We allow snapshot names that match pve-configid, but during qm destroy we have
so far not removed all snapshots matching pve-configid. For example, the name
x-y was allowed, but the resulting snap_vm-105-disk-0_x-y was not removed.
Reported-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
This is basically necessary for the GUI's prune widget, because we want to
pass along all options equal to zero when all the number fields are cleared.
And it's more similar to how it's done in PBS now.
Bumped the APIAGE and APIVER, in case some external plugin needs to adapt to
the now less restrictive schema for 'prune-backups'.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
First, doing such things can make client work slightly easier, as the
submitted values do not need to be made available in any callback
handling the response.
But the actual reason for doing this now is that this is a
preparatory step for allowing the user to download/print/... an
autogenerated PBS client encryption key.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
That was a lot of code and hash map touching just to avoid an extra
stat, whose result was probably in the page cache anyway, for the case
that a backup has a comment.
A case which is rather unlikely - comments are normally added for
the occasional explicit backup (e.g., before a major upgrade, before a
configuration change in that guest, ...), and at least not worth a
relatively complicated effort that makes the sub harder to read and
maintain.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
AFAICT the snapshot activation is not necessary for our plugins at the moment,
but it doesn't really hurt and might be relevant in the future or for external
plugins.
Deactivating volumes is up to the caller, because for example, for replication
on a running guest, we obviously don't want to deactivate volumes.
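As a rough sketch of the idea, assuming the top-level wrapper in PVE::Storage
(details abbreviated):

    sub volume_snapshot {
        my ($cfg, $volid, $snap) = @_;

        my ($storeid, $volname) = parse_volume_id($volid);
        my $scfg = storage_config($cfg, $storeid);
        my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});

        # new: make sure the volume is active before the plugin snapshots it;
        # deactivation is intentionally left to the caller
        activate_volumes($cfg, [$volid]);

        return $plugin->volume_snapshot($scfg, $storeid, $volname, $snap);
    }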
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
In order to take a snapshot of a container volume, which can be mounted
read-only with RBD, the volume needs to be frozen (fsfreeze (8)) before taking
the snapshot.
This commit adds helpers to determine if the FIFREEZE ioctl needs to be called
for the volume.
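A minimal sketch of such a helper pair, assuming a base-class default that the
RBD plugin overrides (the method name here is only illustrative):

    # PVE::Storage::Plugin - default: no freeze needed
    sub volume_snapshot_needs_fsfreeze {
        return 0;
    }

    # PVE::Storage::RBDPlugin - a mounted container volume must be frozen
    # (fsfreeze) before the snapshot is taken
    sub volume_snapshot_needs_fsfreeze {
        return 1;
    }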
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
This replaces the locally maintained hardware map in
get_wear_leveling_info() with the commonly used register names of
smartmontools. Smartmontools maintains a labeled register database that
covers the majority of drives (including versions). The current lookup
produces false estimates; this approach hopefully provides more reliable
data.
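An illustrative sketch of such a label-based lookup (the attribute names are
common smartmontools labels, the data structure is assumed):

    my @wearout_labels = (
        'Media_Wearout_Indicator',
        'Wear_Leveling_Count',
        'Percent_Lifetime_Remain',
        'SSD_Life_Left',
    );

    my $wearout;
    for my $attr (@{$smart->{attributes}}) {
        next if !grep { $attr->{name} eq $_ } @wearout_labels;
        $wearout = $attr->{value}; # normalized value: 100 = new, counts down with wear
        last;
    }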
Signed-off-by: Jan-Jonas Sämann <sprinterfreak@binary-kitchen.de>
Provides recent test data for disk_tests/ssd_smart/sde_smart. The
previous data was created using an older smartmontools version, which
did not yet support the drive and therefore had bogus attribute mapping.
Signed-off-by: Jan-Jonas Sämann <sprinterfreak@binary-kitchen.de>
Commit 8fe00d9944 already
introduced the necessary logging for the secure code path,
so presumably the bug was already fixed for most people.
Delay the potential die for the send command to be able to log
the output+error from the receive command. This way we also see e.g.
'volume ... already exists' instead of just 'broken pipe'.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
quiet takes care of both the error and success case.
Without this, there are lines like:
myzpool/vm-4352-disk-0@__replicate_4352-7_1601538554__ name myzpool/vm-4352-disk-0@__replicate_4352-7_1601538554__ -
in the log if the dataset exists, and this information is
already present in more readable form.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
we already have the ZFS pool plugin as precedent for using 10s, and for
networks with remote off-site storage one can get to 200-300 ms
RTT latency, which means that for a protocol needing multiple rounds of
communication, one can easily get over 2s while not being in a broken
network.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The LIO backend for ZFS over iSCSI fetches the json-config periodically from
the target.
This patch reduces the stored config values to those which are actually used
and additionally untaints the values read from the remote host's config file.
Since the LUN index is used in calls to targetcli on the remote host (via
run_command), untainting prevents the call from crashing when run with '-T'.
Tested by creating a zfs over iscsi backed VM, starting it, adding disks,
resizing disks, removing disks, creating snapshots, rolling back to a snapshot.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
ZFS over iSCSI fetches information about the disk images via ssh, thus
the obtained data is tainted (perlsec(1)).
Since pvedaemon runs with '-T' enabled, trying to start a VM via GUI/API failed,
while it still worked via `qm` or `pvesh`.
The issue surfaced after commit cb9db10c1a9855cf40ff13e81f9dd97d6a9b2698 in
pve-common ('run_command: improve performance for logging and long lines'),
and results from concatenating the original (tainted) buffer to a variable,
instead of a captured subgroup.
Untainting the value in ZFSPlugin should not cause any regressions, since the
other 3 target providers already match on '\d+' for retrieving the
LUN number.
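The usual untaint idiom for this (variable name illustrative):

    die "unexpected LUN index '$lun'\n" if $lun !~ m/^(\d+)$/;
    $lun = $1; # only a captured group is considered untainted, see perlsec(1)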
Reported via pve-user [0].
Reproduced and tested by setting up a LIO target (on top of a virtual PVE),
adding it as storage and trying to start a guest (with a disk on the
ZFS over iSCSI storage) with `perl -T /usr/sbin/qm start $vmid`.
[0] https://lists.proxmox.com/pipermail/pve-user/2020-October/172055.html
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
and other stat failure modes.
This method returns undef if 'qemu-img info ...' fails to return
information, so callers must handle this already.
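For example, a caller can simply treat undef as an error (illustrative):

    my $size = PVE::Storage::Plugin::file_size_info($filename, $timeout);
    die "could not get size of '$filename'\n" if !defined($size);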
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>