Commit 8fe00d9944 already
introduced the necessary logging for the secure code path,
so presumably the bug was already fixed for most people.
Delay the potential die for the send command, to be able to log the
output+error from the receive command. This way we also see e.g.
'volume ... already exists' instead of just 'broken pipe'.
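A minimal sketch of the resulting control flow (the helper names below
are purely illustrative, not the actual code):

    # run the send command, but keep its error instead of dying right away
    eval { run_send_command(); };
    my $send_err = $@;
    # log what the receive side reported first, it usually contains the
    # actual reason, e.g. 'volume ... already exists'
    log_receive_output();
    # only now propagate the send failure
    die $send_err if $send_err;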
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The `quiet` option takes care of both the error and the success case.
Without this, there are lines like:
myzpool/vm-4352-disk-0@__replicate_4352-7_1601538554__ name myzpool/vm-4352-disk-0@__replicate_4352-7_1601538554__ -
in the log if the dataset exists, and this information is
already present in more readable form.
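Roughly, the existence probe now looks like this (sketch only, assuming
it goes through PVE::Tools::run_command; the exact command line may
differ):

    my $exists = 1;
    eval {
        # quiet suppresses the command's output in both the success
        # and the error case
        run_command(['zfs', 'get', '-H', 'name', $dataset], quiet => 1);
    };
    $exists = 0 if $@;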
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
we already have the ZFS pool plugin as precedent for using 10s, and for
networks with remote off-site storage one can get to 200 - 300ms
RTT latency, which means that for a protocol needing multiple rounds of
communication one can easily get over 2s (e.g., seven round trips at
300ms already exceed 2s) while not being in a broken network.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The LIO backend for ZFS over iSCSI fetches the json-config periodically from
the target.
This patch reduces the stored config values to those which are actually used
and additionally untaints the values read from the remote host's config-file.
Since the LUN index is used in calls to targetcli on the remote host (via
run_command), untainting prevents the call to crash when run with '-T'.
Tested by creating a zfs over iscsi backed VM, starting it, adding disks,
resizing disks, removing disks, creating snapshots, rolling back to a snapshot.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
ZFS over iSCSI fetches information about the disk-images via ssh, thus
the obtained data is tainted (see perlsec(1)).
Since pvedaemon runs with '-T' enabled, trying to start a VM via GUI/API failed,
while it still worked via `qm` or `pvesh`.
The issue surfaced after commit cb9db10c1a9855cf40ff13e81f9dd97d6a9b2698 in
pve-common ('run_command: improve performance for logging and long lines'),
and results from concatenating the original (tainted) buffer to a variable,
instead of a captured subgroup.
Untainting the value in ZFSPlugin should not cause any regressions, since the
other 3 target providers already have a match on '\d+' for retrieving the
lun number.
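A sketch of that kind of untaint (variable names are illustrative only):

    # a capture from a successful regex match is untainted, see perlsec(1)
    my ($lun) = $tainted_lun =~ /^(\d+)$/
        or die "invalid LUN number '$tainted_lun'\n";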
reported via pve-user [0].
reproduced and tested by setting up a LIO-target (on top of a virtual PVE),
adding it as storage and trying to start a guest (with a disk on the
ZFS over iSCSI storage) with `perl -T /usr/sbin/qm start $vmid`
[0] https://lists.proxmox.com/pipermail/pve-user/2020-October/172055.html
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
and other stat failure modes.
this method returns undef if 'qemu-img info ...' fails to return
information, so callers must handle this already.
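The expected caller pattern is roughly (the method name is assumed here
for illustration):

    my $size = file_size_info($filename);
    # undef means 'qemu-img info' could not provide any information
    die "unable to get size of '$filename'\n" if !defined($size);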
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
When creating a new ZFS storage, also instantiate an import-unit for the pool.
This should help mitigate the case where some pools don't get imported during
boot, because they are not listed in an existing zpool.cache file.
This patch needs the corresponding addition of 'zfs-import@.service' in
the zfsonlinux repository.
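The instantiation itself boils down to something like the following
sketch (assuming the pool name needs no systemd unit-name escaping):

    my $importunit = 'zfs-import@' . $poolname . '.service';
    run_command(['systemctl', 'enable', $importunit]);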
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
commit 815df2dd08 introduced a small issue
when activating linked clone volumes - the volname passed contains
basevol/subvol, which needs to be translated to subvol.
using the path method should be a robust way to get the actual path for
activation.
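A sketch of the idea (plugin method as used elsewhere in PVE::Storage;
the exact arguments are an assumption):

    # 'basevol/subvol-103-disk-0' resolves to the clone's own path
    my ($path) = $class->path($scfg, $volname, $storeid);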
Found and tested by building the package as root (otherwise the zfs
regression tests are skipped).
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Makes it possible to clone and start a container whose
ZFS subvols are not yet mounted for some reason. If a
subvol cannot be mounted, there's a better error now:
zfs error: cannot mount '/myzpool/subvol-103-disk-0': directory is not empty
Previously, cloning would quietly do an "empty" clone,
and startup would fail with:
mount_autodev: 1074 Permission denied - Failed to create "/dev" directory
lxc_setup: 3238 Failed to mount "/dev"
do_start: 1224 Failed to setup container "103"
__sync_wait: 41 An error occurred in another process (expected sequence number 5)
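The gist of the mount handling, as a rough sketch with plain zfs
commands (the plugin's actual helpers may differ):

    my $ds = "$scfg->{pool}/$volname";
    my $mounted = '';
    run_command(['zfs', 'get', '-H', '-o', 'value', 'mounted', $ds],
        outfunc => sub { $mounted = shift });
    # mounting now fails loudly (e.g. 'directory is not empty') instead
    # of silently continuing with an unmounted subvol
    run_command(['zfs', 'mount', $ds]) if $mounted ne 'yes';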
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
we don't even know whether $snap exists at all, so the old variant could
be rather misleading.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
This is shown in the man page, so it's not important to mention
that this is a wrapper. Also mention the fact that the keep options
from the storage configuration serve as a fallback, which was previously
mentioned in the description of the (now removed) prune-backups parameter.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
For prune selections, it doesn't matter what the current time is,
only the timestamps of the backups matter.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
as otherwise the API cannot easily know if this is set; it cannot check
with -f as the key is in a restricted area, and we do not want a GET to
run as protected.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Implement it for generic storages supporting backups
(i.e. directory-based storages) and add a wrapper for PBS.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Do not try to scan (and thus activate) storages which aren't
configured to support (or cannot support) "vdisks" anyway.
Avoids seemingly strange failures of VM migrations due to a backup storage
not being currently online - even if that storage isn't referenced in
the VM config anywhere.
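A sketch of the filtering (storage config as parsed by PVE::Storage;
treating 'content' as a hash of enabled content types is the relevant
assumption):

    foreach my $storeid (sort keys %{$cfg->{ids}}) {
        my $scfg = $cfg->{ids}->{$storeid};
        # skip storages that cannot hold VM disk images at all
        next if !$scfg->{content}->{images};
        # ... scan only the remaining storages ...
    }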
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This is a hack and we should get rid of `run_client_cmd` and
`run_raw_client_cmd` as an API entry!
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
also `pvesm set` and `pvesm add` should behave the same with
respect to how configuration options are treated.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>