Since ceph-fuse is called directly in the CephFS storage plugin and cannot
process the _netdev option, mounting the CephFS storage fails when fuse is
set in storage.cfg.
This patch moves the _netdev option into the else branch of the fuse check,
so _netdev is only added when the CephFS kernel client mounts the storage.
It seems _netdev is not needed for the fuse mount anyway, as the connection
is closed once the fuse process gets killed on shutdown.
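Roughly, the resulting logic looks like this (a sketch with simplified
variable names and example values, not the exact plugin code):

    my $fuse = 1;                          # storage.cfg 'fuse' option set
    my ($mon, $mnt) = ('10.0.0.1', '/mnt/pve/cephfs');   # example values

    my $cmd;
    if ($fuse) {
        # ceph-fuse is executed directly and cannot parse _netdev
        $cmd = ['/usr/bin/ceph-fuse', '-m', $mon, $mnt];
    } else {
        # the kernel client goes through mount(8), which understands _netdev
        $cmd = ['/bin/mount', '-t', 'ceph', "$mon:/", $mnt, '-o', '_netdev'];
    }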
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
simply chmod the temp file to the "correct" permission mode before copying,
so that all users with access to the directory can read the file, mirroring
the behavior one gets for an apl_download call.
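A minimal sketch of the idea (paths and mode are assumptions, not the
actual code):

    use File::Copy;

    my $tmp  = "/tmp/template.tmp.$$";                       # hypothetical temp file
    my $dest = '/var/lib/vz/template/cache/example.tar.gz';  # hypothetical target

    # make the file readable for everyone with access to the directory
    # before moving it into place, like an apl_download'ed file would be
    chmod(0644, $tmp) or die "chmod failed: $!\n";
    copy($tmp, $dest) or die "copy failed: $!\n";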
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
as already announced over two months ago[0], remove the unofficial
SheepDog plugin now completely. Besides the fact that it was never fully
supported in Proxmox VE, one of its main developers and ex-maintainer
declared it abandoned[1], so let's just remove it; git allows us to
resurrect it at any time should a miracle happen anyway.
[0]: https://pve.proxmox.com/pipermail/pve-user/2019-March/170497.html
[1]: http://lists.wpkg.org/pipermail/sheepdog/2019-March/068449.html
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this re-runs all tests, once with each disk as a single parameter and once
with all disks as an array ref
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we will use this for adding a partition to a disk when using a device for
a ceph osd db/wal which already has partitions on it
first we search for the highest partition number, then add the partition
and search for the resulting device (we cannot simply append the number,
e.g. from /dev/nvme0n1 we get /dev/nvme0n1pX)
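A rough sketch of that lookup (device name is an example, partition
creation itself omitted):

    my $dev = 'nvme0n1';    # example device

    # highest existing partition number, taken from sysfs
    my $max = 0;
    for my $part (glob("/sys/block/$dev/$dev*")) {
        $max = $1 if $part =~ m/(\d+)$/ && $1 > $max;
    }
    my $next = $max + 1;

    # ... create partition $next, e.g. via sgdisk ...

    # then find the resulting device node, which can be with or without 'p'
    my ($newdev) = grep { -b $_ } ("/dev/${dev}p${next}", "/dev/${dev}${next}");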
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we now expect the first parameter to be either a string with a single
disk, or an array ref with a list of disks
this way we can get the info of multiple disks simultaneously without
iterating over all disks
this will be used to get the info for the osd/db/wal disks
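Sketched, the parameter handling boils down to this (sub name assumed,
body heavily simplified):

    sub get_disks {
        my ($disks, $nosmart) = @_;

        # accept a single disk name as well as an array ref with several disks
        my $disklist = ref($disks) eq 'ARRAY' ? $disks : [ $disks ];

        my $result = {};
        for my $disk (@$disklist) {
            # ... collect the info for $disk and add it to $result ...
        }
        return $result;
    }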
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Less to read, and a dedicated name for the variable helps to grasp more
quickly what it should contain
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
ceph-volume creates osds/journal/etc. on LVM instead of partitions,
so to detect them, we have to parse the lv_tags of the LVs and
match them with the underlying device
also add tests for this detection
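The detection roughly works like this (a sketch; the 'lvs' invocation and
parsing are simplified):

    # list LVs together with their tags and backing devices
    my $cmd = ['lvs', '--noheadings', '--separator', ';',
               '-o', 'lv_name,vg_name,devices,lv_tags'];

    # each output line looks roughly like:
    #   osd-block-xyz;ceph-abc;/dev/sdb(0);ceph.osd_id=1,ceph.type=block,...
    my $parse_line = sub {
        my ($line) = @_;
        $line =~ s/^\s+|\s+$//g;
        my ($lv, $vg, $device, $tags) = split(';', $line);
        $device =~ s/\(\d+\)$//;   # strip the extent offset, e.g. '(0)'
        my %tag = map { split('=', $_, 2) } split(',', $tags // '');
        return if !defined($tag{'ceph.osd_id'});
        # remember osd id and type (block/db/wal) for the physical device
        return ($device, $tag{'ceph.osd_id'}, $tag{'ceph.type'});
    };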
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
With this, users can select disks with LVM on them for journal/db devices,
which will be necessary for ceph nautilus, since there the journal/db/wal
will be put on an LV
of course, when creating an osd, we have to detect if that
is ok (probably based on the vg name on it)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the delete parameter gets injected by the SectionConfig's updateSchema,
but we need to handle it ourselves in the code
This makes the following possible:
pvesm set STORAGEID --delete property
Also the API equivalent is now possible. Adapted from the HA manager's
Resource update API call.
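Sketched, the handling looks something like this (simplified, validation
and config locking omitted):

    my $delete = extract_param($param, 'delete');   # e.g. "nodes,content"
    if ($delete) {
        for my $opt (PVE::Tools::split_list($delete)) {
            die "cannot set and delete property '$opt' at the same time!\n"
                if defined($param->{$opt});
            delete $scfg->{$opt};
        }
    }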
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds a fallback to 'Plugin::path' in the default implementation of
'map_volume' to simplify the common case of calling 'map_volume' followed
by a defined-check and a call to 'path' if it is not defined. The path is
now always returned if the plugin in question does not override
'map_volume'.
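The default implementation then boils down to this (a sketch, signature
simplified):

    sub map_volume {
        my ($class, $storeid, $scfg, $volname, $snapname) = @_;

        # no special mapping needed in the base plugin, just return the path,
        # so callers no longer need the defined-check plus path() fallback
        my ($path) = $class->path($scfg, $volname, $storeid, $snapname);
        return $path;
    }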
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
The underlying issue is that a zpool can get imported only once, so
we first check if it's in `zpool list`, and thus imported, and only
if it does not show up there we try to import it.
But, this can race with either:
* a parallel running activate_storage call, through CLI/API/daemon
* a zpool import from an admin (a bit unlikely, but hey, that's the
  thing with race conditions ;))
So refactor the "is pool imported" check into a closure, and call it
additionally if the import failed, silencing the error if the pool is
now listed, and thus imported. This makes it a little bit nicer to
read too, IMO.
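In rough pseudo-Perl the result looks like this (method names as used in
the plugin, heavily shortened):

    my $pool_imported = sub {
        my @param = ('-o', 'name', '-H', $pool);
        my $res = eval { $class->zfs_request($scfg, undef, 'zpool_list', @param) };
        warn "$@\n" if $@;
        return defined($res) && $res =~ m/$pool/;
    };

    if (!$pool_imported->()) {
        # may race with a parallel activate_storage or a manual import
        eval { $class->zfs_request($scfg, undef, 'zpool_import', '-d', '/dev/disk/by-id/', $pool) };
        if (my $err = $@) {
            # silently ignore the error if the pool is imported by now
            die $err if !$pool_imported->();
        }
    }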
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Vzdump log files were not deleted when a backup was deleted.
Consequently, the folder continuously filled with .log files.
Now they get deleted after the backup is removed.
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
Since zfsutils are not a hard dependency of our stack, it is possible to not
have `zpool` available.
Checking for the existence of `zpool` before calling it suppresses spurious
warnings in the logs (e.g. when creating Ceph OSDs or accessing the 'Disk' Tab
in the GUI).
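Sketched (binary path assumed):

    my $zpool = '/sbin/zpool';

    # only query zpools if the binary is actually installed
    if (-x $zpool) {
        # ... run_command([$zpool, 'list', ...]) as before ...
    } else {
        # zfsutils-linux not installed -> simply report no zpools
    }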
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
1) checking that the empty storage list is treated correctly (only override
and datacenter config limit considered)
2) checking that the empty storage list is treated correctly (as with 1).
3) checking that undef can be passed as one element of the storage list (it is
ignored)
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
If one of the storages passed in $storage_list was not defined,
get_bandwidth_limit died (see [0] for an occurrence of this).
This patch changes the behavior to ignore undef as storage instead.
[0] https://pve.proxmox.com/pipermail/pve-devel/2019-April/036515.html
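Sketched, only the relevant loop (a minimal sketch, not the exact code):

    for my $storage (@$storage_list) {
        next if !defined($storage);   # simply skip undef entries
        # ... look up the configured bwlimit for $storage ...
    }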
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
during storage activation.
for pools that don't get imported at boot (e.g. because their vdevs are
not available when zfs-import-*.service runs) it is fatal to include
them in the cachefile; for those that do get imported at boot this code
should never run anyway, as they are already imported.
in any case, a fallback to importing without the cachefile is the safe variant.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
fixes the 'cannot create 'nvme/foo': volume size must be a multiple of
volume block size' error by always rounding the size up to the next 1M
boundary. this is a workaround until
https://github.com/zfsonlinux/zfs/issues/8541 is solved.
the current manpage says 128k is the maximum blocksize, but a local test
showed that values up to 1M are allowed. it might be possible to
increase it even further (see f1512ee61).
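The rounding itself is simple (a sketch, $size assumed to be in kbytes):

    my $size = 4100;                              # example: 4100k requested
    $size = int(($size + 1023) / 1024) * 1024;    # round up to the next 1M
    # -> 5120k, always a multiple of 1M, so any volblocksize up to 1M fits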
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Passing 'undef' as '$storage_list' led to a warning about using an
uninitialized value as an array ref.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
this wasn't released yet to any repo and I'd like to have the iSCSI
fix included, so just re-run 'dch -r'
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the test would read the real devices, and if one is an iSCSI device it
would fail; move the test code into a sub and mock it in the tests
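On the test side this allows something like the following (module and sub
names are hypothetical here):

    use Test::MockModule;

    my $mock = Test::MockModule->new('PVE::Diskmanage');
    # never touch a real device during the test run
    $mock->mock('read_device_info', sub { return 'mocked data'; });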
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>