Commit Graph

1961 Commits

Author SHA1 Message Date
Alwin Antreich
9553b55c4f cephfs: Exclude _netdev when mounting with fuse
Since ceph-fuse, which cannot process the _netdev option, is called
directly by the CephFS storage plugin, mounting the CephFS storage fails
when fuse is set in storage.cfg.

This patch moves the _netdev option into the else branch of the "is fuse
set" check, so _netdev is only added when the CephFS kernel client mounts
the storage.

It seems _netdev is not needed for the fuse mount anyway, as the
connection is closed once the fuse process gets killed on shutdown.

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
2019-06-14 10:04:51 +02:00
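
A minimal sketch of the conditional described in the commit above; the monitor address, mount point, and options are made up and this is not the actual plugin code:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $use_fuse   = 0;
    my $mountpoint = '/mnt/pve/cephfs';
    my @opts       = ('name=admin');

    my @cmd;
    if ($use_fuse) {
        # ceph-fuse is called directly and cannot parse _netdev
        @cmd = ('ceph-fuse', '-o', join(',', @opts), $mountpoint);
    } else {
        # the kernel client goes through mount(8), which understands _netdev
        push @opts, '_netdev';
        @cmd = ('mount', '-t', 'ceph', 'mon1:6789:/', $mountpoint, '-o', join(',', @opts));
    }
    print join(' ', @cmd), "\n";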
Thomas Lamprecht
243f413ad2 followup: reword comment a bit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-06-13 11:59:58 +02:00
Stefan Reiter
5c0720d6c9 fix #1427: Set file mode on uploaded templates/ISOs
Simply chmod the temp file to the "correct" permission mode before copying,
so that all users with access to the directory can read the file, mirroring
the behavior one gets for an apl_download call.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2019-06-13 11:55:18 +02:00
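
A hedged sketch of the idea; the file names and the 0644 mode are illustrative assumptions, not the exact values used by the API call:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::Copy qw(copy);

    # Give the uploaded temp file a mode readable by everyone with access to
    # the directory, then copy it into place (paths are made up).
    my $tmpfile = '/tmp/pveupload-example.tmp';
    my $dest    = '/var/lib/vz/template/iso/example.iso';

    chmod(0644, $tmpfile) or die "chmod on $tmpfile failed: $!\n";
    copy($tmpfile, $dest)  or die "copy to $dest failed: $!\n";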
Thomas Lamprecht
11efb06659 drop un-maintained sheepdog plugin
As already announced over two months ago [0], remove the unofficial
SheepDog plugin completely now. Besides never being fully supported in
Proxmox VE, one of its main developers and ex-maintainer declared it
abandoned [1], so let's just remove it; git allows resurrecting it at any
time if a miracle happens anyway.

[0]: https://pve.proxmox.com/pipermail/pve-user/2019-March/170497.html
[1]: http://lists.wpkg.org/pipermail/sheepdog/2019-March/068449.html

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-06-04 17:02:47 +02:00
Thomas Lamprecht
3ed611874e buildsys: split storage plugins to single lines
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-06-04 16:56:46 +02:00
Dominik Csapak
9250ddfe3a Diskmanage: correctly add wearout value of 0
If wearout is 0, we showed 'N/A' instead of 100%
(wearout is really the 'life left' value).

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-06-04 13:14:45 +02:00
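
The pitfall is Perl truthiness: 0 is a valid wearout value but evaluates as false. A minimal illustration, not the actual Diskmanage/GUI code:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $wearout = 0;   # valid value, but false in boolean context

    # buggy check: a wearout of 0 falls through to 'N/A'
    print 'buggy: ', ($wearout ? $wearout : 'N/A'), "\n";

    # fixed check: only an undefined value means 'N/A'
    print 'fixed: ', (defined($wearout) ? $wearout : 'N/A'), "\n";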
Dominik Csapak
c5fa45a99f Diskmanage: fix incorrect variable usage
sysdir is a string, we wanted sysdata

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-06-04 13:14:45 +02:00
Dominik Csapak
ea928fd41a Diskmanage: extract nvme wearout from smartctl text
extract the info from the line:
Percentage Used: XX%

also adapt the tests

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-06-04 13:14:45 +02:00
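
A sketch of the line match involved; the sample smartctl output and the conversion to a 'life left' value are assumptions, not copied from Diskmanage.pm:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # made-up excerpt of `smartctl -a` output for an NVMe device
    my @smartctl_lines = (
        'Critical Warning:                   0x00',
        'Percentage Used:                    3%',
        'Data Units Read:                    1,234,567',
    );

    my $wearout;
    for my $line (@smartctl_lines) {
        if ($line =~ m/^Percentage Used:\s+(\d+)%/i) {
            $wearout = 100 - $1;   # stored as 'life left' here (an assumption)
            last;
        }
    }
    print defined($wearout) ? "wearout (life left): $wearout%\n" : "no value found\n";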
Dominik Csapak
d0a217f624 tests: test get_disks single/multi disk filter
this executes all tests again with each disk as a single parameter
and all disks again as an array ref

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-06-04 12:51:44 +02:00
Dominik Csapak
1dc3038d40 Diskmanage: add append_partition sub
We will use this to add a partition to a disk when a device that already
has partitions on it is used for a Ceph OSD db/wal.

First we search for the highest partition number, then add the partition
and search for the resulting device (we cannot simply append the number to
the disk name; e.g., from /dev/nvme0n1 we get /dev/nvme0n1pX).

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-06-04 12:51:44 +02:00
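
A rough illustration of the naming issue mentioned above; the real code scans for the partition device that actually appears rather than guessing the name:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Why the new partition node is not always "<disk><number>": devices whose
    # name ends in a digit (NVMe, mmcblk, ...) get a 'p' separator.
    sub guess_partition_node {
        my ($disk, $partnum) = @_;
        return $disk =~ m/\d$/ ? "${disk}p${partnum}" : "${disk}${partnum}";
    }

    print guess_partition_node('/dev/sdb', 3), "\n";      # /dev/sdb3
    print guess_partition_node('/dev/nvme0n1', 3), "\n";  # /dev/nvme0n1p3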
Dominik Csapak
52a064afcd Diskmanage: allow get_disks to take multiple disks
we now expect the first parameter to be either a string with a single
disk, or an array ref with a list of disks

this way we can get the info of multiple disks simultaneously while
not iterating over all disks

this will be used to get the info for the osd/db/wal disks

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-06-04 12:51:44 +02:00
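
A minimal sketch of the parameter handling; the helper name is hypothetical and this is only the normalization step, not the real get_disks:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Accept either a single disk name or an array ref with several disks.
    sub normalize_disk_param {
        my ($disks) = @_;
        return [] if !defined($disks);
        return ref($disks) eq 'ARRAY' ? $disks : [$disks];
    }

    my $single = normalize_disk_param('sda');
    my $multi  = normalize_disk_param([ 'sda', 'nvme0n1' ]);
    print scalar(@$single), " disk(s) / ", scalar(@$multi), " disk(s)\n";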
Dominik Csapak
558d412d10 Disks API: allow also unused disks for db/wal
since we will create a pv/vg/lv on it with nautilus

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-06-04 12:51:44 +02:00
Dominik Csapak
b12883ded1 tests: improve error handling of run_disk_tests
if Diskmanage.pm has a syntax error, this will now catch it
during build

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-06-04 10:27:51 +02:00
Dominik Csapak
c86ba76db7 LVMPlugin: factor out the lv creation
we will want to create LVs manually for Ceph Nautilus db/wal devices

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-06-04 10:26:46 +02:00
Thomas Lamprecht
0180fa427f followup: get_disks: use own variable for frequent access
Less to read, and a dedicated name for the variable helps to grasp more
quickly what it should contain.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-05-31 12:04:42 +02:00
Thomas Lamprecht
248f43f58b followup: get_ceph_volume_infos: code and comment cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-05-31 12:04:13 +02:00
Dominik Csapak
19dcd1adcb Diskmanage: detect osds/journals/etc. created with ceph-volume
ceph-volume creates osds/journal/etc. on LVM instead of partitions,
so to detect them, we have to parse the lv_tags of the LVs and
match them with the underlying device

also add tests for this detection

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-05-31 11:41:33 +02:00
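
A sketch of reading LV tags with `lvs`; the tag names follow the common ceph-volume convention (e.g. ceph.osd_id=...), but the exact fields and parsing in Diskmanage.pm may differ:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # List LVs with their tags and backing devices, then pick out the ones
    # that carry ceph-volume tags.
    my @lines = `lvs --noheadings --separator ';' -o lv_name,vg_name,lv_tags,devices 2>/dev/null`;

    for my $line (@lines) {
        $line =~ s/^\s+|\s+$//g;
        my ($lv, $vg, $tags, $devices) = split /;/, $line;
        next if !defined($tags);
        if ($tags =~ /ceph\.osd_id=(\d+)/) {
            print "LV $vg/$lv belongs to OSD $1 (on " . ($devices // '?') . ")\n";
        }
    }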
Dominik Csapak
cd814c0453 allow disk with LVM for journal devices
With this, users can select disks with LVM on them for journal/db devices,
which will be necessary for Ceph Nautilus, since journals/db/wal will be
put on an LV there.

Of course, when creating an OSD, we have to detect whether that is ok
(probably based on the VG name on it).

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-05-31 11:41:33 +02:00
Thomas Lamprecht
08335d55c9 buildsys: no need to include arch detection for arch-independent package
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-05-21 22:24:42 +02:00
Thomas Lamprecht
f3cb30b992 buildsys: switch upload dist over to buster
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-05-21 20:57:05 +02:00
Thomas Lamprecht
5518d36142 bump version to 6.0-0+1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-05-21 20:54:49 +02:00
Thomas Lamprecht
d0c9eff46a buildsys: use dpkg-dev makefile helpers for pkg info
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-05-21 20:39:36 +02:00
Thomas Lamprecht
614a23cfd2 d/control: fix priority-extra-is-replaced-by-priority-optional
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-05-21 20:31:42 +02:00
Thomas Lamprecht
a4a6c595d8 bump version to 5.0-43
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-05-15 16:16:58 +02:00
Thomas Lamprecht
37ab64f388 api/config update: indentation and whitespace fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-05-15 10:36:01 +02:00
Thomas Lamprecht
91f42b33a0 api/config update: only iterate over hash keys, not values
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-05-15 10:34:12 +02:00
Thomas Lamprecht
4273e3ace9 pvesm set: handle deletion of properties
The delete parameter gets injected by the SectionConfig's updateSchema,
but we need to handle it ourselves in the code. This makes the following
possible:

pvesm set STORAGEID --delete property

The API equivalent is now possible as well. Adapted from the HA
manager's Resource update API call.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-05-14 16:24:34 +02:00
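
A small sketch of what handling the injected delete parameter amounts to; the config hash and property names are made up and this is not the SectionConfig internals:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # example section config and an update request asking to delete properties
    my $scfg  = { path => '/var/lib/vz', maxfiles => 3, bwlimit => '50' };
    my $param = { delete => 'maxfiles,bwlimit' };

    for my $opt (split /\s*,\s*/, ($param->{delete} // '')) {
        delete $scfg->{$opt};   # drop the property from the storage section
    }
    print join(', ', sort keys %$scfg), "\n";   # only 'path' remains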
Thomas Lamprecht
d8e8b291d4 bump version to 5.0-42
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-04-30 13:56:23 +00:00
Mira Limbeck
d560ec2860 map_volume: fall back to 'path'
Adds a fallback to 'Plugin::path' in the default implementation of
'map_volume' to simplify the common case of calling 'map_volume' followed
by a defined-check and a call to 'path' if the result is not defined. The
path is now always returned if the plugin in question does not override
'map_volume'.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2019-04-29 13:44:40 +00:00
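
A simplified illustration of the fallback pattern; the class name, method signatures, and path scheme are invented, and the real Plugin.pm methods take more arguments:

    #!/usr/bin/perl
    use strict;
    use warnings;

    package ExamplePlugin;

    sub new { return bless {}, shift }

    sub path {
        my ($self, $volname) = @_;
        return "/dev/example/$volname";   # made-up path scheme
    }

    # Default implementation: if a plugin does not override map_volume,
    # simply fall back to path(), so callers no longer need a defined-check.
    sub map_volume {
        my ($self, $volname) = @_;
        return $self->path($volname);
    }

    package main;

    my $plugin = ExamplePlugin->new();
    print $plugin->map_volume('vm-100-disk-0'), "\n";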
Thomas Lamprecht
b5c8278a3e zpool: handle race with other zpool imports
The underlying issue is that a zpool can get imported only once, so we
first check if it's in `zpool list`, and thus imported, and only if it
does not show up there do we try to import it.

But this can race with either:
* parallel running activate_storage call, through CLI/API/daemon
* a zpool import from an admin (a bit unlikely, but hey that's the
  thing with race conditions ;))

So refactor the "is pool imported" check into a closure and call it
additionally if the import failed, silencing the error if the pool is now
listed, and thus imported. This makes it a little bit nicer to read too,
IMO.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-04-18 10:01:39 +02:00
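
A hedged sketch of the retry pattern with the check factored into a closure; the command invocations are simplified and this is not the actual ZFSPoolPlugin code:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $pool = 'tank';   # example pool name

    # closure: is the pool currently imported, i.e. visible in `zpool list`?
    my $pool_imported = sub {
        my $out = `zpool list -H -o name 2>/dev/null` // '';
        return scalar grep { $_ eq $pool } split /\n/, $out;
    };

    if (!$pool_imported->()) {
        system('zpool', 'import', '-d', '/dev/disk/by-id/', $pool);
        if ($? != 0 && !$pool_imported->()) {
            # only a real error if nobody else imported the pool in the meantime
            die "could not import zpool '$pool'\n";
        }
    }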
Fabian Grünbichler
1187d22d28 add zfsutils-linux to build-dependencies
to enable our ZFS tests and fix broken disk list tests.
2019-04-18 10:01:39 +02:00
Thomas Lamprecht
e9ab8ea313 zPool: fixup timeout setting for import
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-04-17 14:54:42 +00:00
Thomas Lamprecht
a10695b4e8 zpool: cleanup zfs_request command a bit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-04-17 14:31:05 +00:00
Wolfgang Link
dc18abe07b ZFS Pool: improve error output from activate_storage
Related to #2154 ("Buggy 'pvesm status' output"), which has some issues
with zpool list, but we do not see the error messages from that step.
2019-04-17 08:05:04 +00:00
Christian Ebner
4af7713268 Status: Include command error in error message when storage activation fails
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
2019-04-11 08:04:32 +02:00
Thomas Lamprecht
a2a04139da followup: code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-04-10 10:15:42 +02:00
Dominic Jäger
b1f9d99017 Fix #318: Delete vzdump log when deleting a backup
Vzdump log files were not deleted when a backup was deleted.
Consequently, the folder continuously filled with .log files.
Now they get deleted after the backup is removed.

Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
2019-04-10 10:10:44 +02:00
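
A sketch of the cleanup; the log file naming (archive name with its suffix replaced by ".log") is an assumption for illustration, not necessarily the exact mapping used:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Derive the vzdump log file name from the backup archive name and remove
    # it together with the backup (path and suffix handling are assumptions).
    my $archive = '/var/lib/vz/dump/vzdump-qemu-100-2019_04_10-10_00_00.vma.lzo';

    (my $logfile = $archive) =~ s/\.(?:vma|tar)(?:\.(?:gz|lzo))?$/.log/;
    print "would remove: $logfile\n";
    # unlink($logfile) or warn "could not remove $logfile: $!\n";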
Thomas Lamprecht
20f1951fb4 bump version to 5.0-41
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-04-08 17:54:08 +02:00
Thomas Lamprecht
501562d4b7 code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-04-08 17:48:59 +02:00
Stoiko Ivanov
4526dffa53 Diskmanage: don't run zpool if not present
Since zfsutils is not a hard dependency of our stack, it is possible to not
have `zpool` available.

Checking for the existence of `zpool` before calling it suppresses spurious
warnings in the logs (e.g. when creating Ceph OSDs or accessing the 'Disk'
tab in the GUI).

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2019-04-08 17:47:07 +02:00
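
A minimal sketch of the guard; the binary path is an assumption:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # zfsutils is not a hard dependency, so the binary may simply be missing.
    my $zpool_bin = '/sbin/zpool';   # assumed install path

    my @pools;
    if (-x $zpool_bin) {
        @pools = `$zpool_bin list -H -o name,health 2>/dev/null`;
        chomp @pools;
    }
    # without the guard, calling a missing zpool would log spurious warnings
    print scalar(@pools), " pool(s) found\n";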
Stoiko Ivanov
e71336ad88 add 3 bwlimit tests
1) checking that the empty storage list is treated correctly (only override
and datacenter config limit considered)
2) checking that the empty storage list is treated correctly (as with 1).
3) checking that undef can be passed as one element of the storage list (it is
ignored)

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2019-04-05 18:15:01 +02:00
Stoiko Ivanov
396aedff95 get_bandwidth_limit: ignore 'undef' as storage
If one of the storages passed in $storage_list was not defined,
get_bandwidth_limit died (see [0] for an occurrence of this).
This patch changes the behavior to ignore undef as a storage instead.

[0] https://pve.proxmox.com/pipermail/pve-devel/2019-April/036515.html

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2019-04-05 18:15:01 +02:00
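
A minimal sketch of skipping undefined entries; the storage names are examples:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # a storage list as it may arrive from a caller, with an undef entry in it
    my $storage_list = [ 'local', undef, 'local-zfs' ];

    my @considered;
    for my $storeid (@$storage_list) {
        next if !defined($storeid);   # ignore undef instead of dying later
        push @considered, $storeid;
    }
    print "considered storages: @considered\n";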
Thomas Lamprecht
e638b21312 bump version to 5.0-40
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-04-04 16:19:51 +02:00
Fabian Grünbichler
c6f1315524 zfs: don't generate/update cachefile on pool import
during storage activation.

for pools that don't get imported at boot (e.g. because their vdevs are
not available when zfs-import-*.service runs), it is fatal to include them
in the cachefile; for those that do get imported at boot, this code should
never run anyway, as they are already imported.

in any case, a fallback to importing without a cachefile is the safe
variant.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2019-04-03 12:18:37 +02:00
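
One way to import a pool without touching the cachefile is to set the cachefile property to 'none' at import time; whether the plugin's fallback looks exactly like this is an assumption:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $pool = 'tank';   # example pool name

    # import without generating/updating /etc/zfs/zpool.cache
    my @cmd = ('zpool', 'import', '-d', '/dev/disk/by-id/', '-o', 'cachefile=none', $pool);
    print "would run: @cmd\n";
    # system(@cmd) == 0 or die "zpool import of '$pool' failed\n";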
Mira Limbeck
cdef3abb25 workaround zfs create -V error for unaligned sizes
fixes the 'cannot create 'nvme/foo': volume size must be a multiple of
volume block size' error by always rounding the size up to the next 1M
boundary. this is a workaround until
https://github.com/zfsonlinux/zfs/issues/8541 is solved.
the current manpage says 128k is the maximum blocksize, but a local test
showed that values up to 1M are allowed. it might be possible to
increase it even further (see f1512ee61).

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
2019-03-30 15:38:35 +01:00
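
A minimal sketch of the rounding; the unit (KiB) is an assumption:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use POSIX qw(ceil);

    # Round a volume size up to the next 1 MiB boundary so `zfs create -V`
    # does not reject it as "not a multiple of the volume block size".
    sub round_up_to_1m {
        my ($size_kib) = @_;
        return ceil($size_kib / 1024) * 1024;
    }

    print round_up_to_1m(1),    " KiB\n";   # 1024
    print round_up_to_1m(1025), " KiB\n";   # 2048
    print round_up_to_1m(2048), " KiB\n";   # 2048 (already aligned)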
Stoiko Ivanov
074bdd354f Storage::get_bandwidth_limit: fix if condition
Passing 'undef' as '$storage_list' led to a warning about using an
uninitialized value as an array ref.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2019-03-29 09:04:48 +01:00
Thomas Lamprecht
3d277fe6ac add libfile-chdir-perl to build dependencies
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-03-21 10:51:19 +01:00
Thomas Lamprecht
2c6954c97d re-bump version to 5.0-39
this wasn't released yet to any repo and I'd like to have the iSCSI
fix included, so just re-run 'dch -r'

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-03-07 12:22:59 +01:00
Dominik Csapak
eebcdb1119 fix tests when one has iscsi devices
The test would read the real devices, and if one is an iSCSI device it
would fail; move the test code to a sub and mock it in the tests.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-03-07 11:08:43 +01:00
Dominik Csapak
3add8714a9 fix content listing for user mode iscsi plugin
the format is a required field in the result schema

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-03-07 11:08:43 +01:00