with this, osd destruction is left to ceph-volume if the osd was created
with ceph-volume; otherwise our old code remains mostly the same, since
we still want to be able to destroy osds created before the upgrade
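a minimal sketch of the resulting dispatch, assuming the check is done
via 'ceph-volume lvm list' (the detection method and helper names here
are assumptions, not the actual implementation):

    import json
    import subprocess

    def is_ceph_volume_osd(osd_id):
        # ceph-volume only lists the osds it manages itself
        out = subprocess.run(['ceph-volume', 'lvm', 'list', '--format', 'json'],
                             capture_output=True, text=True, check=True).stdout
        return str(osd_id) in json.loads(out)

    def destroy_osd(osd_id):
        if is_ceph_volume_osd(osd_id):
            # let ceph-volume tear down the volumes it created
            subprocess.run(['ceph-volume', 'lvm', 'zap',
                            '--osd-id', str(osd_id), '--destroy'], check=True)
        else:
            destroy_osd_legacy(osd_id)  # stand-in for the old code path

    def destroy_osd_legacy(osd_id):
        raise NotImplementedError('pre-ceph-volume destruction path')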
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this completely rewrites the ceph osd creation api call to use ceph-volume,
since ceph-disk is not available anymore
breaking changes:
no filestore anymore; the 'journal_dev' parameter is replaced by 'db_dev'
it is now possible to give a specific size for the db/wal volume; the
default is to read it from the ceph config/db, and the fallback is
10% of the osd size for block.db and 1% for block.wal.
the reason is that ceph-volume, unlike ceph-disk, does not create those
volumes automatically, so we have to create them ourselves
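a minimal sketch of that fallback chain (the helper and its signature
are hypothetical; the config values would come from options such as
bluestore_block_db_size / bluestore_block_wal_size):

    def volume_size(kind, osd_bytes, explicit=None, from_config=None):
        # precedence: explicit user value > ceph config/db > percentage fallback
        fallback_pct = {'block.db': 0.10, 'block.wal': 0.01}
        if explicit is not None:
            return explicit
        if from_config is not None:
            return from_config
        return int(osd_bytes * fallback_pct[kind])

    # e.g. a 4 TB osd with nothing configured gets a 400 GB block.db
    assert volume_size('block.db', 4 * 10**12) == 4 * 10**11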
for the db/wal device there are three cases (see the sketch below):
- if it already carries an lvm vg with the 'ceph-UUID' naming scheme,
  we use that vg and create a new lv on it
- if we detect partitions, we create a new partition at the end
- if the disk is not used at all, we create a pv/vg/lv for it
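roughly, the case distinction could look like this (a sketch only:
probing via pvs/lsblk, the lv names, and the sgdisk invocation are
assumptions, not the actual implementation):

    import json, re, subprocess, uuid

    def run(cmd):
        return subprocess.run(cmd, capture_output=True, text=True,
                              check=True).stdout

    def place_db_wal(dev, size_bytes):
        # case 1: device is already a pv inside a 'ceph-UUID' volume group
        for line in run(['pvs', '--noheadings', '-o', 'pv_name,vg_name']).splitlines():
            fields = line.split()
            if len(fields) == 2 and fields[0] == dev \
               and re.match(r'^ceph-[0-9a-f-]{36}$', fields[1]):
                run(['lvcreate', '-L', f'{size_bytes}b',
                     '-n', f'osd-db-{uuid.uuid4()}', fields[1]])
                return
        # case 2: existing partitions -> append a new one at the end
        blk = json.loads(run(['lsblk', '-J', '-o', 'NAME,TYPE', dev]))
        if blk['blockdevices'][0].get('children'):
            run(['sgdisk', '-n', f'0:0:+{size_bytes // 2**20}M', dev])
            return
        # case 3: unused disk -> set up lvm from scratch
        vg = f'ceph-{uuid.uuid4()}'
        run(['pvcreate', dev])
        run(['vgcreate', vg, dev])
        run(['lvcreate', '-L', f'{size_bytes}b',
             '-n', f'osd-db-{uuid.uuid4()}', vg])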
it is not possible to create osds on luminous with this api call anymore;
anyone needing this has to use ceph-disk directly
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
ceph nautilus changed the structure of 'pg dump osds':
the data moved one level below.
parse both the new and the old format, and bail if anything else is returned
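a minimal sketch of such tolerant parsing (the key the new format nests
the data under is an assumption here):

    def parse_pg_dump_osds(data):
        if isinstance(data, list):         # old (pre-nautilus) format
            return data
        if isinstance(data, dict):         # nautilus: data one level below
            inner = data.get('osd_stats')  # key name is an assumption
            if isinstance(inner, list):
                return inner
        raise ValueError("unexpected 'pg dump osds' format")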
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Instead of opening /proc/mounts through IO::File directly for parsing,
the patch uses ProcFSTools. This way it also takes care of any needed
decoding.
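For illustration, a rough Python analogue of what such a helper does;
the octal-escape handling is the decoding step ad-hoc parsers tend to
miss (this is not the ProcFSTools implementation):

    def parse_proc_mounts(path='/proc/mounts'):
        mounts = []
        with open(path, encoding='ascii') as fh:
            for line in fh:
                dev, mountpoint, fstype, opts, *_ = line.split()
                # the kernel escapes spaces etc. in paths as octal ('\040')
                mountpoint = mountpoint.encode().decode('unicode_escape')
                mounts.append((dev, mountpoint, fstype, opts))
        return mounts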
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
When destroying an OSD over API or CLI, e.g. by executing:
'pveceph osd destroy <num> --cleanup'
all disks associated with the OSD were wiped with dd, including ones
shared with and still in use by other OSDs, e.g., separate disks
holding DB/WAL.

The patch changes 'wipe_disks' to wipe the partition instead of the
whole disk.
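A minimal sketch of the corrected behaviour (the dd parameters here are
illustrative, not the actual values used):

    import subprocess

    def wipe_disks(partitions):
        # zero only the partitions that belonged to this OSD (e.g.
        # /dev/sdb1), never the whole underlying disk (/dev/sdb)
        for part in partitions:
            subprocess.run(['dd', 'if=/dev/zero', f'of={part}',
                            'bs=1M', 'count=200', 'conv=fdatasync'],
                           check=True)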
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>