Commit Graph

18 Commits

Author SHA1 Message Date
Thomas Lamprecht
017bb1a8bd minor typo fix and code cleanups
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-02-08 15:26:44 +01:00
Alwin Antreich
3c6aa3f47e api osd/destroy: use ProcFSTools to iterate mounts
Instead of opening /proc/mounts through IO::File directly for parsing,
the patch uses ProcFSTools. This way it also takes care of any decoding
that may be needed.

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
2019-02-08 15:26:00 +01:00
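The change above boils down to replacing hand-rolled /proc/mounts parsing with the ProcFSTools helpers. A minimal sketch of the pattern, assuming `PVE::ProcFSTools::read_proc_mounts` returns the raw file content and `parse_proc_mounts` turns it into `[device, mountpoint, fstype, options]` entries (the OSD id and mount path below are illustrative):

```perl
#!/usr/bin/perl
use strict;
use warnings;

use PVE::ProcFSTools;

my $osdid = 0;  # hypothetical OSD id, for illustration only

# parse the mount table through ProcFSTools instead of IO::File
my $mounts = PVE::ProcFSTools::parse_proc_mounts(
    PVE::ProcFSTools::read_proc_mounts());

# look for the mount point of this OSD's data directory
my ($osd_mount) = grep {
    $_->[1] =~ m|^/var/lib/ceph/osd/ceph-\Q$osdid\E$|
} @$mounts;

print "OSD $osdid is mounted from $osd_mount->[0]\n" if $osd_mount;
```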
Thomas Lamprecht
e45cc727e6 followup: code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-02-08 15:26:00 +01:00
Alwin Antreich
b436dca874 Fix #2051: preserve DB/WAL disk on destroy
When destroying an OSD over API or CLI, e.g. by executing:

'pveceph osd destroy <num> --cleanup'

all disks associated with the OSD were wiped with dd, including shared
disks still in use by other OSDs, e.g., separate disks holding the
DB/WAL.

The patch changes 'wipe_disks' to wipe the partition instead of the
whole disk.

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-02-08 14:20:23 +01:00
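A minimal sketch of the idea behind the fix, wiping only the partition that backed the OSD rather than the whole device; the helper name and the 200 MiB zeroing size are assumptions for illustration, not taken verbatim from the repository:

```perl
use strict;
use warnings;

use PVE::Tools;

# Zero only the given partition (e.g. the OSD data or journal partition),
# so a shared disk that also carries other OSDs' DB/WAL stays usable.
sub wipe_partition {
    my ($part) = @_;

    die "refusing to wipe '$part': not a block device\n" if ! -b $part;

    PVE::Tools::run_command(
        ['/bin/dd', 'if=/dev/zero', "of=$part", 'bs=1M', 'count=200', 'conv=fdatasync'],
        errmsg => "wiping '$part' failed",
    );
}

wipe_partition('/dev/sdb1');  # hypothetical partition path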
Dominik Csapak
98fe93ae25 ceph: move Monitor API calls to API2/Ceph/MON.pm
and adapt the paths

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-12-20 09:44:01 +01:00
Dominik Csapak
4fec2764f1 ceph: move MGR API calls to API2/Ceph/MGR.pm
and adapt the paths

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-12-20 09:44:01 +01:00
Dominik Csapak
79fa41a2b8 ceph: move API2::CephOSD to API2::Ceph::OSD
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-12-20 09:44:01 +01:00
Dominik Csapak
27439be616 ceph: move service_cmd and MDS related code to Services.pm
Also adapts the calls to the relevant subs.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-12-20 09:44:01 +01:00
Dominik Csapak
6fb08cb923 ceph: move CephTools into Ceph/Tools.pm
It makes more sense to have it there, especially since we want to
split out the service parts into a separate file.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-12-20 09:44:01 +01:00
Dominik Csapak
c31f487e7a ceph: use cfs_read/write_file for ceph.conf
The parser is now registered, and ceph.conf is a tracked file in pmxcfs.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-12-20 09:44:01 +01:00
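With the parser registered and ceph.conf tracked in pmxcfs, reading and writing can go through the cluster file system helpers. A small sketch, assuming the parsed result is a hash of config sections (the option tweaked below is purely illustrative):

```perl
use strict;
use warnings;

use PVE::Cluster;

# read the tracked, already-parsed ceph.conf from pmxcfs
my $cfg = PVE::Cluster::cfs_read_file('ceph.conf');

# assumed layout: one hash entry per section, e.g. $cfg->{global}
$cfg->{global}->{'osd pool default size'} = 3;

# write it back through the registered writer
PVE::Cluster::cfs_write_file('ceph.conf', $cfg);
```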
Tim Marx
1aa902ae90 ceph api: added check for /etc/pve/ceph.conf to remaining/new endpoints
Signed-off-by: Tim Marx <t.marx@proxmox.com>
2018-12-13 10:14:39 +01:00
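The check itself is essentially a guard at the start of each endpoint that refuses to run while the cluster-wide config does not exist yet. A hypothetical sketch of such a guard (helper name and error wording are made up for illustration):

```perl
use strict;
use warnings;

my $pve_ceph_cfgpath = '/etc/pve/ceph.conf';

# hypothetical guard called at the start of a Ceph API endpoint
sub assert_ceph_configured {
    die "ceph not initialized - missing '$pve_ceph_cfgpath'\n"
        if ! -f $pve_ceph_cfgpath;
}

assert_ceph_configured();
```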
Thomas Lamprecht
6ad70a2bb8 ceph: update all pg_num defaults to 128
The last patch missed a few places; see the PG calculator or also:
http://docs.ceph.com/docs/luminous/rados/operations/placement-groups/

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-11-29 12:36:25 +01:00
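For context, the linked Luminous documentation recommends pg_num = 128 for clusters with fewer than 5 OSDs; the general PG calculator guidance is roughly the formula sketched below (the target of 100 PGs per OSD is the commonly cited default, not something taken from this repository):

```perl
use strict;
use warnings;

use POSIX qw(ceil);

# total PGs ~= (OSD count * target PGs per OSD) / replica count,
# rounded up to the next power of two
sub suggested_pg_num {
    my ($osd_count, $replica_count, $target_per_osd) = @_;
    $target_per_osd //= 100;

    my $raw = ($osd_count * $target_per_osd) / $replica_count;
    return 2 ** ceil(log($raw) / log(2));
}

print suggested_pg_num(3, 3), "\n";   # 100 raw -> 128
```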
Thomas Lamprecht
c5a0a1e449 ceph: fixup s/add_storage/add-storage/
It's just the nicer interface, and we want to move away from
underscore usage in new calls.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-11-28 10:07:08 +01:00
Thomas Lamprecht
e337caaf46 api: cephfs: reuse rados connection when polling for active MDS
No point in recreating one; we have an active one from earlier.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-11-27 15:24:08 +01:00
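The pattern is simply to keep the PVE::RADOS handle created earlier and issue all further monitor commands through it. A sketch, assuming mon_command returns the decoded reply (the concrete commands are only examples):

```perl
use strict;
use warnings;

use PVE::RADOS;

# one connection, created once...
my $rados = PVE::RADOS->new();

my $fs_list = $rados->mon_command({ prefix => 'fs ls' });

# ...and reused later while polling for an active MDS,
# instead of constructing a new PVE::RADOS object per poll
my $mds_stat = $rados->mon_command({ prefix => 'mds stat' });
```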
Thomas Lamprecht
a62d7bd966 api: cephfs: wait for MDS to become active
An MDS only becomes active once an FS exists, and we need an active
MDS to be able to add a storage, as the CephFS plugin does an
immediate mount check. As an MDS needs some time to become active, we
had a problematic time window where this mount could fail.

Wait for an MDS to reach the active state.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-11-23 18:49:48 +01:00
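The waiting itself is a plain poll-with-timeout loop around the monitor status query. A generic sketch of that shape; the concrete check for an MDS in 'up:active' state is left as a code reference, since the exact reply layout is not shown here:

```perl
use strict;
use warnings;

# retry $check until it succeeds or $timeout seconds have passed
sub wait_for {
    my ($check, $timeout, $interval) = @_;
    $timeout //= 60;
    $interval //= 2;

    my $starttime = time();
    while (time() - $starttime < $timeout) {
        return 1 if $check->();
        sleep($interval);
    }
    return 0;
}

# hypothetical usage, with $rados an existing PVE::RADOS connection:
# wait_for(sub { mds_is_active($rados->mon_command({ prefix => 'mds stat' })) })
#     or die "timeout waiting for an active MDS\n";
```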
Thomas Lamprecht
34c1236c35 api: cephfs: check if SID is free when add_storage is set
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2018-11-23 18:35:16 +01:00
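Checking whether a storage ID (SID) is still free amounts to looking it up in the parsed storage configuration before creating the new entry. A sketch, assuming the parsed config keeps its entries under an `ids` hash (the SID below is illustrative):

```perl
use strict;
use warnings;

use PVE::Storage;

# refuse to reuse an already-defined storage ID when add_storage is set
sub assert_sid_unused {
    my ($sid) = @_;

    my $cfg = PVE::Storage::config();
    die "storage ID '$sid' is already defined\n" if $cfg->{ids}->{$sid};
}

assert_sid_unused('cephfs');  # hypothetical storage ID
```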
Thomas Lamprecht
7e1a9d25b6 ceph: add CephFS create and list API
Allow creating a new CephFS instance and listing existing ones.

As deletion requires coordination between the active MDS and all
standby MDSs next in line, this needs a bit more work. One could mark
the MDS cluster down and stop the active one; that should work, but
as destroying is quite a sensitive operation that is rarely needed in
production, I deemed it better to only document this and leave API
endpoints for it to the future.

For index/list I slightly transform the result of a RADOS `fs ls`
monitor command; this allows relatively easy display of a CephFS and
its backing metadata and data pools in a GUI.

While for now it's not enabled by default and marked as experimental,
this API is designed to host multiple CephFS instances - we may not
need this at all, but I did not want to limit us early. Anybody who
wants to experiment can use it after applying the respective
ceph.conf settings.

When encountering errors, try to roll back. As we verified at the
beginning that we did not reuse existing pools, destroy the ones
which we created.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Co-authored-by: Alwin Antreich <a.antreich@proxmox.com>
2018-11-23 13:33:12 +01:00
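The index/list transformation described above can be sketched as follows, assuming mon_command decodes the `fs ls` JSON reply into Perl data with the usual name, metadata_pool and data_pools fields:

```perl
use strict;
use warnings;

use PVE::RADOS;

my $rados = PVE::RADOS->new();
my $fslist = $rados->mon_command({ prefix => 'fs ls' });

# flatten the reply so a GUI can show each CephFS with its backing pools
my $res = [ map { +{
    name => $_->{name},
    metadata_pool => $_->{metadata_pool},
    data_pools => $_->{data_pools},
} } @$fslist ];

foreach my $fs (@$res) {
    print "$fs->{name}: metadata=$fs->{metadata_pool} data=@{$fs->{data_pools}}\n";
}
```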
Thomas Lamprecht
b82649cc52 ceph: add MDS create/delete/list API
Allow creating, listing and destroying a Ceph Metadata Server (MDS)
over the API and the CLI `pveceph` tool.

Besides setting up the local systemd service template and the MDS
data directory, we also add a reference to the MDS in ceph.conf.
We note the backing host (node) of the respective MDS and set
'mds standby for name' = 'pve' so that the PVE-created ones form a
single group. If we decide to add integration for rank/path-specific
MDSs (possibly useful for CephFS under quite a bit of load), this
may help as a starting point.

On create, check early if a reference already exists in ceph.conf and
abort in that case. If we only see existing data directories later
on, we abort but do not remove them; they could well be from an older
manual setup, where it would be potentially dangerous to just remove
them. Let the user handle that case themselves.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Co-authored-by: Alwin Antreich <a.antreich@proxmox.com>
2018-11-23 13:33:12 +01:00
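Based on the description above, the ceph.conf reference for a PVE-created MDS would look roughly like the following (section and host name are illustrative; the exact keys written by the code may differ slightly):

```ini
[mds.nodename]
	host = nodename
	mds standby for name = pve
```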