Commit Graph

32 Commits

Author SHA1 Message Date
Thomas Lamprecht
52bdf49f20 ceph tools: adapt version to accept -pveX too
this is a precautionary measure

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-11-23 17:59:17 +01:00
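
A minimal sketch of what such a tolerant version match could look like (regex and variable names are illustrative only, not the actual PVE::Ceph::Tools code):

    # accept an optional -pveX suffix on the upstream version, e.g. 14.2.4-pve1
    my $line = 'ceph version 14.2.4-pve1 (abc123def456) nautilus (stable)';
    if ($line =~ m/^ceph version\s+(\d+(?:\.\d+)+(?:-pve\d+)?)\s+\(([0-9a-f]+)\)/) {
        my ($version, $buildcommit) = ($1, $2);
    }
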
Thomas Lamprecht
3248590d53 api: ceph version: actually get full version
add and change the return signature for the wantarray case, which can
be done safely as this is only used once (statd), and there only the
first element, the full version string, is used - so no breakage
potential there

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-11-15 18:35:39 +01:00
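
A rough illustration of the wantarray pattern described above (names and return values are placeholders, not the exact helper):

    sub get_local_version {
        # placeholder data; the real helper parses the installed ceph version
        my ($version, $buildcommit, $subversions) = ('14.2.4-pve1', 'abc123def456', [14, 2, 4]);
        # list context gets all parts, scalar context only the full version string
        return wantarray ? ($version, $buildcommit, $subversions) : $version;
    }

    my ($full_version) = get_local_version();   # statd-style caller: only the first element is used
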
Thomas Lamprecht
735f24ebae ceph: move possible_flags to Ceph::Tools for intra-module reuse
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-07-23 15:52:23 +02:00
Thomas Lamprecht
7ef69f338e ceph tools: factor out frequent keyring and config init check
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-07-23 07:48:45 +02:00
Alwin Antreich
ea60e3b72e keyring: use ckeyring_path variable in chown cmd
A fixed path for the ceph.client.admin.keyring was used in the chown
command. This patch uses the ckeyring_path variable instead, to minimize
changes should the path change.

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
2019-07-10 15:42:49 +02:00
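
Illustrative sketch of the single-variable approach (paths are the usual defaults, surrounding code simplified):

    use File::Copy;
    use PVE::Tools qw(run_command);

    my $pve_ckeyring_path = '/etc/pve/priv/ceph.client.admin.keyring';
    my $ckeyring_path = '/etc/ceph/ceph.client.admin.keyring';

    # both the copy target and the chown argument use $ckeyring_path,
    # so a future path change only touches the variable definition
    copy($pve_ckeyring_path, $ckeyring_path) or die "copying keyring failed: $!\n";
    run_command(['chown', 'ceph:ceph', $ckeyring_path]);
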
Alwin Antreich
177095661b Fix: typo in ckeyring_path
pveceph init failed, as the copy used a wrong target path containing a
typo, and the subsequent chown then tried to operate on the correct
keyring path, which did not exist.

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
2019-07-10 15:41:20 +02:00
Dominik Csapak
342de4e778 ceph: services: improve addr selection
we map '$type addr' to '$type_addr' anyway in the ceph.conf parser,
so this is not necessary

also use 'public_addr' if it is set

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-07-04 09:57:50 +02:00
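
A possible shape of the resulting address selection, assuming the parsed config hash already uses the underscore keys (variable names are hypothetical):

    # prefer an explicitly configured public_addr, then the per-type key
    my $addr = $cfg->{$section}->{public_addr}
        // $cfg->{$section}->{"${type}_addr"};
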
Fabian Grünbichler
8ba0d0a05e Ceph: add get_cluster_versions helper
to make 'ceph versions' and 'ceph XX versions' accessible.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2019-06-25 09:03:42 +02:00
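
A hedged sketch of such a helper; it assumes a PVE::RADOS connection whose mon_command() takes the usual hash with a 'prefix' key:

    sub get_cluster_versions {
        my ($service, $rados) = @_;   # $service: e.g. 'osd' or 'mon', undef for all daemons
        $rados //= PVE::RADOS->new();
        my $prefix = defined($service) ? "$service versions" : 'versions';
        # returns the decoded reply of 'ceph versions' / 'ceph <service> versions'
        return $rados->mon_command({ prefix => $prefix });
    }
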
Thomas Lamprecht
c18db15ba0 ceph: ensure /etc/ceph belongs to ceph
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-06-19 17:00:21 +02:00
Thomas Lamprecht
c5a673ed1d ceph: setup symlinks: ensure global ceph config directory exists
normally this gets created on package installation, but it could be
deleted, e.g., by a debug purge. As it costs nothing to create, just
do a mkdir on it, which does not fail if the directory already exists.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-06-18 17:15:07 +02:00
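
The idempotent directory creation amounts to something like this (path as shipped by the ceph packages):

    my $ceph_cfgdir = '/etc/ceph';
    # cheap and safe to repeat; only create it if it is missing
    mkdir $ceph_cfgdir if ! -d $ceph_cfgdir;
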
Dominik Csapak
d558d296f7 ceph: mon create: refactor and improve auth key creation
it makes no sense to have the mon key inside the client.admin.keyring;
also, the order and the operations did not make much sense

also create the client admin keyring when creating the config

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-06-18 17:06:50 +02:00
Dominik Csapak
e7e615768f ceph: services: do not create rados object in get_services_info
we always pass one in, and the only reason it could be undef is that
we could not connect, so it makes no sense to try again and add
unnecessary time to the API call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-06-18 16:17:35 +02:00
Dominik Csapak
f8eade23dd ceph: pool destroy: give correct parameter for nautilus
this parameter changed sometime between luminous and nautilus.
Note that with this change, it is not possible to delete pools in
luminous anymore.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-06-11 13:47:55 +02:00
Dominik Csapak
46fb9c5017 ceph: a little luminous backwards compatibility
ceph luminous does not use the 'name' property in the metadata
everywhere, so fall back to 'id'

this keeps the ceph dashboard usable while still running luminous
(relevant for upgrading)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-06-11 12:58:24 +02:00
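
The fallback boils down to a one-liner of this form (field names as in the ceph metadata output):

    # luminous metadata may lack 'name', so fall back to 'id'
    my $name = $metadata->{name} // $metadata->{id};
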
Dominik Csapak
d4e7f1bf3d ceph: add db/wal size helper
reads the sizes from the ceph config db first, then from the ceph
config file, first from the osd section and then from the global one

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-06-04 17:19:33 +02:00
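
A hypothetical sketch of that lookup order (helper and hash names are made up for illustration; only the precedence follows the commit message):

    sub get_size_setting {
        my ($key, $config_db, $ceph_cfg) = @_;   # $key e.g. 'bluestore_block_db_size'
        # 1) central config db, 2) ceph.conf [osd] section, 3) ceph.conf [global]
        return $config_db->{$key}
            // $ceph_cfg->{osd}->{$key}
            // $ceph_cfg->{global}->{$key};
    }
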
Thomas Lamprecht
d79e9eb587 followup code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-06-04 17:18:09 +02:00
Dominik Csapak
48e8a06de1 ceph: add ceph-volume helper
those will be needed for creation/destruction of nautilus osds

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-06-04 17:16:03 +02:00
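
Minimal sketch of such a wrapper, assuming PVE::Tools::run_command; the actual helper may pass more options:

    use PVE::Tools qw(run_command);

    sub ceph_volume_command {
        my (@params) = @_;
        # e.g. ceph_volume_command('lvm', 'create', '--data', '/dev/sdb')
        run_command(['ceph-volume', @params], errmsg => "ceph-volume @params failed");
    }
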
Dominik Csapak
b521573040 ceph: mgr: delete auth key on destruction
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-06-04 17:07:39 +02:00
Thomas Lamprecht
930c1d6a15 get_cluster_service: do not always call cfs_update
This was done to ensure the nodelist, which get_node_kv uses
internally to loop over all nodes, is filled. But everything coming
through the API/CLI already has cfs_update called, so this is not
required outside of development testing calls. As we normally do not
do this for such calls, a cfs_update call is a prerequisite for them
anyway. A better solution would be to handle this internally in
get_node_kv, e.g., by checking whether $clinfo->{version} was set
once and, if not, calling cfs_update; or, maybe even better, to adapt
the IPC call so it can return the KV status hash for all nodes.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-06-04 16:42:37 +02:00
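
The lazy-refresh idea sketched in the message could look roughly like this (hypothetical; the $clinfo access is shown only to illustrate the check):

    # inside get_node_kv (illustrative): refresh cluster state only if it
    # was never loaded, instead of unconditionally calling cfs_update()
    my $clinfo = PVE::Cluster::get_clinfo();
    PVE::Cluster::cfs_update() if !$clinfo->{version};
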
Thomas Lamprecht
9ecb3d10e4 followup code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-06-04 16:42:37 +02:00
Dominik Csapak
d5373b7dc3 ceph: factor out the service info generation
and add a call to '$type metadata' so the version is included as well

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-06-04 14:57:10 +02:00
Dominik Csapak
4e76dbd7b3 ceph: refactor broadcast_ceph_services and get_cluster_service
and use the broadcast when a service is added/removed.
We will use 'get_cluster_service' in the future when we generate a
list of services of a specific type.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-06-04 14:56:24 +02:00
Dominik Csapak
74628668d7 ceph: get_local_services: also check /var/lib/ceph/$type
so we do not miss disabled services

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-06-04 14:55:45 +02:00
Thomas Lamprecht
ac4f971d18 followup code cleanup for: add get_local_services for ceph
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-05-27 15:52:12 +02:00
Dominik Csapak
23b20ae451 add get_local_services for ceph
this returns a hash of existing service links for
mds/mgr/mons so that we know which services exist

this is necessary since ceph itself does not record whether a service
is defined somewhere, only whether it is running

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2019-05-27 15:30:35 +02:00
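
A hedged sketch of what such a scan could look like, checking the per-type data directories under /var/lib/ceph (directory layout assumed; the real helper also considers other sources):

    sub get_local_services {
        my $res = {};
        for my $type (qw(mds mgr mon)) {
            $res->{$type} = {};
            my $dir = "/var/lib/ceph/$type";
            next if ! -d $dir;
            for my $entry (glob("$dir/*")) {
                next if ! -d $entry;
                my $name = $entry =~ s|.*/||r;   # basename, e.g. 'ceph-<id>'
                $res->{$type}->{$name} = {};
            }
        }
        return $res;
    }
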
Thomas Lamprecht
bba5c71217 ceph: drop systemd_managed - we now always are
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-05-26 13:35:39 +02:00
Thomas Lamprecht
507b7cfedd fix #2108: ceph: 'osd pools set' cannot accept integers anymore
The new luminous release 12.2.11 cherry-picked a commit[0] which
removed a legacy fallback when parsing a pool set command.
As of now only strings (whose content can then represent a string,
int or float) may be passed as values, while older releases also
accepted an int directly.

So ensure that all of our "osd pool set" commands pass strings, which
means that 'min_size' and 'size' must be converted to strings before
executing the command.

Without this one cannot create a CephFS over the WebUI anymore, as
the pool creation fails. Interestingly, normal pool creation over the
WebUI still worked, as it exposes those two parameters in the
creation form and thus sends them as strings to the backend, while
the cephfs API does not expose those two values at all but sets them
directly as integers. Funny stuff. CLI invocations also had the same
issue, depending on whether size and min_size were passed (then it
worked) or omitted (then it did not work).

[0]: c838a0096d

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-02-20 19:25:16 +01:00
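
A sketch of the stringification fix described above, assuming the usual mon_command interface (parameter names follow the 'osd pool set' mon command; hedged, not the exact diff):

    for my $setting (qw(size min_size)) {
        next if !defined($param->{$setting});
        $rados->mon_command({
            prefix => 'osd pool set',
            pool   => $pool,
            var    => $setting,
            val    => "$param->{$setting}",   # force a string; 12.2.11+ rejects raw integers
        });
    }
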
Alwin Antreich
b436dca874 Fix #2051: preserve DB/WAL disk on destroy
When destroying an OSD over API or CLI, e.g. by executing:

'pveceph osd destroy <num> --cleanup'

all disks associated with the OSD got wiped with dd, which included
any shared disks still in use by other OSDs, e.g., separate disks
holding DB/WAL.

The patch changes 'wipe_disks' to wipe the partition instead of the
whole disk.

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2019-02-08 14:20:23 +01:00
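
Illustration of the narrower wipe (device path and dd arguments are representative only): the partition that belonged to the OSD is zeroed, not the whole underlying disk:

    use PVE::Tools qw(run_command);

    sub wipe_disks {
        my (@devs) = @_;
        for my $dev (@devs) {   # e.g. '/dev/sdb2' (the OSD's partition), not '/dev/sdb'
            run_command(['dd', 'if=/dev/zero', "of=$dev", 'bs=1M', 'count=200', 'conv=fdatasync']);
        }
    }
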
Tim Marx
315304f369 ceph: change check if installed to ceph mon binary
Signed-off-by: Tim Marx <t.marx@proxmox.com>
2019-01-10 14:53:52 +01:00
Dominik Csapak
be7edba15d ceph: move mgr create/destroy to Ceph::Services
and adapt the paths and callers

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-12-20 09:44:01 +01:00
Dominik Csapak
27439be616 ceph: move service_cmd and MDS related code to Services.pm
Also adapts the calls to the relevant subs.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-12-20 09:44:01 +01:00
Dominik Csapak
6fb08cb923 ceph: move CephTools into Ceph/Tools.pm
It makes more sense to have it there, especially since we want to
split out the service parts into a separate file.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2018-12-20 09:44:01 +01:00