Since the size of an LV can only be a multiple of 512 bytes, we round
down to the next full KiB. We then have to multiply it by 1024 for the
partition, since append_partition expects bytes and not KiB.
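A minimal sketch of that arithmetic, assuming a byte-sized request as input
(the helper names are illustrative, not the actual Perl code):

    def lv_size_kib(size_bytes: int) -> int:
        # round down to whole KiB; since an LV size must be a multiple of
        # 512 bytes, a full KiB is always a valid size
        return size_bytes // 1024

    def partition_size_bytes(size_kib: int) -> int:
        # append_partition-style callers expect bytes, so convert back
        return size_kib * 1024

    # 10 GiB plus a stray 513 bytes gets rounded down to exactly 10 GiB
    assert partition_size_bytes(lv_size_kib(10 * 1024**3 + 513)) == 10 * 1024**3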
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This fixes the dashboard when one views it on a node.
Alternatively, we could add a node-specific metadata API call.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Similar to the MDS API, so that DELETE and POST calls can operate on
the same path. This does not change the CLI pveceph interface.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This is an abstraction for listing Ceph services, with a few improvements:
* start/stop/restart buttons for all services (incl. icons)
* a syslog button to view the syslog of that service
* correct reloading behaviour when creating/destroying
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
In a situation where we /had/ a manager but destroyed it, this key's
value is an empty string. If we pass that to the WebUI we get strange
results in the form of a ghost MGR entry whose name is an ExtJS
auto-generated ID -> pretty confusing.
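A rough sketch of the resulting guard, with a made-up data shape standing in
for the broadcast hash:

    # hypothetical kv hash: node2 /had/ a manager but destroyed it
    mgr_kv = {'node1': 'mgr-node1', 'node2': ''}

    # skip empty values so no ghost entry (named by an auto-generated ID)
    # ends up in the WebUI grid
    managers = [name for name in mgr_kv.values() if name != '']
    assert managers == ['mgr-node1']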
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
No point in first building a list if we can just remove it directly
afterwards; it's eval-ed anyway and $osd_list did not get touched
in-between.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
With this, OSD destruction is left to ceph-volume if the OSD was created
with ceph-volume; otherwise our old code remains mostly the same, since
we still want to be able to destroy upgraded OSDs.
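Roughly the dispatch this boils down to; the helper names are illustrative
and the ceph-volume invocation is just one plausible way to do the teardown:

    import subprocess

    def destroy_legacy_osd(osd_id: int) -> None:
        # stand-in for the existing, pre-ceph-volume cleanup path
        pass

    def destroy_osd(osd_id: int, created_with_ceph_volume: bool) -> None:
        if created_with_ceph_volume:
            # let ceph-volume tear down the LVs and metadata it created itself
            subprocess.check_call(['ceph-volume', 'lvm', 'zap',
                                   '--destroy', '--osd-id', str(osd_id)])
        else:
            # upgraded (ceph-disk based) OSDs keep using the old code path
            destroy_legacy_osd(osd_id)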
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This completely rewrites the Ceph OSD creation API call to use
ceph-volume, since ceph-disk is not available anymore.

Breaking changes: no filestore anymore, journal_dev -> db_dev.

It is now possible to give a specific size for the db/wal. The default
is to read it from the Ceph config db or config file, and the fallback
is 10% of the OSD size for block.db and 1% for block.wal. The reason is
that ceph-volume does not create those automatically (like ceph-disk
did), so we have to create them ourselves.

Handling of the db/wal device (sketched below):
* if it has an LVM volume group on it with the 'ceph-UUID' naming
  scheme, we use that VG and create a new LV on it
* if we detect partitions, we create a new partition at the end
* if the disk is not used at all, we create a pv/vg/lv for it

It is not possible to create OSDs on Luminous with this API call
anymore; anyone needing that has to use ceph-disk directly.
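A sketch of that decision tree; the data shape and the returned command
strings are purely illustrative, not the actual implementation:

    import uuid

    def plan_db_dev(dev: dict, size_kib: int) -> str:
        """Return a rough plan for carving out a block.db/block.wal LV."""
        lv = f'osd-db-{uuid.uuid4()}'
        if str(dev.get('vg', '')).startswith('ceph-'):
            # existing 'ceph-UUID' volume group: just add another LV to it
            return f"lvcreate -L {size_kib}k -n {lv} {dev['vg']}"
        if dev.get('partitions'):
            # already partitioned: append a new partition at the end
            return f"sgdisk -n 0:0:+{size_kib}K {dev['path']}"
        # completely unused disk: create pv/vg/lv from scratch
        vg = f'ceph-{uuid.uuid4()}'
        return (f"pvcreate {dev['path']} && vgcreate {vg} {dev['path']} && "
                f"lvcreate -L {size_kib}k -n {lv} {vg}")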
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This way they can cleanly exit (with a SIGTERM or systemctl stop) but
will get restarted if they are killed or abort, though only within the
StartLimitIntervalSec= time period and for at most StartLimitBurst=
retries. See `man systemd.unit` for details; the defaults are a 10s
retry period with at most 5 retries, after which the unit is placed in
the failed state. So this is a best-effort try with no real downside.
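As an illustration only (the exact unit names and the Restart= policy picked
here are assumptions, not necessarily what the patch ships), a drop-in along
these lines gives that behaviour:

    from pathlib import Path

    # relies on systemd's defaults of StartLimitIntervalSec=10s and
    # StartLimitBurst=5 (see systemd.unit(5)); 'on-failure' does not restart
    # after a clean exit or SIGTERM, but does after a kill or an abort
    DROPIN = '[Service]\nRestart=on-failure\n'

    def install_restart_dropin(unit: str = 'ceph-mon@.service') -> None:
        d = Path('/etc/systemd/system') / f'{unit}.d'
        d.mkdir(parents=True, exist_ok=True)
        (d / '10-restart.conf').write_text(DROPIN)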
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
As already announced over two months ago[0], remove the unofficial
SheepDog plugin completely now. Besides the fact that it was never fully
supported in Proxmox VE, one of its main developers and ex-maintainer
declared it abandoned[1], so let's just remove it; git allows us to
resurrect it at any time should a miracle happen anyway.
[0]: https://pve.proxmox.com/pipermail/pve-user/2019-March/170497.html
[1]: http://lists.wpkg.org/pipermail/sheepdog/2019-March/068449.html
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Reads the sizes from the Ceph config db first, then from the Ceph
config file: first from the osd section, then from global.
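A sketch of that lookup order, assuming the relevant options are
bluestore_block_db_size / bluestore_block_wal_size (the data shapes are
illustrative):

    def get_dbwal_size(kind: str, config_db: dict, ceph_conf: dict):
        '''kind is "db" or "wal"; returns the configured size, or None.'''
        key = f'bluestore_block_{kind}_size'
        sources = (
            config_db,                    # ceph config db
            ceph_conf.get('osd', {}),     # [osd] section of ceph.conf
            ceph_conf.get('global', {}),  # [global] section of ceph.conf
        )
        for source in sources:
            if key in source:
                return int(source[key])
        return None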
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Since we will have a separate GUI for the manager, we do not need this
anymore.
This is a breaking API change.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This was done to ensure the nodelist, which get_node_kv uses internally
to loop over all nodes, is filled. But everything coming through the
API/CLI already has cfs_update called, so this is only needed for
development testing calls. As we normally do not do this for such
calls, a cfs_update call is simply a prerequisite for such things. A
better solution would be to handle this internally in get_node_kv,
e.g., by checking whether $clinfo->{version} was set at least once and
calling cfs_update if not, or, maybe even better, to adapt the IPC call
so that it can return the KV status hash for all nodes.
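A sketch of the suggested internal guard, with the Perl helpers mentioned
above mocked as plain functions:

    clinfo = {}   # stand-in for the cached $clinfo hash

    def cfs_update():
        # placeholder for the real cluster state refresh
        clinfo['version'] = clinfo.get('version', 0) + 1
        clinfo['nodelist'] = ['node1', 'node2']

    def get_node_kv(key: str) -> dict:
        if not clinfo.get('version'):
            cfs_update()   # only refresh if the state was never filled
        # the actual KV lookup is elided; just loop over the filled nodelist
        return {node: None for node in clinfo['nodelist']}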
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and use the broadcast when a service is added/removed
we will use 'get_cluster_service' in the future when we generate a list
of services of a specific type
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This was the exact same symbol we use for containers, and as this is
_not_ CT related, and a box did not make much sense to me for the
version shown here, just remove it for now before we forget about it...
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Ceph has a bad reputation for changing such minor things a few times,
even during a stable release, and as a bit more flexibility costs
exactly nothing here, let's do so and accept versions with anywhere
from one to an arbitrary number of version levels.
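For illustration, a lenient pattern along these lines would do (the actual
regex in the patch may differ):

    import re

    # one or more dot-separated numeric levels, e.g. '14', '14.2' or '14.2.1.1'
    version_re = re.compile(r'^(\d+(?:\.\d+)*)')

    assert version_re.match('14.2.1').group(1) == '14.2.1'
    assert version_re.match('14').group(1) == '14'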
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This correctly compares Ceph versions by their numeric parts. The way
it is written, it can be used as a comparison function for 'sort'.
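The idea, sketched in Python (the original is Perl and returns -1/0/1 so it
can be handed to sort directly):

    import functools

    def version_cmp(a: str, b: str) -> int:
        pa = [int(x) for x in a.split('.')]
        pb = [int(x) for x in b.split('.')]
        return (pa > pb) - (pa < pb)   # -1, 0 or 1, like a sort comparator

    versions = ['14.2.10', '14.2.9', '12.2.12']
    assert sorted(versions, key=functools.cmp_to_key(version_cmp)) == \
        ['12.2.12', '14.2.9', '14.2.10']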
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The value is not always a string (depending on which value changed), so
we have to convert it to a string so that it has a 'match' function.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We still allow 'luminous' for testing purposes. It could also be useful
if one has already upgraded the cluster to PVE 6.0 / Buster but not yet
Ceph, and due to an incident needs to set up a new Luminous node on
Buster to get healthy again. This is fabricated but not unthinkable;
as it costs nothing and isn't exposed to WebUI users, just keep it for
now. Remove it with a future point release though.
Use the non-public repo for now; it will be updated to testing soon.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>