since the size of an LV can only be a multiple of 512 bytes, we round
down to the nearest KiB
we then have to multiply it by 1024 for the partition, since
append_partition expects bytes and not KiB
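A minimal sketch of that calculation (variable names are illustrative
only, $size is assumed to be in bytes):

    my $size = 4_193_280;                  # requested size in bytes (example value)
    my $size_kib = int($size / 1024);      # round down to a whole KiB
    my $part_size = $size_kib * 1024;      # append_partition() expects bytes, not KiB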
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
similar to the MDS api, so that DELETE and POST calls can operate on
the same path. This does not change the CLI pveceph interface
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In a situation where we /had/ a manager but destroyed it, this
key's value is an empty string, and if we pass that to the WebUI we
get strange results in the form of a ghost MGR entry with an ExtJS
auto-generated ID as name -> pretty confusing.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
no point in first building a list if we can just do the removal
directly; it's eval-ed anyway and $osd_list did not get touched
in-between.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
with this, osd destruction is left to ceph-volume if the osd was created
with ceph-volume; otherwise our old code remains mostly the same, since
we want to be able to destroy upgraded osds
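A rough sketch of that split (osd_created_by_ceph_volume() and
destroy_legacy_osd() are hypothetical helpers standing in for the actual
checks and cleanup; run_command is from PVE::Tools):

    use PVE::Tools qw(run_command);

    if (osd_created_by_ceph_volume($osdid)) {
        # let ceph-volume tear down the LVs and metadata it created itself
        run_command(['ceph-volume', 'lvm', 'zap', '--osd-id', $osdid, '--destroy']);
    } else {
        # legacy path: keep (mostly) the old cleanup code for upgraded osds
        destroy_legacy_osd($osdid);
    }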
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this completely rewrites the ceph osd creation api call using ceph-volume,
since ceph-disk is not available anymore

breaking changes:

* no filestore anymore, journal_dev -> db_dev

* it is now possible to give a specific size for db/wal; the default is
  to read it from the ceph config db/config, with a fallback of 10% of
  the osd size for block.db and 1% for block.wal.
  The reason is that ceph-volume does not autocreate those itself
  (like ceph-disk did), so we have to create them ourselves (sketched below):
  - if the db/wal device has an lvm on it with the naming scheme 'ceph-UUID',
    we use that and create a new lv
  - if we detect partitions, we create a new partition at the end
  - if the disk is not used at all, we create a pv/vg/lv for it

* it is not possible to create osds on luminous with this api call anymore;
  anyone needing this has to use ceph-disk directly
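A rough sketch of the db/wal handling described above (sizes in KiB;
helper and variable names are hypothetical, append_partition as in the
existing code):

    my $db_size = $configured_db_size // $osd_size * 0.10;    # fallback: 10% of the osd
    my $wal_size = $configured_wal_size // $osd_size * 0.01;  # fallback: 1% of the osd

    if (my $vg = find_ceph_uuid_vg($dev)) {        # existing 'ceph-UUID' volume group
        create_lv($vg, $db_size);                  # just add a new lv to it
    } elsif (has_partitions($dev)) {
        append_partition($dev, $db_size * 1024);   # new partition at the end (bytes)
    } else {
        create_pv_vg_lv($dev, $db_size);           # unused disk: create pv/vg/lv
    }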
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
as already announced over two months ago[0], now remove the unofficial
SheepDog plugin completely. Besides the fact that it was never fully
supported in Proxmox VE, one of its main developers and ex-maintainer
declared it abandoned[1], so let's just remove it; git allows us to
resurrect it at any time if a miracle happens anyway.
[0]: https://pve.proxmox.com/pipermail/pve-user/2019-March/170497.html
[1]: http://lists.wpkg.org/pipermail/sheepdog/2019-March/068449.html
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
reads the sizes from the ceph config db first, then from the
ceph config file, checking the osd section first and then global
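For example, for the db size the lookup order looks roughly like this
(names are illustrative; $cfgdb_value stands for what we got from the
config db, $cephconf for the parsed ceph.conf, bluestore_block_db_size
being the relevant ceph option):

    my $db_size = $cfgdb_value                                  # 1. ceph config db
        // $cephconf->{osd}->{bluestore_block_db_size}          # 2. ceph.conf, osd section
        // $cephconf->{global}->{bluestore_block_db_size};      # 3. ceph.conf, global section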
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since we will have a separate gui for the manager, we do not need this
anymore
this is a breaking api change
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This was done to ensure that the nodelist, which get_node_kv uses
internally to loop over all nodes, is filled. But everything coming
through the API/CLI already has cfs_update called, so this is not
required outside of development testing calls. As we normally do not
do this for such calls, a cfs_update call is a prerequisite for such
things. A better solution would be to handle this internally in
get_node_kv, e.g., by checking whether $clinfo->{version} was set once
and, if not, calling cfs_update; or, maybe even better, adapt the IPC
call to allow getting the KV status hash for all nodes.
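A one-line sketch of that first idea inside get_node_kv (illustrative
only, not part of this patch):

    PVE::Cluster::cfs_update() if !$clinfo->{version};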
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and use the broadcast when a service is added/removed
we will use 'get_cluster_service' in the future when we generate a list
of services of a specific type
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We still allow 'luminous' for testing purposes; it could also be
useful if one already upgraded the cluster to PVE 6.0 / Buster but
not yet Ceph, and due to an incident needs to set up a new luminous
node on Buster to get healthy again. This is fabricated but not
unthinkable; as it costs nothing and isn't available to WebUI users,
just keep it for now. Remove it with a future point release though.
Use the non-public repo for now; it will be updated to testing soon.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
add two new api calls in /cluster/ceph

status:
  the same as /nodes/NODE/ceph/status, but accessible without a
  nodename, which we don't need: in the hyperconverged case all nodes
  have the ceph.conf which contains the info on how to connect to the
  monitors

metadata:
  combines data from the cluster filesystem about the services
  with the 'ceph YYY metadata' info we get from ceph.
  with this info we can conveniently display which services exist,
  which are running and which versions they have
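For example (given the paths above), both can then be queried
cluster-wide via pvesh:

    pvesh get /cluster/ceph/status
    pvesh get /cluster/ceph/metadata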
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this returns a hash of existing service links for
mds/mgr/mons so that we know which services exist
this is necessary since ceph itself does not record whether a service is
defined somewhere, only whether it is running
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
With commit a74ba607d4 we switched over
to using the dpkg-dev provided helpers to set package version,
architecture and such in the buildsystem.
But unlike other repositories, we also used the version for giving it
back over the API, through the PVE::pvecfg module generated during
build, which wasn't fully updated to the new style.
This patch does that, and also cleans up the semantics a bit; the
following two things changed:
release is now the Debian release, instead of the "package release"
(i.e., the -X part of a full package version).
version is now simply the full (pve-manager) version, e.g., 6.0-1 or
the 6.0-0+1 currently used for testing.
This allows doing everything we used this information for, in an even
slightly easier way (no string concatenation needed anymore), and it
also fits the terminology we often use in our public channels (mailing
lists, forum, website).
Remove some cruft as we touch things.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
From Nautilus release changelog[0]:
> The auid property for cephx users and RADOS pools has been removed.
> This was an undocumented and partially implemented capability that
> allowed cephx users to map capabilities to RADOS pools that they
> “owned”. Because there are no users we have removed this support.
[0]: https://ceph.com/releases/v14-2-0-nautilus-released/
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This partially reverts commit f9b08743a5,
as we had some wrong assumptions about lastentries and the other
params; so just note conflicts in the description, but let the tool
itself make the checks.
This reverts commit f9b08743a5.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This refactors things a bit to avoid having the same two lines in 3
places and allows the plugin to set the "guest was resumed" time
stamp at the point it really was resumed, not only once the backup
completed (see #503). Further, if a plugin prints its own
"resumed/running after X seconds" message, it can unset the vmstop
time and thus avoid printing the message twice.
related to a fix for #503
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this uses the new journalreader instead of journalctl, which is a bit
faster, can read from/to a cursor, and returns a start/end cursor
also, you can give a unix epoch as the time parameters
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>