since the size of an LV can only be a multiple of 512 bytes, we round
down to the next KiB.
We then have to multiply that by 1024 for the partition, since
append_partition expects bytes and not KiB.
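A minimal sketch of that size handling, with made-up values (not the
actual helper code):

    use strict;
    use warnings;

    my $lv_bytes  = 4_294_968_320;         # example LV size in bytes
    my $size_kib  = int($lv_bytes / 1024); # round down to whole KiB
    my $part_size = $size_kib * 1024;      # append_partition expects bytes, not KiB
    print "LV size: $size_kib KiB -> partition size: $part_size bytes\n";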
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
similar to the MDS API, so that DELETE and POST calls can operate on
the same path. This does not change the CLI pveceph interface.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
As in a situation where we /had/ a manager but destroyed it, this
key's value is an empty string, and if we pass that to the WebUI we
get strange results in the form of a ghost MGR entry with an ExtJS
auto-generated ID as its name -> pretty confusing.
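A rough illustration of the idea behind the fix (the hash layout here
is a simplified assumption, not the real status structure): drop keys
whose value is an empty string before handing the data to the WebUI.

    use strict;
    use warnings;

    my $mgrmap = { active_name => '', num_standbys => 0 };   # simplified example

    for my $key (keys %$mgrmap) {
        my $val = $mgrmap->{$key};
        delete $mgrmap->{$key} if defined($val) && $val eq '';
    }
    print "remaining keys: " . join(', ', sort keys %$mgrmap) . "\n";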
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
no point in first building a list if we can just remove the entries
directly afterwards; it's eval-ed anyway and $osd_list did not get
touched in-between.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
with this, osd destruction is left to ceph-volume if the osd was
created with ceph-volume; otherwise our old code remains mostly the
same, since we want to be able to destroy upgraded osds.
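A sketch of the resulting decision logic; the helper names and the
exact ceph-volume invocation are assumptions, not the actual code:

    use strict;
    use warnings;

    sub legacy_destroy_osd { warn "old destruction path for osd.$_[0]\n" }

    sub destroy_osd {
        my ($osdid, $created_by_ceph_volume) = @_;
        if ($created_by_ceph_volume) {
            # let ceph-volume tear down what it created itself
            system('ceph-volume', 'lvm', 'zap', '--destroy', '--osd-id', $osdid) == 0
                or die "ceph-volume zap for osd.$osdid failed\n";
        } else {
            # keep the old code path for osds created with ceph-disk
            legacy_destroy_osd($osdid);
        }
    }

    destroy_osd(7, 0);   # an upgraded osd takes the old path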
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this completely rewrites the ceph osd creation API call to use
ceph-volume, since ceph-disk is not available anymore.
breaking changes:
* no filestore anymore, journal_dev -> db_dev
* it is now possible to give a specific size for db/wal; the default
  is to read it from the ceph db/config, and the fallback is 10% of
  the osd size for block.db and 1% for block.wal (see the sketch after
  this list). The reason is that ceph-volume does not create those
  automatically (like ceph-disk did), so we have to create them
  ourselves:
  - if the db/wal device has an lvm (vg) on it with the naming scheme
    'ceph-UUID', we use that and create a new lv on it
  - if we detect partitions, we create a new partition at the end
  - if the disk is not used at all, we create a pv/vg/lv for it
* it is not possible to create osds on luminous with this API call
  anymore; anyone needing this has to use ceph-disk directly
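The fallback sizing from the list above as a self-contained sketch
(function name and example values are made up; the real code first
tries to read configured sizes from ceph):

    use strict;
    use warnings;

    sub fallback_dbwal_size {
        my ($osd_bytes, $type) = @_;
        return int($osd_bytes / 10)  if $type eq 'db';   # 10% of the osd for block.db
        return int($osd_bytes / 100) if $type eq 'wal';  # 1% of the osd for block.wal
        die "unknown device type '$type'\n";
    }

    my $osd_bytes = 4 * 1024**4;   # a 4 TiB osd, for example
    printf "block.db:  %d bytes\n", fallback_dbwal_size($osd_bytes, 'db');
    printf "block.wal: %d bytes\n", fallback_dbwal_size($osd_bytes, 'wal');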
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
as already announced over two months ago[0], remove the unofficial
SheepDog plugin completely. Besides the fact that it was never fully
supported in Proxmox VE, one of its main developers and its
ex-maintainer declared it abandoned[1], so let's just remove it; git
allows us to resurrect it at any time, should a miracle ever happen.
[0]: https://pve.proxmox.com/pipermail/pve-user/2019-March/170497.html
[1]: http://lists.wpkg.org/pipermail/sheepdog/2019-March/068449.html
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
since we will have a separate GUI for the manager, we do not need
this anymore.
this is a breaking API change.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
and use the broadcast when a service is added/removed.
We will use 'get_cluster_service' in the future when we generate a
list of services of a specific type.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
add two new API calls in /cluster/ceph:
status:
  the same as /nodes/NODE/ceph/status, but accessible without a
  nodename, which we don't need, as in the hyperconverged case all
  nodes have the ceph.conf that contains the info on how to connect
  to the monitors.
metadata:
  combines data from the cluster filesystem about the services with
  the 'ceph YYY metadata' info we get from ceph.
  With this info we can conveniently display which services exist,
  which are running and which versions they have.
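A conceptual sketch of what the metadata call combines (plain data
structures with made-up field names; the real call reads the cluster
filesystem and asks ceph for its metadata):

    use strict;
    use warnings;

    # service info as broadcast via the cluster filesystem (assumed layout)
    my $cfs_services = { 'mon.a' => { host => 'node1' } };
    # per-service info as returned by 'ceph ... metadata' (assumed layout)
    my $ceph_meta    = { 'mon.a' => { ceph_version => 'ceph version 14.2.x' } };

    my $merged = {};
    for my $id (keys %$cfs_services) {
        $merged->{$id} = { %{ $cfs_services->{$id} }, %{ $ceph_meta->{$id} // {} } };
    }

    for my $id (sort keys %$merged) {
        print "$id on $merged->{$id}{host}: $merged->{$id}{ceph_version}\n";
    }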
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
With commit a74ba607d4 we switched over to using the dpkg-dev
provided helpers to set the package version, architecture and such in
the build system.
But unlike other repositories, we also used the version for giving it
back over the API, through the PVE::pvecfg module generated during
the build, which wasn't fully updated to the new style.
This patch does that, and also cleans up the semantics a bit; the
following two things changed:
release is now the Debian release, instead of the "package release"
(i.e., the -X part of a full package version).
version is now simply the full (pve-manager) version, e.g., 6.0-1 or
the 6.0-0+1 currently used for testing.
This allows us to do everything we used this information for, in a
slightly easier way (no string concatenation needed anymore), and it
also fits the terminology we often use in our public channels
(mailing lists, forum, website).
Remove some cruft as we touch things.
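To illustrate only the changed semantics (the values are examples,
this is not the generated module):

    use strict;
    use warnings;

    # old style: 'version' and the package release were separate, so
    # callers had to concatenate them to get the full version string
    my $old = { version => '6.0', release => '1' };
    my $old_full = "$old->{version}-$old->{release}";    # "6.0-1"

    # new style: 'version' is already the full pve-manager version and
    # 'release' is the Debian release (exact representation assumed here)
    my $new = { version => '6.0-1', release => '10' };

    print "old: $old_full\n";
    print "new: $new->{version} (Debian $new->{release})\n";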
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
From the Nautilus release changelog[0]:
> The auid property for cephx users and RADOS pools has been removed.
> This was an undocumented and partially implemented capability that
> allowed cephx users to map capabilities to RADOS pools that they
> “owned”. Because there are no users we have removed this support.
[0]: https://ceph.com/releases/v14-2-0-nautilus-released/
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This partially reverts commit f9b08743a5,
as we had some wrong assumptions about lastentries and the other
parameters, so just note conflicts in the description, but let the
tool itself do the checks.
This reverts commit f9b08743a5.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this uses the new journalreader instead of journalctl, which is a bit
faster, can read from/to a cursor, and returns a start/end cursor.
You can also give a unix epoch as the time parameters.
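A hypothetical sketch of the intended call flow (fetch_journal is a
stand-in for the real API call): pass unix epochs for a time window,
or continue from the end cursor returned by a previous call.

    use strict;
    use warnings;

    sub fetch_journal {
        my (%param) = @_;   # e.g. since/until epochs or a start cursor
        # a real implementation would invoke the journal API here
        return { entries => [], endcursor => 'dummy-cursor-token' };
    }

    my $until = time();
    my $since = $until - 3600;                                   # the last hour
    my $first = fetch_journal(since => $since, until => $until);
    my $next  = fetch_journal(startcursor => $first->{endcursor});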
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
ceph nautilus changed the structure of 'pg dump osds': the data moved
one level down.
Parse both the new and the old format, and bail out if we get
anything else.
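A sketch of the format handling (the key name for the nested data is
an assumption):

    use strict;
    use warnings;

    sub extract_osd_stats {
        my ($res) = @_;
        # old format: the per-osd data is the top-level array
        return $res if ref($res) eq 'ARRAY';
        # new (nautilus) format: the data moved one level below
        return $res->{osd_stats}
            if ref($res) eq 'HASH' && ref($res->{osd_stats}) eq 'ARRAY';
        die "got unexpected data format for 'pg dump osds'\n";
    }

    my $nautilus_style = { osd_stats => [ { osd => 0 } ] };
    my $stats = extract_osd_stats($nautilus_style);
    print "parsed " . scalar(@$stats) . " osd entries\n";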
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
otherwise this potentially returns outdated information (like the
cluster being quorate even though corosync crashed on all nodes 5
minutes ago).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
It makes sense to not give users without Sys.Audit permissions too
much information about a node, and this is relatively easy and cheap
to check and enforce at those two points.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Tested-by: Oguz Bektas <o.bektas@proxmox.com>
Reword the error message in find_mon_ip to make it clearer that there
is no active IP configuration for the ceph public network.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
If calls aren't proxied to the selected node, which seems legitimate
in some cases, this will cause misleading errors when ceph is not
installed on that node. Therefore, the calls now always get proxied.
Signed-off-by: Tim Marx <t.marx@proxmox.com>
it's a bit strange that one cannot pass the default value explicitly;
this is helpful when calling this API path through the CLI
environment, which currently cannot have optional fixed-positioned
default values.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
As this is now the default behavior in all other ceph API endpoints,
I adapted the status API accordingly.
We also pass our ceph configuration file directly when connecting to
RADOS, so a /etc/ceph/ceph.conf isn't necessarily required to
indicate a fully set up and enabled PVE-ceph environment.
Signed-off-by: Tim Marx <t.marx@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>