pve-manager/PVE/API2/Ceph
Aaron Lauterer c4368cf6d6 ceph osd: return PGs per OSD and show in UI
By switching from 'ceph osd tree' to the 'ceph osd df tree' mon API
equivalent, we get the same data structure with more information per
OSD. One of those additions is the number of PGs stored on that OSD.
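
As a rough sketch of where that field comes from (assuming PVE::RADOS and
its mon_command helper; the 'output_method' parameter name and the exact
return layout are assumptions based on the 'ceph osd df tree' CLI output,
not the actual OSD.pm code), the per-OSD 'pgs' value could be read like
this:

    use strict;
    use warnings;

    use PVE::RADOS;

    my $rados = PVE::RADOS->new();

    # 'osd df' in tree form returns a hash with a 'nodes' list; each OSD
    # node carries the 'osd tree' fields plus usage data such as 'pgs'.
    my $res = $rados->mon_command({
        prefix => 'osd df',
        output_method => 'tree', # assumption: mirrors 'ceph osd df tree'
    });

    for my $node (@{ $res->{nodes} }) {
        next if $node->{type} ne 'osd';
        printf "%s: %d PGs\n", $node->{name}, $node->{pgs} // 0;
    }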

The number of PGs per OSD is an important metric, for example when
trying to figure out why performance is not as good as expected.
Therefore, adding it to the OSD overview, visible by default, should
reduce the number of times one needs to access the CLI.

Comparing the runtime cost on a 3-node Ceph cluster with 4 OSDs per node,
over 50k iterations, gives:

               Rate osd-df-tree    osd-tree
osd-df-tree  9141/s          --        -25%
osd-tree    12136/s         33%          --

So, while definitely a bit slower, the call is still in the µs range,
and as such well below the cost of the HTTP-in-TLS-in-TCP connection
setup for most users, so the extra useful information is worth it.
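
A comparison like the above could be reproduced with Perl's core Benchmark
module; the calls below are an assumption about what was measured, not the
exact benchmark behind the numbers shown:

    use strict;
    use warnings;

    use Benchmark qw(cmpthese);
    use PVE::RADOS;

    my $rados = PVE::RADOS->new();

    # compare the old and new mon commands over 50k iterations each
    cmpthese(50_000, {
        'osd-tree'    => sub { $rados->mon_command({ prefix => 'osd tree' }) },
        'osd-df-tree' => sub {
            $rados->mon_command({ prefix => 'osd df', output_method => 'tree' });
        },
    });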

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
 [ TL: slightly reworded subject and added benchmark data ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2023-02-15 10:16:50 +01:00
FS.pm     api: cephfs: add 'fs-name' for cephfs storage                      2021-11-11 17:52:08 +01:00
Makefile  api: ceph: subclass pools                                          2021-02-06 14:17:53 +01:00
MDS.pm    ceph: make all service name regexes the same                       2020-03-04 15:38:09 +01:00
MGR.pm    ceph: make all service name regexes the same                       2020-03-04 15:38:09 +01:00
MON.pm    api: ceph: update return schemas                                   2023-01-16 14:32:00 +01:00
OSD.pm    ceph osd: return PGs per OSD and show in UI                        2023-02-15 10:16:50 +01:00
Pools.pm  api: ceph: add applications of each pool to the lspools endpoint   2022-11-16 20:24:12 +01:00