To get more details for a single OSD, we add two new endpoints:
* nodes/{node}/ceph/osd/{osdid}/metadata
* nodes/{node}/ceph/osd/{osdid}/lv-info
The {osdid} endpoint itself gets a new GET handler to return the index.
The metadata one provides various metadata regarding the OSD, such as:
* process id
* memory usage
* info about devices used (bdev/block, db, wal)
* size
* disks used (sdX)
...
* network addresses and ports used
...
Memory usage and PID are retrieved from systemd, while the rest comes
from the metadata provided by Ceph.
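As a rough illustration of the systemd part (a minimal sketch, not
the actual handler; 'MainPID' and 'MemoryCurrent' are real systemd
unit properties, the helper name is made up):

    use strict;
    use warnings;

    # query systemd for the PID and current memory usage of one OSD
    sub osd_systemd_info {
        my ($osdid) = @_;
        my $out = qx(systemctl show --property MainPID,MemoryCurrent ceph-osd\@$osdid.service);
        my %props = map { split(/=/, $_, 2) } split(/\n/, $out);
        return { pid => $props{MainPID}, memory => $props{MemoryCurrent} };
    }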
The second one (lv-info) returns the following information for a
logical volume:
* creation time
* lv name
* lv path
* lv size
* lv uuid
* vg name
Possible volumes are:
* block (default value if not provided)
* db
* wal
'ceph-volume' is used to gather the information, except for the
creation time of the LV, which is retrieved via 'lvs'.
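A hedged sketch of the 'lvs' part; the report fields are real LVM
report fields (including 'lv_time', the creation time), everything
else is illustrative:

    use JSON::PP;

    # ask lvs for the interesting fields of a single LV as JSON
    sub lv_info {
        my ($vg, $lv) = @_;
        my $fields = 'lv_name,lv_path,lv_size,lv_uuid,vg_name,lv_time';
        my $json = qx(lvs --reportformat json --units b -o $fields $vg/$lv);
        return decode_json($json)->{report}->[0]->{lv}->[0];
    }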
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
The 'hardware' entry was missing there. While interfacing with it
works, it will not show up during CLI auto-completion or in the HTML
debug view (/api2/html/) if not listed here in the API directory
index.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
By switching from 'ceph osd tree' to the 'ceph osd df tree' mon API
equivalent, we get the same data structure with more information per
OSD. One addition is the number of PGs stored on that OSD.
The number of PGs per OSD is an important metric, for example when
trying to figure out why performance is not as good as expected.
Therefore, adding it to the OSD overview visible by default should
reduce the number of times one needs to access the CLI.
Comparing runtime cost on a 3-node Ceph cluster with 4 OSDs each,
doing 50k iterations, gives:
                   Rate  osd-df-tree  osd-tree
    osd-df-tree  9141/s           --      -25%
    osd-tree    12136/s          33%        --
So, while definitely a bit slower, it's still in the µs range, and as
such below the HTTP-in-TLS-in-TCP connection setup cost for most
users, so it's worth the extra useful information.
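For illustration, the per-OSD 'pgs' field is visible when decoding
the JSON output of the equivalent CLI command (sketch only, not the
actual API code):

    use JSON::PP;

    my $df_tree = decode_json(qx(ceph osd df tree -f json));
    for my $node (@{ $df_tree->{nodes} }) {
        next if $node->{type} ne 'osd';
        printf "%-12s %5d PGs\n", $node->{name}, $node->{pgs};
    }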
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
[ TL: slight rewording of subject and add benchmark data ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Previously, the UPID would just be used without a file extension when
downloading a task log. This led to rather strange filenames that
appeared unfamiliar to users, as the UPID is not very prevalent in
the GUI. Set a proper file name based on the node name, worker type
and a time stamp instead. Also add the ".log" file extension to
indicate that these files contain logs.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
This patch fixes an issue where the changelogs often wouldn't load
when the user supplied non-standard repositories. For example, providing
both pve-no-subscription and pbs-no-subscription broke the changelog
API, since the URL built for pbs-no-subscription was invalid.
Signed-off-by: Leo Nunner <l.nunner@proxmox.com>
This API endpoint returns a big nested schema. This patch adds a mostly
complete description.
For the actual service instance return schema, we include commonly used
and important properties. It will usually return more. What exactly
depends on the Ceph service type.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
to include a more complete description of the returned data.
Sort properties alphabetically where the list is longer.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
So that one can really decide if this is a shutdown or an actual
stop.
partially related to #4194
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Note that this changes the lower timeout of 60s for CTs to the 180s
that VMs always used; besides that, there's not much gained by making
that distinction, and there never was a really good argument for it.
partially related to #4194
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We might add other parameters that could be used together with the
`download` one, so rather be explicit about what we check.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The read_tasklog API call now streams the whole log file if the query
parameter 'download' is set to true.
This is done in preparation for the task log download button to be
added in the TaskViewer.
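A hedged usage example; the 'download' parameter name comes from this
patch, node and UPID are placeholders:

    GET /api2/json/nodes/<node>/tasks/<upid>/log?download=1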
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Tested-by: Stefan Sterz <s.sterz@proxmox.com>
Reviewed-by: Stefan Sterz <s.sterz@proxmox.com>
The 'cmd-safety', 'configdb' and 'mgr' items were missing, and while
directly calling the API endpoints worked, the api-viewer and pvesh
were partially broken here.
Sorting the whole list alphabetically will make it easier to keep
track in the future.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
[ T: note which items were missing and reword slightly ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allows sufficiently privileged users to pass-in retention and
performance parameters for manual backup, but keeps tmpdir, dumpdir
and script root-only. Such users could already edit the job
accordingly, so essentially not granting anything new.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Keep Sys.Modify only for backward compat, as it does not really make
much sense to ask for that on an informative GET call.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Ceph provides us with several safety checks to verify that an action
is safe to perform. This endpoint provides the means to access them.
The actual mon commands are not exposed directly. Instead the two
actions "stop" and "destroy" are offered.
In case it is not okay to perform an action, Ceph provides a status
message explaining why. This message is part of the returned values.
For now there are the following checks for these services:
MON:
- ok-to-stop
- ok-to-rm
OSD:
- ok-to-stop
- safe-to-destroy
MDS:
- ok-to-stop
Even though OSDs have a check for whether it is okay to destroy them,
it is for now not really usable in our workflow, because it needs the
OSD to be up and running to return useful information. Our current
GUI workflow is that the OSD needs to be stopped before it can be
destroyed.
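Conceptually, the endpoint maps the (service, action) pair to the
matching Ceph check, roughly like this (sketch only; the variable
names are assumptions, the check names are the ones listed above):

    my ($service, $action) = ('osd', 'stop');
    my $check_for = {
        mon => { stop => 'mon ok-to-stop', destroy => 'mon ok-to-rm' },
        osd => { stop => 'osd ok-to-stop', destroy => 'osd safe-to-destroy' },
        mds => { stop => 'mds ok-to-stop' },
    };
    my $check = $check_for->{$service}->{$action}
        or die "no safety check for '$service'/'$action'\n";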
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
To avoid returning unrelated information in the /version API call in
the future, use the API endpoint that would basically return the
relevant information anyway.
It contains most UI-relevant options, like the console preference and
tag-style, so allow these for users without 'Sys.Audit' on '/'; for
others it's unchanged, they still get the whole datacenter config.
We also add the list of allowed tags; while not strictly a datacenter
config, it's derived from the current user's privileges and the
datacenter config.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since Ceph Luminous (Ceph 12), pools need to be associated with at
least one application. Expose this information here too so that
clients of this endpoint can use it.
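For illustration, the association is visible in the
'application_metadata' field that Ceph reports per pool (sketch only):

    use JSON::PP;

    my $dump = decode_json(qx(ceph osd dump -f json));
    for my $pool (@{ $dump->{pools} }) {
        my @apps = sort keys %{ $pool->{application_metadata} // {} };
        printf "%s: %s\n", $pool->{pool_name}, join(', ', @apps);
    }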
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
Tested-By: Aaron Lauterer <a.lauterer@proxmox.com>
so the frontend has the information readily available.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
for backwards compatibility. Otherwise, e.g. listing backup jobs with
pvesh get /cluster/backup is broken. And suddenly not having the
property anymore would be a breaking API change.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This avoids errors about the use of uninitialized values if the 'pool'
parameter is not present in the storage configuration.
The 'pool' property for an RBD storage config is not mandatory.
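The fix pattern is essentially a defined-or fallback to the RBD
default pool (illustrative snippet, not the verbatim patch):

    my $scfg = {};  # storage config without an explicit 'pool'
    my $pool = $scfg->{pool} // 'rbd';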
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Also generalizes the way vzdump property strings are handled for jobs.
Something similar could be done in VZDump.pm, but there the maxfiles
and prune-backups settings are currently coupled, so a dedicated
parse_performance() is used instead. Can be changed once maxfiles is
dropped.
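As a rough idea of what such a property string looks like when split
up (the real code goes through the JSONSchema property-string parser;
'max-workers' is a real 'performance' sub-option):

    # naive split of a 'performance' property string like "max-workers=16"
    my %perf = map { split(/=/, $_, 2) } split(/,/, 'max-workers=16');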
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>