Since we already have the information from the API call anyway, add it
as a (hidden) column. In some situations it can be useful to quickly
see which Ceph applications are enabled for a pool.
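A minimal sketch of such a column, assuming the pool listing returns
Ceph's 'application_metadata' field (names here are illustrative):

    {
        text: 'Applications',
        dataIndex: 'application_metadata', // assumed field name
        hidden: true, // opt-in via the column menu
        renderer: value => Object.keys(value || {}).join(', '),
    },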
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Otherwise we trigger a change event for the size field, which invokes
the sizeChange callback that then re-calculates the default min-size
suggestion, which might be lower than the value the user configured.
This was reported in the Forum for a 5/4 size/min-size configuration
that got reset to 5/3 on edit.
Report: https://forum.proxmox.com/threads/158798/
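One way to avoid firing the callback, sketched here with ExtJS event
suspension; whether the actual fix suspends events or simply skips the
setValue call is not shown here:

    // set the size programmatically without firing 'change', so the
    // sizeChange callback cannot clobber the user's configured min_size
    sizeField.suspendEvent('change');
    sizeField.setValue(configuredSize); // names illustrative
    sizeField.resumeEvent('change');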
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The autoscaler is a well-known Ceph concept, so a translation might
prefer to keep "Autoscaler" as-is.
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
[ TL: Squash this into one patch and adapt commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Instead of relying purely on listeners that then manually change other
components, we can use binds, formulas and a basic controller.
This makes it quite a bit easier to let multiple components react to
changes.
A cbind is used for the size component to set the initial start value.
Other options, like using setValue in the controller init, would
trigger the change listener and could therefore affect the min size
without any user interaction.
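A rough sketch of the pattern with illustrative names, assuming
Proxmox's CBind mixin; the actual panel is more involved:

    Ext.define('PVE.form.PoolSketch', {
        extend: 'Proxmox.panel.InputPanel',
        mixins: ['Proxmox.Mixin.CBind'],

        viewModel: {
            formulas: {
                // derived value: other components bind to it instead of
                // being updated manually from a change listener
                minSizeDefault: function(get) {
                    let size = get('sizeField.value') || 3; // fall back while empty
                    return Math.max(2, Math.ceil(size / 2));
                },
            },
        },

        items: [
            {
                xtype: 'proxmoxintegerfield',
                name: 'size',
                reference: 'sizeField',
                publishes: ['value'], // expose the value to the view model
                // cbind applies the initial value at creation time, so no
                // change event fires (setValue in controller init would)
                cbind: { value: '{defaultSize}' },
            },
            {
                xtype: 'proxmoxintegerfield',
                name: 'min_size',
                bind: { emptyText: '{minSizeDefault}' },
            },
        ],
    });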
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Instead of hard-coded defaults for the size and min_size parameters,
check if we have defaults configured in the ceph.conf or config db and
use those.
There are clusters where different defaults are needed. For example,
if the cluster spans two rooms and needs to survive the loss of one,
a size/min_size of 4/2 is a common default in such a situation.
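A sketch of how such defaults could be queried, using the standard
Ceph option names; the endpoint is an assumption and may differ from
the actual patch:

    // fall back to Ceph's usual 3/2 when nothing is configured
    let defaultSize = 3, defaultMinSize = 2;
    Proxmox.Utils.API2Request({
        url: `/nodes/${nodename}/ceph/cfg/db`, // assumed endpoint
        method: 'GET',
        success: function(response) {
            for (const opt of response.result.data) {
                if (opt.name === 'osd_pool_default_size') {
                    defaultSize = parseInt(opt.value, 10);
                } else if (opt.name === 'osd_pool_default_min_size') {
                    defaultMinSize = parseInt(opt.value, 10);
                }
            }
        },
    });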
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The pool number is shown in a few places; having it easily accessible
can help to understand which pool a warning or error refers to.
For example, the PG ID consists of '{pool nr}.{pg nr}' and is shown in
every warning concerning that PG.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
ceph/pools (plural) is deprecated; use the new ceph/pool endpoint
instead.
Since the details/status of a pool moved from ceph/pools/{name} to
ceph/pool/{name}/status, we need to pass the 'loadUrl' to the edit
window.
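Illustratively, the grid's edit handler could wire it up like this
(component and variable names hypothetical):

    Ext.create('PVE.Ceph.PoolEdit', {
        pool: rec.data.pool_name,
        // pool details now live under ceph/pool/{name}/status
        loadUrl: `/nodes/${nodename}/ceph/pool/${rec.data.pool_name}/status`,
        autoShow: true,
    });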
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Since the rule selector is not allowed to be empty, but loading the
rules is not instant, the validity change triggers before the load has
finished. And since the selector sits in the advanced section, that
section is opened every time instead of only when there actually is an
invalid value.
This patch fixes that by temporarily setting 'allowBlank' to true
until the store is loaded, and then revalidating the field.
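A minimal sketch of that workaround, assuming an ExtJS combobox backed
by a remote store:

    initComponent: function() {
        let me = this;
        me.allowBlank = true; // don't flag the field invalid while rules load
        me.callParent();
        me.store.on('load', function() {
            me.allowBlank = false; // restore the real requirement
            me.validate(); // re-check now that a rule can be selected
        }, me, { single: true });
    },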
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-By: Aaron Lauterer <a.lauterer@proxmox.com>
They cannot be changed after pool creation for erasure-coded pools.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
We should not warn about Ceph's built-in default value for min_size,
as having min_size be half of size (rounded up) is OK and even the
default for Ceph.
Since there seems to be no 'quorum based' PG inconsistency recovery[0],
only a copy from the authoritative OSD, there is nothing wrong
with setting that.
0: https://docs.ceph.com/en/latest/rados/operations/pg-repair/
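Written out, the default and the resulting warning condition look
roughly like this (helper names illustrative):

    // Ceph's built-in default: min_size = ceil(size / 2)
    // e.g. size 3 -> 2, size 4 -> 2, size 5 -> 3
    let defaultMinSize = size => Math.ceil(size / 2);

    // only warn when min_size is set below that default
    let shouldWarn = (size, minSize) => minSize < defaultMinSize(size);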
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We should only auto-select the first record after load on creation.
Otherwise we may change the value by mistake, which, if the admin does
not notice it while changing some other setting, can be quite fatal:
it can trigger a huge rebalance whose cause may not even be obvious,
leaving the admin quite baffled.
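A sketch of the guard, assuming the selector carries an 'isCreate'
flag:

    initComponent: function() {
        let me = this;
        me.callParent();
        me.store.on('load', function(store, records, success) {
            // pre-select only in the create dialog; when editing, the
            // current value must never change behind the admin's back
            if (success && me.isCreate && records.length > 0) {
                me.setValue(records[0].data.name);
            }
        });
    },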
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
They live there now, so we can delete them here and use the ones from
widget-toolkit instead.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
s/Target Size Ratio/Target Ratio/: it is shorter, the Ceph docs also
call it that, and it allows us to drop the non-standard label width.
Add tooltips to briefly explain both target size and target ratio.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
No semantic changes intended; fixes a few eslint issues along the
way.
This could be further improved by using a ViewController, but let's
keep it at this compromise for now due to work-time ROI.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The field gives us a string, so the second condition could never
be true; parse the value to a float instead.
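Roughly the shape of the fix (field and submit-value names
illustrative):

    // before: getValue() returns a string, so a strict numeric
    // comparison on it can never be true; parse first, then compare
    let ratio = parseFloat(field.getValue());
    if (!Number.isNaN(ratio) && ratio > 0) {
        values.target_size_ratio = ratio;
    }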
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This is used to fine-tune the Ceph autoscaler.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
* add the ability to edit an existing pool
* allow adjustment of autoscale settings
* warn if user specifies min_size 1
* disallow min_size 1 on pool create
* calculate min_size replica by size
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Because different pools can have different CRUSH rules, etc., the sum
of the 'percentage used' column makes no real sense, so we remove it.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Add the pg_autoscale_mode, since it's activated in Ceph Octopus by
default and emits a warning (ceph status) if a pool has too many PGs.
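A sketch of such a field, assuming widget-toolkit's KV combobox; the
preset default shown is an assumption:

    {
        xtype: 'proxmoxKVComboBox',
        name: 'pg_autoscale_mode',
        fieldLabel: gettext('PG Autoscale Mode'),
        // the three modes the Ceph pg autoscaler supports
        comboItems: [
            ['warn', 'warn'],
            ['on', 'on'],
            ['off', 'off'],
        ],
        value: 'warn', // assumed initial selection
    },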
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>