When having a simple directory as rootfs, trying to edit it in the
GUI broke, because we tried to disable the backup checkbox, which
did not exist.
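A minimal sketch of the kind of guard that avoids this, assuming the
edit window looks the field up via a component query and that the
name 'backup' matches the actual checkbox (names are illustrative):

    // a plain directory rootfs has no backup checkbox, so only
    // disable the field if it actually exists
    var backupField = me.down('field[name=backup]');
    if (backupField) {
        backupField.setDisabled(true);
    }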
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
In Luminous, the error message is no longer 'x/y in osds are down',
but 'x osds down', so we need to adapt the parsing. This also means
we cannot check the number of 'in' OSDs there anymore (it was never
really needed, so we can simply omit it).
When an OSD is down but marked as out, those errors disappear.
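Roughly, the adapted parsing looks like this, assuming the health
summary arrives as a plain string (variable names are illustrative,
not the actual code):

    // old (pre-luminous): '7/29 in osds are down'
    // new (luminous):     '7 osds down'
    // we only need the number of down OSDs now
    var match = summaryText.match(/(\d+) osds down/);
    var downOsds = match ? parseInt(match[1], 10) : 0;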
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
CamelCase names for component aliases follow Ext's recommended
practice and are used everywhere else in the code base.
No functional changes; the aliases for these components were not
used anyway.
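For illustration, the convention looks like this (the component name
is just an example):

    Ext.define('PVE.grid.ReplicaStatus', {
        extend: 'Ext.grid.Panel',
        // camel case alias, as recommended by Ext and used
        // everywhere else in the code base
        alias: 'widget.pveReplicaStatus',
    });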
While it does not make sense to over-reuse translations just for the
sake of translating less, IMO we can safely reuse already existing
ones here and pull the unit 'MB/s' out of the gettext.
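For example, a column definition can then reuse an existing
translation and keep the unit out of the translatable string (header
and dataIndex names here are illustrative):

    {
        // reuse the existing translation, the unit itself
        // does not need to be translated
        header: gettext('Read') + ' (MB/s)',
        dataIndex: 'reads',
    },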
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
As the node's replication status call now also returns disabled
jobs, we need to handle them here too.
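A sketch of how a status renderer might handle them, assuming the
disabled flag is exposed as 'disable' on the returned job records
(names are illustrative):

    renderer: function(value, metadata, record) {
        // disabled jobs are now part of the status result,
        // show them explicitly instead of as pending/failed
        if (record.data.disable) {
            return gettext('Disabled');
        }
        return value;
    },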
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It was a bit strange to have two separate status columns which
effectively do the same thing, so merge them to save a bit of space
and have fewer columns.
As a nice side effect, we do not need to translate 'Status Text'
anymore.
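A possible shape of the merged column, assuming the state and the
raw status text are both available on the record (field names are
assumptions for this sketch):

    {
        header: gettext('Status'),
        dataIndex: 'state',
        flex: 1,
        renderer: function(value, metadata, record) {
            // former 'Status' and 'Status Text' in one column
            var text = record.data.statustext || '';
            return text ? value + ' (' + text + ')' : value;
        },
    },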
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add one more example to show that weekdays + intervals are also
possible. Further improve the wording and reduce translation needs
for the other entries.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When a user sees this for the first time and wants to add a job,
they shouldn't be confused about what the default value means, so
display it through the emptyText property, which does not get
submitted to the backend.
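A minimal sketch of the schedule field, assuming the backend default
is '*/15' (the exact field config may differ):

    {
        xtype: 'textfield',
        name: 'schedule',
        fieldLabel: gettext('Schedule'),
        allowBlank: true,
        // only shown as a hint, not submitted to the backend
        emptyText: '*/15',
    },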
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Commit 3385399339c94 ("replication: keep retrying every 30 minutes in
error state") changed the retry behavior to not stop after the 3rd error
and then stick to half-hour intervals. This needs to be reflected in the
tests. The numbers here match (1900 + 30*60 = 3700).
Commit fd844180a7efa ("replication: don't sync to offline targets on
error states") changed the retry behavior to check whether the target
node is online. If this is not the case, we fail right away. This
introduced a dependency on PVE::Cluster::get_members which we now need
to mock. Tests currently use node names "node{1,2,3}", so I just mock
those 3.
Delete the strange-sounding note from the removal dialog.
It was added to make sure that a user isn't confused when a job is
still shown after 'removing' it. But it isn't needed: the job's
status already notes that it will get removed soon, so the user sees
that their action had the desired effect without an extra note.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This allows disabling replication, for instance when we add a disk
on a non-replicatable storage.
The option is hidden in the wizard, because at that point no VM
replication has been set up yet.
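Roughly, the field could look like this; 'replicate' as the option
name and me.insideWizard as the hide condition are assumptions for
the sketch:

    {
        xtype: 'checkbox',
        name: 'replicate',
        fieldLabel: gettext('Replication'),
        checked: true, // replication stays enabled by default
        // no VM replication can be configured in the wizard yet
        hidden: me.insideWizard,
    },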
We used a static emptyText at creation, which is wrong after
editing. The name is now copied from qemu/Options.js, but instead of
deleting the hostname on the backend when the field is empty, we set
it to CT<VMID> (this is also the default in the wizard).
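As a sketch, assuming the container ID is available as me.vmid
(exact property names may differ):

    {
        xtype: 'textfield',
        name: 'hostname',
        fieldLabel: gettext('Hostname'),
        allowBlank: true,
        // an empty field no longer deletes the hostname on the
        // backend, it falls back to CT<VMID> instead
        emptyText: 'CT' + me.vmid,
    },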
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>