we might add other ones that could be used together with the
`download` one, so rather be explicit in communicating what we check.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The read_tasklog API call now streams the whole log file if the query
parameter 'download' is set to true.
This is done in preparation for the task log download button to be
added in the TaskViewer.
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Tested-by: Stefan Sterz <s.sterz@proxmox.com>
Reviewed-by: Stefan Sterz <s.sterz@proxmox.com>
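A minimal Perl sketch of the branching described above; the parameter handling and the 'download' return shape are assumptions, not the actual pve-manager code:

    use strict;
    use warnings;
    use IO::File;

    sub read_tasklog_sketch {
        my ($filename, $param) = @_;

        my $fh = IO::File->new($filename, 'r')
            or die "unable to open task log - $!\n";

        if ($param->{download}) {
            # hand the open file handle to the HTTP layer so it can stream
            # the whole file instead of building a (paginated) line array
            return { download => { fh => $fh, 'content-type' => 'text/plain' } };
        }

        # otherwise keep the existing behaviour: individual, numbered lines
        my @lines;
        my $n = 0;
        while (my $line = <$fh>) {
            chomp $line;
            push @lines, { n => ++$n, t => $line };
        }
        return \@lines;
    }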
The 'cmd-safety', 'configdb' and 'mgr' items were missing, and while
directly calling the API endpoints worked, the api-viewer and pvesh
were partially broken here.
Sorting the whole list alphabetically will make it easier to track in
the future.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
[ T: note which items were missing and reword slightly ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allows sufficiently privileged users to pass in retention and
performance parameters for manual backup, but keeps tmpdir, dumpdir
and script root-only. Such users could already edit the job
accordingly, so this essentially does not grant anything new.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
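A rough sketch of the intended split; the helper name and structure are assumptions, only the root-only parameter names are taken from the message above:

    use strict;
    use warnings;

    # parameters that stay restricted to root@pam
    my @root_only = qw(tmpdir dumpdir script);

    sub assert_backup_param_permissions_sketch {
        my ($authuser, $param) = @_;
        for my $opt (@root_only) {
            die "only root may use the '$opt' option\n"
                if defined($param->{$opt}) && $authuser ne 'root@pam';
        }
        # retention and performance options are left to the normal job
        # permission check, since such users may edit the job anyway
    }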
keep Sys.Modify only for backward compat, as it does not really make
much sense to require that on an informative GET call.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Ceph provides us with several safety checks to verify that an action is
safe to perform. This endpoint provides a means to access them.
The actual mon commands are not exposed directly. Instead the two
actions "stop" and "destroy" are offered.
In case it is not okay to perform an action, Ceph provides a status
message explaining why. This message is part of the returned values.
For now there are the following checks for these services:
MON:
- ok-to-stop
- ok-to-rm
OSD:
- ok-to-stop
- safe-to-destroy
MDS:
- ok-to-stop
Even though OSDs have a check whether it is okay to destroy them, it is
not really usable in our workflow for now, because it needs the OSD to
be up and running to return useful information. Our current workflow in
the GUI is that the OSD needs to be stopped before it can be destroyed.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
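A sketch of how the two exposed actions could map onto the checks listed above; the exact mon command strings and the dispatch are assumptions:

    use strict;
    use warnings;

    # map (service type, action) to the underlying Ceph safety check
    my $safety_checks = {
        mon => { stop => 'mon ok-to-stop', destroy => 'mon ok-to-rm' },
        osd => { stop => 'osd ok-to-stop', destroy => 'osd safe-to-destroy' },
        mds => { stop => 'mds ok-to-stop' },
    };

    sub get_safety_check_sketch {
        my ($service, $action) = @_;
        my $check = $safety_checks->{$service}->{$action}
            or die "no safety check defined for '$action' on service '$service'\n";
        return $check; # the caller would run this as a mon command and pass
                       # the resulting status message back to the API client
    }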
To avoid returning unrelated information in the /version API call in
the future, use the API endpoint that would return the relevant
information anyway.
It contains most UI-relevant options, like the console preference and
tag-style, so allow these for users without 'Sys.Audit' on '/'. For
others it's unchanged, they still get the whole datacenter config.
We also add the list of allowed tags; while not strictly a datacenter
config option, it is derived from the current user's privileges and the
datacenter config.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
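Conceptually, the GET handler could filter roughly like this sketch; the key names and the allowed-tags property are illustrative assumptions:

    use strict;
    use warnings;

    # reduce the datacenter config hash to UI-relevant keys for users
    # without 'Sys.Audit' on '/'
    sub filter_datacenter_config_sketch {
        my ($cfg, $has_sys_audit, $allowed_tags) = @_;

        if (!$has_sys_audit) {
            my @ui_keys = qw(console tag-style);
            $cfg = { map { $_ => $cfg->{$_} } grep { defined($cfg->{$_}) } @ui_keys };
        }

        # not strictly a datacenter option, but derived from the user's
        # privileges and the config, so returned alongside it
        $cfg->{'allowed-tags'} = $allowed_tags;
        return $cfg;
    }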
since ceph luminous (ceph 12), pools need to be associated with at
least one application. expose this information here too so that clients
of this endpoint can use it.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
Tested-By: Aaron Lauterer <a.lauterer@proxmox.com>
so the frontend has the information readily available.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
for backwards compatibility. Otherwise, e.g. listing backup jobs with
pvesh get /cluster/backup is broken. And suddenly not having the
property anymore would be a breaking API change.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This avoids errors about the use of uninitialized values if the 'pool'
parameter is not present in the storage configuration.
The 'pool' property for an RBD storage config is not mandatory.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
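The fix boils down to defaulting the optional property before use, roughly like the following sketch (the 'rbd' fallback is assumed):

    use strict;
    use warnings;

    sub rbd_pool_sketch {
        my ($scfg) = @_;
        # 'pool' is optional in an RBD storage config, so fall back to a
        # default instead of propagating an uninitialized value
        return $scfg->{pool} // 'rbd';
    }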
Also generalizes the way vzdump property strings are handled for jobs.
Something similar could be done in VZDump.pm, but there the maxfiles
and prune-backups settings are currently coupled, so a dedicated
parse_performance() is used instead. Can be changed once maxfiles is
dropped.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
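A dedicated parser could look roughly like this; the registered format name passed to the property-string helper is an assumption:

    use strict;
    use warnings;
    use PVE::JSONSchema;

    # sketch: turn the 'performance' property string into a hash in place
    sub parse_performance_sketch {
        my ($param) = @_;
        return if !defined($param->{performance});
        return if ref($param->{performance}); # already parsed
        $param->{performance} = PVE::JSONSchema::parse_property_string(
            'backup-performance', $param->{performance});
    }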
since the jobs are configured cluster-wide in pmxcfs, a user can use any
node to update their config. for some configs (schedule/enabled)
we need to update the last runtime in the state file, but this
is sadly only node-local.
to also update the state file on the other nodes, we introduce
a new 'detect_changed_runtime_props' function that checks and saves relevant
properties from the config to the statefile each round of the scheduler if they
changed.
this way, we can detect changes in those and update the last runtime too.
the only situation where we don't detect a config change is when the
user changes back to the previous configuration in between iterations.
This can be ignored though, since it would not be scheduled then
anyway.
in 'synchronize_job_states_with_config' we switch from reading the
jobstate unconditionally to checking the existence of the statefile
(which is the only condition that can return undef anyway)
so that we don't read the file multiple times each round.
Fixes: 0c8d7468 ("fix #4053: don't run vzdump jobs when they change from
disabled->enabled")
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
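A conceptual sketch of the new check; helper names and the read/write callbacks are assumptions rather than the actual PVE::Jobs code:

    use strict;
    use warnings;

    # each scheduler round, compare the cluster-wide job config against the
    # node-local state file and refresh the statefile on change
    sub detect_changed_runtime_props_sketch {
        my ($jobid, $type, $cfg, $read_state, $write_state) = @_;

        my $state = $read_state->($jobid, $type);
        return if !defined($state); # no statefile (yet) on this node

        my $changed = 0;
        for my $prop (qw(schedule enabled)) {
            next if ($state->{$prop} // '') eq ($cfg->{$prop} // '');
            $state->{$prop} = $cfg->{$prop};
            $changed = 1;
        }

        # writing the state also updates the last runtime on this node
        $write_state->($jobid, $type, $state) if $changed;
    }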
by extracting the JSON-encoded-string schema and dumping it into the
verbose description, it at least shows up in the API viewer.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
since this was missing a proper return type definition, the api viewer
couldn't display the endpoint (`retinfs.items` was undefined). also
the `pvesh` command would complain that it cannot properly format the
return type because the variable `$item_type` in `CLIFormatter.pm` was
not defined.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
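The missing piece is essentially a `returns` schema in the method definition, roughly along these lines (a sketch, not the exact schema the patch adds):

    # minimal return type definition so api-viewer and pvesh know how to
    # format the result; an otherwise empty 'properties' hash is enough
    returns => {
        type => 'array',
        items => {
            type => 'object',
            properties => {},
        },
    },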
by updating the timestamp in the job state when enabled changes from
0 -> 1. We do it this way in PBS too, for example, and it is the more
sensible behaviour.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
like systemd-timer's 'Persistent' option, so that the user can configure
whether a job that was previously missed should be run after powering up.
This reverses the default behaviour: missed jobs are no longer run after
pvescheduler was started, since most of the time that's not the desired
behaviour.
since we don't use it for updated schedules anymore, rename
'updated_job_schedule' to 'update_last_runtime'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
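Such an option could be declared in the job schema roughly like the following; the property name and wording are hypothetical, not necessarily what the patch uses:

    'repeat-missed' => {
        optional => 1,
        type => 'boolean',
        default => 0, # new default: do not catch up on runs missed while off
        description => "Run the job as soon as possible if it was missed while"
            ." the scheduler was not running (like systemd-timer's 'Persistent').",
    },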
which can happen when failing to obtain the guest's migration lock.
This led to a lot of mails being sent during migration (the timeout for
obtaining the lock is only 2 seconds and we run it in a loop).
One could argue that obtaining the lock should increase the fail
count, but without the lock, the job state should not be touched and
even the first three mails upon migration could be considered spam.
Fixes: fa4bb659 ("replication: sent always mail for first three tries and move helper")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Because $mon->{addr} might come with a port attached (affects monitors
created with PVE 5.4 as reported in the community forum [0]), or even
be a hostname (according to the code in Ceph/Services.pm). Although
the latter shouldn't happen for configurations created by PVE.
[0]: https://forum.proxmox.com/threads/105904/
Fixes: 9e989449 ("api: ceph: mon: fix handling of IPv6 addresses in assert_mon_prerequisites")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
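A sketch of the normalization that is needed before comparing addresses; the regexes are illustrative only, not the actual patch:

    use strict;
    use warnings;

    # strip an optional port (and IPv6 brackets) from a monitor address,
    # leaving bare IPv6 addresses and plain hostnames untouched
    sub strip_mon_port_sketch {
        my ($addr) = @_;
        if ($addr =~ m/^\[(.+)\](?::\d+)?$/) {
            return $1;    # bracketed IPv6, optionally with port
        } elsif ($addr =~ m/^([^:]+):\d+$/) {
            return $1;    # IPv4 address or hostname with port
        }
        return $addr;     # already a bare address or hostname
    }

    # e.g. '10.1.1.1:6789' -> '10.1.1.1', '[fc00::1]:6789' -> 'fc00::1'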
Mention which optional parameters will be used for the replicated
metadata pool but won't have an effect on the erasure coded data pool.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
The crush rule is an optional parameter which can be used for the
metadata pool, but the erasure coded data pool will always get its own
crush rule. Therefore this parameter cannot be adapted.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
When a schedule only has a limited number of runs
(e.g. 2022-10-01 8:00/30), it can happen that $next will be undef after
the last run. Exit early in that case.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
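The early exit amounts to something like this sketch, assuming the calendar-event helpers from pve-common:

    use strict;
    use warnings;
    use PVE::CalendarEvent;

    sub next_run_sketch {
        my ($schedule, $last_run) = @_;
        my $calspec = PVE::CalendarEvent::parse_calendar_event($schedule);
        my $next = PVE::CalendarEvent::compute_next_event($calspec, $last_run);
        if (!defined($next)) {
            # a bounded schedule like '2022-10-01 8:00/30' has no runs left
            return;
        }
        return $next;
    }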
The osd dump already contains the pool type in numerical format.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>