otherwise, we cannot correctly match types that contain a hyphen,
since the id itself can also contain hyphens.
building a regex whose first part is an alternation of the concrete
allowed types, followed by a hyphen and the id, lets us match those
correctly.
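a minimal, self-contained sketch of the approach; the plugin types and
the helper name below are made up for illustration and not the actual
parsing code:

    use strict;
    use warnings;

    # assumed plugin types, one of them containing a hyphen
    my @types = ('vzdump', 'realm-sync');

    # longest types first, so 'realm-sync' would win over a hypothetical 'realm'
    my $type_re = join('|', map { quotemeta($_) } sort { length($b) <=> length($a) } @types);

    sub parse_section_header {
        my ($header) = @_;
        # anchor on the known types, so a hyphen inside the id cannot be
        # mistaken for the separator between type and id
        if ($header =~ m/^($type_re)-(.+)$/) {
            return ($1, $2);
        }
        return; # no known type matched
    }

    my ($type, $id) = parse_section_header('realm-sync-my-daily-job');
    print "type=$type id=$id\n"; # prints: type=realm-sync id=my-daily-job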
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we may even want to do this only once, before the loop, but for now
this is basically the same as it was previously, while avoiding the
need for every (future) plugin to do it manually, which is simply the
wrong place for it.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
since the jobs are configured cluster-wide in pmxcfs, a user can use any
node to update their config. for some properties (schedule/enabled)
we need to update the last runtime in the state file, but that file
is sadly only node-local.
to also update the state file on the other nodes, we introduce
a new 'detect_changed_runtime_props' function that checks the relevant
properties from the config against the statefile in each round of the
scheduler and saves them there if they changed.
this way, we can detect changes to those properties and update the last
runtime too. the only situation where we don't detect a config change is
when the user switches back to the previous configuration between two
iterations. this can be ignored though, since the job would not be
scheduled then anyway.
in 'synchronize_job_states_with_config' we switch from reading the
jobstate unconditionally to checking for the existence of the statefile
(which is the only condition that can return undef anyway),
so that we don't read the file multiple times each round.
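a minimal sketch of the idea behind 'detect_changed_runtime_props' and
the existence check; the statefile layout, the field names and the
compared property list are assumptions for illustration only:

    use strict;
    use warnings;
    use JSON;

    sub read_job_state {
        my ($statefile) = @_;
        # a missing statefile is the only case that yields undef, so callers
        # that only care about existence can use a plain -e check instead
        return undef if !-e $statefile;
        open(my $fh, '<', $statefile) or die "cannot read '$statefile' - $!\n";
        local $/; # slurp mode
        my $raw = <$fh>;
        close($fh);
        return decode_json($raw);
    }

    sub detect_changed_runtime_props {
        my ($statefile, $jobcfg) = @_;

        my $state = read_job_state($statefile) // {};
        my $changed = 0;

        # compare the runtime-relevant properties against what this node saw last
        for my $prop (qw(schedule enabled)) {
            my $old = $state->{$prop} // '';
            my $new = $jobcfg->{$prop} // '';
            next if "$old" eq "$new";
            $state->{$prop} = $jobcfg->{$prop};
            $changed = 1;
        }

        if ($changed) {
            # the config changed since the last scheduler round on this node, so
            # also bump the stored last runtime before writing the state back
            $state->{time} = time();
            open(my $fh, '>', $statefile) or die "cannot write '$statefile' - $!\n";
            print $fh encode_json($state);
            close($fh);
        }

        return $changed;
    }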
Fixes: 0c8d7468 ("fix #4053: don't run vzdump jobs when they change from disabled->enabled")
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
like systemd-timers' 'Persistent' option, so that the user can configure
whether a job that was missed (e.g. because the node was powered off)
gets run after powering up.
this also reverses the default behaviour so that missed jobs are not run
after pvescheduler was started, since most of the time that's not the
desired behaviour.
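a small, self-contained sketch of the reversed default; the option name
'repeat-missed' and the callbacks are assumptions for illustration:

    use strict;
    use warnings;

    sub handle_due_job {
        my ($jobcfg, $was_missed, $run_job, $update_last_runtime) = @_;

        if ($was_missed && !$jobcfg->{'repeat-missed'}) {
            # new default: a run that was missed while pvescheduler was not
            # running is skipped, we only record that the event passed
            $update_last_runtime->();
        } else {
            $run_job->();
        }
    }

    # a job that was missed during downtime, with the option left at its default
    handle_due_job(
        { schedule => 'daily' },
        1,
        sub { print "running job\n" },
        sub { print "skipping missed run, only updating last runtime\n" },
    );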
since we don't use it for updated schedules anymore, rename
'updated_job_schedule' to 'update_last_runtime'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
if we have a schedule that has no 'next event', we should skip the
scheduling instead of scheduling it again every round (see the sketch
below the examples).
this can happen if someone sets a schedule that has no next match.
some examples:
* 2-31 00:00 (there is no February 31st)
* mon 2022-04-02 (that date is a Saturday, not a Monday)
* 1970-1-1 (or any other exact date in the past)
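a minimal sketch of the resulting guard; compute_next_event() here is
only a stand-in for the real calendar-event code and simply reports
'no next match':

    use strict;
    use warnings;

    sub compute_next_event {
        my ($schedule, $last) = @_;
        return undef; # pretend the schedule can never match again
    }

    my $jobs = { 'backup-old' => { schedule => '1970-1-1' } };

    for my $jobid (sort keys %$jobs) {
        my $next = compute_next_event($jobs->{$jobid}->{schedule}, 0);
        if (!defined($next)) {
            # no next event, so skip this job instead of rescheduling it
            # again on every iteration of the scheduler loop
            print "skipping '$jobid': schedule has no next match\n";
            next;
        }
        # ... normal scheduling would happen here ...
    }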
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Also, default to 'internal error' if the task status within the lock
is undef, which shouldn't actually be possible.
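a tiny sketch of the fallback; the task hash layout is illustrative only:

    use strict;
    use warnings;

    my $task = { status => undef };            # status read back under the lock
    my $state = $task->{status} // 'internal error';
    print "$state\n";                          # prints: internal error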
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
this adds SectionConfig handling for jobs (only 'vzdump' for now) that
represents a job to be handled by pvescheduler, plus a basic
'job-state' handling for reading/writing state JSON files.
this has some intersections with pvesr's state handling, but does not
use a single state file for all jobs; instead each job gets a separate
one, like we do in the backup server.
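a rough sketch of the general plugin shape; the package name, the
property set and deriving directly from PVE::SectionConfig are
simplifications for illustration, not the actual pve-manager code:

    package PVE::Jobs::SketchPlugin;

    use strict;
    use warnings;

    use base qw(PVE::SectionConfig);

    # shared plugin data; in the real code an intermediate base plugin
    # class would own this
    my $defaultData = {
        propertyList => {
            type => { description => "Section type." },
            id => {
                description => "The ID of the job.",
                type => 'string',
            },
        },
    };

    sub private {
        return $defaultData;
    }

    sub type {
        return 'vzdump';
    }

    sub properties {
        # made-up minimal property set for illustration
        return {
            schedule => {
                description => "Backup schedule, parsed as a calendar event.",
                type => 'string',
            },
            enabled => {
                description => "Determines whether the job gets scheduled.",
                type => 'boolean',
                optional => 1,
                default => 1,
            },
        };
    }

    sub options {
        return {
            schedule => {},
            enabled => { optional => 1 },
        };
    }

    __PACKAGE__->register();
    __PACKAGE__->init();

    1;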
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>