since the jobs are configured clusterwide in pmxcfs, a user can use any
node to update their config. for some properties (schedule/enabled)
we need to update the last runtime in the state file, but that file
is sadly only node-local.
to also update the state file on the other nodes, we introduce
a new 'detect_changed_runtime_props' function that, each round of the
scheduler, checks the relevant properties from the config against the
statefile and saves them there if they changed.
this way, we can detect changes to those properties and update the
last runtime too.
the only situation where we don't detect a config change is when the
user changes back to the previous configuration in between iterations.
This can be ignored though, since it would not be scheduled then
anyway.
in 'synchronize_job_states_with_config' we switch from unconditionally
reading the jobstate to checking the existence of the statefile
(which is the only condition that can return undef anyway),
so that we don't read the file multiple times each round.
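
a rough sketch of what such a per-round check boils down to
($read_job_state and $save_job_state are made-up stand-ins for the
actual statefile helpers, not the real API):

    sub detect_changed_runtime_props {
        my ($jobid, $type, $cfg) = @_;

        my $state = $read_job_state->($jobid, $type);
        return if !defined($state); # no statefile yet, nothing to sync

        my $changed = 0;
        for my $prop (qw(schedule enabled)) {
            next if ($state->{$prop} // '') eq ($cfg->{$prop} // '');
            $state->{$prop} = $cfg->{$prop};
            $changed = 1;
        }

        if ($changed) {
            # treat the config change as the last runtime, so every node
            # agrees on when the job may run next
            $state->{time} = time();
            $save_job_state->($jobid, $type, $state);
        }
    }
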
Fixes: 0c8d7468 ("fix #4053: don't run vzdump jobs when they change from
disabled->enabled")
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
by extracting the JSON-encoded-string schema and dumping it into the
verbose description, it at least shows up in the API viewer.
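
roughly, assuming the inner schema is available as a perl hash
('to_json' is from the JSON module, 'verbose_description' is the field
the API viewer renders; where exactly the sub-schema lives is an
assumption here):

    use JSON qw(to_json);

    # $subschema: the schema the JSON-encoded string must conform to
    $schema->{verbose_description} = "JSON-encoded value, expected format:\n\n"
        . to_json($subschema, { pretty => 1, canonical => 1 });
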
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
since this was missing a proper return type definition, the api viewer
couldn't display the endpoint (`retinfs.items` was undefined). the
`pvesh` command would also complain that it cannot properly format the
return type, because the variable `$item_type` in `CLIFormatter.pm` was
not defined.
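
for illustration, the kind of 'returns' definition such an endpoint
needs so that both the API viewer and the CLI formatter know what to do
(the property set below is a placeholder, not the endpoint's real
fields):

    returns => {
        type => 'array',
        items => {
            type => 'object',
            properties => {
                name => {
                    type => 'string',
                    description => "Hypothetical item name.",
                },
            },
        },
    },
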
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
by updating the timestamp in the job state when enabled changes
from 0 -> 1. we do it this way in PBS too, for example, and it is the
more sensible behaviour.
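
a minimal sketch of the check (variable names are assumptions, not the
actual code):

    if (!$old_cfg->{enabled} && $new_cfg->{enabled}) {
        # pretend the job ran just now, so enabling it does not
        # immediately trigger a catch-up run for missed schedules
        $state->{time} = time();
    }
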
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
like systemd-timers' 'persistent' option, so that the user can configure
whether a job that was missed should be run after powering up.
this reverses the default behaviour to not running missed jobs after
pvescheduler was started, since most of the time that's not the desired
behaviour.
since we don't use it for updated schedules anymore, rename
'updated_job_schedule' to 'update_last_runtime'
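
the schema for such an option could look roughly like this (description
paraphrased, not copied from the actual code):

    'repeat-missed' => {
        optional => 1,
        type => 'boolean',
        default => 0,
        description => "If true, the job will be run as soon as possible"
            ." if it was missed while the scheduler was not running.",
    },
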
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
which can happen when failing to obtain the guest's migration lock.
This led to a lot of mails being sent during migration (the timeout for
obtaining the lock is only 2 seconds and we run it in a loop).
One could argue that failing to obtain the lock should increase the
fail count, but without the lock, the job state should not be touched,
and even the first three mails upon migration could be considered spam.
Fixes: fa4bb659 ("replication: sent always mail for first three tries and move helper")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Because $mon->{addr} might come with a port attached (affects monitors
created with PVE 5.4, as reported in the community forum [0]), or might
even be a hostname (according to the code in Ceph/Services.pm), although
the latter shouldn't happen for configurations created by PVE.
[0]: https://forum.proxmox.com/threads/105904/
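
one way to normalize the value before comparing;
PVE::Tools::parse_host_and_port handles the 'host', 'host:port' and
'[IPv6]:port' forms (whether the actual fix uses exactly this helper is
an assumption):

    use PVE::Tools;

    my ($host, $port) = PVE::Tools::parse_host_and_port($mon->{addr});
    $host //= $mon->{addr}; # parse failure: fall back to the raw value
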
Fixes: 9e989449 ("api: ceph: mon: fix handling of IPv6 addresses in assert_mon_prerequisites")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Mention which optional parameters will be used for the replicated
metadata pool but won't have an effect on the erasure coded data pool.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
The crush rule is an optional parameter which can be used for the
metadata pool, but the erasure coded data pool will always get its own
crush rule. Therefore this parameter cannot be adapted.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
When a schedule only has a limited number of runs
(e.g. 2022-10-01 8:00/30), $next will be undef after the last run.
Exit early in that case.
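
with PVE::CalendarEvent that is just (sketch):

    my $next = PVE::CalendarEvent::compute_next_event($event, $last);
    # bounded schedules like '2022-10-01 8:00/30' eventually have no
    # next event, so bail out instead of working with undef below
    return if !defined($next);
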
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The osd dump already contains the pool type in numerical format.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
The behavior of always adding the storage config was lost in commit
23c407e. But it is more sensible to make it a default that can be
changed if needed.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
moved to a format string 'erasure-coded', which also allows us to drop
most of the parameter existence checking, as we can set the correct
optional'ness in there. Also avoids bloating the API too much for
just this.
Reuse the $rados connection more often to avoid too much
overhead/lingering sockets (the rados connection stays around in the
background to allow efficient reuse).
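
for reference, registering such a format string could look roughly like
this (property set trimmed down, only meant to show where the
optional'ness now lives):

    PVE::JSONSchema::register_format('erasure-coded', {
        k => {
            type => 'integer',
            description => "Number of data chunks.",
            minimum => 2,
        },
        m => {
            type => 'integer',
            description => "Number of coding chunks.",
            minimum => 1,
        },
        'failure-domain' => {
            type => 'string',
            description => "CRUSH failure domain.",
            optional => 1,
        },
    });
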
really should be three separate commits, but too intertwined and too
late for me to care tbh.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
To use erasure coded (EC) pools for RBD storages, we need two pools. One
regular replicated pool that will hold the RBD omap and other metadata
and the EC pool which will hold the image data.
The coupling happens when an RBD image is created by adding the
--data-pool parameter. This is why we have the 'data-pool' parameter in
the storage configuration.
To follow already established semantics, we will create an 'X-metadata'
and an 'X-data' pool. The storage configuration is always added, as it
is the only thing that links the two together (besides naming schemes).
Different pg_num defaults are chosen for the replicated metadata pool as
it will not hold a lot of data.
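
on the rbd level, the coupling then looks like this (pool names simply
follow the scheme described above):

    rbd create --size 4G --data-pool X-data X-metadata/some-image
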
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
When removing a pool, we check against any storage that might have that
pool configured.
We need to check if that pool is used as data-pool too.
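
a sketch of the extended check (config layout abbreviated, key names
follow the storage configuration):

    for my $storeid (keys %{$storage_cfg->{ids}}) {
        my $scfg = $storage_cfg->{ids}->{$storeid};
        next if ($scfg->{type} // '') ne 'rbd';
        die "pool '$pool' is still used by storage '$storeid'\n"
            if ($scfg->{pool} // '') eq $pool
            || ($scfg->{'data-pool'} // '') eq $pool;
    }
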
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
commit 89d146f207 introduced permission
checks here that caused all regular bridges to be removed from the
returned list as soon as the SDN package is installed, unless the user
is root@pam or a VNET with the same ID exists.
this is arguably a breaking change, so limit the priv check to actually
defined VNETs for the time being, and add all regular bridges
unconditionally like before.
get_local_vnets already filters by the same prvs, so we need to get the
full config to find out which IDs are VNETs and which are not.
once/iff we introduce ACL paths for *all* bridges in the future, we can
limit accordingly here.
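
schematically (the priv check is whatever get_local_vnets already uses;
the names below are illustrative, not the actual code):

    my $vnet_cfg = PVE::Network::SDN::Vnets::config();
    my $bridges = [];
    for my $id (sort keys %$ifaces) {
        if ($vnet_cfg->{ids}->{$id}) {
            # actually defined VNET: keep the permission check
            next if !$can_access_vnet->($id);
        }
        # regular bridge (or permitted VNET): add unconditionally
        push @$bridges, $id;
    }
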
CC: Alexandre Derumier <aderumier@odiso.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
but rather multiple times becoming exponentially less frequent.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
on create we require either a starttime (+dow) or a schedule, but when
updating an existing job, this is not necessary.
before we changed to schedules, the starttime was not optional on
update either, but i think there is no reason to require the user to
send the schedule/starttime along every time.
the gui sends all values every time, so this was never a problem there.
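
so the check only fires on create, roughly:

    if ($create) {
        die "missing parameter 'schedule' or 'starttime'\n"
            if !defined($param->{schedule}) && !defined($param->{starttime});
    }
    # on update, leaving both out keeps the currently configured values
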
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
In preparation to have check_volume_access() always allow access for
users with the Datastore.Allocate privilege, so as not to automatically
give all such users permission to extract the config too.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
This removes vmbr* from the bridgeselector if the user has access to
vnets. If the user also needs access to vmbr*, we can add a permission
on the path "/sdn/vnets/vmbrX".
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Using

    pvesh create /nodes/pve701/apt/repositories --path \
        "/etc/apt/sources.list" --index 0 --enabled 1

reliably leads to

    error: invalid type: string "0", expected usize
Coerce to int to avoid this. I was not able to trigger the issue with
the "enabled" option being a string here (in PMG I was), but be on the
safe side and coerce there too. Otherwise it might get triggered by a
future, completely unrelated change further up in the API call
handling.
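
the coercion itself is a one-liner per parameter, along these lines:

    # make sure the rust side sees real numbers/booleans, not strings
    $param->{index} = int($param->{index}) if defined($param->{index});
    $param->{enabled} = $param->{enabled} ? 1 : 0 if defined($param->{enabled});
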
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
To avoid being blacklisted because of the default, quite popular,
libwww-perl user-agent, as reported in the community forum [0].
[0]: https://forum.proxmox.com/threads/104081/
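
with LWP that is a single call on the UserAgent object (the agent string
below is just an example, not the one actually used):

    my $ua = LWP::UserAgent->new();
    # identify as PVE instead of the default 'libwww-perl/X.Y'
    $ua->agent('pve/1.0');
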
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Tested-by: Matthias Heiserer <m.heiserer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
A virtual package does not have SelectedState Install, but the
dependency will still be satisfied if a package providing it does.
Fixes a bug that wrongly showed that postfix would be installed when a
different mail-transport-agent is installed and a pve-manager update
is available:
https://forum.proxmox.com/threads/103413/
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
and calculate it by getting the next event after 'now', since
we currently have no way to get the last run time for jobs that only
run on different cluster nodes.
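
sketch, using PVE::CalendarEvent:

    my $calspec = PVE::CalendarEvent::parse_calendar_event($job->{schedule});
    # we cannot know when the job last ran on another node, so
    # approximate: report the first event after the current time
    my $next_run = PVE::CalendarEvent::compute_next_event($calspec, time());
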
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of accumulating the whole output of 'mini-journalreader' in
the api call (it can be quite big), use the download mechanic of the
http-server to stream the output to the client.
we lose some error handling possibilities, but we do not have
to allocate anything here, and since perl does not return memory to the
OS after allocating it[0], this is our desired behaviour.
to keep api compatibility, we need to give the journalreader the '-j'
flag to make it output json.
also tell the http server that the encoding is gzip and pipe
the output through it.
0: https://perldoc.perl.org/perlfaq3#How-can-I-free-an-array-or-hash-so-my-program-shrinks?
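
the returned structure then roughly looks like this (key names as used
by the pve http-server download mechanism, to the best of my knowledge;
journalreader arguments omitted):

    my $cmd = "/usr/bin/mini-journalreader -j | gzip";
    open(my $fh, '-|', $cmd) # shell pipeline, so the data is already gzipped
        or die "failed to run mini-journalreader - $!\n";

    return {
        download => {
            fh => $fh,
            stream => 1,
            'content-type' => 'application/json',
            'content-encoding' => 'gzip',
        },
    };
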
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
had it lying around and it did not feel condensed/code-golfed to me,
rather a bit more expressive (surely biased though)..
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>