The whole thing is already prepared for this; the systemd timer was
just a fixed periodic timer with a frequency of one minute. And we
only introduced it because of the assumption that this approach would
use less memory, AFAIK.
But logging 4+ lines just to state that the timer was started, even if
it does nothing, and that 24/7, is not too cheap and a bit annoying.
So, as a first step, add a simple daemon which forks off a child for
running jobs once a minute.
This could still be made a bit more intelligent, i.e., check whether
we have jobs to run before forking - as forking is not the cheapest
syscall. Further, we could adapt the sleep interval to the next time
we actually need to run a job (and send a SIGUSR to the daemon if a
job interval changes such that the interval gets narrower).
We try to sync running on minute-change boundaries at start; this
emulates the systemd.timer behaviour we had until now. Also, users can
configure jobs with minute precision, so they probably expect those to
start really close to a minute-change event.
This could be adapted to resync while running, to factor in time
drift. But as long as enough CPU cycles are available, we run in
correct monotonic intervals, so this isn't a must, IMO.
Another improvement could be somewhat more fine-grained locking, i.e.,
not on a per-all-local-job-runs basis, but on a per-job (per-guest?)
basis, which would reduce the temporary starvation of small,
high-frequency jobs by big, less periodic ones.
We argued that it's the user's fault if such situations arise, but
they can evolve over time without anyone noticing, especially in more
complex setups.
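The per-job locking idea could look something like the sketch below (Python for illustration; the lock directory and naming scheme are made up, and the real implementation would be Perl):

```python
import fcntl
import os

def lock_job(job_id: str, lock_dir: str = "/var/lock/pvescheduler"):
    """Take an exclusive, non-blocking per-job lock instead of one
    global lock, so a big long-running job only blocks its own next
    run and not every other local job."""
    os.makedirs(lock_dir, exist_ok=True)
    fh = open(os.path.join(lock_dir, f"{job_id}.lck"), "w")
    # Raises BlockingIOError if the previous run of this job still holds it.
    fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
    return fh  # keep the handle open; closing it releases the lock
```

With this, two different jobs never contend with each other, only overlapping runs of the same job do.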
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
ensure we get the notes property for the datacenter config and also
the newly registered/watched jobs.cfg for future pveschedule patches.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
was provided indirectly through libproxmox-acme-perl, but we want to
downgrade it there to a Recommends.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The only difference is that reload-or-try-restart does not do
anything if the service isn't already running, while
reload-or-restart also starts a stopped service.
We explicitly check whether the service is enabled on upgrade before
doing any start/reload-or-restart action anyway. So it would now start
daemons that were stopped but not disabled, which is not really a
valid state and would have happened on the next reboot anyway.
This starts new daemons (like the pvescheduler) automatically on a
package upgrade
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
So that we circumvent browsers caching the 6.0 ExtJS js/css.
This should (at least for users upgrading) fix the browser caching
issue for ExtJS (we have had some reports in the forums now).
I did it this way since we do not often change the version of the
ExtJS package (since it's a big task every time anyway), but if wanted
I can do it differently, e.g.:
* hardcode it in the Perl code
* generate it during the build (also for the control file)
But this is fine for ExtJS, as we rarely update it, and especially for
major releases we would need to adapt stuff anyway.
Also bump the versioned dependency on extjs to 7.0.0 in the Debian
control file.
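The cache-busting approach boils down to embedding the package version into the ExtJS asset URLs the index page emits, so a version bump forces browsers to refetch. A rough sketch (Python for illustration; the asset paths and the query-parameter name are assumptions, the real code lives in the Perl index template):

```python
# Hypothetical asset list; real paths come from the index template.
EXTJS_ASSETS = [
    "/pve2/ext6/ext-all.js",
    "/pve2/ext6/theme-crisp/resources/theme-crisp-all.css",
]

def versioned_url(path: str, version: str) -> str:
    """Append the ExtJS package version as a cache-busting parameter."""
    return f"{path}?ver={version}"

def asset_tags(version: str) -> list[str]:
    """Emit script/link tags with versioned URLs for the index page."""
    tags = []
    for path in EXTJS_ASSETS:
        url = versioned_url(path, version)
        if path.endswith(".js"):
            tags.append(f'<script type="text/javascript" src="{url}"></script>')
        else:
            tags.append(f'<link rel="stylesheet" type="text/css" href="{url}" />')
    return tags
```

Since the version string only changes when the package is rebuilt against a new ExtJS, cached copies stay valid between upgrades.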
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the GUI now defaults to creating unprivileged containers with nesting
enabled, but that requires a pve-container that actually allows this for
VM.Allocate users instead of root@pam only
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
in theory we'd need to be more cautious, but this was added only
during the beta, which is when we do not really provide any stability
guarantee; further, it's rather unlikely that one added very important
repos that, when removed, break something (again, *during* the beta).
The new APT repo management also makes it easy to see when one does
not get any PVE updates, and one can easily add the pvetest repo there
again too.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
now that we no longer ship our own LVM packages, set the relevant
filtering options here if they are missing.
for an upgrade from PVE 6.x, the following two scenarios are likely:
A: the user edited the config provided by our old lvm2 package. It
likely contains our (or a modified) global_filter, but the old
scan_lvs default. In this case we ignore global_filter as long as it
contains our 'don't scan zvols' entry, and set scan_lvs to false.
B: the config provided by our old lvm2 package was replaced by the
default config from the stock lvm2 package. scan_lvs defaults to false
already, but global_filter is unset (scan everything), so we need to
set our own global_filter excluding zvols.
other combinations should be handled fine as well.
for new installs (installer, install on top of Debian Bullseye) we are
always in scenario B.
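For scenario B, the resulting additions to /etc/lvm/lvm.conf would look roughly like this (a sketch; the exact filter pattern is illustrative, the point being that ZFS zvols show up as /dev/zd* block devices and must be rejected):

```
devices {
    # reject zvols, otherwise LVM would also scan guest disks
    # that happen to contain LVM signatures
    global_filter = [ "r|/dev/zd.*|" ]
    # do not look for PVs on top of LVs (guest disks);
    # this is the stock default in scenario B already
    scan_lvs = 0
}
```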
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
any system upgrading to 7.x was either installed with >= 6.4 in the
first place, or upgraded to >= 6.4 and thus fixed those issues already.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
We could also just check the mtime of the machine-id as a heuristic,
but extracting the machine-ids from our ISO archive was pretty
straightforward and avoids special handling for systems installed from
Debian, so use that.
The full map:
pve 4.0-62414ad6-11 -> "2ec24eda629a4c8d8c1f8dac50a9ee5f"
pve 4.1-a64d2990-21 -> "bd94244c0da6419a82a383e62dc03b51"
pve 4.2-95d93422-28 -> "45d4e7046c3d4c26af8acd589f358ac6"
pve 4.3-29d03d47-2 -> "8c445f96b3064ff79f825ea78a3eefde"
pve 4.4-f4006904-1 -> "6f9fae0f0a794fd4b89b3abecfd7f182"
pve 4.4-f4006904-2 -> "6f9fae0f0a794fd4b89b3abecfd7f182"
pve 5.0-786da0da-1 -> "285de85759894b3f9ad9844a89045af6"
pve 5.0-786da0da-2 -> "89971dede7b04c98b2b0bc8845f53320"
pve 5.0-20170505-test -> "4e3b6e9550f24d638bc26211a7b37df5"
pve 5.0-ad98a36-5 -> "bc2f684e31ee4daf95e45c62410a95b1"
pve 5.0-d136f4ad-3 -> "8cc7bc883fd048b78a4af7433c48e341"
pve 5.0-9795f744-4 -> "9b46d99712854566bb02a656a3ff9191"
pve 5.0-22d7548f-1 -> "e7fc055af47048ee884dcb88a7474336"
pve 5.0-273a9671-1 -> "13d879f75e6447a69ed85179bd93759a"
pve 5.1-2 -> "5b59e448c3e74029af2ac91f572d68a7"
pve 5.1-3 -> "5a2bd0d11a6c41f9a33fd527751224ea"
pve 5.1-cfaf62cd-1 -> "516afc72013c4b9da85b309aad987df2"
pve 5.1-test-20171019-1 -> "b0ce8d24684845e8ac337c588a7715cb"
pve 5.1-test-20171218 -> "e0af064c16e9463e9fa980eac66427c1"
pve 5.2-1 -> "6e925d11b497446e8e7f2ff38e7cf891"
pve 5.3-1 -> "eec280213051474d8bfe7e089a86744a"
pve 5.3-2 -> "708ded6ee82a46c08b77fecda2284c6c"
pve 5.3-preview-20181123-1 -> "615cb2b78b2240289fef74da610c146f"
pve 5.4-1 -> "b965b329a7e246d5be66a8d367f5760d"
pve 6.0-1 -> "5472a49c6436426fbebd7881f7b7f13b"
The 6.0 one should never trigger, as there we had the fix already out,
but it may be that some internal installation missed it, and it
doesn't hurt to check, so include it.
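The upgrade check then amounts to comparing the installed /etc/machine-id against this set of known ISO-shipped ids. A hedged sketch (function name hypothetical, set abbreviated to a few entries from the map above; the real check is done in Perl during postinst):

```python
# Machine-ids shipped inside old installer ISOs, and thus possibly
# cloned across many installations; abbreviated, full map listed above.
KNOWN_ISO_MACHINE_IDS = {
    "2ec24eda629a4c8d8c1f8dac50a9ee5f",  # pve 4.0-62414ad6-11
    "6f9fae0f0a794fd4b89b3abecfd7f182",  # pve 4.4 (both builds)
    "5472a49c6436426fbebd7881f7b7f13b",  # pve 6.0-1
}

def machine_id_needs_regen(machine_id: str) -> bool:
    """True if the id matches one cloned from an installer ISO and
    should therefore be regenerated (e.g. via systemd-machine-id-setup)."""
    return machine_id.strip() in KNOWN_ISO_MACHINE_IDS
```

A non-matching id (the normal case for Debian-installed or already-fixed systems) is simply left alone.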
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The switch to 'cmd' was made by commit af39a6f09651e15d1c83536e25493a2212efd7d3
in the pve-xtermjs repo and is included in 4.7.0
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>