- improve variable definition/use locality
- avoid some if's for some (mostly boolean) assignments, just use an
expression
As long as we don't go overboard with code-golfing it into something
extremely terse, shorter code is generally more readable, especially
when definition and use do not happen dozens of lines apart.
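For example, a hypothetical before/after of the pattern this aims at
(not taken from the actual diff):

    # before: branching just to assign a boolean
    my $use_hotplug;
    if ($conf->{hotplug}) {
        $use_hotplug = 1;
    } else {
        $use_hotplug = 0;
    }

    # after: one expression, right where the value is defined
    my $use_hotplug = $conf->{hotplug} ? 1 : 0;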
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the auto theme uses media queries to detect a user's preferred theme,
so switch to using it by default instead of the light theme.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
load the dark theme only if requested through a cookie. this also adds
support for the "auto" theme, which uses the dark theme based on a
media query.
this requires a bump of the widget toolkit so the dark-theme css file
is available.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
so the frontend has the information readily available.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Planned to be used for static resource scheduling in the HA manager.
It's enough to broadcast the values whenever they are outdated or not
set in the node's local kv store, because pmxcfs will re-broadcast the
local kv store whenever the quorate partition changes. This is already
relied upon for the 'ceph-versions' kv pair.
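A rough sketch of that broadcast pattern (the kv key and value below
are placeholders, not the exact code):

    # only broadcast when the local kv store entry is missing or stale
    my $kv = PVE::Cluster::get_node_kv('static-info', $nodename);
    my $current = $kv->{$nodename};
    if (!defined($current) || $current ne $new_value) {
        PVE::Cluster::broadcast_node_kv('static-info', $new_value);
    }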
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
like systemd-timers' 'Persistent' option, so that the user can
configure a job to not be run after powering up when it was previously
missed.
this also reverses the default behaviour: missed jobs are no longer run
after pvescheduler starts, since most of the time that's not what's
wanted.
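Schematically, the new default then looks like this (the option name
'repeat-missed' and the surrounding bookkeeping are illustrative):

    # a run missed while pvescheduler was down is only made up for
    # if the job opted in via the 'repeat-missed' option
    if (!$job->{'repeat-missed'} && $next_run < $daemon_start_time) {
        $update_last_runtime->($jobid); # record it, but don't run it
        next;
    }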
since we don't use it for updated schedules anymore, rename
'updated_job_schedule' to 'update_last_runtime'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Avoid hard-coding the current implication of the replication stack
that it does not get started again until the old worker is done.
We still apply the same check, but changing that to let the jobs have
control is rather easy now.
Also rework the stop logic: send terminate to _all_ workers and make
the timeout an actual shared one (instead of the first worker getting
the full timeout and the remaining ones getting killed right away),
then send a kill to the stuck, leftover ones in one go at the end,
including some logging so that the admin can actually know about this
non-ideal situation.
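Simplified, the stop logic now has this shape (not the verbatim code;
assumes the usual POSIX and PVE::SafeSyslog imports):

    sub stop_workers {
        my ($children, $timeout) = @_; # pid => 1 map, shared timeout in s
        kill 'TERM', keys %$children;  # signal _all_ workers at once
        while ($timeout-- > 0 && %$children) {
            sleep(1);
            for my $pid (keys %$children) {
                delete $children->{$pid} if waitpid($pid, POSIX::WNOHANG) > 0;
            }
        }
        for my $pid (keys %$children) { # kill leftover ones in one go
            syslog('warning', "worker $pid did not stop in time, killing it");
            kill 'KILL', $pid;
        }
    }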
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
utilize PVE::Daemon's 'hup' functionality to reload gracefully.
This leaves the children running (if any) and hands them to the new
instance via ENV variables. After loading, check if they are still
around.
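Schematically (the ENV variable name is illustrative):

    # old instance, on HUP: hand the running children over
    $ENV{PVESCHED_CHILD_PIDS} = join(',', keys %$children);

    # new instance, after loading: adopt those still alive
    for my $pid (split(/,/, $ENV{PVESCHED_CHILD_PIDS} // '')) {
        $children->{$pid} = 1 if kill(0, $pid); # signal 0 = existence check
    }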
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
previously, systemd timers were responsible for running replication
jobs. those timers would not restart while the previous run was still
ongoing. trying again during a run does no real harm, but it spams the
log with errors about not being able to acquire the correct lock.
to fix this, rework the handling of child processes such that we only
start one per loop iteration if none is currently running. for that,
introduce the types of forks we do and allow one child process per
type (for now, we have 'jobs' and 'replication' as types)
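In essence (simplified; reaping of finished children omitted):

    my $forks = {}; # type => pid of the currently running child, if any

    sub fork_for_type {
        my ($type, $code) = @_; # $type is 'jobs' or 'replication'
        return if defined($forks->{$type}); # previous run still active
        my $pid = fork() // die "fork failed: $!\n";
        if (!$pid) { # child
            $code->();
            exit(0);
        }
        $forks->{$type} = $pid;
    }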
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
if '$sub' dies, the error handler of PVE::Daemon triggers, which
initiates a shutdown of the child, resulting in confusing error logs
(e.g. 'got shutdown request, signal running jobs to stop').
instead, run it under 'eval' and print the error to the syslog.
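I.e., roughly:

    eval { $sub->() };
    if (my $err = $@) {
        syslog('err', "ERROR: $err"); # log instead of tearing down the child
    }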
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
broadcast the built-in, statically available version info, e.g.:
    {
        "release" : "7.0",
        "repoid" : "3ce05d40",
        "version" : "7.0-14"
    }
We can expand this with more actual package version info in the
future, but that certainly needs a more elaborate update control
mechanism than the oneshot at boot we have now.
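The broadcast itself boils down to something like this (the kv key and
the version_info() helper are shown as assumptions, not necessarily the
exact names used):

    use JSON;

    my $info = PVE::pvecfg::version_info(); # { release, repoid, version }
    PVE::Cluster::broadcast_node_kv('version-info', encode_json($info));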
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The whole thing is already prepared for this; the systemd timer was
just a fixed periodic timer with a frequency of one minute. And we
only introduced it on the assumption that this approach would use
less memory, AFAIK.
But logging 4+ lines, 24/7, just to note that the timer started, even
if it then does nothing, is not exactly cheap and a bit annoying.
So, as a first step, add a simple daemon which forks off a child for
running jobs once a minute.
This could still be made a bit more intelligent, i.e., check whether
we have jobs to run before forking - as forking is not the cheapest
syscall. Further, we could adapt the sleep interval to the next time
we actually need to run a job (and send a SIGUSR to the daemon if a
job's interval changes such that this interval gets narrower).
We try to sync running on minute-change boundaries at start; this
emulates the systemd.timer behaviour we had until now. Also, users
can configure jobs with minute precision, so they probably expect
those to start really close to a minute-change event.
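The initial sync boils down to (sketch):

    # before entering the main loop, sleep until the next minute change
    my $wait = 60 - (time() % 60);
    sleep($wait) if $wait < 60;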
This could be adapted to resync during runtime, to factor in time
drift. But as long as enough CPU cycles are available we run in
correct monotonic intervals, so this isn't a must, IMO.
Another improvement could be locking a bit more fine-grained, i.e.,
not on a per-all-local-job-runs basis, but on a per-job (per-guest?)
basis, which would reduce temporary starvation of small high-frequency
jobs by big, less periodic ones.
We argued that it's the user's fault if such situations arise, but
they can evolve over time without being noticed, especially in more
complex setups.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
currently we only check the entry for cgroup v1 to decide if cores
should be rebalanced. extend the check to include cgroup v2 entries.
reported in forum [0]
[0]: https://forum.proxmox.com/threads/hard-set-streams-for-lxc-container.97768/
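Roughly, the check becomes (cgroup paths shown for illustration, not
verbatim from the patch):

    # inside the per-container loop: an entry in either cgroup layout
    # now makes the container a candidate for rebalancing
    my $v1 = -d "/sys/fs/cgroup/cpuset/lxc/$vmid"; # legacy (v1) layout
    my $v2 = -d "/sys/fs/cgroup/lxc/$vmid";        # unified (v2) layout
    next if !$v1 && !$v2;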
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Tested-By: Aaron Lauterer <a.lauterer@proxmox.com>
This patch fixes a regression, introduced in commit
e224b7d2e6
, for hosts disabling IPv6 via the kernel commandline
('ipv6.disable=1'); disabling IPv6 via sysctl did not exhibit these
problems.
By hardcoding the address to '::', pveproxy and spiceproxy failed to
start with:
'unable to create socket - Address family not supported by protocol'
This patch depends on the commit in pve-common, which first tries
binding to '::' and then falls back to '0.0.0.0', and needs a
versioned dependency bump on libpve-common-perl.
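The pve-common side then roughly does (simplified sketch of the
fallback, not the exact code):

    use IO::Socket::IP;

    my $socket = IO::Socket::IP->new(
        LocalHost => '::', LocalPort => 8006, Listen => 1,
    ) // IO::Socket::IP->new( # e.g. IPv6 disabled on the kernel cmdline
        LocalHost => '0.0.0.0', LocalPort => 8006, Listen => 1,
    ) // die "unable to create socket - $@\n";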
With this patch the listening addresses are (`ss -tlnp |grep 8006` output)
* ipv6 disabled via kernel cmdline: '0.0.0.0:8006'
* sysctl net.ipv6.conf.all.disable_ipv6=1: '*:8006'
* sysctl net.ipv6.bindv6only=1: '[::]:8006'
* else: '*:8006'
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
The $host variable is set to "::0" by default to listen on the
wildcard address (with 'Domain' => PF_INET6).
If 'LISTEN_IP' is defined in /etc/default/pveproxy, that IP will be used
instead.
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Since pve-container commit
c48a25452dccca37b3915e49b7618f6880aeafb1
the code to get the cpuset controller path lives in pve-common's
PVE::CGroup. Use that, and improve the logging in case some error
happens in the future.
Such an error will only be logged once per pvestatd run,
so it does not spam the log.
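The once-per-run logging is basically (sketch; flag and sub names are
illustrative):

    my $cpuset_error_logged; # lives for the whole pvestatd run

    sub get_cpuset_base {
        my $path = eval { PVE::CGroup::cpuset_controller_path() };
        if (my $err = $@) {
            syslog('err', "could not get cpuset controller path: $err")
                if !$cpuset_error_logged++;
        }
        return $path;
    }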
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This uses the newly introduced PVE::LXC::CGroup's
cpuset_controller_path() method to find the controller path,
so we need to depend on the newer pve-container package.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
pvestatd will check if the KVM version has changed using
kvm_user_version (which automatically clears its cache if QEMU/KVM
updates), and if it has, query supported CPU flags and broadcast them as
key-value pairs to the cluster.
If detection fails, we clear the kv-store and set up a delay (120s), to not
try again too quickly.
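Schematically (the kv key, the flag-query helper, and the shape of its
return value are shown as assumptions here):

    my ($last_kvm_version, $next_flag_update);

    my $kvmver = PVE::QemuServer::kvm_user_version(); # cache-aware
    if (!defined($last_kvm_version) || $last_kvm_version ne $kvmver) {
        $last_kvm_version = $kvmver;
        my $flags = eval { PVE::QemuServer::query_supported_cpu_flags() };
        if ($@ || !$flags) { # assume an array ref of flag names on success
            PVE::Cluster::broadcast_node_kv('cpuflags', ''); # clear stale info
            $next_flag_update = time() + 120; # don't retry too quickly
        } else {
            PVE::Cluster::broadcast_node_kv('cpuflags', join(' ', @$flags));
        }
    }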
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Commit 0dd73a7fec (statd: refactor update_node_status) changed $target
in pvestatd's auto_balloning sub into a variable:
my $target = int($res->{$vmid});
but then uses it in a string as a parameter to the $log function:
$log->("BALLOON $vmid to $target (%d)\n", $target - $current);
This surprisingly causes the variable to be incorrectly converted into a
JSON string by perl's to_json (called in QMPClient after mon_cmd):
{"value":"1234"}
instead of
{"value":1234}
which causes QEMU to report the parameter as invalid:
"Invalid parameter type for 'value', expected: integer"
This behaviour is made even trickier since $target internally is still
considered more of an 'int' (although that's a weak claim in perl
anyway), showing up without quotes in Dumper et al. - but the perldoc
for to_json sheds some light:
    simple scalars
        Simple Perl scalars (any scalar that is not a reference) are
        the most difficult objects to encode: this module will encode
        undefined scalars as JSON "null" values, scalars that have
        last been used in a string context before encoding as JSON
        strings, and anything else as number value
So coerce to_json to treat $target as an integer by using it as one and
everything is fine again.
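A minimal reproducer of the effect and the fix:

    use JSON;

    my $target = int(1234.5);            # numeric at this point
    my $msg = "BALLOON vm to $target";   # string context flags the scalar
    print to_json({ value => $target }), "\n";     # {"value":"1234"}
    print to_json({ value => $target + 0 }), "\n"; # {"value":1234}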
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
it seems that we have a reference leak or the like somewhere in the
(graphite?) status plugin; while the recent transaction-based update
mechanism made it slightly better, it's still bad with a lot of VMs..
Until we can track that down, or abandon perl for good, avoid too
frequent restarts by allowing statd to grow 15 MB of memory usage
after initial calibration (its memory usage at the 10th cycle)
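The guard is conceptually (sketch; cycle count and limit as stated
above, $restart_request standing in for the daemon's restart flag):

    my ($cycle, $base_rss);

    sub get_own_rss { # resident set size in bytes, from /proc
        open(my $fh, '<', '/proc/self/status') or return 0;
        while (defined(my $line = <$fh>)) {
            return $1 * 1024 if $line =~ /^VmRSS:\s+(\d+)\s+kB/;
        }
        return 0;
    }

    sub check_mem_usage { # called once per update cycle
        $base_rss = get_own_rss() if ++$cycle == 10; # initial calibration
        return if !$base_rss;
        $restart_request = 1 # hand over to a fresh instance
            if get_own_rss() > $base_rss + 15 * 1024 * 1024;
    }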
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
For now it only handles the plugin registration and the two recently
integrated helpers.
But this is a preparation to move the external metrics server update
mechanism from a stateless always-newly-connect-send-disconnect model
to a stateful, transaction-based mechanism; see later patches.
keep the PVE::Status::Plugin use in pvestatd, as we read the cfs
hosted status.cfg there, and the parser is defined by the common
status plugin base module.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>