Commit Graph

1294577 Commits

Author SHA1 Message Date
Tejun Heo
344576fa6a sched_ext: Improve logging around enable/disable
sched_ext currently doesn't generate messages when the BPF scheduler is
enabled and disabled unless there are errors. It is useful to have a paper
trail. Improve logging around enable/disable:

- Generate info messages on enable and non-error disable.

- Update error exit message formatting so that it's consistent with the
  non-error messages. Also, prefix ei->msg with the BPF scheduler's name to
  make it clear where the message is coming from.

- Shorten scx_exit_reason() strings for SCX_EXIT_UNREG* for brevity and
  consistency.

v2: Use pr_*() instead of KERN_* consistently. (David)
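
As a rough illustration of the kind of messages described above (the exact
format strings, and names such as scx_ops.name and ei->msg's surrounding
struct, are assumptions based on this description, not the actual diff):

  pr_info("sched_ext: BPF scheduler \"%s\" enabled\n", scx_ops.name);
  pr_info("sched_ext: BPF scheduler \"%s\" disabled (%s)\n",
          scx_ops.name, scx_exit_reason(ei->kind));
  pr_err("sched_ext: BPF scheduler \"%s\" errored (%s): %s\n",
         scx_ops.name, scx_exit_reason(ei->kind), ei->msg);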

Signed-off-by: Tejun Heo <tj@kernel.org>
Suggested-by: Phil Auld <pauld@redhat.com>
Reviewed-by: Phil Auld <pauld@redhat.com>
Acked-by: David Vernet <void@manifault.com>
2024-08-08 13:42:37 -10:00
Tejun Heo
991ef53a48 sched_ext: Make scx_rq_online() also test cpu_active() in addition to SCX_RQ_ONLINE
scx_rq_online() currently only tests SCX_RQ_ONLINE. This isn't fully correct
- e.g. consume_dispatch_q() uses task_run_on_remote_rq() which tests
scx_rq_online() to see whether the current rq can run the task, and, if so,
calls consume_remote_task() to migrate the task to @rq. While the test
itself is done with @rq locked, @rq can be temporarily unlocked by
consume_remote_task() and nothing prevents the rq from going offline (and
SCX_RQ_ONLINE from being cleared) before the migration takes place.

To address the issue, add cpu_active() test to scx_rq_online(). There is a
synchronize_rcu() between cpu_active() being cleared and the rq going
offline, so if an on-going scheduling operation sees cpu_active(), the
associated rq is guaranteed to not go offline until the scheduling operation
is complete.
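
A minimal sketch of the combined test this describes (field and helper
names other than cpu_active() and cpu_of() are assumptions, not the exact
upstream code):

  static bool scx_rq_online(struct rq *rq)
  {
          /*
           * Both conditions must hold: SCX_RQ_ONLINE says the rq has been
           * onlined from SCX's point of view, and cpu_active() guarantees,
           * via the synchronize_rcu() mentioned above, that the rq cannot
           * go offline under an on-going scheduling operation.
           */
          return likely((rq->scx.flags & SCX_RQ_ONLINE) &&
                        cpu_active(cpu_of(rq)));
  }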

Signed-off-by: Tejun Heo <tj@kernel.org>
Fixes: 60c27fb59f ("sched_ext: Implement sched_ext_ops.cpu_online/offline()")
Acked-by: David Vernet <void@manifault.com>
2024-08-08 13:38:19 -10:00
Tejun Heo
72763ea3d4 sched_ext: Fix unsafe list iteration in process_ddsp_deferred_locals()
process_ddsp_deferred_locals() executes deferred direct dispatches to the
local DSQs of remote CPUs. It iterates the tasks on
rq->scx.ddsp_deferred_locals list, removing and calling
dispatch_to_local_dsq() on each. However, the list is protected by the rq
lock that can be dropped by dispatch_to_local_dsq() temporarily, so the list
can be modified during the iteration, which can lead to oopses and other
failures.

Fix it by popping from the head of the list instead of iterating the list.
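
A sketch of the change in iteration pattern, using the generic
<linux/list.h> API (the member name scx.dsq_node and the
dispatch_to_local_dsq() arguments are placeholders, not the actual fields):

  struct task_struct *p, *n;

  /* Before: unsafe - the cached next pointer can go stale while the rq
   * lock is dropped inside dispatch_to_local_dsq(). */
  list_for_each_entry_safe(p, n, &rq->scx.ddsp_deferred_locals, scx.dsq_node)
          dispatch_to_local_dsq(rq, p);

  /* After: safe - re-read the head and pop one task at a time, so
   * concurrent list modification cannot invalidate the iteration. */
  while ((p = list_first_entry_or_null(&rq->scx.ddsp_deferred_locals,
                                       struct task_struct, scx.dsq_node))) {
          list_del_init(&p->scx.dsq_node);
          dispatch_to_local_dsq(rq, p);
  }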

Signed-off-by: Tejun Heo <tj@kernel.org>
Fixes: 5b26f7b920 ("sched_ext: Allow SCX_DSQ_LOCAL_ON for direct dispatches")
Acked-by: David Vernet <void@manifault.com>
2024-08-08 13:38:09 -10:00
Tejun Heo
2c390dda9e sched_ext: Make task_can_run_on_remote_rq() use common task_allowed_on_cpu()
task_can_run_on_remote_rq() is similar to is_cpu_allowed() but there are
subtle differences. It currently open codes all the tests. This is
cumbersome to understand and error-prone in case the intersecting tests need
to be updated.

Factor out the common part - testing whether the task is allowed on the CPU
at all regardless of the CPU state - into task_allowed_on_cpu() and make
both is_cpu_allowed() and SCX's task_can_run_on_remote_rq() use it. As the
code is now linked between the two and each contains only the extra tests
that differ between them, it's less error-prone when the conditions need to
be updated. Also, improve the comment to explain why they are different.
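
A rough sketch of the factoring described above (the function bodies are
illustrative only; the actual checks in each function differ):

  /* Common part: is @p allowed on @cpu at all, regardless of CPU state? */
  static inline bool task_allowed_on_cpu(struct task_struct *p, int cpu)
  {
          return cpumask_test_cpu(cpu, p->cpus_ptr);
  }

  /* Core scheduler: additionally requires the CPU to be active. */
  static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
  {
          return task_allowed_on_cpu(p, cpu) && cpu_active(cpu);
  }

  /* SCX: additionally requires the target rq to be online for SCX. */
  static bool task_can_run_on_remote_rq(struct task_struct *p, struct rq *rq)
  {
          return task_allowed_on_cpu(p, cpu_of(rq)) && scx_rq_online(rq);
  }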

v2: Replace accidental "extern inline" with "static inline" (Peter).

Signed-off-by: Tejun Heo <tj@kernel.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David Vernet <void@manifault.com>
2024-08-06 09:40:11 -10:00
Tejun Heo
9390a923e1 sched_ext: Improve comment on idle_sched_class exception in scx_task_iter_next_locked()
scx_task_iter_next_locked() skips tasks whose sched_class is
idle_sched_class. While it has a short comment explaining why it's testing
the sched_class directly instead of using is_idle_task(), the comment
doesn't sufficiently explain what's going on and why. Improve the comment.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Acked-by: David Vernet <void@manifault.com>
2024-08-06 09:40:11 -10:00
Tejun Heo
a735d43c7f sched_ext: Simplify UP support by enabling sched_class->balance() in UP
On SMP, SCX performs dispatch from sched_class->balance(). As balance() was
not available in UP, it instead called the internal balance function from
put_prev_task_scx() and pick_next_task_scx() to emulate the effect, which is
rather nasty.

Enabling sched_class->balance() on UP shouldn't cause any meaningful
overhead. Enable balance() on UP and drop the ugly workaround.

Signed-off-by: Tejun Heo <tj@kernel.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David Vernet <void@manifault.com>
2024-08-06 09:40:11 -10:00
Tejun Heo
7799140b6a sched_ext: Use update_curr_common() in update_curr_scx()
update_curr_scx() is open coding runtime updates. Use update_curr_common()
instead and avoid unnecessary deviations.

Signed-off-by: Tejun Heo <tj@kernel.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David Vernet <void@manifault.com>
2024-08-06 09:40:11 -10:00
Tejun Heo
cd01449268 sched_ext: Add scx_enabled() test to @start_class promotion in put_prev_task_balance()
SCX needs its balance() invoked even when waking up from a lower priority
sched class (idle) and put_prev_task_balance() thus has the logic to promote
@start_class if it's lower than ext_sched_class. This is only needed when
SCX is enabled. Add scx_enabled() test to avoid unnecessary overhead when
SCX is disabled.
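
A sketch of the guarded promotion (sched_class_above() is assumed here as
the comparison helper; the surrounding code is elided):

  if (scx_enabled() && sched_class_above(&ext_sched_class, start_class))
          start_class = &ext_sched_class;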

Signed-off-by: Tejun Heo <tj@kernel.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David Vernet <void@manifault.com>
2024-08-06 09:40:10 -10:00
Tejun Heo
11cc374f46 sched_ext: Simplify scx_can_stop_tick() invocation in sched_can_stop_tick()
The way sched_can_stop_tick() used scx_can_stop_tick() was rather confusing
and the behavior wasn't ideal when SCX is enabled in partial mode. Simplify
it so that:

- scx_can_stop_tick() can say no if scx_enabled().

- CFS tests rq->cfs.nr_running > 1 instead of rq->nr_running.

This is easier to follow and leads to the correct answer whether SCX is
disabled, enabled in partial mode or all tasks are switched to SCX.

Peter, note that this is a bit different from your suggestion where
sched_can_stop_tick() unconditionally returns scx_can_stop_tick() iff
scx_switched_all(). The problem is that in partial mode, the tick can be
stopped when there is only one SCX task even if the BPF scheduler didn't ask
for it and isn't ready for it.
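
A simplified sketch of the resulting flow (checks for the other sched
classes are elided; this is an illustration of the description above, not
the exact upstream function):

  bool sched_can_stop_tick(struct rq *rq)
  {
          /* rt/dl and other checks elided */

          /* SCX gets a say whenever it is enabled, not only when all
           * tasks are switched to it. */
          if (scx_enabled() && !scx_can_stop_tick(rq))
                  return false;

          /* CFS now counts only its own tasks instead of rq->nr_running. */
          if (rq->cfs.nr_running > 1)
                  return false;

          return true;
  }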

Signed-off-by: Tejun Heo <tj@kernel.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David Vernet <void@manifault.com>
2024-08-06 09:40:10 -10:00
Tejun Heo
0df340ceae Merge branch 'sched/core' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into for-6.12
Pull tip/sched/core to resolve the following four conflicts. While 2-4 are
simple context conflicts, 1 is a bit subtle and easy to resolve incorrectly.

1. 2c8d046d5d ("sched: Add normal_policy()")
   vs.
   faa42d2941 ("sched/fair: Make SCHED_IDLE entity be preempted in strict hierarchy")

The former converts the direct test on p->policy to use the helper
normal_policy(). The latter moves the p->policy test to a different
location. Resolve by converting the test on p->policy in the new location to
use normal_policy().

2. a7a9fc5492 ("sched_ext: Add boilerplate for extensible scheduler class")
   vs.
   a110a81c52 ("sched/deadline: Deferrable dl server")

Both add calls to put_prev_task_idle() and set_next_task_idle(). Simple
context conflict. Resolve by taking changes from both.

3. a7a9fc5492 ("sched_ext: Add boilerplate for extensible scheduler class")
   vs.
   c245910049 ("sched/core: Add clearing of ->dl_server in put_prev_task_balance()")

The former changes the for_each_class() iteration to use
for_each_active_class(). The latter moves the adjacent dl_server handling
code away. Simple context conflict. Resolve by taking changes from both.

4. 60c27fb59f ("sched_ext: Implement sched_ext_ops.cpu_online/offline()")
   vs.
   31b164e2e4 ("sched/smt: Introduce sched_smt_present_inc/dec() helper")
   2f02735412 ("sched/core: Introduce sched_set_rq_on/offline() helper")

The former adds a scx_rq_deactivate() call. The latter two change code
around it. Simple context conflict. Resolve by taking changes from both.

Signed-off-by: Tejun Heo <tj@kernel.org>
2024-08-04 07:36:54 -10:00
Tejun Heo
e99129e5db sched_ext: Allow p->scx.disallow only while loading
p->scx.disallow provides a way for the BPF scheduler to reject certain tasks
from attaching. It's currently allowed for both the load and fork paths;
however, the latter doesn't actually work as p->sched_class is already set
by the time scx_ops_init_task() is called during fork.

This is a convenience feature which is mostly useful from the load path
anyway. Allow it only from the load path.

v2: Trigger scx_ops_error() iff @p->policy == SCHED_EXT to make it a bit
    easier for the BPF scheduler (David).

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: "Zhangqiao (2012 lab)" <zhangqiao22@huawei.com>
Link: http://lkml.kernel.org/r/20240711110720.1285-1-zhangqiao22@huawei.com
Fixes: 7bb6f0810e ("sched_ext: Allow BPF schedulers to disallow specific tasks from joining SCHED_EXT")
Acked-by: David Vernet <void@manifault.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
2024-08-02 08:59:32 -10:00
Tejun Heo
a2f4b16e73 sched_ext: Build fix on !CONFIG_STACKTRACE[_SUPPORT]
scx_dump_task() uses stack_trace_save_tsk() which is only available when
CONFIG_STACKTRACE. Make CONFIG_SCHED_CLASS_EXT select CONFIG_STACKTRACE if
the support is available and skip capturing the stack trace if
!CONFIG_STACKTRACE.
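
The runtime side presumably amounts to guarding the capture, roughly as
below (the buffer size, skip count and the use of stack_trace_print() are
made up for illustration; the real dump goes through scx's dump buffer):

  #ifdef CONFIG_STACKTRACE
          unsigned long bt[64];
          unsigned int len;

          len = stack_trace_save_tsk(p, bt, ARRAY_SIZE(bt), 1);
          stack_trace_print(bt, len, 0);
  #endif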

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202407161844.reewQQrR-lkp@intel.com/
Acked-by: David Vernet <void@manifault.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
2024-08-01 07:08:01 -10:00
David Vernet
958b189184 scx/selftests: Verify we can call create_dsq from prog_run
We already have some testcases verifying that we can call
BPF_PROG_TYPE_SYSCALL progs and invoke scx_bpf_exit(). Let's extend that to
also call scx_bpf_create_dsq() so we get coverage for that as well.

Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
2024-07-31 07:47:34 -10:00
David Vernet
298dec19bd scx: Allow calling sleepable kfuncs from BPF_PROG_TYPE_SYSCALL
We currently only allow calling sleepable scx kfuncs (i.e.
scx_bpf_create_dsq()) from BPF_PROG_TYPE_STRUCT_OPS progs. The idea here
was that we'd never have to call scx_bpf_create_dsq() outside of a
sched_ext struct_ops callback, but that might not actually be true. For
example, a scheduler could do something like the following:

1. Open and load (not yet attach) a scheduler skel

2. Synchronously call into a BPF_PROG_TYPE_SYSCALL prog from user space.
   For example, to initialize an LLC domain, or some other global,
   read-only state.

3. Attach the skel, which actually enables the scheduler

The advantage of doing this is that it can preclude having to write pretty
ugly boilerplate, such as initializing a read-only, statically sized u64[]
array which the kernel consumes literally once at init time just to create
struct bpf_cpumask objects which are actually queried at runtime.

Doing the above is already possible given that we can invoke core BPF
kfuncs, such as bpf_cpumask_create(), from BPF_PROG_TYPE_SYSCALL progs. We
already allow many scx kfuncs to be called from BPF_PROG_TYPE_SYSCALL progs
(e.g. scx_bpf_kick_cpu()). Let's allow the sleepable kfuncs as well.
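
A sketch of step 2 as BPF code: a BPF_PROG_TYPE_SYSCALL program that user
space runs once (e.g. via bpf_prog_test_run()) after load but before
attach, creating a DSQ ahead of time. The kfunc declaration and the DSQ id
are assumptions for illustration:

  #include <vmlinux.h>
  #include <bpf/bpf_helpers.h>

  s32 scx_bpf_create_dsq(u64 dsq_id, s32 node) __ksym;

  SEC("syscall")
  int setup_dsqs(void *ctx)
  {
          /* e.g. one DSQ per LLC; -1 == NUMA_NO_NODE */
          return scx_bpf_create_dsq(/* LLC0_DSQ */ 100, -1);
  }

  char _license[] SEC("license") = "GPL";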

Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
2024-07-31 07:45:28 -10:00
Tejun Heo
c8faf11cd1 Linux 6.11-rc1

Merge tag 'v6.11-rc1' into for-6.12

Linux 6.11-rc1
2024-07-30 09:30:11 -10:00
Peter Zijlstra
cea5a3472a sched/fair: Cleanup fair_server
The throttle interaction made my brain hurt, make it consistently
about 0 transitions of h_nr_running.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
2024-07-29 12:22:37 +02:00
Peter Zijlstra
5f6bd380c7 sched/rt: Remove default bandwidth control
Now that fair_server exists, we no longer need RT bandwidth control
unless RT_GROUP_SCHED.

Enable fair_server with parameters equivalent to RT throttling.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Signed-off-by: Daniel Bristot de Oliveira <bristot@kernel.org>
Signed-off-by: "Vineeth Pillai (Google)" <vineeth@bitbyteword.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Juri Lelli <juri.lelli@redhat.com>
Link: https://lore.kernel.org/r/14d562db55df5c3c780d91940743acb166895ef7.1716811044.git.bristot@kernel.org
2024-07-29 12:22:37 +02:00
Joel Fernandes (Google)
c8a85394cf sched/core: Fix picking of tasks for core scheduling with DL server
* Use simple CFS pick_task for DL pick_task

  DL server's pick_task calls CFS's pick_next_task_fair(); this is wrong
  because core scheduling's pick_task only calls CFS's pick_task() for
  evaluation / checking of the CFS task (comparing across CPUs), not for
  actually affirmatively picking the next task. This causes RB tree
  corruption issues in CFS that were found by syzbot.

* Make pick_task_fair clear DL server

  A DL task pick might set ->dl_server, but it is possible the task will
  never run (say the other HT has a stop task). If the CFS task is later
  picked directly (say without the DL server), ->dl_server will still be
  set. So clear it in pick_task_fair().

This fixes the KASAN issue reported by syzbot in set_next_entity().

(DL refactoring suggestions by Vineeth Pillai).

Reported-by: Suleiman Souhlal <suleiman@google.com>
Signed-off-by: "Joel Fernandes (Google)" <joel@joelfernandes.org>
Signed-off-by: Daniel Bristot de Oliveira <bristot@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vineeth Pillai <vineeth@bitbyteword.org>
Tested-by: Juri Lelli <juri.lelli@redhat.com>
Link: https://lore.kernel.org/r/b10489ab1f03d23e08e6097acea47442e7d6466f.1716811044.git.bristot@kernel.org
2024-07-29 12:22:37 +02:00
Joel Fernandes (Google)
4b26cfdd39 sched/core: Fix priority checking for DL server picks
In core scheduling, a DL server pick (which is a CFS task) should be
given higher priority than tasks in other classes.

Not doing so causes CFS starvation. A kselftest is added later to
demonstrate this.  A CFS task that is competing with RT tasks can
be completely starved without this and the DL server's boosting
completely ignored.

Fix these problems.

Reported-by: Suleiman Souhlal <suleiman@google.com>
Signed-off-by: "Joel Fernandes (Google)" <joel@joelfernandes.org>
Signed-off-by: Daniel Bristot de Oliveira <bristot@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vineeth Pillai <vineeth@bitbyteword.org>
Tested-by: Juri Lelli <juri.lelli@redhat.com>
Link: https://lore.kernel.org/r/48b78521d86f3b33c24994d843c1aad6b987dda9.1716811044.git.bristot@kernel.org
2024-07-29 12:22:36 +02:00
Daniel Bristot de Oliveira
d741f297bc sched/fair: Fair server interface
Add an interface for fair server setup on debugfs.

Each CPU has two files under /debug/sched/fair_server/cpu{ID}:

 - runtime: set runtime in ns
 - period:  set period in ns

This then leaves /proc/sys/kernel/sched_rt_{period,runtime}_us to set
bounds on admission control.

The interface also adds the server to the dl bandwidth accounting.
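
A hedged usage example, assuming debugfs is mounted at /sys/kernel/debug:
give CPU 0's fair server a 50 ms runtime in every 1 s period.

  #include <stdio.h>

  static int write_val(const char *path, const char *val)
  {
          FILE *f = fopen(path, "w");

          if (!f)
                  return -1;
          fputs(val, f);
          return fclose(f);
  }

  int main(void)
  {
          /* values are in ns, as described above */
          write_val("/sys/kernel/debug/sched/fair_server/cpu0/runtime",
                    "50000000");
          write_val("/sys/kernel/debug/sched/fair_server/cpu0/period",
                    "1000000000");
          return 0;
  }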

Signed-off-by: Daniel Bristot de Oliveira <bristot@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Juri Lelli <juri.lelli@redhat.com>
Link: https://lore.kernel.org/r/a9ef9fc69bcedb44bddc9bc34f2b313296052819.1716811044.git.bristot@kernel.org
2024-07-29 12:22:36 +02:00
Daniel Bristot de Oliveira
a110a81c52 sched/deadline: Deferrable dl server
Among the motivations for the DL servers is the real-time throttling
mechanism. This mechanism works by throttling the rt_rq after
running for a long period without leaving space for fair tasks.

The base dl server avoids this problem by boosting fair tasks instead
of throttling the rt_rq. The point is that it boosts without waiting
for potential starvation, causing some non-intuitive cases.

For example, an IRQ dispatches two tasks on an idle system, a fair
and an RT. The DL server will be activated, running the fair task
before the RT one. This problem can be avoided by deferring the
dl server activation.

By setting the defer option, the dl_server will dispatch a SCHED_DEADLINE
reservation with replenished runtime, but throttled.

The dl_timer will be set for the defer time at (period - runtime) ns from
the start time, thus boosting the fair rq at the defer time. For example,
with runtime = 50 ms and period = 1 s, the timer fires 950 ms after the
server is started.

If the fair scheduler has the opportunity to run while waiting for the
defer time, the dl server runtime will be consumed. If the runtime is
completely consumed before the defer time, the server will be replenished
while still in a throttled state. Then, the dl_timer will be reset to the
new defer time.

If the fair server reaches the defer time without consuming its runtime,
the server will start running, following CBS rules (thus without breaking
SCHED_DEADLINE). The server will then keep running (without deferring)
until its fair tasks are able to execute under the regular fair scheduler
again (the end of the starvation).

Signed-off-by: Daniel Bristot de Oliveira <bristot@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Juri Lelli <juri.lelli@redhat.com>
Link: https://lore.kernel.org/r/dd175943c72533cd9f0b87767c6499204879cc38.1716811044.git.bristot@kernel.org
2024-07-29 12:22:36 +02:00
Peter Zijlstra
557a6bfc66 sched/fair: Add trivial fair server
Use deadline servers to service fair tasks.

This patch adds a fair_server deadline entity which acts as a container
for fair entities and can be used to fix starvation when higher priority
(wrt fair) tasks are monopolizing CPU(s).

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Daniel Bristot de Oliveira <bristot@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Juri Lelli <juri.lelli@redhat.com>
Link: https://lore.kernel.org/r/b6b0bcefaf25391bcf5b6ecdb9f1218de402d42e.1716811044.git.bristot@kernel.org
2024-07-29 12:22:36 +02:00
Youssef Esmat
a741b82423 sched/core: Clear prev->dl_server in CFS pick fast path
In case the previous pick was a DL server pick, ->dl_server might be
set. Clear it in the fast path as well.

Fixes: 63ba8422f8 ("sched/deadline: Introduce deadline servers")
Signed-off-by: Youssef Esmat <youssefesmat@google.com>
Signed-off-by: Daniel Bristot de Oliveira <bristot@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Juri Lelli <juri.lelli@redhat.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/7f7381ccba09efcb4a1c1ff808ed58385eccc222.1716811044.git.bristot@kernel.org
2024-07-29 12:22:35 +02:00
Joel Fernandes (Google)
c245910049 sched/core: Add clearing of ->dl_server in put_prev_task_balance()
Paths using put_prev_task_balance() need to do a pick shortly
after. Make sure they also clear the ->dl_server on prev as a
part of that.

Fixes: 63ba8422f8 ("sched/deadline: Introduce deadline servers")
Signed-off-by: "Joel Fernandes (Google)" <joel@joelfernandes.org>
Signed-off-by: Daniel Bristot de Oliveira <bristot@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Juri Lelli <juri.lelli@redhat.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/d184d554434bedbad0581cb34656582d78655150.1716811044.git.bristot@kernel.org
2024-07-29 12:22:35 +02:00
Daniel Bristot de Oliveira
f23c042ce3 sched/deadline: Comment sched_dl_entity::dl_server variable
Add an explanation for the newly added variable.

Fixes: 63ba8422f8 ("sched/deadline: Introduce deadline servers")
Signed-off-by: Daniel Bristot de Oliveira <bristot@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Juri Lelli <juri.lelli@redhat.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/147f7aa8cb8fd925f36aa8059af6a35aad08b45a.1716811044.git.bristot@kernel.org
2024-07-29 12:22:35 +02:00
Tianchen Ding
faa42d2941 sched/fair: Make SCHED_IDLE entity be preempted in strict hierarchy
Consider the following cgroup:

                       root
                        |
             ------------------------
             |                      |
       normal_cgroup            idle_cgroup
             |                      |
   SCHED_IDLE task_A           SCHED_NORMAL task_B

According to the cgroup hierarchy, A should preempt B. But current
check_preempt_wakeup_fair() treats cgroup se and task separately, so B
will preempt A unexpectedly.
Unify the wakeup logic to check {c,p}se_is_idle only. This makes SCHED_IDLE
of a task a relative policy that is effective only within its own cgroup,
similar to the behavior of NICE.

Also fix se_is_idle() definition when !CONFIG_FAIR_GROUP_SCHED.

Fixes: 304000390f ("sched: Cgroup SCHED_IDLE support")
Signed-off-by: Tianchen Ding <dtcccc@linux.alibaba.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Josh Don <joshdon@google.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lkml.kernel.org/r/20240626023505.1332596-1-dtcccc@linux.alibaba.com
2024-07-29 12:22:35 +02:00
Phil Auld
a58501fb83 sched: remove HZ_BW feature hedge
As a hedge against unexpected user issues, commit 88c56cfeae
("sched/fair: Block nohz tick_stop when cfs bandwidth in use")
included a scheduler feature to disable the new functionality.
It's been a few releases (v6.6) and no screams, so remove it.

Signed-off-by: Phil Auld <pauld@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Link: https://lore.kernel.org/r/20240515133705.3632915-1-pauld@redhat.com
2024-07-29 12:22:34 +02:00
Chuyi Zhou
2c2d962469 sched/fair: Remove cfs_rq::nr_spread_over and cfs_rq::exec_clock
nr_spread_over tracked the number of instances where the difference
between a scheduling entity's virtual runtime and the minimum virtual
runtime in the runqueue exceeded three times the scheduler latency,
indicating significant disparity in task scheduling. Its usage was removed
by commit 5e963f2bd ("sched/fair: Commit to EEVDF").

cfs_rq->exec_clock was used to account for time spent executing tasks. Its
usage was removed by commit 5d69eca542 ("sched: Unify runtime accounting
across classes").

cfs_rq::nr_spread_over and cfs_rq::exec_clock are no longer used with
EEVDF. Remove them from struct cfs_rq.

Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
Reviewed-by: K Prateek Nayak <kprateek.nayak@amd.com>
Acked-by: Vishal Chourasia <vishalc@linux.ibm.com>
Link: https://lore.kernel.org/r/20240717143342.593262-1-zhouchuyi@bytedance.com
2024-07-29 12:22:34 +02:00
Peilin He
0ec8d5aed4 sched/core: Add WARN_ON_ONCE() to check overflow for migrate_disable()
Background
==========
When repeated migrate_disable() calls are made without the corresponding
migrate_enable() calls, there is a risk of 'migration_disabled' overflowing,
because 'migration_disabled' is an unsigned short whose max value is 65535.

In a PREEMPT_RT kernel, if 'migration_disabled' overflows, it may make
migrate_disable() ineffective within local_lock_irqsave(). This is because,
during the scheduling procedure, the value of 'migration_disabled' is
checked, which can trigger CPU migration. Consequently, the count of
'rcu_read_lock_nesting' may leak because local_lock_irqsave() and
local_unlock_irqrestore() end up running on different CPUs.
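
A tiny userspace illustration of the wraparound (in the kernel the field is
simply an unsigned short counter in struct task_struct):

  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
          unsigned short migration_disabled = USHRT_MAX;  /* 65535 */

          migration_disabled++;   /* wraps to 0: migration looks enabled again */
          printf("%u\n", migration_disabled);             /* prints 0 */
          return 0;
  }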

Usecase
========
For example, when I developed a driver, I encountered a warning like
"WARNING: CPU: 4 PID: 260 at kernel/rcu/tree_plugin.h:315
rcu_note_context_switch+0xa8/0x4e8". It took me half a month to locate the
issue. Ultimately, I discovered that the lack of an overflow detection
mechanism in migrate_disable() was the root cause, leading to a significant
amount of time spent on problem localization.

If an overflow detection mechanism had been present in migrate_disable(),
the root cause could have been identified very quickly and easily.

Effect
======
Using WARN_ON_ONCE() to check whether 'migration_disabled' has overflowed
helps developers identify the issue quickly.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Peilin He<he.peilin@zte.com.cn>
Signed-off-by: xu xin <xu.xin16@zte.com.cn>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Yunkai Zhang <zhang.yunkai@zte.com.cn>
Reviewed-by: Qiang Tu <tu.qiang35@zte.com.cn>
Reviewed-by: Kun Jiang <jiang.kun2@zte.com.cn>
Reviewed-by: Fan Yu <fan.yu9@zte.com.cn>
Link: https://lkml.kernel.org/r/20240716104244764N2jD8gnBpnsLjCDnQGQ8c@zte.com.cn
2024-07-29 12:22:34 +02:00
Zhang Qiao
c40dd90ac0 sched: Initialize the vruntime of a new task when it is first enqueued
When creating a new task, we initialize the vruntime of the new task in
sched_cgroup_fork(). However, this is done too early and may not be
accurate, because it uses the current CPU to initialize the vruntime while
the new task actually runs on the CPU assigned at wake_up_new_task().

To optimize this case, pass the ENQUEUE_INITIAL flag to activate_task() in
wake_up_new_task(); this way, when place_entity() is called in
enqueue_entity(), the vruntime of the new task will be initialized there.

In addition, place_entity() in task_fork_fair() was introduced for two
reasons:
1. Previously, __enqueue_entity() was in task_new_fair(); in order to
provide a vruntime for enqueueing the new task, the assignment
"se->vruntime = cfs_rq->min_vruntime" was introduced by commit e9acbff648
("sched: introduce se->vruntime"). This was the initial form of
place_entity().

2. Commit 4d78e7b656 ("sched: new task placement for vruntime") added the
child_runs_first task placement feature, which is based on vruntime and
therefore also requires the new task's vruntime value.

After removing child_runs_first and the enqueue_entity() call from
task_fork_fair(), this place_entity() no longer makes sense, so remove it
as well.

Signed-off-by: Zhang Qiao <zhangqiao22@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20240627133359.1370598-1-zhangqiao22@huawei.com
2024-07-29 12:22:34 +02:00
Yang Yingliang
fe7a11c78d sched/core: Fix unbalance set_rq_online/offline() in sched_cpu_deactivate()
If cpuset_cpu_inactive() fails, set_rq_online() needs to be called to roll back.

Fixes: 120455c514 ("sched: Fix hotplug vs CPU bandwidth control")
Cc: stable@kernel.org
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20240703031610.587047-5-yangyingliang@huaweicloud.com
2024-07-29 12:22:33 +02:00
Yang Yingliang
2f02735412 sched/core: Introduce sched_set_rq_on/offline() helper
Introduce the sched_set_rq_on/offline() helpers, so they can be called
simply from both the normal and error paths. No functional change.

Cc: stable@kernel.org
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20240703031610.587047-4-yangyingliang@huaweicloud.com
2024-07-29 12:22:32 +02:00
Yang Yingliang
e22f910a26 sched/smt: Fix unbalance sched_smt_present dec/inc
I got the following warn report while doing stress test:

jump label: negative count!
WARNING: CPU: 3 PID: 38 at kernel/jump_label.c:263 static_key_slow_try_dec+0x9d/0xb0
Call Trace:
 <TASK>
 __static_key_slow_dec_cpuslocked+0x16/0x70
 sched_cpu_deactivate+0x26e/0x2a0
 cpuhp_invoke_callback+0x3ad/0x10d0
 cpuhp_thread_fun+0x3f5/0x680
 smpboot_thread_fn+0x56d/0x8d0
 kthread+0x309/0x400
 ret_from_fork+0x41/0x70
 ret_from_fork_asm+0x1b/0x30
 </TASK>

When cpuset_cpu_inactive() fails in sched_cpu_deactivate(), the CPU
offline fails, but sched_smt_present has already been decremented by that
point, which leads to an unbalanced dec/inc. Fix it by incrementing
sched_smt_present in the error path.

Fixes: c5511d03ec ("sched/smt: Make sched_smt_present track topology")
Cc: stable@kernel.org
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Chen Yu <yu.c.chen@intel.com>
Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
Link: https://lore.kernel.org/r/20240703031610.587047-3-yangyingliang@huaweicloud.com
2024-07-29 12:22:32 +02:00
Yang Yingliang
31b164e2e4 sched/smt: Introduce sched_smt_present_inc/dec() helper
Introduce the sched_smt_present_inc/dec() helpers, so they can be called
simply from both the normal and error paths. No functional change.

Cc: stable@kernel.org
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20240703031610.587047-2-yangyingliang@huaweicloud.com
2024-07-29 12:22:32 +02:00
Zheng Zucheng
77baa5bafc sched/cputime: Fix mul_u64_u64_div_u64() precision for cputime
In extreme test scenarios, the 14th field (utime) in /proc/xx/stat can be
greater than sum_exec_runtime, e.g.
utime = 18446744073709518790 ns, rtime = 135989749728000 ns

In the cputime_adjust() process, stime becomes greater than rtime due to a
mul_u64_u64_div_u64() precision problem.
Before calling mul_u64_u64_div_u64(),
stime = 175136586720000, rtime = 135989749728000, utime = 1416780000.
After calling mul_u64_u64_div_u64(),
stime = 135989949653530

An unsigned wraparound occurs because rtime is less than stime:
utime = rtime - stime = 135989749728000 - 135989949653530
		      = -199925530
		      = (u64)18446744073709518790

Trigger conditions:
  1) The user task runs in kernel mode most of the time
  2) ARM64 architecture
  3) TICK_CPU_ACCOUNTING=y
     CONFIG_VIRT_CPU_ACCOUNTING_NATIVE is not set

Fix the mul_u64_u64_div_u64() precision loss by resetting stime to rtime
when it overshoots.

Fixes: 3dc167ba57 ("sched/cputime: Improve cputime_adjust()")
Signed-off-by: Zheng Zucheng <zhengzucheng@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/20240726023235.217771-1-zhengzucheng@huawei.com
2024-07-29 12:22:32 +02:00
Linus Torvalds
8400291e28 Linux 6.11-rc1 2024-07-28 14:19:55 -07:00
Linus Torvalds
a0c04bd55a Kbuild fixes for v6.11
- Fix RPM package build error caused by an incorrect locale setup
 
  - Mark modules.weakdep as ghost in RPM package
 
  - Fix the odd combination of -S and -c in stack protector scripts, which
    is an error with the latest Clang

Merge tag 'kbuild-fixes-v6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild

Pull Kbuild fixes from Masahiro Yamada:

 - Fix RPM package build error caused by an incorrect locale setup

 - Mark modules.weakdep as ghost in RPM package

 - Fix the odd combination of -S and -c in stack protector scripts,
   which is an error with the latest Clang

* tag 'kbuild-fixes-v6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
  kbuild: Fix '-S -c' in x86 stack protector scripts
  kbuild: rpm-pkg: ghost modules.weakdep file
  kbuild: rpm-pkg: Fix C locale setup
2024-07-28 14:02:48 -07:00
Linus Torvalds
017fa3e891 minmax: simplify and clarify min_t()/max_t() implementation
This simplifies the min_t() and max_t() macros by no longer making them
work in the context of a C constant expression.

That means that you can no longer use them for static initializers or
for array sizes in type definitions, but there were only a couple of
such uses, and all of them were converted (famous last words) to use
MIN_T/MAX_T instead.
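
A userspace sketch of the distinction, with simplified stand-ins for the
kernel macros (these are not the real kernel definitions):

  #include <stdio.h>

  /* constant-expression variant: usable for array sizes and static
   * initializers, but evaluates its arguments more than once */
  #define MIN_T(type, a, b)  ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

  /* runtime variant: single evaluation, but not a C constant expression */
  #define min_t(type, a, b)  ({ type _a = (a), _b = (b); _a < _b ? _a : _b; })

  static char buf[MIN_T(int, 64, 128)];   /* needs a constant expression */

  int main(void)
  {
          int a = 5, b = 9;

          printf("%d %zu\n", min_t(int, a, b), sizeof(buf));   /* 5 64 */
          return 0;
  }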

Cc: David Laight <David.Laight@aculab.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-07-28 13:50:01 -07:00
Linus Torvalds
4477b39c32 minmax: add a few more MIN_T/MAX_T users
Commit 3a7e02c040 ("minmax: avoid overly complicated constant
expressions in VM code") added the simpler MIN_T/MAX_T macros in order
to avoid some excessive expansion from the rather complicated regular
min/max macros.

The complexity of those macros stems from two issues:

 (a) trying to use them in situations that require a C constant
     expression (in static initializers and for array sizes)

 (b) the type sanity checking

and MIN_T/MAX_T avoids both of these issues.

Now, in the whole (long) discussion about all this, it was pointed out
that the whole type sanity checking is entirely unnecessary for
min_t/max_t which get a fixed type that the comparison is done in.

But that still leaves min_t/max_t unnecessarily complicated due to
worries about the C constant expression case.

However, it turns out that there really aren't very many cases that use
min_t/max_t for this, and we can just force-convert those.

This does exactly that.

Which in turn will then allow for much simpler implementations of
min_t()/max_t().  All the usual "macros in all upper case will evaluate
the arguments multiple times" rules apply.

We should do all the same things for the regular min/max() vs MIN/MAX()
cases, but that has the added complexity of various drivers defining
their own local versions of MIN/MAX, so that needs another level of
fixes first.

Link: https://lore.kernel.org/all/b47fad1d0cf8449886ad148f8c013dae@AcuMS.aculab.com/
Cc: David Laight <David.Laight@aculab.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-07-28 13:41:14 -07:00
Linus Torvalds
7e2d0ba732 This pull request contains updates (actually, just fixes) for UBI and UBIFS:
- Many fixes for power-cut issues by Zhihao Cheng
 - Another ubiblock error path fix
 - ubiblock section mismatch fix
 - Misc fixes all over the place

Merge tag 'ubifs-for-linus-6.11-rc1-take2' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/ubifs

Pull UBI and UBIFS updates from Richard Weinberger:

 - Many fixes for power-cut issues by Zhihao Cheng

 - Another ubiblock error path fix

 - ubiblock section mismatch fix

 - Misc fixes all over the place

* tag 'ubifs-for-linus-6.11-rc1-take2' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/ubifs:
  ubi: Fix ubi_init() ubiblock_exit() section mismatch
  ubifs: add check for crypto_shash_tfm_digest
  ubifs: Fix inconsistent inode size when powercut happens during appendant writing
  ubi: block: fix null-pointer-dereference in ubiblock_create()
  ubifs: fix kernel-doc warnings
  ubifs: correct UBIFS_DFS_DIR_LEN macro definition and improve code clarity
  mtd: ubi: Restore missing cleanup on ubi_init() failure path
  ubifs: dbg_orphan_check: Fix missed key type checking
  ubifs: Fix unattached inode when powercut happens in creating
  ubifs: Fix space leak when powercut happens in linking tmpfile
  ubifs: Move ui->data initialization after initializing security
  ubifs: Fix adding orphan entry twice for the same inode
  ubifs: Remove insert_dead_orphan from replaying orphan process
  Revert "ubifs: ubifs_symlink: Fix memleak of inode->i_link in error path"
  ubifs: Don't add xattr inode into orphan area
  ubifs: Fix unattached xattr inode if powercut happens after deleting
  mtd: ubi: avoid expensive do_div() on 32-bit machines
  mtd: ubi: make ubi_class constant
  ubi: eba: properly rollback inside self_check_eba
2024-07-28 11:51:51 -07:00
Nathan Chancellor
3415b10a03 kbuild: Fix '-S -c' in x86 stack protector scripts
After a recent change in clang to stop consuming all instances of '-S'
and '-c' [1], the stack protector scripts break due to the kernel's use
of -Werror=unused-command-line-argument to catch cases where flags are
not being properly consumed by the compiler driver:

  $ echo | clang -o - -x c - -S -c -Werror=unused-command-line-argument
  clang: error: argument unused during compilation: '-c' [-Werror,-Wunused-command-line-argument]

This results in CONFIG_STACKPROTECTOR getting disabled because
CONFIG_CC_HAS_SANE_STACKPROTECTOR is no longer set.

'-c' and '-S' both instruct the compiler to stop at different stages of
the pipeline ('-S' after compiling, '-c' after assembling), so having
them present together in the same command makes little sense. In this
case, the test wants to stop before assembling because it is looking at
the textual assembly output of the compiler for either '%fs' or '%gs',
so remove '-c' from the list of arguments to resolve the error.

All versions of GCC continue to work after this change, along with
versions of clang that do or do not contain the change mentioned above.

Cc: stable@vger.kernel.org
Fixes: 4f7fd4d7a7 ("[PATCH] Add the -fstack-protector option to the CFLAGS")
Fixes: 60a5317ff0 ("x86: implement x86_32 stack protector")
Link: 6461e53781 [1]
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2024-07-29 03:47:00 +09:00
Richard Weinberger
92a286e902 ubi: Fix ubi_init() ubiblock_exit() section mismatch
Since ubiblock_exit() is now called from an init function,
the __exit section no longer makes sense.

Cc: Ben Hutchings <bwh@kernel.org>
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202407131403.wZJpd8n2-lkp@intel.com/
Signed-off-by: Richard Weinberger <richard@nod.at>
Reviewed-by: Zhihao Cheng <chengzhihao1@huawei.com>
2024-07-28 20:08:25 +02:00
Linus Torvalds
e172f1e906 turbostat release 2024.07.26
Enable turbostat extensions to add both perf and PMT
 (Intel Platform Monitoring Technology) counters via the cmdline.
 
 Demonstrate PMT access with built-in support for Meteor Lake's Die%c6 counter.

Merge tag 'v6.11-merge' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux

Pull turbostat updates from Len Brown:

 - Enable turbostat extensions to add both perf and PMT (Intel
   Platform Monitoring Technology) counters via the cmdline

 - Demonstrate PMT access with built-in support for Meteor Lake's
   Die C6 counter

* tag 'v6.11-merge' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux:
  tools/power turbostat: version 2024.07.26
  tools/power turbostat: Include umask=%x in perf counter's config
  tools/power turbostat: Document PMT in turbostat.8
  tools/power turbostat: Add MTL's PMT DC6 builtin counter
  tools/power turbostat: Add early support for PMT counters
  tools/power turbostat: Add selftests for added perf counters
  tools/power turbostat: Add selftests for SMI, APERF and MPERF counters
  tools/power turbostat: Move verbose counter messages to level 2
  tools/power turbostat: Move debug prints from stdout to stderr
  tools/power turbostat: Fix typo in turbostat.8
  tools/power turbostat: Add perf added counter example to turbostat.8
  tools/power turbostat: Fix formatting in turbostat.8
  tools/power turbostat: Extend --add option with perf counters
  tools/power turbostat: Group SMI counter with APERF and MPERF
  tools/power turbostat: Add ZERO_ARRAY for zero initializing builtin array
  tools/power turbostat: Replace enum rapl_source and cstate_source with counter_source
  tools/power turbostat: Remove anonymous union from rapl_counter_info_t
  tools/power/turbostat: Switch to new Intel CPU model defines
2024-07-28 10:52:15 -07:00
Linus Torvalds
e62f81bbd2 CXL for v6.11 merge window
New Changes:
 - Refactor to a common struct for DRAM and general media CXL events
 - Add abstract distance calculation support for CXL
 - Add CXL maturity map documentation to detail current state of CXL enabling
 - Add warning on mixed CXL VH and RCH/RCD hierarchy to inform unsupported config
 - Replace ENXIO with EBUSY for inject poison limit reached via debugfs
 - Replace ENXIO with EBUSY for inject poison cxl-test support
 - XOR math fixup for DPA to SPA translation. Current math works for MODULO arithmetic
   where HPA==SPA, however not for XOR decode.
 - Move pci config read in cxl_dvsec_rr_decode() to avoid unnecessary access
 
 Fixes:
 - Add a fix to address race condition in CXL memory hotplug notifier
 - Add missing MODULE_DESCRIPTION() for CXL modules
 - Fix incorrect vendor debug UUID define

Merge tag 'cxl-for-6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl

Pull CXL updates from Dave Jiang:
 "Core:

   - A CXL maturity map has been added to the documentation to detail
     the current state of CXL enabling.

     It provides the status of the current state of various CXL features
     to inform current and future contributors of where things are and
     which areas need contribution.

   - A notifier handler has been added in order for a newly created CXL
     memory region to trigger the abstract distance metrics calculation.

     This should bring parity for CXL memory to the same level vs
     hotplugged DRAM for NUMA abstract distance calculation. The
     abstract distance reflects relative performance used for memory
     tiering handling.

   - An addition for XOR math has been added to address the CXL DPA to
     SPA translation.

     CXL address translation did not support address interleave math
     with XOR prior to this change.

  Fixes:

   - Fix to address race condition in the CXL memory hotplug notifier

   - Add missing MODULE_DESCRIPTION() for CXL modules

   - Fix incorrect vendor debug UUID define

  Misc:

   - A warning has been added to inform users of an unsupported
     configuration when mixing CXL VH and RCH/RCD hierarchies

   - The ENXIO error code has been replaced with EBUSY for inject poison
     limit reached via debugfs and cxl-test support

   - Moving the PCI config read in cxl_dvsec_rr_decode() to avoid
     unnecessary PCI config reads

   - A refactor to a common struct for DRAM and general media CXL
     events"

* tag 'cxl-for-6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl:
  cxl/core/pci: Move reading of control register to immediately before usage
  cxl: Remove defunct code calculating host bridge target positions
  cxl/region: Verify target positions using the ordered target list
  cxl: Restore XOR'd position bits during address translation
  cxl/core: Fold cxl_trace_hpa() into cxl_dpa_to_hpa()
  cxl/test: Replace ENXIO with EBUSY for inject poison limit reached
  cxl/memdev: Replace ENXIO with EBUSY for inject poison limit reached
  cxl/acpi: Warn on mixed CXL VH and RCH/RCD Hierarchy
  cxl/core: Fix incorrect vendor debug UUID define
  Documentation: CXL Maturity Map
  cxl/region: Simplify cxl_region_nid()
  cxl/region: Support to calculate memory tier abstract distance
  cxl/region: Fix a race condition in memory hotplug notifier
  cxl: add missing MODULE_DESCRIPTION() macros
  cxl/events: Use a common struct for DRAM and General Media events
2024-07-28 09:33:28 -07:00
Linus Torvalds
7b5d481889 Two small fixes to silence the compiler and static analyzer tools from
Ben Dooks and Jeff Johnson.

Merge tag 'unicode-next-6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/krisman/unicode

Pull unicode update from Gabriel Krisman Bertazi:
 "Two small fixes to silence the compiler and static analyzers tools
  from Ben Dooks and Jeff Johnson"

* tag 'unicode-next-6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/krisman/unicode:
  unicode: add MODULE_DESCRIPTION() macros
  unicode: make utf8 test count static
2024-07-28 09:14:11 -07:00
Jose Ignacio Tornos Martinez
d01c14074b kbuild: rpm-pkg: ghost modules.weakdep file
In the same way as for other similar files, mark the new file generated by
depmod for configured weak module dependencies, modules.weakdep, as ghost,
so that even though it is not included in the package, ownership of it is
still claimed.

Signed-off-by: Jose Ignacio Tornos Martinez <jtornosm@redhat.com>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2024-07-28 17:07:03 +09:00
Linus Torvalds
5437f30d34 six smb3 client fixes

Merge tag '6.11-rc-smb-client-fixes-part2' of git://git.samba.org/sfrench/cifs-2.6

Pull more smb client updates from Steve French:

 - fix for potential null pointer use in init cifs

 - additional dynamic trace points to improve debugging of some common
   scenarios

 - two SMB1 fixes (one addressing reconnect with POSIX extensions, one a
   mount parsing error)

* tag '6.11-rc-smb-client-fixes-part2' of git://git.samba.org/sfrench/cifs-2.6:
  smb3: add dynamic trace point for session setup key expired failures
  smb3: add four dynamic tracepoints for copy_file_range and reflink
  smb3: add dynamic tracepoint for reflink errors
  cifs: mount with "unix" mount option for SMB1 incorrectly handled
  cifs: fix reconnect with SMB1 UNIX Extensions
  cifs: fix potential null pointer use in destroy_workqueue in init_cifs error path
2024-07-27 20:08:07 -07:00
Linus Torvalds
6342649c33 block-6.11-20240726

Merge tag 'block-6.11-20240726' of git://git.kernel.dk/linux

Pull block fixes from Jens Axboe:

 - NVMe pull request via Keith:
     - Fix request without payloads cleanup  (Leon)
     - Use new protection information format (Francis)
     - Improved debug message for lost pci link (Bart)
     - Another apst quirk (Wang)
     - Use appropriate sysfs api for printing chars (Markus)

 - ublk async device deletion fix (Ming)

 - drbd kerneldoc fixups (Simon)

 - Fix deadlock between sd removal and release (Yang)

* tag 'block-6.11-20240726' of git://git.kernel.dk/linux:
  nvme-pci: add missing condition check for existence of mapped data
  ublk: fix UBLK_CMD_DEL_DEV_ASYNC handling
  block: fix deadlock between sd_remove & sd_release
  drbd: Add peer_device to Kernel doc
  nvme-core: choose PIF from QPIF if QPIFS supports and PIF is QTYPE
  nvme-pci: Fix the instructions for disabling power management
  nvme: remove redundant bdev local variable
  nvme-fabrics: Use seq_putc() in __nvmf_concat_opt_tokens()
  nvme/pci: Add APST quirk for Lenovo N60z laptop
2024-07-27 15:28:53 -07:00
Linus Torvalds
8c93074743 io_uring-6.11-20240726

Merge tag 'io_uring-6.11-20240726' of git://git.kernel.dk/linux

Pull io_uring fixes from Jens Axboe:

 - Fix a syzbot issue for the msg ring cache added in this release. No
   ill effects from this one, but it did make KMSAN unhappy (me)

 - Sanitize the NAPI timeout handling, by unifying the value handling
   into all ktime_t rather than converting back and forth (Pavel)

 - Fail NAPI registration for IOPOLL rings, it's not supported (Pavel)

 - Fix a theoretical issue with ring polling and cancelations (Pavel)

 - Various little cleanups and fixes (Pavel)

* tag 'io_uring-6.11-20240726' of git://git.kernel.dk/linux:
  io_uring/napi: pass ktime to io_napi_adjust_timeout
  io_uring/napi: use ktime in busy polling
  io_uring/msg_ring: fix uninitialized use of target_req->flags
  io_uring: align iowq and task request error handling
  io_uring: kill REQ_F_CANCEL_SEQ
  io_uring: simplify io_uring_cmd return
  io_uring: fix io_match_task must_hold
  io_uring: don't allow netpolling with SETUP_IOPOLL
  io_uring: tighten task exit cancellations
2024-07-27 15:22:33 -07:00
Linus Torvalds
bc4eee85ca vfs-6.11-rc1.fixes.3

Merge tag 'vfs-6.11-rc1.fixes.3' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs

Pull vfs fixes from Christian Brauner:
 "This contains two fixes for this merge window:

  VFS:

   - I noticed that it is possible for a privileged user to mount most
     filesystems with a non-initial user namespace in sb->s_user_ns.

     When fsopen() is called in a non-init namespace the caller's
     namespace is recorded in fs_context->user_ns. If the returned file
     descriptor is then passed to a process privileged in init_user_ns,
     that process can call fsconfig(fd_fs, FSCONFIG_CMD_CREATE*),
     creating a new superblock with sb->s_user_ns set to the namespace
     of the process which called fsopen().

     This is problematic as only filesystems that raise FS_USERNS_MOUNT
     are known to be able to support a non-initial s_user_ns. Others may
     suffer security issues, on-disk corruption or outright crash the
     kernel. Prevent that by restricting such delegation to filesystems
     that allow FS_USERNS_MOUNT.

     Note that this delegation requires a privileged process to
     actually create the superblock, so either the privileged process is
     cooperating or someone must have tricked a privileged process into
     operating on a fscontext file descriptor whose origin it doesn't
     know (a stupid idea).

     The bug dates back to about 5 years afaict.
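
     A rough sketch of the delegation flow being closed off, using raw
     syscalls (error handling trimmed; assumes recent kernel/glibc headers
     providing SYS_fsopen, SYS_fsconfig and FSCONFIG_CMD_CREATE):

       #define _GNU_SOURCE
       #include <linux/mount.h>
       #include <sys/syscall.h>
       #include <stdio.h>
       #include <unistd.h>

       int main(void)
       {
               /* In the problematic flow, this runs in a non-initial user
                * namespace, so that namespace is recorded in the fs_context. */
               int fsfd = syscall(SYS_fsopen, "ext4", 0);

               if (fsfd < 0) {
                       perror("fsopen");
                       return 1;
               }

               /* The fd is then handed to a process privileged in
                * init_user_ns, which creates the superblock. After the fix
                * this step is rejected for filesystems that do not raise
                * FS_USERNS_MOUNT. */
               if (syscall(SYS_fsconfig, fsfd, FSCONFIG_CMD_CREATE, NULL, NULL, 0))
                       perror("fsconfig(FSCONFIG_CMD_CREATE)");
               return 0;
       }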

  Misc:

   - Fix hostfs parsing when the mount request comes in via the legacy
     mount api.

     In the legacy mount api hostfs allows the host directory mount to be
     specified without any key.

     Restore that behavior"

* tag 'vfs-6.11-rc1.fixes.3' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs:
  hostfs: fix the host directory parse when mounting.
  fs: don't allow non-init s_user_ns for filesystems without FS_USERNS_MOUNT
2024-07-27 15:11:59 -07:00