linux-loongson/include/linux/sched/wake_q.h
John Stultz abfdccd6af sched/wake_q: Add helper to call wake_up_q after unlock with preemption disabled
A common pattern seen when wake_qs are used to defer a wakeup
until after a lock is released is something like:
  preempt_disable();
  raw_spin_unlock(lock);
  wake_up_q(wake_q);
  preempt_enable();

So create some raw_spin_unlock*_wake() helper functions to clean
this up.
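
As a minimal sketch (with placeholder lock/task/wake_q names, not taken
from an actual conversion), a call site that today open-codes the above
would then read:

  raw_spin_lock(&some_lock);
  wake_q_add(&wake_q, task);
  raw_spin_unlock_wake(&some_lock, &wake_q);

with raw_spin_unlock_irq_wake() and raw_spin_unlock_irqrestore_wake()
covering the call sites that use the irq/irqrestore unlock variants.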

Applies on top of the fix I submitted here:
 https://lore.kernel.org/lkml/20241212222138.2400498-1-jstultz@google.com/

NOTE: I recognise that the unlock()/unlock_irq()/unlock_irqrestore()
variants create some duplication of their own, which we could avoid
with a macro that generates the similar functions, but I often
dislike how such generation macros make finding the actual
implementation harder, so I left the three functions as is. If folks
would prefer otherwise, let me know and I'll switch it.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: John Stultz <jstultz@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20241217040803.243420-1-jstultz@google.com
2024-12-20 15:31:21 +01:00

/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_SCHED_WAKE_Q_H
#define _LINUX_SCHED_WAKE_Q_H

/*
 * Wake-queues are lists of tasks with a pending wakeup, whose
 * callers have already marked the task as woken internally,
 * and can thus carry on. A common use case is being able to
 * do the wakeups once the corresponding user lock has been
 * released.
 *
 * We hold a reference to each task in the list across the wakeup,
 * thus guaranteeing that the memory is still valid by the time
 * the actual wakeups are performed in wake_up_q().
 *
 * One per task suffices, because there's never a need for a task to be
 * in two wake queues simultaneously; it is forbidden to abandon a task
 * in a wake queue (a call to wake_up_q() _must_ follow), so if a task is
 * already in a wake queue, the wakeup will happen soon and the second
 * waker can just skip it.
 *
 * The DEFINE_WAKE_Q macro declares and initializes the list head.
 * wake_up_q() does NOT reinitialize the list; it's expected to be
 * called near the end of a function. Otherwise, the list can be
 * re-initialized for later re-use by wake_q_init().
 *
 * NOTE that this can cause spurious wakeups. schedule() callers
 * must ensure the call is done inside a loop, confirming that the
 * wakeup condition has in fact occurred.
 *
 * NOTE that there is no guarantee the wakeup will happen any later than the
 * wake_q_add() location. Therefore the task must be ready to be woken at the
 * location of the wake_q_add().
 */
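
/*
 * A minimal usage sketch (a hypothetical caller, not part of this header;
 * the 'foo' structure and its fields are placeholders): queue the wakeup
 * while holding the lock, then issue it after the unlock.
 *
 *	DEFINE_WAKE_Q(wake_q);
 *
 *	raw_spin_lock(&foo->lock);
 *	if (foo->waiter)
 *		wake_q_add(&wake_q, foo->waiter);
 *	raw_spin_unlock(&foo->lock);
 *	wake_up_q(&wake_q);
 */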

#include <linux/sched.h>

struct wake_q_head {
	struct wake_q_node *first;
	struct wake_q_node **lastp;
};

#define WAKE_Q_TAIL ((struct wake_q_node *) 0x01)

#define WAKE_Q_HEAD_INITIALIZER(name) \
	{ WAKE_Q_TAIL, &name.first }

#define DEFINE_WAKE_Q(name) \
	struct wake_q_head name = WAKE_Q_HEAD_INITIALIZER(name)

static inline void wake_q_init(struct wake_q_head *head)
{
	head->first = WAKE_Q_TAIL;
	head->lastp = &head->first;
}

static inline bool wake_q_empty(struct wake_q_head *head)
{
	return head->first == WAKE_Q_TAIL;
}

extern void wake_q_add(struct wake_q_head *head, struct task_struct *task);
extern void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task);
extern void wake_up_q(struct wake_q_head *head);

/* Spin unlock helpers to unlock and call wake_up_q with preempt disabled */
static inline
void raw_spin_unlock_wake(raw_spinlock_t *lock, struct wake_q_head *wake_q)
{
	guard(preempt)();
	raw_spin_unlock(lock);
	if (wake_q) {
		wake_up_q(wake_q);
		wake_q_init(wake_q);
	}
}

static inline
void raw_spin_unlock_irq_wake(raw_spinlock_t *lock, struct wake_q_head *wake_q)
{
	guard(preempt)();
	raw_spin_unlock_irq(lock);
	if (wake_q) {
		wake_up_q(wake_q);
		wake_q_init(wake_q);
	}
}

static inline
void raw_spin_unlock_irqrestore_wake(raw_spinlock_t *lock, unsigned long flags,
				     struct wake_q_head *wake_q)
{
	guard(preempt)();
	raw_spin_unlock_irqrestore(lock, flags);
	if (wake_q) {
		wake_up_q(wake_q);
		wake_q_init(wake_q);
	}
}

#endif /* _LINUX_SCHED_WAKE_Q_H */