Mirror of https://git.proxmox.com/git/mirror_ubuntu-kernels.git
queued_spin_lock_slowpath() should not worry about another
queued_spin_lock_slowpath() running in interrupt context and changing
node->count by accident, because node->count keeps the same value every
time we enter/leave queued_spin_lock_slowpath().

On some architectures this_cpu_dec() will save/restore irq flags, which
has high overhead. Use the much cheaper __this_cpu_dec() instead.

Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman.Long@hpe.com
Link: http://lkml.kernel.org/r/1465886247-3773-1-git-send-email-xinhui.pan@linux.vnet.ibm.com
[ Rewrote changelog. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
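The following is a minimal userspace sketch (not the kernel code itself) of the invariant the changelog relies on: each slowpath invocation bumps the per-CPU node count on entry and drops it again on exit, so even if an interrupt nests another slowpath on the same CPU, the count is back to its original value before the interrupted context resumes. That is why the decrement in the release path does not need the irq save/restore that a full this_cpu_dec() may imply on some architectures. The names `mcs_node`, `slowpath_enter()` and `slowpath_leave()` below are illustrative stand-ins, not the actual identifiers in kernel/locking/qspinlock.c.

```c
/*
 * Illustrative sketch of the enter/leave pairing on the per-CPU MCS node
 * count.  The kernel's release path can use a non-irq-safe decrement
 * (__this_cpu_dec()) because any nested slowpath run from interrupt
 * context restores the count before returning to the interrupted task.
 */
#include <assert.h>
#include <stdio.h>

#define MAX_NODES 4	/* one node per context level: task, softirq, hardirq, NMI */

struct mcs_node {
	int count;	/* nesting depth of the slowpath on this "CPU" */
};

/* stand-in for the per-CPU node array */
static struct mcs_node nodes[MAX_NODES];

static int slowpath_enter(void)
{
	int idx = nodes[0].count++;	/* claim the next free node */

	assert(idx < MAX_NODES);
	return idx;
}

static void slowpath_leave(void)
{
	/*
	 * No irq protection needed here: a nested slowpath always pairs
	 * its own increment with a decrement before we get back here.
	 */
	nodes[0].count--;
}

int main(void)
{
	int idx = slowpath_enter();	/* task context takes node 0 */
	int irq_idx = slowpath_enter();	/* "interrupt" nests, takes node 1 */

	slowpath_leave();		/* interrupt handler releases its node */
	slowpath_leave();		/* task releases its node */

	printf("idx=%d irq_idx=%d final count=%d\n",
	       idx, irq_idx, nodes[0].count);
	return 0;
}
```

Running the sketch prints `idx=0 irq_idx=1 final count=0`, showing that the count always returns to its starting value across each enter/leave pair, which is the property the changelog cites to justify the cheaper decrement.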
| Name |
|---|
| .. |
| lglock.c |
| lockdep_internals.h |
| lockdep_proc.c |
| lockdep_states.h |
| lockdep.c |
| locktorture.c |
| Makefile |
| mcs_spinlock.h |
| mutex-debug.c |
| mutex-debug.h |
| mutex.c |
| mutex.h |
| osq_lock.c |
| percpu-rwsem.c |
| qrwlock.c |
| qspinlock_paravirt.h |
| qspinlock_stat.h |
| qspinlock.c |
| rtmutex_common.h |
| rtmutex-debug.c |
| rtmutex-debug.h |
| rtmutex.c |
| rtmutex.h |
| rwsem-spinlock.c |
| rwsem-xadd.c |
| rwsem.c |
| rwsem.h |
| semaphore.c |
| spinlock_debug.c |
| spinlock.c |