author     Peter Zijlstra <peterz@infradead.org>    2020-01-24 22:13:03 +0100
committer  Thomas Gleixner <tglx@linutronix.de>     2020-06-11 08:03:24 +0200
commit     37f8173dd84936ea78000ed1cad24f8b18d48ebb (patch)
tree       0b715066a7f5c16a71988e176627c46b61481b3c /scripts/atomic/fallbacks/inc_not_zero
parent     765dcd209947e7b3666c08fb109ab8b879f7a471 (diff)
locking/atomics: Flip fallbacks and instrumentation (tag: locking-urgent-2020-06-11)
Currently, instrumentation of atomic primitives is done at the architecture level, while composites or fallbacks are provided at the generic level. The result is that there are no uninstrumented variants of the fallbacks. Since there is now a need for such variants, to isolate text poke from any form of instrumentation, invert this ordering.

Doing this means moving the instrumentation into the generic code as well as having (for now) two variants of the fallbacks.

Notes:

- The various *cond_read* primitives are not proper fallbacks and got moved into linux/atomic.h. No arch_ variants are generated because the base primitives smp_cond_load*() are instrumented.
- Once all architectures are moved over to arch_atomic_, one of the fallback variants can be removed and some 2300 lines reclaimed.
- atomic_{read,set}*() are no longer double-instrumented.

Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lkml.kernel.org/r/20200505134058.769149955@linutronix.de
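To make the inverted layering concrete, here is a minimal sketch of the generated instrumented wrapper at the generic level, assuming (as in the generated asm-generic/atomic-instrumented.h of this era) that instrument_atomic_write() is the annotation hook; the exact hook name is an assumption, not part of this patch:

    /* Sketch: generic instrumented wrapper annotates the access, then
     * dispatches to the arch-provided (or arch-fallback) primitive.
     * instrument_atomic_write() is assumed as the annotation hook. */
    static __always_inline bool
    atomic_inc_not_zero(atomic_t *v)
    {
            instrument_atomic_write(v, sizeof(*v));
            return arch_atomic_inc_not_zero(v);
    }

The *cond_read* note above refers to wrappers along the lines of:

    #define atomic_cond_read_acquire(v, c) \
            smp_cond_load_acquire(&(v)->counter, (c))

which need no arch_ variant because smp_cond_load_acquire() is itself already instrumented.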
Diffstat (limited to 'scripts/atomic/fallbacks/inc_not_zero')
-rwxr-xr-x  scripts/atomic/fallbacks/inc_not_zero | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/scripts/atomic/fallbacks/inc_not_zero b/scripts/atomic/fallbacks/inc_not_zero
index d9f7b97aab42..50f2d4d48279 100755
--- a/scripts/atomic/fallbacks/inc_not_zero
+++ b/scripts/atomic/fallbacks/inc_not_zero
@@ -1,14 +1,14 @@
cat <<EOF
/**
- * ${atomic}_inc_not_zero - increment unless the number is zero
+ * ${arch}${atomic}_inc_not_zero - increment unless the number is zero
* @v: pointer of type ${atomic}_t
*
* Atomically increments @v by 1, if @v is non-zero.
* Returns true if the increment was done.
*/
static __always_inline bool
-${atomic}_inc_not_zero(${atomic}_t *v)
+${arch}${atomic}_inc_not_zero(${atomic}_t *v)
{
- return ${atomic}_add_unless(v, 1, 0);
+ return ${arch}${atomic}_add_unless(v, 1, 0);
}
EOF
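For illustration, expanding the updated template with ${arch} set to arch_ and ${atomic} set to atomic (the generator's own variables), it now emits:

    /* Expanded output of the template above for ${arch}=arch_, ${atomic}=atomic. */
    static __always_inline bool
    arch_atomic_inc_not_zero(atomic_t *v)
    {
            return arch_atomic_add_unless(v, 1, 0);
    }

so the fallback composes purely out of arch_-prefixed primitives and carries no instrumentation of its own; the instrumented atomic_inc_not_zero() wrapper is generated separately at the generic level.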