author    Michael Clark <michaeljclark@mac.com>  2019-02-11 17:38:29 +1300
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2019-03-05 17:58:53 +0100
commit    3bfa6413b03a676cbbddb10e7d0811368fb926de (patch)
tree      62c2ed3204c23395e929230b21efcebfde1a7ed9 /arch
parent    527cabfffbc5f55c111d510a918b33fb9fbe537d (diff)
MIPS: fix truncation in __cmpxchg_small for short values
commit 94ee12b507db8b5876e31c9d6c9d84f556a4b49f upstream.

__cmpxchg_small erroneously uses a u8 local for the loaded comparison value, which can be either char or short. This patch changes the local variable to u32, which is sufficiently sized, as the loaded value is already masked and shifted appropriately. Using a native integer size avoids any unnecessary canonicalization arising from the use of non-native widths.

This patch is part of a series that adapts the MIPS small word atomics code for xchg and cmpxchg on short and char to RISC-V.

Cc: RISC-V Patches <patches@groups.riscv.org>
Cc: Linux RISC-V <linux-riscv@lists.infradead.org>
Cc: Linux MIPS <linux-mips@linux-mips.org>
Signed-off-by: Michael Clark <michaeljclark@mac.com>
[paul.burton@mips.com:
  - Fix variable typo per Jonas Gorski.
  - Consolidate load variable with other declarations.]
Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes: 3ba7f44d2b19 ("MIPS: cmpxchg: Implement 1 byte & 2 byte cmpxchg()")
Cc: stable@vger.kernel.org # v4.13+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
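To illustrate the truncation, here is a minimal standalone C sketch (the simplified values and userspace setting are assumptions for illustration, not the kernel code): a 16-bit value is masked and shifted into bits [15:0], so storing it in a u8 discards the upper byte and the subsequent comparison against the expected old value goes wrong.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t word = 0x1234;   /* 16-bit value as read from memory */
	uint32_t mask = 0xffff;   /* mask for a naturally aligned short */
	unsigned int shift = 0;   /* short sits at offset 0 in its word */

	uint8_t  load8  = (word & mask) >> shift;  /* buggy: truncates to 0x34 */
	uint32_t load32 = (word & mask) >> shift;  /* fixed: keeps 0x1234 */

	uint32_t old = 0x1234;    /* caller's expected "old" value */

	/* The u8 comparison spuriously fails even though memory matched. */
	printf("u8  load = 0x%02x -> match: %d\n", load8,  load8  == old);
	printf("u32 load = 0x%04x -> match: %d\n", load32, load32 == old);
	return 0;
}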
Diffstat (limited to 'arch')
-rw-r--r--  arch/mips/kernel/cmpxchg.c  3
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/mips/kernel/cmpxchg.c b/arch/mips/kernel/cmpxchg.c
index 0b9535bc2c53..6b2a4a902a98 100644
--- a/arch/mips/kernel/cmpxchg.c
+++ b/arch/mips/kernel/cmpxchg.c
@@ -54,10 +54,9 @@ unsigned long __xchg_small(volatile void *ptr, unsigned long val, unsigned int s
unsigned long __cmpxchg_small(volatile void *ptr, unsigned long old,
unsigned long new, unsigned int size)
{
- u32 mask, old32, new32, load32;
+ u32 mask, old32, new32, load32, load;
volatile u32 *ptr32;
unsigned int shift;
- u8 load;
/* Check that ptr is naturally aligned */
WARN_ON((unsigned long)ptr & (size - 1));
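For context, a hedged userspace model of the surrounding compare-and-exchange loop (the cmpxchg32 helper, the parameter layout, and the use of C11 atomics are illustrative assumptions, not the kernel's implementation). It shows the comparison the u8 local broke: load holds a masked, shifted sub-word value of up to 16 bits, so it must be wider than a byte.

#include <stdint.h>
#include <stdatomic.h>

/* Hypothetical stand-in for a hardware 32-bit compare-and-exchange. */
static uint32_t cmpxchg32(_Atomic uint32_t *p, uint32_t old, uint32_t new)
{
	atomic_compare_exchange_strong(p, &old, new);
	return old; /* the value actually observed in memory */
}

/*
 * Model of a small-word cmpxchg built on a 32-bit primitive: operate on
 * the aligned 32-bit word containing the char/short, masking and shifting
 * the sub-word in and out. mask is already shifted into position; old and
 * new are unshifted sub-word values.
 */
static uint32_t cmpxchg_small(_Atomic uint32_t *ptr32, uint32_t old,
			      uint32_t new, uint32_t mask, unsigned int shift)
{
	uint32_t old32, new32, load32, load;

	load32 = *ptr32;
	while (1) {
		/* Extract the sub-word; a u32 keeps all 16 bits intact. */
		load = (load32 & mask) >> shift;
		if (load != old)
			return load; /* mismatch: report the current value */

		/* Splice old/new sub-words into the containing 32-bit word. */
		old32 = (load32 & ~mask) | (old << shift);
		new32 = (load32 & ~mask) | (new << shift);
		load32 = cmpxchg32(ptr32, old32, new32);
		if (load32 == old32)
			return old; /* exchange succeeded */
	}
}

A 16-bit exchange at byte offset 0 would pass mask = 0xffff and shift = 0; with the old u8 local, the load extracted above would be truncated before the comparison, exactly the defect this commit fixes.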