author		Nadav Amit <namit@vmware.com>	2021-02-20 15:17:09 -0800
committer	Ingo Molnar <mingo@kernel.org>	2021-03-06 12:59:10 +0100
commit		09c5272e48614a30598e759c3c7bed126d22037d (patch)
tree		5f9090493a1510710e2d9e3e2dd7cfd1c1d46343
parent		2f4305b19fe6a2a261d76c21856c5598f7d878fe (diff)
x86/mm/tlb: Do not make is_lazy dirty for no reason
Blindly writing to is_lazy when the written value is identical to the old value needlessly dirties the cacheline. Avoid such writes to prevent unnecessary cache coherency traffic.

Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lore.kernel.org/r/20210220231712.2475218-7-namit@vmware.com
-rw-r--r--	arch/x86/mm/tlb.c	3
1 file changed, 2 insertions(+), 1 deletion(-)
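The reasoning behind the patch is a general pattern: a flag that remote CPUs poll lives in a shared cacheline, and an unconditional store dirties that line even when the stored value is unchanged, forcing needless coherency traffic. Below is a minimal, self-contained user-space sketch of that check-before-write pattern; the struct and function names are made up for illustration and are not the kernel's own code.

	/* Illustrative sketch only; names here are hypothetical, not kernel APIs. */
	#include <stdbool.h>

	struct tlb_state_example {
		bool is_lazy;	/* read by remote CPUs deciding whether an IPI is needed */
		/* ... other per-CPU TLB state would live here ... */
	};

	static void leave_lazy_mode(struct tlb_state_example *st, bool was_lazy)
	{
		/*
		 * An unconditional "st->is_lazy = false;" dirties the cacheline
		 * even when it already holds false. Writing only on an actual
		 * transition keeps the line in the shared state otherwise.
		 */
		if (was_lazy)
			st->is_lazy = false;
	}

The patch applies exactly this guard in switch_mm_irqs_off(), using the already-available was_lazy value rather than re-reading the flag.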
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 345a0aff5de4..17ec4bfeee67 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -469,7 +469,8 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		__flush_tlb_all();
 	}
 #endif
-	this_cpu_write(cpu_tlbstate_shared.is_lazy, false);
+	if (was_lazy)
+		this_cpu_write(cpu_tlbstate_shared.is_lazy, false);
 
 	/*
 	 * The membarrier system call requires a full memory barrier and