author     Davidlohr Bueso <davidlohr@hp.com>        2014-04-23 08:27:08 +1000
committer  Stephen Rothwell <sfr@canb.auug.org.au>   2014-04-23 08:27:08 +1000
commit     d2615d1817179d3ad9b45b20b50fa8a917f616cc (patch)
tree       50641ded60cc86b4f16900ee1d805f849caae3d0 /mm
parent     88b54d7c5a57d4c113979abb08910fbd24fffe05 (diff)
mm,vmacache: optimize overflow system-wide flushing
For single-threaded workloads, we can avoid flushing and iterating through the entire list of tasks, making the whole function a lot faster, requiring only a single atomic read for the mm_users.

Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Suggested-by: Oleg Nesterov <oleg@redhat.com>
Cc: Aswin Chandramouleeswaran <aswin@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--  mm/vmacache.c | 10 ++++++++++
1 file changed, 10 insertions(+), 0 deletions(-)
diff --git a/mm/vmacache.c b/mm/vmacache.c
index e167da29ea58..61c38ae9f54b 100644
--- a/mm/vmacache.c
+++ b/mm/vmacache.c
@@ -17,6 +17,16 @@ void vmacache_flush_all(struct mm_struct *mm)
{
struct task_struct *g, *p;
+	/*
+	 * Single threaded tasks need not iterate the entire
+	 * list of processes. We can avoid the flushing as well
+	 * since the mm's seqnum was increased and we don't have
+	 * to worry about other threads' seqnums. Current's
+	 * flush will occur upon the next lookup.
+	 */
+ if (atomic_read(&mm->mm_users) == 1)
+ return;
+
rcu_read_lock();
for_each_process_thread(g, p) {
/*