path: root/mm/slub.c
author	Vladimir Davydov <vdavydov@parallels.com>	2014-07-10 10:25:32 +1000
committer	Stephen Rothwell <sfr@canb.auug.org.au>	2014-07-10 10:25:32 +1000
commit	66bdac512fbeb2a7a244701332bd758e95447932 (patch)
tree	1d0360ae8b3d88c5eabaf9a9cb6313bc0663ecba /mm/slub.c
parent	85491dc14b3d60a1cd63e2adc55c24ea33899f72 (diff)
slub: kmem_cache_shrink: check if partial list is empty under list_lock
SLUB's implementation of kmem_cache_shrink() skips nodes that have nr_partial=0, because they surely don't have any empty slabs to free. This check is done without holding any locks, therefore it can race with a concurrent kfree() adding an empty slab to a partial list. As a result, a just-shrunk cache can still have empty slabs. This is unacceptable for kmemcg, which needs to be sure that there will be no empty slabs on dead memcg caches after kmem_cache_shrink() has been called, because otherwise we may leak a dead cache.

Let's fix this race by checking whether the node's partial list is empty under node->list_lock. Since the nr_partial!=0 branch of kmem_cache_shrink() does nothing if the list is empty, we can simply remove the nr_partial=0 check.

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Reported-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
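For illustration, a minimal user-space sketch of the race pattern and the fix (hypothetical names; a pthread mutex stands in for kmem_cache_node->list_lock, and the slab bookkeeping is elided):

#include <pthread.h>
#include <stdbool.h>

struct node {
	pthread_mutex_t list_lock;
	unsigned long nr_partial;	/* length of the partial list */
};

/* Racy variant: the emptiness check runs without list_lock, so a
 * concurrent free can add an empty slab right after the check and
 * the node is wrongly skipped. */
bool shrink_node_racy(struct node *n)
{
	if (!n->nr_partial)		/* unlocked check */
		return false;		/* may miss a just-added empty slab */
	pthread_mutex_lock(&n->list_lock);
	/* ... sort partial slabs, release the empty ones ... */
	pthread_mutex_unlock(&n->list_lock);
	return true;
}

/* Fixed variant: take list_lock unconditionally; iterating an empty
 * partial list does nothing, so the separate check is unnecessary. */
void shrink_node_fixed(struct node *n)
{
	pthread_mutex_lock(&n->list_lock);
	/* ... loop body is a no-op if the list is empty ... */
	pthread_mutex_unlock(&n->list_lock);
}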
Diffstat (limited to 'mm/slub.c')
-rw-r--r--	mm/slub.c	3
1 file changed, 0 insertions(+), 3 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 904a5e919981..5e691df71cf4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3395,9 +3395,6 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 
 	flush_all(s);
 	for_each_kmem_cache_node(s, node, n) {
-		if (!n->nr_partial)
-			continue;
-
 		for (i = 0; i < objects; i++)
 			INIT_LIST_HEAD(slabs_by_inuse + i);
 
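For context, the rest of the loop body (outside this hunk) runs entirely under n->list_lock. A lightly abridged sketch of the same-era function follows (paraphrased, not verbatim; consult the full mm/slub.c for the exact code). It shows why an empty partial list makes the body a no-op, so the unlocked nr_partial check could simply be dropped:

		spin_lock_irqsave(&n->list_lock, flags);

		/*
		 * Build lists indexed by the number of objects in use
		 * in each slab; slot 0 collects the completely empty
		 * slabs.  If n->partial is empty, this loop does
		 * nothing at all.
		 */
		list_for_each_entry_safe(page, t, &n->partial, lru) {
			list_move(&page->lru, slabs_by_inuse + page->inuse);
			if (!page->inuse)
				n->nr_partial--;
		}

		/* Rebuild the partial list with the fullest slabs first. */
		for (i = objects - 1; i > 0; i--)
			list_splice(slabs_by_inuse + i, n->partial.prev);

		spin_unlock_irqrestore(&n->list_lock, flags);

		/* Release the empty slabs collected in slot 0. */
		list_for_each_entry_safe(page, t, slabs_by_inuse, lru)
			discard_slab(s, page);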