author	Christoph Lameter <cl@gentwo.org>	2014-07-10 10:25:29 +1000
committer	Stephen Rothwell <sfr@canb.auug.org.au>	2014-07-10 10:25:29 +1000
commit	a5652864164cc48c4390cb602e120f8a770040ac
tree	14ffc6a0353bb2421192af1f2471c3870e150a35 /mm/slub.c
parent	3b34da140d5981bd26fc7db6f600969835db07d3
slub-use-new-node-functions-fix
On Wed, 11 Jun 2014, David Rientjes wrote:
> > + for_each_kmem_cache_node(s, node, n) {
> >
> > free_partial(s, n);
> > if (n->nr_partial || slabs_node(s, node))
>
> Newline not removed?
Ok, I went through the file and removed all the blank lines after
for_each_kmem_cache_node.
>
> > @@ -3407,11 +3401,7 @@ int __kmem_cache_shrink(struct kmem_cach
> > return -ENOMEM;
> >
> > flush_all(s);
> > - for_each_node_state(node, N_NORMAL_MEMORY) {
> > - n = get_node(s, node);
> > -
> > - if (!n->nr_partial)
> > - continue;
> > + for_each_kmem_cache_node(s, node, n) {
> >
> > for (i = 0; i < objects; i++)
> > INIT_LIST_HEAD(slabs_by_inuse + i);
>
> Is there any reason not to keep the !n->nr_partial check to avoid taking
> n->list_lock unnecessarily?
No, this was simply a mistake; the check needs to be preserved.
Subject: slub: Fix up earlier patch
Signed-off-by: Christoph Lameter <cl@linux.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/slub.c')
-rw-r--r--	mm/slub.c	5
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 6f2d8c93f7f8..3918cd62a4b2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3216,7 +3216,6 @@ static inline int kmem_cache_close(struct kmem_cache *s)
 	flush_all(s);
 	/* Attempt to free all objects */
 	for_each_kmem_cache_node(s, node, n) {
-
 		free_partial(s, n);
 		if (n->nr_partial || slabs_node(s, node))
 			return 1;
@@ -3402,6 +3401,8 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 	flush_all(s);
 	for_each_kmem_cache_node(s, node, n) {
+		if (!n->nr_partial)
+			continue;
 
 		for (i = 0; i < objects; i++)
 			INIT_LIST_HEAD(slabs_by_inuse + i);
@@ -4334,7 +4335,6 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
 		struct kmem_cache_node *n;
 
 		for_each_kmem_cache_node(s, node, n) {
-
 			if (flags & SO_TOTAL)
 				x = count_partial(n, count_total);
 			else if (flags & SO_OBJECTS)
@@ -5324,7 +5324,6 @@ void get_slabinfo(struct kmem_cache *s, struct slabinfo *sinfo)
 	struct kmem_cache_node *n;
 
 	for_each_kmem_cache_node(s, node, n) {
-
 		nr_slabs += node_nr_slabs(n);
 		nr_objs += node_nr_objs(n);
 		nr_free += count_partial(n, count_free);