author	Nick Piggin <npiggin@suse.de>	2009-01-23 17:21:39 +0100
committer	Pekka Enberg <penberg@cs.helsinki.fi>	2009-04-16 11:14:28 +0300
commit	249acdd3fd13d1ec579ff9b1c5273cf5515cd352 (patch)
tree	c37f844d51e67ab97cff7ab9b29e4c1f3049530d /lib
parent	f4efdd65b754ebbf41484d3a2255c59282720650 (diff)
SLQB slab allocator (try 2)
Introducing the SLQB slab allocator. SLQB takes code and ideas from all other slab allocators in the tree.

The primary method for keeping lists of free objects within the allocator is a singly-linked list, storing a pointer within the object memory itself (or in a small additional space in the case of RCU-destroyed slabs). This is like SLOB and SLUB, and opposed to SLAB, which uses arrays of objects plus metadata. This reduces memory consumption and makes smaller-sized objects more realistic, as there is less overhead.

Using lists rather than arrays can reduce the cacheline footprint. When moving objects around, SLQB can move a list of objects from one CPU to another by simply manipulating a head pointer, whereas SLAB needs to memcpy arrays. Some SLAB per-CPU arrays can be up to 1K in size, which is a lot of cachelines that can be touched during alloc/free. Newly freed objects tend to be cache hot, and newly allocated ones tend to be touched soon anyway, so there is often little cost to using metadata in the objects.

SLQB has a per-CPU LIFO freelist of objects like SLAB (but using lists rather than arrays). Freed objects are returned to this freelist if they belong to the node which our CPU belongs to. So objects allocated on one CPU can be added to the freelist of another CPU on the same node.
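As an illustration of the freelist scheme just described, here is a small, self-contained sketch (plain C) of a singly-linked freelist whose link pointer is stored in the free object's own memory. The names freelist, freelist_push and freelist_pop are hypothetical, chosen only for illustration; the real structures and fast paths live in mm/slqb.c and include/linux/slqb_def.h.

	/*
	 * Illustrative sketch only, not the mm/slqb.c implementation.
	 * A free object's first word is reused as the "next" link, so no
	 * external array or metadata is needed to track free objects.
	 */
	#include <stddef.h>

	struct freelist {
		void *head;		/* most recently freed object (LIFO), or NULL */
		unsigned long nr;	/* number of objects on the list */
	};

	/* Free fast path: push the object onto the per-CPU LIFO freelist. */
	static void freelist_push(struct freelist *l, void *object)
	{
		*(void **)object = l->head;	/* link stored inside the object */
		l->head = object;
		l->nr++;
	}

	/* Alloc fast path: pop the most recently freed (cache-hot) object. */
	static void *freelist_pop(struct freelist *l)
	{
		void *object = l->head;

		if (!object)
			return NULL;	/* empty: caller refills from a slab list */
		l->head = *(void **)object;
		l->nr--;
		return object;
	}

Because such a list is only a head pointer, a whole chain of objects can be spliced from one list to another in constant time, which is the property the memcpy comparison with SLAB above refers to.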
When LIFO freelists need to be refilled or trimmed, SLQB takes or returns objects from a list of slabs. SLQB has per-CPU lists of slabs (which use struct page as their metadata, including the list head for this list). Each slab contains a singly-linked list of the objects that are free in that slab (free, and not on a LIFO freelist). Slabs are freed as soon as all their objects are freed, and only allocated when there are no slabs remaining. They are taken off this slab list when there are no free objects left, so the slab lists only ever contain "partial" slabs: those which are not completely full and not completely empty.

SLQB slabs can be manipulated with no locking, unlike other allocators, which tend to use per-node locks. As the number of threads per socket increases, this should help improve the scalability of slab operations.

Freeing objects to remote slab lists first batches up the objects on the freeing CPU, then moves them over at once to a list on the allocating CPU. The allocating CPU will then notice those objects and pull them onto the end of its freelist. This remote freeing scheme is designed to minimise the number of cross-CPU cachelines touched, short of going to a "crossbar" arrangement like SLAB has. SLAB has "crossbars" of arrays of objects: that is, NR_CPUS*MAX_NUMNODES arrays, which can become very bloated on huge systems (this could be hundreds of GBs of kmem caches for 4096-CPU, 1024-node systems).

SLQB also has similar freelist and slab-list structures per node, which are protected by a lock and usable by any CPU in order to do node-specific allocations. These allocations tend not to be too frequent (short-lived allocations should be node-local, and long-lived allocations should not be too frequent).

There is a good overview and illustration of the design here:
http://lwn.net/Articles/311502/

By using LIFO freelists like SLAB, SLQB tries to be very page-size agnostic. It tries very hard to use order-0 pages. This is good for both page allocator fragmentation and slab fragmentation.

SLQB initialisation code attempts to be as simple and un-clever as possible. There are no multiple phases where different things come up. There is no weird self-bootstrapping stuff. It just statically allocates the structures required to create the slabs that allocate other slab structures.

SLQB uses much of the debugging infrastructure and the fine-grained sysfs statistics from SLUB. There is also a Documentation/vm/slqbinfo.c, derived from slabinfo.c, which can query the sysfs data.

 Documentation/vm/slqbinfo.c | 1054 +++++++++++++
 arch/x86/include/asm/page.h |    1
 include/linux/mm.h          |    4
 include/linux/rcu_types.h   |   18
 include/linux/rcupdate.h    |   11
 include/linux/slab.h        |   10
 include/linux/slqb_def.h    |  295 +++
 init/Kconfig                |    9
 lib/Kconfig.debug           |   20
 mm/Makefile                 |    1
 mm/slqb.c                   | 3562 ++++++++++++++++++++++++++++++++++++++++++++
 11 files changed, 4971 insertions(+), 14 deletions(-)

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
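The remote freeing scheme described in the message above can be sketched in a similar way. This is only an illustration under assumed names (object_list, REMOTE_FREE_BATCH); the actual per-CPU structures, locking and batch sizes are those in mm/slqb.c, and synchronisation of the hand-over is omitted here.

	/*
	 * Illustrative sketch only, not the mm/slqb.c implementation.
	 * Objects freed to a remote CPU are first collected locally on the
	 * freeing CPU; the whole chain is then spliced onto a list belonging
	 * to the allocating CPU in one step, minimising the number of
	 * cross-CPU cachelines touched.
	 */
	#include <stddef.h>

	struct object_list {
		void *head;		/* first object in the chain, or NULL */
		void **tailp;		/* where the next object gets linked */
		unsigned long nr;
	};

	#define REMOTE_FREE_BATCH	32	/* made-up threshold for illustration */

	static void object_list_init(struct object_list *l)
	{
		l->head = NULL;
		l->tailp = &l->head;
		l->nr = 0;
	}

	/* Append one freed object; the link field lives inside the object. */
	static void object_list_add(struct object_list *l, void *object)
	{
		*(void **)object = NULL;
		*l->tailp = object;
		l->tailp = (void **)object;
		l->nr++;
	}

	/* Called on the freeing CPU for objects owned by another CPU. */
	static void remote_free_flush(struct object_list *local,
				      struct object_list *owner)
	{
		if (local->nr < REMOTE_FREE_BATCH)
			return;			/* keep batching locally */

		/*
		 * Splice the whole batch onto the owner's remote-free list in
		 * one operation; the allocating CPU later notices it and pulls
		 * the objects onto the end of its own LIFO freelist.
		 */
		*owner->tailp = local->head;
		owner->tailp = local->tailp;
		owner->nr += local->nr;

		object_list_init(local);
	}

Batching like this keeps the freeing CPU from touching the remote CPU's cachelines on every free, without paying for SLAB-style NR_CPUS*MAX_NUMNODES crossbar arrays.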
Diffstat (limited to 'lib')
-rw-r--r--	lib/Kconfig.debug	20
1 files changed, 20 insertions, 0 deletions
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index c6e854f215fa..94a2cfb9eb3e 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -336,6 +336,26 @@ config SLUB_STATS
out which slabs are relevant to a particular load.
Try running: slabinfo -DA
+config SLQB_DEBUG
+ default y
+ bool "Enable SLQB debugging support"
+ depends on SLQB
+
+config SLQB_DEBUG_ON
+ default n
+ bool "SLQB debugging on by default"
+ depends on SLQB_DEBUG
+
+config SLQB_SYSFS
+ bool "Create SYSFS entries for slab caches"
+ default n
+ depends on SLQB
+
+config SLQB_STATS
+ bool "Enable SLQB performance statistics"
+ default n
+ depends on SLQB_SYSFS
+
config DEBUG_PREEMPT
bool "Debug preemptible kernel"
depends on DEBUG_KERNEL && PREEMPT && (TRACE_IRQFLAGS_SUPPORT || PPC64)
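For context, Kconfig symbols such as the SLQB_* options added above become CONFIG_* preprocessor macros in C. The snippet below is a generic, hypothetical sketch of how a statistics switch like CONFIG_SLQB_STATS typically compiles counters in or out; the identifiers are invented for illustration and are not the names used in mm/slqb.c.

	/* Hypothetical illustration; not taken from mm/slqb.c. */
	enum stat_item { ALLOC_FASTPATH, FREE_FASTPATH, NR_STAT_ITEMS };

	struct cache_cpu {
		void *freelist_head;			/* always present (made-up field) */
	#ifdef CONFIG_SLQB_STATS
		unsigned long stats[NR_STAT_ITEMS];	/* present only when stats are enabled */
	#endif
	};

	static inline void stat_inc(struct cache_cpu *c, enum stat_item item)
	{
	#ifdef CONFIG_SLQB_STATS
		c->stats[item]++;
	#endif
		/* compiles away to nothing when CONFIG_SLQB_STATS is off */
	}

The dependency of SLQB_STATS on SLQB_SYSFS above reflects that these statistics are reported through sysfs, as the commit message notes.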