author    Andrew Morton <akpm@linux-foundation.org>    2013-02-07 12:32:09 +1100
committer Stephen Rothwell <sfr@canb.auug.org.au>      2013-02-14 15:26:40 +1100
commit    3f7545686ee7e39193b674a99706ce95b73396f8 (patch)
tree      d9528fba0cc280cb1a507b8c6aa1465bc5f017a9 /include
parent    567f919cda22befb6d1deb2691adb675406a4e8b (diff)
generic-dynamic-per-cpu-refcounting-doc-fix
a little tidy

Cc: Kent Overstreet <koverstreet@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'include')
-rw-r--r--  include/linux/percpu-refcount.h | 13
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index bed9a0d29f66..d0cf8872dc43 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -21,19 +21,18 @@
* BACKGROUND:
*
* Percpu refcounts are quite useful for performance, but if we blindly
- * converted all refcounts to percpu counters we'd waste quite a bit of memory
- * think about all the refcounts embedded in kobjects, files, etc. most of which
- * aren't used much.
+ * converted all refcounts to percpu counters we'd waste quite a bit of memory.
*
- * These start out as simple atomic counters - a little bigger than a bare
- * atomic_t, 16 bytes instead of 4 - but if we exceed some arbitrary number of
- * gets in one second, we then switch to percpu counters.
+ * Think about all the refcounts embedded in kobjects, files, etc. most of which
+ * aren't used much. These start out as simple atomic counters - a little bigger
+ * than a bare atomic_t, 16 bytes instead of 4 - but if we exceed some arbitrary
+ * number of gets in one second, we then switch to percpu counters.
*
* This heuristic isn't perfect because it'll fire if the refcount was only
* being used on one cpu; ideally we'd be able to count the number of cache
* misses on percpu_ref_get() or something similar, but that'd make the non
* percpu path significantly heavier/more complex. We can count the number of
- * gets() without any extra atomic instructions, on arches that support
+ * gets() without any extra atomic instructions on arches that support
* atomic64_t - simply by changing the atomic_inc() to atomic_add_return().
*
* USAGE:
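
For illustration, here is a minimal userspace C sketch of the heuristic the
patched comment describes: the get path uses an add-and-return instead of a
bare increment, so the same single atomic instruction that takes the
reference also yields the new count, which can then drive a switch to percpu
mode. This is not the kernel implementation - the struct, function names,
and threshold are made up, and the real code's one-second time window is
omitted for brevity.

/*
 * Toy userspace sketch (C11 atomics), NOT the kernel's percpu-refcount.
 * Names and TOY_SWITCH_THRESHOLD are hypothetical; the real heuristic
 * counts gets per second, which is left out here.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define TOY_SWITCH_THRESHOLD 1024	/* made-up "arbitrary number of gets" */

struct toy_ref {
	atomic_long count;		/* starts as a plain shared counter */
	atomic_bool percpu_mode;	/* set once the heuristic fires */
};

static void toy_ref_get(struct toy_ref *ref)
{
	/*
	 * fetch_add returns the old value, so one atomic instruction
	 * both takes the reference and tells us the new count - the
	 * same trick as swapping atomic_inc() for atomic_add_return().
	 */
	long v = atomic_fetch_add(&ref->count, 1) + 1;

	if (v > TOY_SWITCH_THRESHOLD &&
	    !atomic_exchange(&ref->percpu_mode, true))
		printf("would switch to percpu counters after %ld gets\n", v);
}

int main(void)
{
	struct toy_ref ref;

	atomic_init(&ref.count, 0);
	atomic_init(&ref.percpu_mode, false);

	for (int i = 0; i < 2048; i++)
		toy_ref_get(&ref);

	printf("final count: %ld\n", atomic_load(&ref.count));
	return 0;
}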