From aed21ad28b1323b2807faea019e5ac388a7bc837 Mon Sep 17 00:00:00 2001 From: Wu Fengguang Date: Wed, 23 Nov 2011 11:44:41 -0600 Subject: writeback: comment on the bdi dirty threshold We do "floating proportions" to let active devices grow their target share of dirty pages and stalled/inactive devices decrease theirs over time. It works well except in the case where an inactive disk suddenly goes busy: there the initial target share may be too small. To mitigate this, bdi_position_ratio() has the line below to raise a small bdi_thresh when it is safe to do so, so that the disk is fed with enough dirty pages for efficient IO and, in turn, a fast ramp-up of bdi_thresh: bdi_thresh = max(bdi_thresh, (limit - dirty) / 8); balance_dirty_pages() normally does negative feedback control, adjusting the ratelimit to balance the bdi dirty pages around the target. In some extreme cases when that is not enough, it has to block the tasks completely until the bdi dirty pages drop below bdi_thresh. Acked-by: Jan Kara Acked-by: Peter Zijlstra Signed-off-by: Wu Fengguang --- mm/page-writeback.c | 16 ++++++++++++++-- 1 file changed, 14 insertions(+), 2 deletions(-) (limited to 'mm/page-writeback.c') diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 71252486bc6f..155efca4c123 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -411,8 +411,13 @@ void global_dirty_limits(unsigned long *pbackground, unsigned long *pdirty) * * Returns @bdi's dirty limit in pages. The term "dirty" in the context of * dirty balancing includes all PG_dirty, PG_writeback and NFS unstable pages. - * And the "limit" in the name is not seriously taken as hard limit in - * balance_dirty_pages(). + * + * Note that balance_dirty_pages() will only seriously take it as a hard limit + * when sleeping max_pause per page is not enough to keep the dirty pages under + * control. For example, when the device is completely stalled due to some error + * conditions, or when there are 1000 dd tasks writing to a slow 10MB/s USB key. + * In the other normal situations, it acts more gently by throttling the tasks + * more (rather than completely block them) when the bdi dirty pages go high. * * It allocates high/low dirty limits to fast/slow devices, in order to prevent * - starving fast devices @@ -594,6 +599,13 @@ static unsigned long bdi_position_ratio(struct backing_dev_info *bdi, */ if (unlikely(bdi_thresh > thresh)) bdi_thresh = thresh; + /* + * It's very possible that bdi_thresh is close to 0 not because the + * device is slow, but that it has remained inactive for long time. + * Honour such devices a reasonable good (hopefully IO efficient) + * threshold, so that the occasional writes won't be blocked and active + * writes can rampup the threshold quickly. + */ bdi_thresh = max(bdi_thresh, (limit - dirty) / 8); /* * scale global setpoint to bdi's: -- cgit v1.2.3 From c5c6343c4d75f9d3226e05a72e7861e967fc8099 Mon Sep 17 00:00:00 2001 From: Wu Fengguang Date: Fri, 2 Dec 2011 10:21:33 -0600 Subject: writeback: permit through good bdi even when global dirty exceeded On a system with 1 local mount and 1 NFS mount, if the NFS server stops responding while dd writes to the NFS mount, the NFS dirty pages may exceed the global dirty limit and _every_ writing task will be blocked. The whole system appears unresponsive. The workaround is to permit through the bdi's that have only a small number of dirty pages.
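Concretely, the throttle loop in balance_dirty_pages() gains an early exit of roughly this shape (a readable restatement of the hunk in the diff below, not new logic):

	if (bdi_dirty <= bdi_stat_error(bdi))
		break;	/* this bdi is nearly clean: let its dirtiers proceed */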
The number chosen (bdi_stat_error pages) is not enough to let the local disk run at optimal throughput, but it is enough to make the system responsive on a broken NFS mount. The user can then kill the dirtiers on the NFS mount and increase the global dirty limit to bring up the local disk's throughput. This risks letting dirty pages grow much larger than the global dirty limit when there are 1000+ mounts, but that is very unlikely to happen, especially in low-memory profiles. Signed-off-by: Wu Fengguang --- mm/page-writeback.c | 13 +++++++++++++ 1 file changed, 13 insertions(+) (limited to 'mm/page-writeback.c') diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 155efca4c123..17403e3a7c89 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -1148,6 +1148,19 @@ pause: if (task_ratelimit) break; + /* + * In the case of an unresponding NFS server and the NFS dirty + * pages exceeds dirty_thresh, give the other good bdi's a pipe + * to go through, so that tasks on them still remain responsive. + * + * In theory 1 page is enough to keep the comsumer-producer + * pipe going: the flusher cleans 1 page => the task dirties 1 + * more page. However bdi_dirty has accounting errors. So use + * the larger and more IO friendly bdi_stat_error. + */ + if (bdi_dirty <= bdi_stat_error(bdi)) + break; + if (fatal_signal_pending(current)) break; } -- cgit v1.2.3 From 82e230a07de3812a5e87a27979f033dad59172e3 Mon Sep 17 00:00:00 2001 From: Wu Fengguang Date: Fri, 2 Dec 2011 18:21:51 -0600 Subject: writeback: set max_pause to lowest value on zero bdi_dirty Some traces show lots of bdi_dirty=0 lines where bdi_dirty would actually be some small value if not for the accounting errors in the per-cpu bdi stats. In this case the max pause time should really be set to the smallest (non-zero) value to avoid IO queue underrun and improve throughput. Signed-off-by: Wu Fengguang --- mm/page-writeback.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) (limited to 'mm/page-writeback.c') diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 17403e3a7c89..50f08241f981 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -989,8 +989,7 @@ static unsigned long bdi_max_pause(struct backing_dev_info *bdi, * * 8 serves as the safety ratio. */ - if (bdi_dirty) - t = min(t, bdi_dirty * HZ / (8 * bw + 1)); + t = min(t, bdi_dirty * HZ / (8 * bw + 1)); /* * The pause time will be settled within range (max_pause/4, max_pause). -- cgit v1.2.3 From 54848d73f9f254631303d6eab9b976855988b266 Mon Sep 17 00:00:00 2001 From: Wu Fengguang Date: Tue, 5 Apr 2011 13:21:19 -0600 Subject: writeback: charge leaked page dirties to active tasks It is a years-long problem that a large number of short-lived dirtiers (eg. gcc instances in a fast kernel build) may starve long-running dirtiers (eg. dd) as well as push the dirty pages to the global hard limit. The solution is to charge the pages dirtied by the exited gcc to the other random dirtying tasks. It sounds imperfect, but it should behave well enough in practice: throttled tasks are not actually running, so the tasks that are running are more likely to pick up the leaked count and get throttled, therefore promoting an equal spread.
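To get a feel for the scale of the leak (illustrative numbers; 4KB pages and the initial nr_dirtied_pause of 128 >> (PAGE_SHIFT - 10) = 32 pages assumed): a build that spawns 10000 short-lived compiler processes, each exiting after dirtying 31 pages, dirties 10000 * 31 * 4KB ~= 1.2GB without a single balance_dirty_pages() call; with this patch each 31-page remainder is banked in the per-CPU dirty_throttle_leaks counter and charged to the next dirtier running on that CPU.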
Randy: fix compile error: 'dirty_throttle_leaks' undeclared in exit.c Acked-by: Jan Kara Acked-by: Peter Zijlstra Signed-off-by: Randy Dunlap Signed-off-by: Wu Fengguang --- include/linux/writeback.h | 2 ++ kernel/exit.c | 3 +++ mm/page-writeback.c | 27 +++++++++++++++++++++++++++ 3 files changed, 32 insertions(+) (limited to 'mm/page-writeback.c') diff --git a/include/linux/writeback.h b/include/linux/writeback.h index a378c295851f..05eaf5e3aad7 100644 --- a/include/linux/writeback.h +++ b/include/linux/writeback.h @@ -7,6 +7,8 @@ #include #include +DECLARE_PER_CPU(int, dirty_throttle_leaks); + /* * The 1/4 region under the global dirty thresh is for smooth dirty throttling: * diff --git a/kernel/exit.c b/kernel/exit.c index d0b7d988f873..d4aac24cc469 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -51,6 +51,7 @@ #include #include #include +#include #include #include @@ -1037,6 +1038,8 @@ NORET_TYPE void do_exit(long code) validate_creds_for_do_exit(tsk); preempt_disable(); + if (tsk->nr_dirtied) + __this_cpu_add(dirty_throttle_leaks, tsk->nr_dirtied); exit_rcu(); /* causes final put_task_struct in finish_task_switch(). */ tsk->state = TASK_DEAD; diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 50f08241f981..619c445fc03c 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -1214,6 +1214,22 @@ void set_page_dirty_balance(struct page *page, int page_mkwrite) static DEFINE_PER_CPU(int, bdp_ratelimits); +/* + * Normal tasks are throttled by + * loop { + * dirty tsk->nr_dirtied_pause pages; + * take a snap in balance_dirty_pages(); + * } + * However there is a worst case. If every task exit immediately when dirtied + * (tsk->nr_dirtied_pause - 1) pages, balance_dirty_pages() will never be + * called to throttle the page dirties. The solution is to save the not yet + * throttled page dirties in dirty_throttle_leaks on task exit and charge them + * randomly into the running tasks. This works well for the above worst case, + * as the new task will pick up and accumulate the old task's leaked dirty + * count and eventually get throttled. + */ +DEFINE_PER_CPU(int, dirty_throttle_leaks) = 0; + /** * balance_dirty_pages_ratelimited_nr - balance dirty memory state * @mapping: address_space which was dirtied @@ -1261,6 +1277,17 @@ void balance_dirty_pages_ratelimited_nr(struct address_space *mapping, ratelimit = 0; } } + /* + * Pick up the dirtied pages by the exited tasks. This avoids lots of + * short-lived tasks (eg. gcc invocations in a kernel build) escaping + * the dirty throttling and livelock other long-run dirtiers. + */ + p = &__get_cpu_var(dirty_throttle_leaks); + if (*p > 0 && current->nr_dirtied < ratelimit) { + nr_pages_dirtied = min(*p, ratelimit - current->nr_dirtied); + *p -= nr_pages_dirtied; + current->nr_dirtied += nr_pages_dirtied; + } preempt_enable(); if (unlikely(current->nr_dirtied >= ratelimit)) -- cgit v1.2.3 From d3bc1fef9389e409a772ea174a5e41a6f93d9b7b Mon Sep 17 00:00:00 2001 From: Wu Fengguang Date: Thu, 14 Apr 2011 07:52:37 -0600 Subject: writeback: fix dirtied pages accounting on sub-page writes When dd writes in 512-byte chunks, generic_perform_write() calls balance_dirty_pages_ratelimited() 8 times for the same page, but obviously the page is only dirtied once. Fix it by accounting tsk->nr_dirtied and bdp_ratelimits at page dirty time.
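The 8x over-count comes from the shape of the buffered write loop; a hedged sketch (not the actual mm/filemap.c code; 4KB pages assumed):

	/* sketch of generic_perform_write(): it loops once per copied segment */
	while (count) {
		bytes = min(PAGE_SIZE - offset, count);	/* 512 bytes per dd write */
		/* ->write_begin(), copy from user, ->write_end():
		 * the page transitions to PG_dirty at most once in here
		 */
		balance_dirty_pages_ratelimited(mapping); /* but this ran 8x/page */
	}

Tying tsk->nr_dirtied and bdp_ratelimits to account_page_dirtied() (see the diff below) makes the counters follow the actual PG_dirty transitions instead of the syscall granularity.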
Acked-by: Jan Kara Acked-by: Peter Zijlstra Signed-off-by: Wu Fengguang --- mm/page-writeback.c | 13 +++++-------- 1 file changed, 5 insertions(+), 8 deletions(-) (limited to 'mm/page-writeback.c') diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 619c445fc03c..5d1ef5d8613a 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -1258,8 +1258,6 @@ void balance_dirty_pages_ratelimited_nr(struct address_space *mapping, if (bdi->dirty_exceeded) ratelimit = min(ratelimit, 32 >> (PAGE_SHIFT - 10)); - current->nr_dirtied += nr_pages_dirtied; - preempt_disable(); /* * This prevents one CPU to accumulate too many dirtied pages without @@ -1270,12 +1268,9 @@ void balance_dirty_pages_ratelimited_nr(struct address_space *mapping, p = &__get_cpu_var(bdp_ratelimits); if (unlikely(current->nr_dirtied >= ratelimit)) *p = 0; - else { - *p += nr_pages_dirtied; - if (unlikely(*p >= ratelimit_pages)) { - *p = 0; - ratelimit = 0; - } + else if (unlikely(*p >= ratelimit_pages)) { + *p = 0; + ratelimit = 0; } /* * Pick up the dirtied pages by the exited tasks. This avoids lots of @@ -1768,6 +1763,8 @@ void account_page_dirtied(struct page *page, struct address_space *mapping) __inc_bdi_stat(mapping->backing_dev_info, BDI_RECLAIMABLE); __inc_bdi_stat(mapping->backing_dev_info, BDI_DIRTIED); task_io_account_write(PAGE_CACHE_SIZE); + current->nr_dirtied++; + this_cpu_inc(bdp_ratelimits); } } EXPORT_SYMBOL(account_page_dirtied); -- cgit v1.2.3 From 2f800fbd777b792de54187088df19a7df0251254 Mon Sep 17 00:00:00 2001 From: Wu Fengguang Date: Mon, 8 Aug 2011 15:22:00 -0600 Subject: writeback: fix dirtied pages accounting on redirty De-account the accumulative dirty counters on page redirty. Page redirties (very common in ext4) will introduce mismatch between counters (a) and (b) a) NR_DIRTIED, BDI_DIRTIED, tsk->nr_dirtied b) NR_WRITTEN, BDI_WRITTEN This will introduce systematic errors in balanced_rate and result in dirty page position errors (ie. the dirty pages are no longer balanced around the global/bdi setpoints). Acked-by: Jan Kara Acked-by: Peter Zijlstra Signed-off-by: Wu Fengguang --- include/linux/writeback.h | 2 ++ mm/page-writeback.c | 19 +++++++++++++++++++ 2 files changed, 21 insertions(+) (limited to 'mm/page-writeback.c') diff --git a/include/linux/writeback.h b/include/linux/writeback.h index 05eaf5e3aad7..b30419cd425e 100644 --- a/include/linux/writeback.h +++ b/include/linux/writeback.h @@ -197,6 +197,8 @@ void writeback_set_ratelimit(void); void tag_pages_for_writeback(struct address_space *mapping, pgoff_t start, pgoff_t end); +void account_page_redirty(struct page *page); + /* pdflush.c */ extern int nr_pdflush_threads; /* Global so it can be exported to sysctl read-only. */ diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 5d1ef5d8613a..96b3e7aa705c 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -1824,6 +1824,24 @@ int __set_page_dirty_nobuffers(struct page *page) } EXPORT_SYMBOL(__set_page_dirty_nobuffers); +/* + * Call this whenever redirtying a page, to de-account the dirty counters + * (NR_DIRTIED, BDI_DIRTIED, tsk->nr_dirtied), so that they match the written + * counters (NR_WRITTEN, BDI_WRITTEN) in long term. The mismatches will lead to + * systematic errors in balanced_dirty_ratelimit and the dirty pages position + * control. 
+ */ +void account_page_redirty(struct page *page) +{ + struct address_space *mapping = page->mapping; + if (mapping && mapping_cap_account_dirty(mapping)) { + current->nr_dirtied--; + dec_zone_page_state(page, NR_DIRTIED); + dec_bdi_stat(mapping->backing_dev_info, BDI_DIRTIED); + } +} +EXPORT_SYMBOL(account_page_redirty); + /* * When a writepage implementation decides that it doesn't want to write this * page for some reason, it should redirty the locked page via @@ -1832,6 +1850,7 @@ EXPORT_SYMBOL(__set_page_dirty_nobuffers); int redirty_page_for_writepage(struct writeback_control *wbc, struct page *page) { wbc->pages_skipped++; + account_page_redirty(page); return __set_page_dirty_nobuffers(page); } EXPORT_SYMBOL(redirty_page_for_writepage); -- cgit v1.2.3 From 83712358ba0a1497ce59a4f84ce4dd0f803fe6fc Mon Sep 17 00:00:00 2001 From: Wu Fengguang Date: Sat, 11 Jun 2011 19:25:42 -0600 Subject: writeback: dirty ratelimit - think time compensation Compensate the task's think time when computing the final pause time, so that ->dirty_ratelimit can be executed accurately.

    think time := time spent outside of balance_dirty_pages()

In the rare case that the task slept longer than the 200ms period time (resulting in a negative pause time), the sleep time will be compensated in the following periods, too, if it's less than 1 second. Accumulated errors are carefully avoided as long as the max pause area is not hit. Pseudo code:

    period = pages_dirtied / task_ratelimit;
    think = jiffies - dirty_paused_when;
    pause = period - think;

1) normal case: period > think

    pause = period - think
    dirty_paused_when = jiffies + pause
    nr_dirtied = 0

                     period time
              |===============================>|
                  think time      pause time
              |===============>|==============>|
        ------|----------------|---------------|------------------------
          dirty_paused_when   jiffies

2) no pause case: period <= think

    don't pause; reduce future pause time by:
    dirty_paused_when += period
    nr_dirtied = 0

                     period time
              |===============================>|
                          think time
              |===================================================>|
        ------|--------------------------------+-------------------|----
          dirty_paused_when                                  jiffies

Acked-by: Jan Kara Acked-by: Peter Zijlstra Signed-off-by: Wu Fengguang --- include/linux/sched.h | 1 + include/trace/events/writeback.h | 14 +++++++++++--- kernel/fork.c | 1 + mm/page-writeback.c | 36 ++++++++++++++++++++++++++++++++---- 4 files changed, 45 insertions(+), 7 deletions(-) (limited to 'mm/page-writeback.c') diff --git a/include/linux/sched.h b/include/linux/sched.h index 1c4f3e9b9bc5..984c3b295978 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1527,6 +1527,7 @@ struct task_struct { */ int nr_dirtied; int nr_dirtied_pause; + unsigned long dirty_paused_when; /* start of a write-and-pause period */ #ifdef CONFIG_LATENCYTOP int latency_record_count; diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h index 99d1d0decf88..8588a8918023 100644 --- a/include/trace/events/writeback.h +++ b/include/trace/events/writeback.h @@ -300,12 +300,13 @@ TRACE_EVENT(balance_dirty_pages, unsigned long dirty_ratelimit, unsigned long task_ratelimit, unsigned long dirtied, + unsigned long period, long pause, unsigned long start_time), TP_ARGS(bdi, thresh, bg_thresh, dirty, bdi_thresh, bdi_dirty, dirty_ratelimit, task_ratelimit, - dirtied, pause, start_time), + dirtied, period, pause, start_time), TP_STRUCT__entry( __array( char, bdi, 32) @@ -320,6 +321,8 @@ __field(unsigned int, dirtied_pause)
__field(unsigned long, paused) __field( long, pause) + __field(unsigned long, period) + __field( long, think) ), TP_fast_assign( @@ -336,6 +339,9 @@ TRACE_EVENT(balance_dirty_pages, __entry->task_ratelimit = KBps(task_ratelimit); __entry->dirtied = dirtied; __entry->dirtied_pause = current->nr_dirtied_pause; + __entry->think = current->dirty_paused_when == 0 ? 0 : + (long)(jiffies - current->dirty_paused_when) * 1000/HZ; + __entry->period = period * 1000 / HZ; __entry->pause = pause * 1000 / HZ; __entry->paused = (jiffies - start_time) * 1000 / HZ; ), @@ -346,7 +352,7 @@ TRACE_EVENT(balance_dirty_pages, "bdi_setpoint=%lu bdi_dirty=%lu " "dirty_ratelimit=%lu task_ratelimit=%lu " "dirtied=%u dirtied_pause=%u " - "paused=%lu pause=%ld", + "paused=%lu pause=%ld period=%lu think=%ld", __entry->bdi, __entry->limit, __entry->setpoint, @@ -358,7 +364,9 @@ TRACE_EVENT(balance_dirty_pages, __entry->dirtied, __entry->dirtied_pause, __entry->paused, /* ms */ - __entry->pause /* ms */ + __entry->pause, /* ms */ + __entry->period, /* ms */ + __entry->think /* ms */ ) ); diff --git a/kernel/fork.c b/kernel/fork.c index da4a6a10d088..f8668cf6a32d 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -1296,6 +1296,7 @@ static struct task_struct *copy_process(unsigned long clone_flags, p->nr_dirtied = 0; p->nr_dirtied_pause = 128 >> (PAGE_SHIFT - 10); + p->dirty_paused_when = 0; /* * Ok, make it visible to the rest of the system. diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 96b3e7aa705c..491932155825 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -1016,6 +1016,7 @@ static void balance_dirty_pages(struct address_space *mapping, unsigned long background_thresh; unsigned long dirty_thresh; unsigned long bdi_thresh; + long period; long pause = 0; long uninitialized_var(max_pause); bool dirty_exceeded = false; @@ -1026,6 +1027,8 @@ static void balance_dirty_pages(struct address_space *mapping, unsigned long start_time = jiffies; for (;;) { + unsigned long now = jiffies; + /* * Unstable writes are a feature of certain networked * filesystems (i.e. NFS) in which data may have been @@ -1045,8 +1048,11 @@ static void balance_dirty_pages(struct address_space *mapping, */ freerun = dirty_freerun_ceiling(dirty_thresh, background_thresh); - if (nr_dirty <= freerun) + if (nr_dirty <= freerun) { + current->dirty_paused_when = now; + current->nr_dirtied = 0; break; + } if (unlikely(!writeback_in_progress(bdi))) bdi_start_background_writeback(bdi); @@ -1104,10 +1110,21 @@ static void balance_dirty_pages(struct address_space *mapping, task_ratelimit = ((u64)dirty_ratelimit * pos_ratio) >> RATELIMIT_CALC_SHIFT; if (unlikely(task_ratelimit == 0)) { + period = max_pause; pause = max_pause; goto pause; } - pause = HZ * pages_dirtied / task_ratelimit; + period = HZ * pages_dirtied / task_ratelimit; + pause = period; + if (current->dirty_paused_when) + pause -= now - current->dirty_paused_when; + /* + * For less than 1s think time (ext3/4 may block the dirtier + * for up to 800ms from time to time on 1-HDD; so does xfs, + * however at much less frequency), try to compensate it in + * future periods by updating the virtual time; otherwise just + * do a reset, as it may be a light dirtier. 
+ */ if (unlikely(pause <= 0)) { trace_balance_dirty_pages(bdi, dirty_thresh, background_thresh, nr_dirty, bdi_thresh, bdi_dirty, dirty_ratelimit, task_ratelimit, pages_dirtied, + period, pause, start_time); + if (pause < -HZ) { + current->dirty_paused_when = now; + current->nr_dirtied = 0; + } else if (period) { + current->dirty_paused_when += period; + current->nr_dirtied = 0; + } pause = 1; /* avoid resetting nr_dirtied_pause below */ break; } pause = min(pause, max_pause); pause: trace_balance_dirty_pages(bdi, dirty_thresh, background_thresh, nr_dirty, bdi_thresh, bdi_dirty, dirty_ratelimit, task_ratelimit, pages_dirtied, + period, pause, start_time); __set_current_state(TASK_KILLABLE); io_schedule_timeout(pause); + current->dirty_paused_when = now + pause; + current->nr_dirtied = 0; + /* * This is typically equal to (nr_dirty < dirty_thresh) and can * also keep "1000+ dd on a slow USB stick" under control. @@ -1167,11 +1196,10 @@ pause: if (!dirty_exceeded && bdi->dirty_exceeded) bdi->dirty_exceeded = 0; - current->nr_dirtied = 0; if (pause == 0) { /* in freerun area */ current->nr_dirtied_pause = dirty_poll_interval(nr_dirty, dirty_thresh); - } else if (pause <= max_pause / 4 && + } else if (period <= max_pause / 4 && pages_dirtied >= current->nr_dirtied_pause) { current->nr_dirtied_pause = clamp_val( dirty_ratelimit * (max_pause / 2) / HZ, -- cgit v1.2.3 From 7ccb9ad5364d6ac0c803096c67e76a7545cf7a77 Mon Sep 17 00:00:00 2001 From: Wu Fengguang Date: Wed, 30 Nov 2011 11:08:55 -0600 Subject: writeback: max, min and target dirty pause time Control the pause time and the call intervals to balance_dirty_pages() with three parameters: 1) max_pause, limited by bdi_dirty and MAX_PAUSE 2) the target pause time, which grows with the number of dd tasks and is normally limited by max_pause/2 3) the minimal pause, set to half the target pause and used to skip short sleeps and accumulate them into bigger ones The typical behaviors after the patch: - if task_ratelimit ever falls far below dirty_ratelimit, the pause time will remain constant at max_pause and nr_dirtied_pause will fluctuate with task_ratelimit - in the normal cases, nr_dirtied_pause will remain stable (keeping pace with dirty_ratelimit) and the pause time will fluctuate with task_ratelimit In summary, someone has to fluctuate with task_ratelimit, because task_ratelimit = nr_dirtied_pause / pause We normally prefer a stable nr_dirtied_pause, until reaching max_pause. The notable behavior changes are: - in stable workloads, there will no longer be the sudden big trajectory switching of nr_dirtied_pause that concerned Peter. It will be as smooth as dirty_ratelimit and change proportionally with it (as always, assuming bdi bandwidth does not fluctuate across 2^N lines, otherwise nr_dirtied_pause will show up in 2+ parallel trajectories) - in the rare cases when something keeps task_ratelimit far below dirty_ratelimit, the smoothness can no longer be retained and nr_dirtied_pause will be "dancing" with task_ratelimit.
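A hedged numeric illustration of that identity (HZ=1000 assumed, numbers invented for clarity): with dirty_ratelimit at 6400 pages/s and the 10ms target pause, nr_dirtied_pause comes out at about 6400 * 0.01 = 64 pages; if task_ratelimit then sags to 3200 pages/s, the next pause stretches to 64/3200 s = 20ms while nr_dirtied_pause stays put: the batch size follows the smooth dirty_ratelimit and the pause absorbs the task_ratelimit jitter.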
This fixes a (not that destructive but still not good) bug that dirty_ratelimit gets brought down undesirably <= balanced_dirty_ratelimit is under estimated <= weakly executed task_ratelimit <= pause goes too large and gets trimmed down to max_pause <= nr_dirtied_pause (based on dirty_ratelimit) is set too large <= dirty_ratelimit being much larger than task_ratelimit - introduce min_pause to avoid small pause sleeps - when pause is trimmed down to max_pause, try to compensate it at the next pause time The "refactor" type of changes are: The max_pause equation is slightly transformed to make it slightly more efficient. We now scale target_pause by (N * 10ms) on 2^N concurrent tasks, which is effectively equal to the original scaling max_pause by (N * 20ms) because the original code does implicit target_pause ~= max_pause / 2. Based on the same implicit ratio, target_pause starts with 10ms on 1 dd. CC: Jan Kara CC: Peter Zijlstra Signed-off-by: Wu Fengguang --- mm/page-writeback.c | 125 ++++++++++++++++++++++++++++++++++------------------ 1 file changed, 81 insertions(+), 44 deletions(-) (limited to 'mm/page-writeback.c') diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 491932155825..5830991f261a 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -962,40 +962,81 @@ static unsigned long dirty_poll_interval(unsigned long dirty, return 1; } -static unsigned long bdi_max_pause(struct backing_dev_info *bdi, - unsigned long bdi_dirty) +static long bdi_max_pause(struct backing_dev_info *bdi, + unsigned long bdi_dirty) { - unsigned long bw = bdi->avg_write_bandwidth; - unsigned long hi = ilog2(bw); - unsigned long lo = ilog2(bdi->dirty_ratelimit); - unsigned long t; + long bw = bdi->avg_write_bandwidth; + long t; - /* target for 20ms max pause on 1-dd case */ - t = HZ / 50; + /* + * Limit pause time for small memory systems. If sleeping for too long + * time, a small pool of dirty/writeback pages may go empty and disk go + * idle. + * + * 8 serves as the safety ratio. + */ + t = bdi_dirty / (1 + bw / roundup_pow_of_two(1 + HZ / 8)); + t++; + + return min_t(long, t, MAX_PAUSE); +} + +static long bdi_min_pause(struct backing_dev_info *bdi, + long max_pause, + unsigned long task_ratelimit, + unsigned long dirty_ratelimit, + int *nr_dirtied_pause) +{ + long hi = ilog2(bdi->avg_write_bandwidth); + long lo = ilog2(bdi->dirty_ratelimit); + long t; /* target pause */ + long pause; /* estimated next pause */ + int pages; /* target nr_dirtied_pause */ + + /* target for 10ms pause on 1-dd case */ + t = max(1, HZ / 100); /* * Scale up pause time for concurrent dirtiers in order to reduce CPU * overheads. * - * (N * 20ms) on 2^N concurrent tasks. + * (N * 10ms) on 2^N concurrent tasks. */ if (hi > lo) - t += (hi - lo) * (20 * HZ) / 1024; + t += (hi - lo) * (10 * HZ) / 1024; /* - * Limit pause time for small memory systems. If sleeping for too long - * time, a small pool of dirty/writeback pages may go empty and disk go - * idle. + * This is a bit convoluted. We try to base the next nr_dirtied_pause + * on the much more stable dirty_ratelimit. However the next pause time + * will be computed based on task_ratelimit and the two rate limits may + * depart considerably at some time. Especially if task_ratelimit goes + * below dirty_ratelimit/2 and the target pause is max_pause, the next + * pause time will be max_pause*2 _trimmed down_ to max_pause. As a + * result task_ratelimit won't be executed faithfully, which could + * eventually bring down dirty_ratelimit. 
* - * 8 serves as the safety ratio. + * We apply two rules to fix it up: + * 1) try to estimate the next pause time and if necessary, use a lower + * nr_dirtied_pause so as not to exceed max_pause. When this happens, + * nr_dirtied_pause will be "dancing" with task_ratelimit. + * 2) limit the target pause time to max_pause/2, so that the normal + * small fluctuations of task_ratelimit won't trigger rule (1) and + * nr_dirtied_pause will remain as stable as dirty_ratelimit. */ - t = min(t, bdi_dirty * HZ / (8 * bw + 1)); + t = min(t, 1 + max_pause / 2); + pages = dirty_ratelimit * t / roundup_pow_of_two(HZ); + + pause = HZ * pages / (task_ratelimit + 1); + if (pause > max_pause) { + t = max_pause; + pages = task_ratelimit * t / roundup_pow_of_two(HZ); + } + *nr_dirtied_pause = pages; /* - * The pause time will be settled within range (max_pause/4, max_pause). - * Apply a minimal value of 4 to get a non-zero max_pause/4. + * The minimal pause time will normally be half the target pause time. */ - return clamp_val(t, 4, MAX_PAUSE); + return 1 + t / 2; } /* @@ -1017,11 +1058,13 @@ static void balance_dirty_pages(struct address_space *mapping, unsigned long dirty_thresh; unsigned long bdi_thresh; long period; - long pause = 0; - long uninitialized_var(max_pause); + long pause; + long max_pause; + long min_pause; + int nr_dirtied_pause; bool dirty_exceeded = false; unsigned long task_ratelimit; - unsigned long uninitialized_var(dirty_ratelimit); + unsigned long dirty_ratelimit; unsigned long pos_ratio; struct backing_dev_info *bdi = mapping->backing_dev_info; unsigned long start_time = jiffies; @@ -1051,6 +1094,8 @@ static void balance_dirty_pages(struct address_space *mapping, if (nr_dirty <= freerun) { current->dirty_paused_when = now; current->nr_dirtied = 0; + current->nr_dirtied_pause = + dirty_poll_interval(nr_dirty, dirty_thresh); break; } @@ -1101,14 +1146,17 @@ static void balance_dirty_pages(struct address_space *mapping, nr_dirty, bdi_thresh, bdi_dirty, start_time); - max_pause = bdi_max_pause(bdi, bdi_dirty); - dirty_ratelimit = bdi->dirty_ratelimit; pos_ratio = bdi_position_ratio(bdi, dirty_thresh, background_thresh, nr_dirty, bdi_thresh, bdi_dirty); task_ratelimit = ((u64)dirty_ratelimit * pos_ratio) >> RATELIMIT_CALC_SHIFT; + max_pause = bdi_max_pause(bdi, bdi_dirty); + min_pause = bdi_min_pause(bdi, max_pause, + task_ratelimit, dirty_ratelimit, + &nr_dirtied_pause); + if (unlikely(task_ratelimit == 0)) { period = max_pause; pause = max_pause; @@ -1125,7 +1173,7 @@ static void balance_dirty_pages(struct address_space *mapping, * future periods by updating the virtual time; otherwise just * do a reset, as it may be a light dirtier. 
*/ - if (unlikely(pause <= 0)) { + if (pause < min_pause) { trace_balance_dirty_pages(bdi, dirty_thresh, background_thresh, @@ -1136,7 +1184,7 @@ task_ratelimit, pages_dirtied, period, - pause, + min(pause, 0L), start_time); if (pause < -HZ) { current->dirty_paused_when = now; @@ -1144,11 +1192,15 @@ } else if (period) { current->dirty_paused_when += period; current->nr_dirtied = 0; - } - pause = 1; /* avoid resetting nr_dirtied_pause below */ + } else if (current->nr_dirtied_pause <= pages_dirtied) + current->nr_dirtied_pause += pages_dirtied; break; } - pause = min(pause, max_pause); + if (unlikely(pause > max_pause)) { + /* for occasional dropped task_ratelimit */ + now += min(pause - max_pause, max_pause); + pause = max_pause; + } pause: trace_balance_dirty_pages(bdi, @@ -1168,6 +1220,7 @@ current->dirty_paused_when = now + pause; current->nr_dirtied = 0; + current->nr_dirtied_pause = nr_dirtied_pause; /* * This is typically equal to (nr_dirty < dirty_thresh) and can @@ -1196,22 +1249,6 @@ if (!dirty_exceeded && bdi->dirty_exceeded) bdi->dirty_exceeded = 0; - if (pause == 0) { /* in freerun area */ - current->nr_dirtied_pause = - dirty_poll_interval(nr_dirty, dirty_thresh); - } else if (period <= max_pause / 4 && - pages_dirtied >= current->nr_dirtied_pause) { - current->nr_dirtied_pause = clamp_val( - dirty_ratelimit * (max_pause / 2) / HZ, - pages_dirtied + pages_dirtied / 8, - pages_dirtied * 4); - } else if (pause >= max_pause) { - current->nr_dirtied_pause = 1 | clamp_val( - dirty_ratelimit * (max_pause / 2) / HZ, - pages_dirtied / 4, - pages_dirtied - pages_dirtied / 8); - } - if (writeback_in_progress(bdi)) return; -- cgit v1.2.3 From 5b9b357435a51ff14835c06d8b00765a4c68f313 Mon Sep 17 00:00:00 2001 From: Wu Fengguang Date: Tue, 6 Dec 2011 13:17:17 -0600 Subject: writeback: avoid tiny dirty poll intervals The LKP tests see a big 56% regression for the fio_mmap_randwrite_64k case. Shaohua managed to root-cause it to the much smaller dirty pause times and hence the much more frequent invocations of the IO-less balance_dirty_pages(). Since fio_mmap_randwrite_64k effectively contains both reads and writes, the more frequent pauses triggered more idling in the cfq IO scheduler. The solution is to increase the pause time all the way up to the max 200ms in this case, which is found to restore most of the performance. This will help reduce CPU overheads in other cases, too. Note that I don't expect many performance-critical workloads to run this access pattern: the mmap read-on-write is rather inefficient and could be avoided by doing normal write syscalls. CC: Jan Kara CC: Peter Zijlstra Reported-by: Li Shaohua Tested-by: Li Shaohua Signed-off-by: Wu Fengguang --- mm/page-writeback.c | 25 ++++++++++++++++++++++++- 1 file changed, 24 insertions(+), 1 deletion(-) (limited to 'mm/page-writeback.c') diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 5830991f261a..422cf4edab47 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -41,6 +41,12 @@ */ #define MAX_PAUSE max(HZ/5, 1) +/* + * Try to keep balance_dirty_pages() call intervals higher than this many pages + * by raising pause time to max_pause when falls below it. + */ +#define DIRTY_POLL_THRESH (128 >> (PAGE_SHIFT - 10)) + /* * Estimate write bandwidth at 200ms intervals.
*/ @@ -1026,6 +1032,23 @@ static long bdi_min_pause(struct backing_dev_info *bdi, t = min(t, 1 + max_pause / 2); pages = dirty_ratelimit * t / roundup_pow_of_two(HZ); + /* + * Tiny nr_dirtied_pause is found to hurt I/O performance in the test + * case fio-mmap-randwrite-64k, which does 16*{sync read, async write}. + * When the 16 consecutive reads are often interrupted by some dirty + * throttling pause during the async writes, cfq will go into idles + * (deadline is fine). So push nr_dirtied_pause as high as possible + * until reaches DIRTY_POLL_THRESH=32 pages. + */ + if (pages < DIRTY_POLL_THRESH) { + t = max_pause; + pages = dirty_ratelimit * t / roundup_pow_of_two(HZ); + if (pages > DIRTY_POLL_THRESH) { + pages = DIRTY_POLL_THRESH; + t = HZ * DIRTY_POLL_THRESH / dirty_ratelimit; + } + } + pause = HZ * pages / (task_ratelimit + 1); if (pause > max_pause) { t = max_pause; @@ -1036,7 +1059,7 @@ static long bdi_min_pause(struct backing_dev_info *bdi, /* * The minimal pause time will normally be half the target pause time. */ - return 1 + t / 2; + return pages >= DIRTY_POLL_THRESH ? 1 + t / 2 : t; } /* -- cgit v1.2.3 From 82791940545be38810dfd5e03ee701e749f04aab Mon Sep 17 00:00:00 2001 From: Wu Fengguang Date: Sat, 3 Dec 2011 21:26:01 -0600 Subject: writeback: do strict bdi dirty_exceeded This helps to reduce dirty throttling polls and hence CPU overheads. bdi->dirty_exceeded typically only helps when suddenly starting 100+ dd's on a disk, in which case the dd's may need to poll balance_dirty_pages() earlier than tsk->nr_dirtied_pause. CC: Jan Kara CC: Peter Zijlstra Signed-off-by: Wu Fengguang --- mm/page-writeback.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'mm/page-writeback.c') diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 422cf4edab47..936dc7b61dc6 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -1160,7 +1160,7 @@ static void balance_dirty_pages(struct address_space *mapping, bdi_stat(bdi, BDI_WRITEBACK); } - dirty_exceeded = (bdi_dirty > bdi_thresh) || + dirty_exceeded = (bdi_dirty > bdi_thresh) && (nr_dirty > dirty_thresh); if (dirty_exceeded && !bdi->dirty_exceeded) bdi->dirty_exceeded = 1; -- cgit v1.2.3 From bdaac4902a8225bf247ecaeac46c4b2980cc70e5 Mon Sep 17 00:00:00 2001 From: Wu Fengguang Date: Wed, 3 Aug 2011 14:30:36 -0600 Subject: writeback: balanced_rate cannot exceed write bandwidth Add an upper limit to balanced_rate according to the inequality below. This filters out some rare but huge singular points, which at least enables more readable gnuplot figures. When there are N dd dirtiers,

    balanced_dirty_ratelimit = write_bw / N

so it holds that

    balanced_dirty_ratelimit <= write_bw

The singular points originate from dirty_rate in the formula below:

    balanced_dirty_ratelimit = task_ratelimit * write_bw / dirty_rate

where

    dirty_rate = (number of page dirties in the past 200ms) / 200ms

In the extreme case, if all dd tasks suddenly get blocked on something else and hence no pages are dirtied at all, dirty_rate will be 0 and balanced_dirty_ratelimit will be inf. This could happen in reality. Note that these huge singular points are not a real threat, since they are _guaranteed_ to be filtered out by the min(balanced_dirty_ratelimit, task_ratelimit) line in bdi_update_dirty_ratelimit(). task_ratelimit is based on the number of dirty pages, which will never _suddenly_ fly away like balanced_dirty_ratelimit. So any weirdly large balanced_dirty_ratelimit will be cut down to the level of task_ratelimit.
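For a concrete (invented) instance of such a singular point: 10 dd's against a 100MB/s disk normally yield balanced_dirty_ratelimit ~= 10MB/s each; if all of them happen to block on something else for one 200ms sampling window, dirty_rate drops toward 0 and task_ratelimit * write_bw / dirty_rate shoots toward infinity. The new clamp caps that one sample at write_bw = 100MB/s before it can distort the estimate or the plots.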
There won't be tiny singular points though, as long as the dirty pages lie inside the dirty throttling region (above the freerun region). Because there the dd tasks will be throttled by balance_dirty_pages() and won't be able to suddenly dirty many more pages than average. Acked-by: Jan Kara Acked-by: Peter Zijlstra Signed-off-by: Wu Fengguang --- mm/page-writeback.c | 5 +++++ 1 file changed, 5 insertions(+) (limited to 'mm/page-writeback.c') diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 936dc7b61dc6..0ae2008eb540 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -822,6 +822,11 @@ static void bdi_update_dirty_ratelimit(struct backing_dev_info *bdi, */ balanced_dirty_ratelimit = div_u64((u64)task_ratelimit * write_bw, dirty_rate | 1); + /* + * balanced_dirty_ratelimit ~= (write_bw / N) <= write_bw + */ + if (unlikely(balanced_dirty_ratelimit > write_bw)) + balanced_dirty_ratelimit = write_bw; /* * We could safely do this and return immediately: -- cgit v1.2.3 From ff01bb4832651c6d25ac509a06a10fcbd75c461c Mon Sep 17 00:00:00 2001 From: Al Viro Date: Fri, 16 Sep 2011 02:31:11 -0400 Subject: fs: move code out of buffer.c Move invalidate_bdev, block_sync_page into fs/block_dev.c. Export kill_bdev as well, so brd doesn't have to open code it. Reduce buffer_head.h requirement accordingly. Removed a rather large comment from invalidate_bdev, as it looked a bit obsolete to bother moving. The small comment replacing it says enough. Signed-off-by: Nick Piggin Cc: Al Viro Cc: Christoph Hellwig Signed-off-by: Andrew Morton Signed-off-by: Al Viro --- arch/powerpc/sysdev/axonram.c | 1 - block/genhd.c | 1 - block/ioctl.c | 2 +- drivers/block/amiflop.c | 2 +- drivers/block/brd.c | 9 ++++---- drivers/block/floppy.c | 1 - drivers/block/loop.c | 1 - drivers/cdrom/cdrom.c | 1 - drivers/md/dm.c | 1 - drivers/md/md.c | 3 +-- drivers/mtd/devices/block2mtd.c | 1 - drivers/s390/block/dasd.c | 1 - drivers/scsi/scsicam.c | 1 - drivers/tty/sysrq.c | 2 +- fs/block_dev.c | 30 ++++++++++++++++++++++--- fs/buffer.c | 50 ----------------------------------------- fs/cachefiles/interface.c | 1 - fs/cramfs/inode.c | 1 - fs/fs-writeback.c | 1 - fs/libfs.c | 2 +- fs/quota/dquot.c | 1 - fs/quota/quota.c | 1 - fs/splice.c | 1 - fs/sync.c | 1 - include/linux/fs.h | 3 +++ kernel/power/swap.c | 1 - mm/page-writeback.c | 2 +- mm/swap_state.c | 1 - 28 files changed, 40 insertions(+), 83 deletions(-) (limited to 'mm/page-writeback.c') diff --git a/arch/powerpc/sysdev/axonram.c b/arch/powerpc/sysdev/axonram.c index ba4271919062..1c16141c031c 100644 --- a/arch/powerpc/sysdev/axonram.c +++ b/arch/powerpc/sysdev/axonram.c @@ -25,7 +25,6 @@ #include #include -#include #include #include #include diff --git a/block/genhd.c b/block/genhd.c index bf443a71b93e..b7d1a0e42686 100644 --- a/block/genhd.c +++ b/block/genhd.c @@ -15,7 +15,6 @@ #include #include #include -#include #include #include #include diff --git a/block/ioctl.c b/block/ioctl.c index ca939fc1030f..91e7b19c86f4 100644 --- a/block/ioctl.c +++ b/block/ioctl.c @@ -5,7 +5,7 @@ #include #include #include -#include +#include #include #include diff --git a/drivers/block/amiflop.c b/drivers/block/amiflop.c index 8eba86bba599..386146d792d1 100644 --- a/drivers/block/amiflop.c +++ b/drivers/block/amiflop.c @@ -63,7 +63,7 @@ #include #include #include -#include +#include #include #include #include diff --git a/drivers/block/brd.c b/drivers/block/brd.c index d22119d49e53..ec246437f5a4 100644 --- a/drivers/block/brd.c +++ b/drivers/block/brd.c @@ -17,7 +17,7 @@
#include #include #include -#include /* invalidate_bh_lrus() */ +#include #include #include @@ -402,14 +402,13 @@ static int brd_ioctl(struct block_device *bdev, fmode_t mode, error = -EBUSY; if (bdev->bd_openers <= 1) { /* - * Invalidate the cache first, so it isn't written - * back to the device. + * Kill the cache first, so it isn't written back to the + * device. * * Another thread might instantiate more buffercache here, * but there is not much we can do to close that race. */ - invalidate_bh_lrus(); - truncate_inode_pages(bdev->bd_inode->i_mapping, 0); + kill_bdev(bdev); brd_free_pages(brd); error = 0; } diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c index 9955a53733b2..510fb10ec45a 100644 --- a/drivers/block/floppy.c +++ b/drivers/block/floppy.c @@ -188,7 +188,6 @@ static int print_unex = 1; #include #include #include -#include /* for invalidate_buffers() */ #include #include #include diff --git a/drivers/block/loop.c b/drivers/block/loop.c index 1e888c9e85b3..f00257782fcc 100644 --- a/drivers/block/loop.c +++ b/drivers/block/loop.c @@ -69,7 +69,6 @@ #include #include #include -#include /* for invalidate_bdev() */ #include #include #include diff --git a/drivers/cdrom/cdrom.c b/drivers/cdrom/cdrom.c index f997c27d79e2..2118211aff99 100644 --- a/drivers/cdrom/cdrom.c +++ b/drivers/cdrom/cdrom.c @@ -267,7 +267,6 @@ #include #include -#include #include #include #include diff --git a/drivers/md/dm.c b/drivers/md/dm.c index 4720f68f817e..b89c548ec3f8 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -14,7 +14,6 @@ #include #include #include -#include #include #include #include diff --git a/drivers/md/md.c b/drivers/md/md.c index f47f1f8ac44b..5d1b6762f108 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -36,8 +36,7 @@ #include #include #include -#include -#include /* for invalidate_bdev */ +#include #include #include #include diff --git a/drivers/mtd/devices/block2mtd.c b/drivers/mtd/devices/block2mtd.c index b78f23169d4e..ebeabc727f70 100644 --- a/drivers/mtd/devices/block2mtd.c +++ b/drivers/mtd/devices/block2mtd.c @@ -14,7 +14,6 @@ #include #include #include -#include #include #include #include diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c index 65894f05a801..2de5b60ee8c8 100644 --- a/drivers/s390/block/dasd.c +++ b/drivers/s390/block/dasd.c @@ -17,7 +17,6 @@ #include #include #include -#include #include #include #include diff --git a/drivers/scsi/scsicam.c b/drivers/scsi/scsicam.c index 6803b1e26ecc..92d24d6dcb39 100644 --- a/drivers/scsi/scsicam.c +++ b/drivers/scsi/scsicam.c @@ -16,7 +16,6 @@ #include #include #include -#include #include #include diff --git a/drivers/tty/sysrq.c b/drivers/tty/sysrq.c index 43db715f1502..7867b7c4538e 100644 --- a/drivers/tty/sysrq.c +++ b/drivers/tty/sysrq.c @@ -32,7 +32,6 @@ #include #include #include -#include /* for fsync_bdev() */ #include #include #include @@ -41,6 +40,7 @@ #include #include #include +#include #include #include diff --git a/fs/block_dev.c b/fs/block_dev.c index 7866cdd9fe70..69a5b6fbee2b 100644 --- a/fs/block_dev.c +++ b/fs/block_dev.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include #include @@ -25,6 +26,7 @@ #include #include #include +#include #include #include "internal.h" @@ -82,13 +84,35 @@ static sector_t max_block(struct block_device *bdev) } /* Kill _all_ buffers and pagecache , dirty or not.. 
*/ -static void kill_bdev(struct block_device *bdev) +void kill_bdev(struct block_device *bdev) { - if (bdev->bd_inode->i_mapping->nrpages == 0) + struct address_space *mapping = bdev->bd_inode->i_mapping; + + if (mapping->nrpages == 0) return; + invalidate_bh_lrus(); - truncate_inode_pages(bdev->bd_inode->i_mapping, 0); + truncate_inode_pages(mapping, 0); } +EXPORT_SYMBOL(kill_bdev); + +/* Invalidate clean unused buffers and pagecache. */ +void invalidate_bdev(struct block_device *bdev) +{ + struct address_space *mapping = bdev->bd_inode->i_mapping; + + if (mapping->nrpages == 0) + return; + + invalidate_bh_lrus(); + lru_add_drain_all(); /* make sure all lru add caches are flushed */ + invalidate_mapping_pages(mapping, 0, -1); + /* 99% of the time, we don't need to flush the cleancache on the bdev. + * But, for the strange corners, lets be cautious + */ + cleancache_flush_inode(mapping); +} +EXPORT_SYMBOL(invalidate_bdev); int set_blocksize(struct block_device *bdev, int size) { diff --git a/fs/buffer.c b/fs/buffer.c index 19d8eb7fdc81..1a30db77af32 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -41,7 +41,6 @@ #include #include #include -#include static int fsync_buffers_list(spinlock_t *lock, struct list_head *list); @@ -231,55 +230,6 @@ out: return ret; } -/* If invalidate_buffers() will trash dirty buffers, it means some kind - of fs corruption is going on. Trashing dirty data always imply losing - information that was supposed to be just stored on the physical layer - by the user. - - Thus invalidate_buffers in general usage is not allwowed to trash - dirty buffers. For example ioctl(FLSBLKBUF) expects dirty data to - be preserved. These buffers are simply skipped. - - We also skip buffers which are still in use. For example this can - happen if a userspace program is reading the block device. - - NOTE: In the case where the user removed a removable-media-disk even if - there's still dirty data not synced on disk (due a bug in the device driver - or due an error of the user), by not destroying the dirty buffers we could - generate corruption also on the next media inserted, thus a parameter is - necessary to handle this case in the most safe way possible (trying - to not corrupt also the new disk inserted with the data belonging to - the old now corrupted disk). Also for the ramdisk the natural thing - to do in order to release the ramdisk memory is to destroy dirty buffers. - - These are two special cases. Normal usage imply the device driver - to issue a sync on the device (without waiting I/O completion) and - then an invalidate_buffers call that doesn't trash dirty buffers. - - For handling cache coherency with the blkdev pagecache the 'update' case - is been introduced. It is needed to re-read from disk any pinned - buffer. NOTE: re-reading from disk is destructive so we can do it only - when we assume nobody is changing the buffercache under our I/O and when - we think the disk contains more recent information than the buffercache. - The update == 1 pass marks the buffers we need to update, the update == 2 - pass does the actual I/O. */ -void invalidate_bdev(struct block_device *bdev) -{ - struct address_space *mapping = bdev->bd_inode->i_mapping; - - if (mapping->nrpages == 0) - return; - - invalidate_bh_lrus(); - lru_add_drain_all(); /* make sure all lru add caches are flushed */ - invalidate_mapping_pages(mapping, 0, -1); - /* 99% of the time, we don't need to flush the cleancache on the bdev. 
- * But, for the strange corners, lets be cautious - */ - cleancache_flush_inode(mapping); -} -EXPORT_SYMBOL(invalidate_bdev); - /* * Kick the writeback threads then try to free up some ZONE_NORMAL memory. */ diff --git a/fs/cachefiles/interface.c b/fs/cachefiles/interface.c index 1064805e653b..67bef6d01484 100644 --- a/fs/cachefiles/interface.c +++ b/fs/cachefiles/interface.c @@ -11,7 +11,6 @@ #include #include -#include #include "internal.h" #define list_to_page(head) (list_entry((head)->prev, struct page, lru)) diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c index 739fb59bcdc2..c37adb222113 100644 --- a/fs/cramfs/inode.c +++ b/fs/cramfs/inode.c @@ -20,7 +20,6 @@ #include #include #include -#include #include #include diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c index 517f211a3bd4..80a4574028f1 100644 --- a/fs/fs-writeback.c +++ b/fs/fs-writeback.c @@ -25,7 +25,6 @@ #include #include #include -#include #include #include "internal.h" diff --git a/fs/libfs.c b/fs/libfs.c index f6d411eef1e7..5b2dbb3ba4fc 100644 --- a/fs/libfs.c +++ b/fs/libfs.c @@ -12,7 +12,7 @@ #include #include #include -#include +#include /* sync_mapping_buffers */ #include diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c index 5b572c89e6c4..5d81e92daf83 100644 --- a/fs/quota/dquot.c +++ b/fs/quota/dquot.c @@ -73,7 +73,6 @@ #include #include #include -#include #include #include #include "../internal.h" /* ugh */ diff --git a/fs/quota/quota.c b/fs/quota/quota.c index 35f4b0ecdeb3..7898cd688a00 100644 --- a/fs/quota/quota.c +++ b/fs/quota/quota.c @@ -13,7 +13,6 @@ #include #include #include -#include #include #include #include diff --git a/fs/splice.c b/fs/splice.c index fa2defa8afcf..1ec0493266b3 100644 --- a/fs/splice.c +++ b/fs/splice.c @@ -25,7 +25,6 @@ #include #include #include -#include #include #include #include diff --git a/fs/sync.c b/fs/sync.c index 101b8ef901d7..f3501ef39235 100644 --- a/fs/sync.c +++ b/fs/sync.c @@ -14,7 +14,6 @@ #include #include #include -#include #include #include "internal.h" diff --git a/include/linux/fs.h b/include/linux/fs.h index cec429d76ab0..e853ba5eddd4 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -2092,6 +2092,7 @@ extern void bd_forget(struct inode *inode); extern void bdput(struct block_device *); extern void invalidate_bdev(struct block_device *); extern int sync_blockdev(struct block_device *bdev); +extern void kill_bdev(struct block_device *); extern struct super_block *freeze_bdev(struct block_device *); extern void emergency_thaw_all(void); extern int thaw_bdev(struct block_device *bdev, struct super_block *sb); @@ -2099,6 +2100,7 @@ extern int fsync_bdev(struct block_device *); #else static inline void bd_forget(struct inode *inode) {} static inline int sync_blockdev(struct block_device *bdev) { return 0; } +static inline void kill_bdev(struct block_device *bdev) {} static inline void invalidate_bdev(struct block_device *bdev) {} static inline struct super_block *freeze_bdev(struct block_device *sb) @@ -2415,6 +2417,7 @@ extern ssize_t blkdev_aio_write(struct kiocb *iocb, const struct iovec *iov, unsigned long nr_segs, loff_t pos); extern int blkdev_fsync(struct file *filp, loff_t start, loff_t end, int datasync); +extern void block_sync_page(struct page *page); /* fs/splice.c */ extern ssize_t generic_file_splice_read(struct file *, loff_t *, diff --git a/kernel/power/swap.c b/kernel/power/swap.c index 11a594c4ba25..3739ecced085 100644 --- a/kernel/power/swap.c +++ b/kernel/power/swap.c @@ -18,7 +18,6 @@ #include #include #include -#include #include 
#include #include diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 50f08241f981..8616ef3025a4 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -32,7 +32,7 @@ #include #include #include -#include +#include /* __set_page_dirty_buffers */ #include #include diff --git a/mm/swap_state.c b/mm/swap_state.c index 78cc4d1f6cce..ea6b32d61873 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -13,7 +13,6 @@ #include #include #include -#include #include #include #include -- cgit v1.2.3 From 1edf223485c42c99655dcd001db1e46ad5e5d2d7 Mon Sep 17 00:00:00 2001 From: Johannes Weiner Date: Tue, 10 Jan 2012 15:06:57 -0800 Subject: mm/page-writeback.c: make determine_dirtyable_memory static again The tracing ring-buffer used this function briefly, but not anymore. Make it local to the writeback code again. Also, move the function so that no forward declaration needs to be reintroduced. Signed-off-by: Johannes Weiner Acked-by: Mel Gorman Reviewed-by: Michal Hocko Cc: Wu Fengguang Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- include/linux/writeback.h | 2 - mm/page-writeback.c | 122 +++++++++++++++++++++++----------------------- 2 files changed, 60 insertions(+), 64 deletions(-) (limited to 'mm/page-writeback.c') diff --git a/include/linux/writeback.h b/include/linux/writeback.h index a378c295851f..34a005515fef 100644 --- a/include/linux/writeback.h +++ b/include/linux/writeback.h @@ -138,8 +138,6 @@ extern int vm_highmem_is_dirtyable; extern int block_dump; extern int laptop_mode; -extern unsigned long determine_dirtyable_memory(void); - extern int dirty_background_ratio_handler(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 8616ef3025a4..c081bf62202b 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -129,6 +129,66 @@ unsigned long global_dirty_limit; */ static struct prop_descriptor vm_completions; +/* + * Work out the current dirty-memory clamping and background writeout + * thresholds. + * + * The main aim here is to lower them aggressively if there is a lot of mapped + * memory around. To avoid stressing page reclaim with lots of unreclaimable + * pages. It is better to clamp down on writers than to start swapping, and + * performing lots of scanning. + * + * We only allow 1/2 of the currently-unmapped memory to be dirtied. + * + * We don't permit the clamping level to fall below 5% - that is getting rather + * excessive. + * + * We make sure that the background writeout level is below the adjusted + * clamping level. + */ +static unsigned long highmem_dirtyable_memory(unsigned long total) +{ +#ifdef CONFIG_HIGHMEM + int node; + unsigned long x = 0; + + for_each_node_state(node, N_HIGH_MEMORY) { + struct zone *z = + &NODE_DATA(node)->node_zones[ZONE_HIGHMEM]; + + x += zone_page_state(z, NR_FREE_PAGES) + + zone_reclaimable_pages(z); + } + /* + * Make sure that the number of highmem pages is never larger + * than the number of the total dirtyable memory. This can only + * occur in very strange VM situations but we want to make sure + * that this does not occur. + */ + return min(x, total); +#else + return 0; +#endif +} + +/** + * determine_dirtyable_memory - amount of memory that may be used + * + * Returns the numebr of pages that can currently be freed and used + * by the kernel for direct mappings. 
+ */ +static unsigned long determine_dirtyable_memory(void) +{ + unsigned long x; + + x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages(); + + if (!vm_highmem_is_dirtyable) + x -= highmem_dirtyable_memory(x); + + return x + 1; /* Ensure that we never return 0 */ +} + /* * couple the period to the dirty_ratio: * @@ -196,7 +256,6 @@ int dirty_ratio_handler(struct ctl_table *table, int write, return ret; } - int dirty_bytes_handler(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) @@ -291,67 +350,6 @@ int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned max_ratio) } EXPORT_SYMBOL(bdi_set_max_ratio); -/* - * Work out the current dirty-memory clamping and background writeout - * thresholds. - * - * The main aim here is to lower them aggressively if there is a lot of mapped - * memory around. To avoid stressing page reclaim with lots of unreclaimable - * pages. It is better to clamp down on writers than to start swapping, and - * performing lots of scanning. - * - * We only allow 1/2 of the currently-unmapped memory to be dirtied. - * - * We don't permit the clamping level to fall below 5% - that is getting rather - * excessive. - * - * We make sure that the background writeout level is below the adjusted - * clamping level. - */ - -static unsigned long highmem_dirtyable_memory(unsigned long total) -{ -#ifdef CONFIG_HIGHMEM - int node; - unsigned long x = 0; - - for_each_node_state(node, N_HIGH_MEMORY) { - struct zone *z = - &NODE_DATA(node)->node_zones[ZONE_HIGHMEM]; - - x += zone_page_state(z, NR_FREE_PAGES) + - zone_reclaimable_pages(z); - } - /* - * Make sure that the number of highmem pages is never larger - * than the number of the total dirtyable memory. This can only - * occur in very strange VM situations but we want to make sure - * that this does not occur. - */ - return min(x, total); -#else - return 0; -#endif -} - -/** - * determine_dirtyable_memory - amount of memory that may be used - * - * Returns the numebr of pages that can currently be freed and used - * by the kernel for direct mappings. - */ -unsigned long determine_dirtyable_memory(void) -{ - unsigned long x; - - x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages(); - - if (!vm_highmem_is_dirtyable) - x -= highmem_dirtyable_memory(x); - - return x + 1; /* Ensure that we never return 0 */ -} - static unsigned long dirty_freerun_ceiling(unsigned long thresh, unsigned long bg_thresh) { -- cgit v1.2.3 From ab8fabd46f811d5153d8a0cd2fac9a0d41fb593d Mon Sep 17 00:00:00 2001 From: Johannes Weiner Date: Tue, 10 Jan 2012 15:07:42 -0800 Subject: mm: exclude reserved pages from dirtyable memory Per-zone dirty limits try to distribute page cache pages allocated for writing across zones in proportion to the individual zone sizes, to reduce the likelihood of reclaim having to write back individual pages from the LRU lists in order to make progress. This patch: The amount of dirtyable pages should not include the full number of free pages: there is a number of reserved pages that the page allocator and kswapd always try to keep free. 
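A rough illustrative calculation (numbers assumed, not measured): on a machine where the per-zone high watermarks plus lowmem reserves sum to 64MB, dirty_balance_reserve removes 16384 pages (4KB pages) from the dirtyable base, so with dirty_ratio=20 the global dirty threshold drops by about 3277 pages (~13MB), keeping the flusher's setpoint that much further away from the reserved region sketched below.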
The closer (reclaimable pages - dirty pages) is to the number of reserved pages, the more likely it becomes for reclaim to run into dirty pages:

        +----------+ ---
        |   anon   |  |
        +----------+  |
        |          |  |
        |          |  -- dirty limit new    -- flusher new
        |   file   |  |
        |          |  |
        |          |  -- dirty limit old    -- flusher old
        |          |  |
        +----------+ --- reclaim
        | reserved |
        +----------+
        |  kernel  |
        +----------+

This patch introduces a per-zone dirty reserve that takes both the lowmem reserve as well as the high watermark of the zone into account, and a global sum of those per-zone values that is subtracted from the global amount of dirtyable pages. The lowmem reserve is unavailable to page cache allocations and kswapd tries to keep the high watermark free. We don't want to end up in a situation where reclaim has to clean pages in order to balance zones. Not treating reserved pages as dirtyable on a global level is only a conceptual fix. In reality, dirty pages are not distributed equally across zones and reclaim runs into dirty pages on a regular basis. But it is important to get this right before tackling the problem on a per-zone level, where the distance between reclaim and the dirty pages is mostly much smaller in absolute numbers. [akpm@linux-foundation.org: fix highmem build] Signed-off-by: Johannes Weiner Reviewed-by: Rik van Riel Reviewed-by: Michal Hocko Reviewed-by: Minchan Kim Acked-by: Mel Gorman Cc: KAMEZAWA Hiroyuki Cc: Christoph Hellwig Cc: Wu Fengguang Cc: Dave Chinner Cc: Jan Kara Cc: Shaohua Li Cc: Chris Mason Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- include/linux/mmzone.h | 6 ++++++ include/linux/swap.h | 1 + mm/page-writeback.c | 5 +++-- mm/page_alloc.c | 19 +++++++++++++++++++ 4 files changed, 29 insertions(+), 2 deletions(-) (limited to 'mm/page-writeback.c') diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 3ac040f19369..ca6ca92418a6 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -317,6 +317,12 @@ struct zone { */ unsigned long lowmem_reserve[MAX_NR_ZONES]; + /* + * This is a per-zone reserve of pages that should not be + * considered dirtyable memory.
+ */ + unsigned long dirty_balance_reserve; + #ifdef CONFIG_NUMA int node; /*
diff --git a/include/linux/swap.h b/include/linux/swap.h index 1e22e126d2ac..06061a7f8e69 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -207,6 +207,7 @@ struct swap_list_t { /* linux/mm/page_alloc.c */ extern unsigned long totalram_pages; extern unsigned long totalreserve_pages; +extern unsigned long dirty_balance_reserve; extern unsigned int nr_free_buffer_pages(void); extern unsigned int nr_free_pagecache_pages(void);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c index c081bf62202b..9ab6de82d8e6 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -157,7 +157,7 @@ static unsigned long highmem_dirtyable_memory(unsigned long total) &NODE_DATA(node)->node_zones[ZONE_HIGHMEM]; x += zone_page_state(z, NR_FREE_PAGES) + - zone_reclaimable_pages(z); + zone_reclaimable_pages(z) - z->dirty_balance_reserve; } /* * Make sure that the number of highmem pages is never larger @@ -181,7 +181,8 @@ static unsigned long determine_dirtyable_memory(void) { unsigned long x; - x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages(); + x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages() - + dirty_balance_reserve; if (!vm_highmem_is_dirtyable) x -= highmem_dirtyable_memory(x);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 93baebcc06f3..2cb9eb71e282 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -97,6 +97,14 @@ EXPORT_SYMBOL(node_states); unsigned long totalram_pages __read_mostly; unsigned long totalreserve_pages __read_mostly; +/* + * When calculating the number of globally allowed dirty pages, there + * is a certain number of per-zone reserves that should not be + * considered dirtyable memory. This is the sum of those reserves + * over all existing zones that contribute dirtyable memory. + */ +unsigned long dirty_balance_reserve __read_mostly; + int percpu_pagelist_fraction; gfp_t gfp_allowed_mask __read_mostly = GFP_BOOT_MASK; @@ -4822,8 +4830,19 @@ static void calculate_totalreserve_pages(void) if (max > zone->present_pages) max = zone->present_pages; reserve_pages += max; + /* + * Lowmem reserves are not available to + * GFP_HIGHUSER page cache allocations and + * kswapd tries to balance zones to their high + * watermark. As a result, neither should be + * regarded as dirtyable memory, to prevent a + * situation where reclaim has to clean pages + * in order to balance the zones. + */ + zone->dirty_balance_reserve = max; } } + dirty_balance_reserve = reserve_pages; totalreserve_pages = reserve_pages; }
-- cgit v1.2.3
From ccafa2879fb8d13b8031337a8743eac4189e5d6e Mon Sep 17 00:00:00 2001 From: Johannes Weiner Date: Tue, 10 Jan 2012 15:07:44 -0800
Subject: mm: writeback: cleanups in preparation for per-zone dirty limits

The next patch will introduce per-zone dirty limiting functions in addition to the traditional global dirty limiting.

Rename determine_dirtyable_memory() to global_dirtyable_memory() before adding the zone-specific version, and fix up its documentation.

Also, move the functions that determine the dirtyable memory next to the function that calculates the dirty limit based on it, so that their relationship is more apparent and they can be commented on as a group.
Signed-off-by: Johannes Weiner Reviewed-by: Minchan Kim Acked-by: Mel Gorman Reviewed-by: Michal Hocko Cc: KAMEZAWA Hiroyuki Cc: Christoph Hellwig Cc: Wu Fengguang Cc: Dave Chinner Cc: Jan Kara Cc: Shaohua Li Cc: Rik van Riel Cc: Chris Mason Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- mm/page-writeback.c | 93 +++++++++++++++++++++++++++-------------------------- 1 file changed, 47 insertions(+), 46 deletions(-) (limited to 'mm/page-writeback.c') diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 9ab6de82d8e6..433fa990fe8b 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -146,6 +146,7 @@ static struct prop_descriptor vm_completions; * We make sure that the background writeout level is below the adjusted * clamping level. */ + static unsigned long highmem_dirtyable_memory(unsigned long total) { #ifdef CONFIG_HIGHMEM @@ -172,12 +173,12 @@ static unsigned long highmem_dirtyable_memory(unsigned long total) } /** - * determine_dirtyable_memory - amount of memory that may be used + * global_dirtyable_memory - number of globally dirtyable pages * - * Returns the numebr of pages that can currently be freed and used - * by the kernel for direct mappings. + * Returns the global number of pages potentially available for dirty + * page cache. This is the base value for the global dirty limits. */ -static unsigned long determine_dirtyable_memory(void) +unsigned long global_dirtyable_memory(void) { unsigned long x; @@ -190,6 +191,47 @@ static unsigned long determine_dirtyable_memory(void) return x + 1; /* Ensure that we never return 0 */ } +/* + * global_dirty_limits - background-writeback and dirty-throttling thresholds + * + * Calculate the dirty thresholds based on sysctl parameters + * - vm.dirty_background_ratio or vm.dirty_background_bytes + * - vm.dirty_ratio or vm.dirty_bytes + * The dirty limits will be lifted by 1/4 for PF_LESS_THROTTLE (ie. nfsd) and + * real-time tasks. 
+ */ +void global_dirty_limits(unsigned long *pbackground, unsigned long *pdirty) +{ + unsigned long background; + unsigned long dirty; + unsigned long uninitialized_var(available_memory); + struct task_struct *tsk; + + if (!vm_dirty_bytes || !dirty_background_bytes) + available_memory = global_dirtyable_memory(); + + if (vm_dirty_bytes) + dirty = DIV_ROUND_UP(vm_dirty_bytes, PAGE_SIZE); + else + dirty = (vm_dirty_ratio * available_memory) / 100; + + if (dirty_background_bytes) + background = DIV_ROUND_UP(dirty_background_bytes, PAGE_SIZE); + else + background = (dirty_background_ratio * available_memory) / 100; + + if (background >= dirty) + background = dirty / 2; + tsk = current; + if (tsk->flags & PF_LESS_THROTTLE || rt_task(tsk)) { + background += background / 4; + dirty += dirty / 4; + } + *pbackground = background; + *pdirty = dirty; + trace_global_dirty_state(background, dirty); +} + /* * couple the period to the dirty_ratio: * @@ -202,7 +244,7 @@ static int calc_period_shift(void) if (vm_dirty_bytes) dirty_total = vm_dirty_bytes / PAGE_SIZE; else - dirty_total = (vm_dirty_ratio * determine_dirtyable_memory()) / + dirty_total = (vm_dirty_ratio * global_dirtyable_memory()) / 100; return 2 + ilog2(dirty_total - 1); } @@ -362,47 +404,6 @@ static unsigned long hard_dirty_limit(unsigned long thresh) return max(thresh, global_dirty_limit); } -/* - * global_dirty_limits - background-writeback and dirty-throttling thresholds - * - * Calculate the dirty thresholds based on sysctl parameters - * - vm.dirty_background_ratio or vm.dirty_background_bytes - * - vm.dirty_ratio or vm.dirty_bytes - * The dirty limits will be lifted by 1/4 for PF_LESS_THROTTLE (ie. nfsd) and - * real-time tasks. - */ -void global_dirty_limits(unsigned long *pbackground, unsigned long *pdirty) -{ - unsigned long background; - unsigned long dirty; - unsigned long uninitialized_var(available_memory); - struct task_struct *tsk; - - if (!vm_dirty_bytes || !dirty_background_bytes) - available_memory = determine_dirtyable_memory(); - - if (vm_dirty_bytes) - dirty = DIV_ROUND_UP(vm_dirty_bytes, PAGE_SIZE); - else - dirty = (vm_dirty_ratio * available_memory) / 100; - - if (dirty_background_bytes) - background = DIV_ROUND_UP(dirty_background_bytes, PAGE_SIZE); - else - background = (dirty_background_ratio * available_memory) / 100; - - if (background >= dirty) - background = dirty / 2; - tsk = current; - if (tsk->flags & PF_LESS_THROTTLE || rt_task(tsk)) { - background += background / 4; - dirty += dirty / 4; - } - *pbackground = background; - *pdirty = dirty; - trace_global_dirty_state(background, dirty); -} - /** * bdi_dirty_limit - @bdi's share of dirty throttling threshold * @bdi: the backing_dev_info to query -- cgit v1.2.3 From a756cf5908530e8b40bdf569eb48b40139e8d7fd Mon Sep 17 00:00:00 2001 From: Johannes Weiner Date: Tue, 10 Jan 2012 15:07:49 -0800 Subject: mm: try to distribute dirty pages fairly across zones The maximum number of dirty pages that exist in the system at any time is determined by a number of pages considered dirtyable and a user-configured percentage of those, or an absolute number in bytes. This number of dirtyable pages is the sum of memory provided by all the zones in the system minus their lowmem reserves and high watermarks, so that the system can retain a healthy number of free pages without having to reclaim dirty pages. But there is a flaw in that we have a zoned page allocator which does not care about the global state but rather the state of individual memory zones. 
And right now there is nothing that prevents one zone from filling up with dirty pages while other zones are spared, which frequently leads to situations where kswapd, in order to restore the watermark of free pages, does indeed have to write pages from that zone's LRU list. This can interfere so badly with IO from the flusher threads that major filesystems (btrfs, xfs, ext4) mostly ignore write requests from reclaim already, taking away the VM's only possibility to keep such a zone balanced, aside from hoping the flushers will soon clean pages from that zone.

Enter per-zone dirty limits. They are to a zone's dirtyable memory what the global limit is to the global amount of dirtyable memory, and try to make sure that no single zone receives more than its fair share of the globally allowed dirty pages in the first place. As the number of pages considered dirtyable excludes the zones' lowmem reserves and high watermarks, the maximum number of dirty pages in a zone is such that the zone can always be balanced without requiring page cleaning.

As this is a placement decision in the page allocator and pages are dirtied only after the allocation, this patch allows allocators to pass __GFP_WRITE when they know in advance that the page will be written to and become dirty soon. The page allocator will then attempt to allocate from the first zone of the zonelist - which on NUMA is determined by the task's NUMA memory policy - that has not exceeded its dirty limit.

At first glance, it would appear that the diversion to lower zones can increase pressure on them, but this is not the case. With a full high zone, allocations will be diverted to lower zones eventually, so it is more of a shift in timing of the lower zone allocations. Workloads that previously could fit their dirty pages completely in the higher zone may be forced to allocate from lower zones, but the number of pages that "spill over" is itself limited by the lower zones' dirty constraints, and is thus unlikely to become a problem.

For now, the problem of unfair dirty page distribution remains for NUMA configurations where the zones allowed for allocation are in sum not big enough to trigger the global dirty limits, wake up the flusher threads and remedy the situation. Because of this, an allocation that could not succeed on any of the considered zones is allowed to ignore the dirty limits before going into direct reclaim or even failing the allocation, until a future patch changes the global dirty throttling and flusher thread activation so that they take individual zone states into account.
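As a caller-side illustration of the new flag (a hypothetical helper, not code from this patch; mapping_gfp_mask() and __page_cache_alloc() are existing page cache APIs of this era):

/*
 * Hypothetical sketch: a page cache allocation for a buffered write
 * ORs in __GFP_WRITE, so the allocator's zonelist walk can skip
 * zones that have already exceeded their dirty limit.
 */
static struct page *alloc_page_for_buffered_write(struct address_space *mapping)
{
	gfp_t gfp = mapping_gfp_mask(mapping) | __GFP_WRITE;

	return __page_cache_alloc(gfp);
}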
Test results:

15M DMA + 3246M DMA32 + 504M Normal = 3765M memory
40% dirty ratio
16G USB thumb drive
10 runs of dd if=/dev/zero of=disk/zeroes bs=32k count=$((10 << 15))

               seconds            nr_vmscan_write
              (stddev)              min |      median |         max
xfs
vanilla:   549.747( 3.492)        0.000 |       0.000 |       0.000
patched:   550.996( 3.802)        0.000 |       0.000 |       0.000

fuse-ntfs
vanilla:  1183.094(53.178)    54349.000 |   59341.000 |   65163.000
patched:   558.049(17.914)        0.000 |       0.000 |      43.000

btrfs
vanilla:   573.679(14.015)   156657.000 |  460178.000 |  606926.000
patched:   563.365(11.368)        0.000 |       0.000 |    1362.000

ext4
vanilla:   561.197(15.782)        0.000 | 2725438.000 | 4143837.000
patched:   568.806(17.496)        0.000 |       0.000 |       0.000

Signed-off-by: Johannes Weiner Reviewed-by: Minchan Kim Acked-by: Mel Gorman Reviewed-by: Michal Hocko Tested-by: Wu Fengguang Cc: KAMEZAWA Hiroyuki Cc: Christoph Hellwig Cc: Dave Chinner Cc: Jan Kara Cc: Shaohua Li Cc: Rik van Riel Cc: Chris Mason Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds
--- include/linux/gfp.h | 4 ++- include/linux/writeback.h | 1 + mm/page-writeback.c | 82 +++++++++++++++++++++++++++++++++++++++++++++++ mm/page_alloc.c | 29 +++++++++++++++++ 4 files changed, 115 insertions(+), 1 deletion(-) (limited to 'mm/page-writeback.c')
diff --git a/include/linux/gfp.h b/include/linux/gfp.h index 66f172fdf5fe..581e74b7df95 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -36,6 +36,7 @@ struct vm_area_struct; #endif #define ___GFP_NO_KSWAPD 0x400000u #define ___GFP_OTHER_NODE 0x800000u +#define ___GFP_WRITE 0x1000000u /* * GFP bitmasks.. */ @@ -85,6 +86,7 @@ struct vm_area_struct; #define __GFP_NO_KSWAPD ((__force gfp_t)___GFP_NO_KSWAPD) #define __GFP_OTHER_NODE ((__force gfp_t)___GFP_OTHER_NODE) /* On behalf of other node */ +#define __GFP_WRITE ((__force gfp_t)___GFP_WRITE) /* Allocator intends to dirty page */ /* * This may seem redundant, but it's a way of annotating false positives vs. */ #define __GFP_NOTRACK_FALSE_POSITIVE (__GFP_NOTRACK) -#define __GFP_BITS_SHIFT 24 /* Room for N __GFP_FOO bits */ +#define __GFP_BITS_SHIFT 25 /* Room for N __GFP_FOO bits */ #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1)) /* This equals 0, but use constants in case they ever change */
diff --git a/include/linux/writeback.h b/include/linux/writeback.h index 34a005515fef..6dff47304971 100644 --- a/include/linux/writeback.h +++ b/include/linux/writeback.h @@ -124,6 +124,7 @@ void laptop_mode_timer_fn(unsigned long data); static inline void laptop_sync_completion(void) { } #endif void throttle_vm_writeout(gfp_t gfp_mask); +bool zone_dirty_ok(struct zone *zone); extern unsigned long global_dirty_limit;
diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 433fa990fe8b..5cdd4f2b0c9d 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -147,6 +147,24 @@ static struct prop_descriptor vm_completions; * clamping level. */ +/* + * In a memory zone, there is a certain amount of pages we consider + * available for the page cache, which is essentially the number of + * free and reclaimable pages, minus some zone reserves to protect + * lowmem and the ability to uphold the zone's watermarks without + * requiring writeback. + * + * This number of dirtyable pages is the base value of which the + * user-configurable dirty ratio is the effective number of pages that + * are allowed to be actually dirtied. Per individual zone, or + * globally by using the sum of dirtyable pages over all zones.
+ * + * Because the user is allowed to specify the dirty limit globally as + * absolute number of bytes, calculating the per-zone dirty limit can + * require translating the configured limit into a percentage of + * global dirtyable memory first. + */ + static unsigned long highmem_dirtyable_memory(unsigned long total) { #ifdef CONFIG_HIGHMEM @@ -232,6 +250,70 @@ void global_dirty_limits(unsigned long *pbackground, unsigned long *pdirty) trace_global_dirty_state(background, dirty); } +/** + * zone_dirtyable_memory - number of dirtyable pages in a zone + * @zone: the zone + * + * Returns the zone's number of pages potentially available for dirty + * page cache. This is the base value for the per-zone dirty limits. + */ +static unsigned long zone_dirtyable_memory(struct zone *zone) +{ + /* + * The effective global number of dirtyable pages may exclude + * highmem as a big-picture measure to keep the ratio between + * dirty memory and lowmem reasonable. + * + * But this function is purely about the individual zone and a + * highmem zone can hold its share of dirty pages, so we don't + * care about vm_highmem_is_dirtyable here. + */ + return zone_page_state(zone, NR_FREE_PAGES) + + zone_reclaimable_pages(zone) - + zone->dirty_balance_reserve; +} + +/** + * zone_dirty_limit - maximum number of dirty pages allowed in a zone + * @zone: the zone + * + * Returns the maximum number of dirty pages allowed in a zone, based + * on the zone's dirtyable memory. + */ +static unsigned long zone_dirty_limit(struct zone *zone) +{ + unsigned long zone_memory = zone_dirtyable_memory(zone); + struct task_struct *tsk = current; + unsigned long dirty; + + if (vm_dirty_bytes) + dirty = DIV_ROUND_UP(vm_dirty_bytes, PAGE_SIZE) * + zone_memory / global_dirtyable_memory(); + else + dirty = vm_dirty_ratio * zone_memory / 100; + + if (tsk->flags & PF_LESS_THROTTLE || rt_task(tsk)) + dirty += dirty / 4; + + return dirty; +} + +/** + * zone_dirty_ok - tells whether a zone is within its dirty limits + * @zone: the zone to check + * + * Returns %true when the dirty pages in @zone are within the zone's + * dirty limit, %false if the limit is exceeded. + */ +bool zone_dirty_ok(struct zone *zone) +{ + unsigned long limit = zone_dirty_limit(zone); + + return zone_page_state(zone, NR_FILE_DIRTY) + + zone_page_state(zone, NR_UNSTABLE_NFS) + + zone_page_state(zone, NR_WRITEBACK) <= limit; +} + /* * couple the period to the dirty_ratio: * diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 2cb9eb71e282..4f95bcf0f2b1 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -1735,6 +1735,35 @@ zonelist_scan: if ((alloc_flags & ALLOC_CPUSET) && !cpuset_zone_allowed_softwall(zone, gfp_mask)) continue; + /* + * When allocating a page cache page for writing, we + * want to get it from a zone that is within its dirty + * limit, such that no single zone holds more than its + * proportional share of globally allowed dirty pages. + * The dirty limits take into account the zone's + * lowmem reserves and high watermark so that kswapd + * should be able to balance it without having to + * write pages from its LRU list. + * + * This may look like it could increase pressure on + * lower zones by failing allocations in higher zones + * before they are full. But the pages that do spill + * over are limited as the lower zones are protected + * by this very same mechanism. It should not become + * a practical burden to them. 
+ * + * XXX: For now, allow allocations to potentially + * exceed the per-zone dirty limit in the slowpath + * (ALLOC_WMARK_LOW unset) before going into reclaim, + * which is important when on a NUMA setup the allowed + * zones are together not big enough to reach the + * global limit. The proper fix for these situations + * will require awareness of zones in the + * dirty-throttling and the flusher threads. + */ + if ((alloc_flags & ALLOC_WMARK_LOW) && + (gfp_mask & __GFP_WRITE) && !zone_dirty_ok(zone)) + goto this_zone_full; BUILD_BUG_ON(ALLOC_NO_WATERMARKS < NR_WMARK); if (!(alloc_flags & ALLOC_NO_WATERMARKS)) { -- cgit v1.2.3
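To make the proportional split concrete, here is a small standalone sketch of the arithmetic that zone_dirty_limit() performs in the vm_dirty_ratio case; all page counts below are hypothetical:

#include <stdio.h>

int main(void)
{
	/* Hypothetical page counts; the kernel reads these from zone state. */
	unsigned long global_dirtyable = 900000; /* global_dirtyable_memory() */
	unsigned long zone_dirtyable   = 120000; /* zone_dirtyable_memory()   */
	unsigned long dirty_ratio      = 20;     /* vm.dirty_ratio sysctl     */

	/* Global limit: the ratio applied to globally dirtyable memory. */
	unsigned long global_limit = dirty_ratio * global_dirtyable / 100;

	/*
	 * Per-zone limit: the same ratio applied to the zone's own share,
	 * so no single zone may hold more than its proportional part of
	 * the globally allowed dirty pages.
	 */
	unsigned long zone_limit = dirty_ratio * zone_dirtyable / 100;

	printf("global dirty limit: %lu pages\n", global_limit); /* 180000 */
	printf("zone dirty limit:   %lu pages\n", zone_limit);   /*  24000 */
	return 0;
}

In the vm_dirty_bytes case, the diff above instead scales the byte-derived global limit by zone_memory / global_dirtyable_memory(), which yields the same proportional split.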