author     Zhihao Cheng <chengzhihao1@huawei.com>           2020-01-11 17:50:36 +0800
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2020-02-11 04:35:21 -0800
commit     65e6f63ebfb93a55a1e215f2d632a3b63d19c30c (patch)
tree       c823e68076becd7a6c46e3c86ef1291d5facb486 /fs/ubifs
parent     e3a561aa5376bdfc14590790fd892e532dbc06a1 (diff)
ubifs: Fix deadlock in concurrent bulk-read and writepage
commit f5de5b83303e61b1f3fb09bd77ce3ac2d7a475f2 upstream.

In ubifs, concurrent execution of writepage and bulk read on the same file
may cause ABBA deadlock, for example (Reproduce method see Link):

Process A(Bulk-read starts from page4)      Process B(write page4 back)
  vfs_read                                    wb_workfn or fsync
  ...                                         ...
  generic_file_buffered_read                  write_cache_pages
    ubifs_readpage                              LOCK(page4)
      ubifs_bulk_read                           ubifs_writepage
        LOCK(ui->ui_mutex)                        ubifs_write_inode
          ubifs_do_bulk_read                        LOCK(ui->ui_mutex)
            find_or_create_page(alloc page4)          ^
              LOCK(page4)   <--  ABBA deadlock occurs!

In order to ensure the serialization execution of bulk read, we can't
remove the big lock 'ui->ui_mutex' in ubifs_bulk_read(). Instead, we
allow ubifs_do_bulk_read() to lock page failed by replacing
find_or_create_page(FGP_LOCK) with pagecache_get_page(FGP_LOCK | FGP_NOWAIT).

Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Suggested-by: zhangyi (F) <yi.zhang@huawei.com>
Cc: <Stable@vger.kernel.org>
Fixes: 4793e7c5e1c ("UBIFS: add bulk-read facility")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=206153
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
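The fix relies on the classic trylock-and-back-off escape from an ABBA deadlock:
bulk-read keeps taking ui->ui_mutex first, but with FGP_NOWAIT it no longer sleeps
on a page lock that the writeback path may already hold, so the two lock orders can
no longer wait on each other forever. A minimal user-space analogue of that pattern
in C with pthreads (illustration only, not UBIFS code; the two mutexes are
hypothetical stand-ins for ui->ui_mutex and the page lock):

/*
 * Analogue of the fix: the "bulk-read" thread takes its second lock
 * with trylock and backs off on failure, so it cannot complete the
 * ABBA cycle with the "writepage" thread.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* ~ ui->ui_mutex */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* ~ page lock    */

/* "Bulk-read" path: holds lock_a, tries lock_b without blocking. */
static void *bulk_read_like(void *arg)
{
        pthread_mutex_lock(&lock_a);
        if (pthread_mutex_trylock(&lock_b) == 0) {
                puts("bulk-read: got both locks, doing work");
                pthread_mutex_unlock(&lock_b);
        } else {
                puts("bulk-read: page busy, backing off"); /* no deadlock */
        }
        pthread_mutex_unlock(&lock_a);
        return NULL;
}

/* "Writepage" path: holds lock_b, then blocks on lock_a. */
static void *writepage_like(void *arg)
{
        pthread_mutex_lock(&lock_b);
        pthread_mutex_lock(&lock_a);
        puts("writepage: got both locks, doing work");
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return NULL;
}

int main(void)
{
        pthread_t t1, t2;

        pthread_create(&t1, NULL, bulk_read_like, NULL);
        pthread_create(&t2, NULL, writepage_like, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
}

Here the "bulk-read" thread simply gives up when the second lock is busy, which
mirrors how ubifs_do_bulk_read() now treats a NULL return from
pagecache_get_page() as "stop extending the read" rather than waiting for the page.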
Diffstat (limited to 'fs/ubifs')
-rw-r--r--  fs/ubifs/file.c  4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
index 91362079f82a..a771273fba7e 100644
--- a/fs/ubifs/file.c
+++ b/fs/ubifs/file.c
@@ -786,7 +786,9 @@ static int ubifs_do_bulk_read(struct ubifs_info *c, struct bu_info *bu,
 
 		if (page_offset > end_index)
 			break;
-		page = find_or_create_page(mapping, page_offset, ra_gfp_mask);
+		page = pagecache_get_page(mapping, page_offset,
+				 FGP_LOCK|FGP_ACCESSED|FGP_CREAT|FGP_NOWAIT,
+				 ra_gfp_mask);
 		if (!page)
 			break;
 		if (!PageUptodate(page))