path: root/mm/filemap.c
authorAneesh Kumar K.V <>2012-03-21 16:34:08 -0700
committerLinus Torvalds <>2012-03-21 17:54:58 -0700
commita05b0855fd15504972dba2358e5faa172a1e50ba (patch)
treef4cba4ef12d888ec472714f67ef7bcec318a5191 /mm/filemap.c
parentf5bf18fa22f8c41a13eb8762c7373eb3a93a7333 (diff)
hugetlbfs: avoid taking i_mutex from hugetlbfs_read()
Taking i_mutex in hugetlbfs_read() can result in deadlock with mmap as explained below:

Thread A:
  read() on hugetlbfs
  hugetlbfs_read() called
  i_mutex grabbed
  hugetlbfs_read_actor() called
  __copy_to_user() called
  page fault is triggered

Thread B, sharing address space with A:
  mmap() the same file
  ->mmap_sem is grabbed on task_B->mm->mmap_sem
  hugetlbfs_file_mmap() is called
  attempt to grab ->i_mutex and block waiting for A to give it up

Thread A:
  page fault handled
  blocked on attempt to grab task_A->mm->mmap_sem, which happens to be the same thing as task_B->mm->mmap_sem.  Block waiting for B to give it up.

AFAIU the i_mutex locking was added to hugetlbfs_read() to take care of the race between truncate and read.  This patch fixes that race instead by checking page->mapping under lock_page() (via find_lock_page()), which ensures the inode did not get truncated in the range during a parallel read.

Ideally we could extend the patch to also make sure we don't increase i_size in mmap, but that would break userspace: applications would then have to use truncate(2) to increase i_size in hugetlbfs.

Based on the original patch from Hillf Danton.

Signed-off-by: Aneesh Kumar K.V <>
Cc: Hillf Danton <>
Cc: KAMEZAWA Hiroyuki <>
Cc: Al Viro <>
Cc: Hugh Dickins <>
Cc: <> [everything after 2007 :)]
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>