author		David Hildenbrand <david@redhat.com>	2019-09-23 15:35:59 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2019-09-24 15:54:09 -0700
commit		5ecae6359e3a37624cd34d02e4b3401cf98bb62f (patch)
tree		4238facda0ff53b98013776f2d2006240c1661fc /mm
parent		00ff9a91bdb74933648a5b346d9f0edb99bd76d3 (diff)
mm/memory_hotplug: drop PageReserved() check in online_pages_range()
move_pfn_range_to_zone() will set all pages to PG_reserved via memmap_init_zone(). The only way a page could no longer be reserved is if a MEM_GOING_ONLINE notifier were to clear PG_reserved - which is not done (the online_page callback is used for that purpose by, e.g., Hyper-V instead). walk_system_ram_range() will never call online_pages_range() with duplicate PFNs, so drop the PageReserved() check.

This seems to be a leftover from ancient times when the memmap was initialized while adding memory and we wanted to check for already onlined memory.

Link: http://lkml.kernel.org/r/20190814154109.3448-3-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Arun KS <arunks@codeaurora.org>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Nadav Amit <namit@vmware.com>
Cc: Wei Yang <richardw.yang@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
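For context, a condensed, illustrative sketch of the caller side (simplified from online_pages(); variable names and error handling are trimmed, so this is not the verbatim kernel source): walk_system_ram_range() splits the requested range into disjoint System RAM sub-ranges and invokes the callback once per sub-range, which is why online_pages_range() is never handed the same PFN twice.

	/*
	 * Illustrative only: online_pages() hands the whole PFN range to
	 * walk_system_ram_range(), which calls online_pages_range() exactly
	 * once for each disjoint System RAM sub-range it finds in
	 * [pfn, pfn + nr_pages).
	 */
	unsigned long onlined_pages = 0;
	int ret;

	ret = walk_system_ram_range(pfn, nr_pages, &onlined_pages,
				    online_pages_range);
	/* onlined_pages now holds the number of pages the walk onlined. */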
Diffstat (limited to 'mm')
-rw-r--r--	mm/memory_hotplug.c	4
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index c28e5dd017ba..c38050785d12 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -653,9 +653,7 @@ static int online_pages_range(unsigned long start_pfn, unsigned long nr_pages,
 {
 	unsigned long onlined_pages = *(unsigned long *)arg;
 
-	if (PageReserved(pfn_to_page(start_pfn)))
-		onlined_pages += online_pages_blocks(start_pfn, nr_pages);
-
+	onlined_pages += online_pages_blocks(start_pfn, nr_pages);
 	online_mem_sections(start_pfn, start_pfn + nr_pages);
 
 	*(unsigned long *)arg = onlined_pages;