path: root/arch
2019-06-19powerpc/64s: Fix THP PMD collapse serialisationNicholas Piggin
commit 33258a1db165cf43a9e6382587ad06e9b7f8187c upstream. Commit 1b2443a547f9 ("powerpc/book3s64: Avoid multiple endian conversion in pte helpers") changed the actual bitwise tests in pte_access_permitted by using pte_write() and pte_present() helpers rather than raw bitwise testing _PAGE_WRITE and _PAGE_PRESENT bits. The pte_present() change now returns true for PTEs which are !_PAGE_PRESENT and _PAGE_INVALID, which is the combination used by pmdp_invalidate() to synchronize access from lock-free lookups. pte_access_permitted() is used by pmd_access_permitted(), so allowing GUP lock free access to proceed with such PTEs breaks this synchronisation. This bug has been observed on a host using the hash page table MMU, with random crashes and corruption in guests, usually together with bad PMD messages in the host. Fix this by adding an explicit check in pmd_access_permitted(), and documenting the condition explicitly. The pte_write() change should be okay, and would prevent GUP from falling back to the slow path when encountering savedwrite PTEs, which matches what x86 (that does not implement savedwrite) does. Fixes: 1b2443a547f9 ("powerpc/book3s64: Avoid multiple endian conversion in pte helpers") Cc: stable@vger.kernel.org # v4.20+ Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
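For illustration only, a minimal sketch of the check described above, not the actual upstream hunk: pmd_access_permitted() must refuse lock-free (GUP) access while a PMD carries the !_PAGE_PRESENT/_PAGE_INVALID combination that pmdp_invalidate() installs. pmd_pte() and pte_access_permitted() are existing book3s64 helpers; pmd_is_invalidating() is a hypothetical name for the new test.

	static inline bool pmd_access_permitted(pmd_t pmd, bool write)
	{
		/* Refuse lock-free lookups while pmdp_invalidate() is in
		 * progress (hypothetical helper testing the
		 * !_PAGE_PRESENT && _PAGE_INVALID state). */
		if (pmd_is_invalidating(pmd))
			return false;

		return pte_access_permitted(pmd_pte(pmd), write);
	}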
2019-06-19powerpc: Fix kexec failure on book3s/32Christophe Leroy
commit 6c284228eb356a1ec62a704b4d2329711831eaed upstream. In the old days, _PAGE_EXEC didn't exist on 6xx aka book3s/32. Therefore, although __mapin_ram_chunk() was already mapping kernel text with PAGE_KERNEL_TEXT and the rest with PAGE_KERNEL, the entire memory was executable. Part of the memory (first 512 kbytes) was mapped with BATs instead of page table, but it was also entirely mapped as executable. In commit 385e89d5b20f ("powerpc/mm: add exec protection on powerpc 603"), we started adding exec protection to some 6xx, namely the 603, for pages mapped via pagetables. Then, in commit 63b2bc619565 ("powerpc/mm/32s: Use BATs for STRICT_KERNEL_RWX"), the exec protection was extended to BAT mapped memory, so that really only the kernel text could be executed. The problem here is that kexec is based on copying some code into upper part of memory then executing it from there in order to install a fresh new kernel at its definitive location. However, the code is position independent and first part of it is just there to deactivate the MMU and jump to the second part. So it is possible to run this first part in place instead of running the copy. Once the MMU is off, there is no protection anymore and the second part of the code will just run as before. Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi> Fixes: 63b2bc619565 ("powerpc/mm/32s: Use BATs for STRICT_KERNEL_RWX") Cc: stable@vger.kernel.org # v5.1+ Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Tested-by: Aaro Koskinen <aaro.koskinen@iki.fi> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-19x86/resctrl: Prevent NULL pointer dereference when local MBM is disabledPrarit Bhargava
commit c7563e62a6d720aa3b068e26ddffab5f0df29263 upstream. Booting with kernel parameter "rdt=cmt,mbmtotal,memlocal,l3cat,mba" and executing "mount -t resctrl resctrl -o mba_MBps /sys/fs/resctrl" results in a NULL pointer dereference on systems which do not have local MBM support enabled. BUG: kernel NULL pointer dereference, address: 0000000000000020 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 722 Comm: kworker/0:3 Not tainted 5.2.0-0.rc3.git0.1.el7_UNSUPPORTED.x86_64 #2 Workqueue: events mbm_handle_overflow RIP: 0010:mbm_handle_overflow+0x150/0x2b0 Only enter the bandwidth update loop if the system has local MBM enabled. Fixes: de73f38f7680 ("x86/intel_rdt/mba_sc: Feedback loop to dynamically update mem bandwidth") Signed-off-by: Prarit Bhargava <prarit@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Reinette Chatre <reinette.chatre@intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20190610171544.13474-1-prarit@redhat.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
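As a hedged sketch of the guard the last sentence of that message calls for (placement and arguments are assumptions; is_mbm_local_enabled() is the existing resctrl helper):

	/* Skip the MBA software-controller feedback loop entirely when the
	 * system has no local MBM events to feed it. */
	if (is_mbm_local_enabled())
		update_mba_bw(rdtgrp, dom);	/* arguments assumed */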
2019-06-19x86/mm/KASLR: Compute the size of the vmemmap section properlyBaoquan He
commit 00e5a2bbcc31d5fea853f8daeba0f06c1c88c3ff upstream. The size of the vmemmap section is hardcoded to 1 TB to support the maximum amount of system RAM in 4-level paging mode - 64 TB. However, 1 TB is not enough for vmemmap in 5-level paging mode. Assuming the size of struct page is 64 Bytes, to support 4 PB system RAM in 5-level, 64 TB of vmemmap area is needed: 4 * 1000^5 PB / 4096 bytes page size * 64 bytes per page struct / 1000^4 TB = 62.5 TB. This hardcoding may cause vmemmap to corrupt the following cpu_entry_area section, if KASLR puts vmemmap very close to it and the actual vmemmap size is bigger than 1 TB. So calculate the actual size of the vmemmap region needed and then align it up to 1 TB boundary. In 4-level paging mode it is always 1 TB. In 5-level it's adjusted on demand. The current code reserves 0.5 PB for vmemmap on 5-level. With this change, the space can be saved and thus used to increase entropy for the randomization. [ bp: Spell out how the 64 TB needed for vmemmap is computed and massage commit message. ] Fixes: eedb92abb9bb ("x86/mm: Make virtual memory layout dynamic for CONFIG_X86_5LEVEL=y") Signed-off-by: Baoquan He <bhe@redhat.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: Kirill A. Shutemov <kirill@linux.intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: kirill.shutemov@linux.intel.com Cc: Peter Zijlstra <peterz@infradead.org> Cc: stable <stable@vger.kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190523025744.3756-1-bhe@redhat.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
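The arithmetic quoted above can be checked with a small stand-alone program (an illustration only, using the 4 KiB page size and 64-byte struct page assumed in the message); it reproduces the 62.5 TB figure that motivates rounding up to 64 TB:

	#include <stdio.h>

	int main(void)
	{
		double ram_bytes     = 4.0 * 1e15;         /* 4 PB of system RAM   */
		double pages         = ram_bytes / 4096.0; /* 4 KiB pages          */
		double vmemmap_bytes = pages * 64.0;       /* 64 B per struct page */

		printf("vmemmap needed: %.1f TB\n", vmemmap_bytes / 1e12); /* 62.5 */
		return 0;
	}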
2019-06-19x86/kasan: Fix boot with 5-level paging and KASANAndrey Ryabinin
commit f3176ec9420de0c385023afa3e4970129444ac2f upstream. Since commit d52888aa2753 ("x86/mm: Move LDT remap out of KASLR region on 5-level paging") kernel doesn't boot with KASAN on 5-level paging machines. The bug is actually in early_p4d_offset() and introduced by commit 12a8cc7fcf54 ("x86/kasan: Use the same shadow offset for 4- and 5-level paging") early_p4d_offset() tries to convert pgd_val(*pgd) value to a physical address. This doesn't make sense because pgd_val() already contains the physical address. It did work prior to commit d52888aa2753 because the result of "__pa_nodebug(pgd_val(*pgd)) & PTE_PFN_MASK" was the same as "pgd_val(*pgd) & PTE_PFN_MASK". __pa_nodebug() just set some high bits which were masked out by applying PTE_PFN_MASK. After the change of the PAGE_OFFSET offset in commit d52888aa2753 __pa_nodebug(pgd_val(*pgd)) started to return a value with more high bits set and PTE_PFN_MASK wasn't enough to mask out all of them. So it returns a wrong not even canonical address and crashes on the attempt to dereference it. Switch back to pgd_val() & PTE_PFN_MASK to cure the issue. Fixes: 12a8cc7fcf54 ("x86/kasan: Use the same shadow offset for 4- and 5-level paging") Reported-by: Kirill A. Shutemov <kirill@shutemov.name> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Alexander Potapenko <glider@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: kasan-dev@googlegroups.com Cc: stable@vger.kernel.org Cc: <stable@vger.kernel.org> Link: https://lkml.kernel.org/r/20190614143149.2227-1-aryabinin@virtuozzo.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
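The change described boils down to dropping one conversion; a before/after sketch built from the expressions quoted in the message (variable name and surrounding context are assumptions):

	/* before: treats pgd_val(), already a physical address, as virtual */
	p4d = __pa_nodebug(pgd_val(*pgd)) & PTE_PFN_MASK;

	/* after: mask the physical address directly */
	p4d = pgd_val(*pgd) & PTE_PFN_MASK;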
2019-06-19x86/microcode, cpuhotplug: Add a microcode loader CPU hotplug callbackBorislav Petkov
commit 78f4e932f7760d965fb1569025d1576ab77557c5 upstream. Adric Blake reported the following warning during suspend-resume: Enabling non-boot CPUs ... x86: Booting SMP configuration: smpboot: Booting Node 0 Processor 1 APIC 0x2 unchecked MSR access error: WRMSR to 0x10f (tried to write 0x0000000000000000) \ at rIP: 0xffffffff8d267924 (native_write_msr+0x4/0x20) Call Trace: intel_set_tfa intel_pmu_cpu_starting ? x86_pmu_dead_cpu x86_pmu_starting_cpu cpuhp_invoke_callback ? _raw_spin_lock_irqsave notify_cpu_starting start_secondary secondary_startup_64 microcode: sig=0x806ea, pf=0x80, revision=0x96 microcode: updated to revision 0xb4, date = 2019-04-01 CPU1 is up The MSR in question is MSR_TFA_RTM_FORCE_ABORT and that MSR is emulated by microcode. The log above shows that the microcode loader callback happens after the PMU restoration, leading to the conjecture that because the microcode hasn't been updated yet, that MSR is not present yet, leading to the #GP. Add a microcode loader-specific hotplug vector which comes before the PERF vectors and thus executes earlier and makes sure the MSR is present. Fixes: 400816f60c54 ("perf/x86/intel: Implement support for TSX Force Abort") Reported-by: Adric Blake <promarbler14@gmail.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: <stable@vger.kernel.org> Cc: x86@kernel.org Link: https://bugzilla.kernel.org/show_bug.cgi?id=203637 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-19KVM: s390: fix memory slot handling for KVM_SET_USER_MEMORY_REGIONChristian Borntraeger
[ Upstream commit 19ec166c3f39fe1d3789888a74cc95544ac266d4 ] kselftests exposed a problem in the s390 handling for memory slots. Right now we only do proper memory slot handling for creation of new memory slots. Neither MOVE, nor DELETION are handled properly. Let us implement those. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-19KVM: x86/pmu: do not mask the value that is written to fixed PMUsPaolo Bonzini
[ Upstream commit 2924b52117b2812e9633d5ea337333299166d373 ] According to the SDM, for MSR_IA32_PERFCTR0/1 "the lower-order 32 bits of each MSR may be written with any value, and the high-order 8 bits are sign-extended according to the value of bit 31", but the fixed counters in real hardware are limited to the width of the fixed counters ("bits beyond the width of the fixed-function counter are reserved and must be written as zeros"). Fix KVM to do the same. Reported-by: Nadav Amit <nadav.amit@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-19KVM: x86/pmu: mask the result of rdpmc according to the width of the countersPaolo Bonzini
[ Upstream commit 0e6f467ee28ec97f68c7b74e35ec1601bb1368a7 ] This patch will simplify the changes in the next, by enforcing the masking of the counters to RDPMC and RDMSR. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-19KVM: x86: do not spam dmesg with VMCS/VMCB dumpsPaolo Bonzini
[ Upstream commit 6f2f84532c153d32a5e53b6083b06a3e24368d30 ] Userspace can easily set up invalid processor state in such a way that dmesg will be filled with VMCS or VMCB dumps. Disable this by default using a module parameter. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-19KVM: LAPIC: Fix lapic_timer_advance_ns parameter overflowWanpeng Li
[ Upstream commit 0e6edceb8f18a4e31526d83e6099fef1f29c3af5 ] After commit c3941d9e0 (KVM: lapic: Allow user to disable adaptive tuning of timer advancement), '-1' enables adaptive tuning starting from default advancement of 1000ns. However, we should expose an int instead of an overflowing uint module parameter. Before patch: /sys/module/kvm/parameters/lapic_timer_advance_ns:4294967295 After patch: /sys/module/kvm/parameters/lapic_timer_advance_ns:-1 Fixes: c3941d9e0 (KVM: lapic: Allow user to disable adaptive tuning of timer advancement) Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Cc: Sean Christopherson <sean.j.christopherson@intel.com> Cc: Liran Alon <liran.alon@oracle.com> Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
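The before/after sysfs values quoted above are simply -1 viewed through an unsigned versus a signed parameter; a tiny stand-alone demonstration (illustration only, not KVM code):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		int32_t  as_int  = -1;                 /* what the user writes        */
		uint32_t as_uint = (uint32_t)as_int;   /* what a uint parameter shows */

		printf("uint module param: %u\n", as_uint); /* 4294967295 */
		printf("int  module param: %d\n", as_int);  /* -1         */
		return 0;
	}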
2019-06-19kvm: vmx: Fix -Wmissing-prototypes warningsYi Wang
[ Upstream commit 4d259965655c92053f3255aca14d81aab1e21219 ] We get a warning when building the kernel with W=1: arch/x86/kvm/vmx/vmx.c:6365:6: warning: no previous prototype for ‘vmx_update_host_rsp’ [-Wmissing-prototypes] void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp) Add the missing declaration to fix this. Signed-off-by: Yi Wang <wang.yi59@zte.com.cn> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
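The warning quotes the exact signature, so the fix amounts to a matching declaration visible to vmx.c; a sketch (the header location is an assumption):

	/* e.g. in arch/x86/kvm/vmx/vmx.h (location assumed) */
	struct vcpu_vmx;
	void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp);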
2019-06-19KVM: nVMX: really fix the size checks on KVM_SET_NESTED_STATEPaolo Bonzini
[ Upstream commit db80927ea1977a845230a161df643b48fd1e1ea4 ] The offset for reading the shadow VMCS is sizeof(*kvm_state)+VMCS12_SIZE, so the correct size must be that plus sizeof(*vmcs12). This could lead to KVM reading garbage data from userspace and not reporting an error, but is otherwise not sensitive. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
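A sketch of the corrected bound as the message derives it (names follow the message; the surrounding function is elided and assumed):

	/* The shadow VMCS sits at offset sizeof(*kvm_state) + VMCS12_SIZE and
	 * is sizeof(*vmcs12) bytes long, so require at least that much. */
	if (kvm_state->size < sizeof(*kvm_state) + VMCS12_SIZE + sizeof(*vmcs12))
		return -EINVAL;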
2019-06-19KVM: arm/arm64: Move cc/it checks under hyp's Makefile to avoid instrumentationJames Morse
[ Upstream commit 623e1528d4090bd1abaf93ec46f047dee9a6fb32 ] KVM has helpers to handle the condition codes of trapped aarch32 instructions. These are marked __hyp_text and used from HYP, but they aren't built by the 'hyp' Makefile, which has all the runes to avoid ASAN and KCOV instrumentation. Move this code to a new hyp/aarch32.c to avoid a hyp-panic when starting an aarch32 guest on a host built with the ASAN/KCOV debug options. Fixes: 021234ef3752f ("KVM: arm64: Make kvm_condition_valid32() accessible from EL2") Fixes: 8cebe750c4d9a ("arm64: KVM: Make kvm_skip_instr32 available to HYP") Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-19arm64/mm: Inhibit huge-vmap with ptdumpMark Rutland
[ Upstream commit 7ba36eccb3f83983a651efd570b4f933ecad1b5c ] The arm64 ptdump code can race with concurrent modification of the kernel page tables. At the time this was added, this was sound as: * Modifications to leaf entries could result in stale information being logged, but would not result in a functional problem. * Boot time modifications to non-leaf entries (e.g. freeing of initmem) were performed when the ptdump code cannot be invoked. * At runtime, modifications to non-leaf entries only occurred in the vmalloc region, and these were strictly additive, as intermediate entries were never freed. However, since commit: commit 324420bf91f6 ("arm64: add support for ioremap() block mappings") ... it has been possible to create huge mappings in the vmalloc area at runtime, and as part of this, existing intermediate levels of table may be removed and freed. It's possible for the ptdump code to race with this, and continue to walk tables which have been freed (and potentially poisoned or reallocated). As a result of this, the ptdump code may dereference bogus addresses, which could be fatal. Since huge-vmap is a TLB and memory optimization, we can disable it when the runtime ptdump code is in use to avoid this problem. Cc: Catalin Marinas <catalin.marinas@arm.com> Fixes: 324420bf91f60582 ("arm64: add support for ioremap() block mappings") Acked-by: Ard Biesheuvel <ard.biesheuvel@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-19arm64: Print physical address of page table base in show_pte()Will Deacon
[ Upstream commit 48caebf7e1313eb9f0a06fe59a07ac05b38a5806 ] When dumping the page table in response to an unexpected kernel page fault, we print the virtual (hashed) address of the page table base, but display physical addresses for everything else. Make the page table dumping code in show_pte() consistent, by printing the page table base pointer as a physical address. Reported-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-15ARM: shmobile: porter: enable R-Car Gen2 regulator quirkMarek Vasut
[ Upstream commit d5aa84087eadd6f2619628bc9f3d028eeabded0f ] Porter needs the regulator quirk, just like the other boards. But unlike the other boards, the Porter uses DA9063L, which is at 0x5a. Otherwise, DA9063L and DA9210 IRQ line is still connected to CPU IRQ2. Signed-off-by: Marek Vasut <marek.vasut+renesas@gmail.com> Acked-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-15ARM: exynos: Fix undefined instruction during Exynos5422 resumeMarek Szyprowski
[ Upstream commit 4d8e3e951a856777720272ce27f2c738a3eeef8c ] During early system resume on Exynos5422 with performance counters enabled the following kernel oops happens: Internal error: Oops - undefined instruction: 0 [#1] PREEMPT SMP ARM Modules linked in: CPU: 0 PID: 1433 Comm: bash Tainted: G W 5.0.0-rc5-next-20190208-00023-gd5fb5a8a13e6-dirty #5480 Hardware name: SAMSUNG EXYNOS (Flattened Device Tree) ... Flags: nZCv IRQs off FIQs off Mode SVC_32 ISA ARM Segment none Control: 10c5387d Table: 4451006a DAC: 00000051 Process bash (pid: 1433, stack limit = 0xb7e0e22f) ... (reset_ctrl_regs) from [<c0112ad0>] (dbg_cpu_pm_notify+0x1c/0x24) (dbg_cpu_pm_notify) from [<c014c840>] (notifier_call_chain+0x44/0x84) (notifier_call_chain) from [<c014cbc0>] (__atomic_notifier_call_chain+0x7c/0x128) (__atomic_notifier_call_chain) from [<c01ffaac>] (cpu_pm_notify+0x30/0x54) (cpu_pm_notify) from [<c055116c>] (syscore_resume+0x98/0x3f4) (syscore_resume) from [<c0189350>] (suspend_devices_and_enter+0x97c/0xe74) (suspend_devices_and_enter) from [<c0189fb8>] (pm_suspend+0x770/0xc04) (pm_suspend) from [<c0187740>] (state_store+0x6c/0xcc) (state_store) from [<c09fa698>] (kobj_attr_store+0x14/0x20) (kobj_attr_store) from [<c030159c>] (sysfs_kf_write+0x4c/0x50) (sysfs_kf_write) from [<c0300620>] (kernfs_fop_write+0xfc/0x1e0) (kernfs_fop_write) from [<c0282be8>] (__vfs_write+0x2c/0x160) (__vfs_write) from [<c0282ea4>] (vfs_write+0xa4/0x16c) (vfs_write) from [<c0283080>] (ksys_write+0x40/0x8c) (ksys_write) from [<c0101000>] (ret_fast_syscall+0x0/0x28) Undefined instruction is triggered during CP14 reset, because bits: #16 (Secure privileged invasive debug disabled) and #17 (Secure privileged noninvasive debug disable) are set in DSCR. Those bits depend on SPNIDEN and SPIDEN lines, which are provided by Secure JTAG hardware block. That block in turn is powered from cluster 0 (big/Eagle), but the Exynos5422 boots on cluster 1 (LITTLE/KFC). To fix this issue it is enough to turn on the power on the cluster 0 for a while. This lets the Secure JTAG block to propagate the needed signals to LITTLE/KFC cores and change their DSCR. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-15ARM: dts: exynos: Always enable necessary APIO_1V8 and ABB_1V8 regulators on Arndale OctaKrzysztof Kozlowski
[ Upstream commit 5ab99cf7d5e96e3b727c30e7a8524c976bd3723d ] The PVDD_APIO_1V8 (LDO2) and PVDD_ABB_1V8 (LDO8) regulators were turned off by Linux kernel as unused. However they supply critical parts of SoC so they should be always on: 1. PVDD_APIO_1V8 supplies SYS pins (gpx[0-3], PSHOLD), HDMI level shift, RTC, VDD1_12 (DRAM internal 1.8 V logic), pull-up for PMIC interrupt lines, TTL/UARTR level shift, reset pins and SW-TACT1 button. It also supplies unused blocks like VDDQ_SRAM (for SROM controller) and VDDQ_GPIO (gpm7, gpy7). The LDO2 cannot be turned off (S2MPS11 keeps it on anyway) so marking it "always-on" only reflects its real status. 2. PVDD_ABB_1V8 supplies Adaptive Body Bias Generator for ARM cores, memory and Mali (G3D). Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-15arm64: dts: qcom: qcs404: Fix regulator supply namesBjorn Andersson
[ Upstream commit f95f57e4372207ede83ac28f300aba719b271ed5 ] The regulator definition got their supply names cleaned up during upstreaming, so they no longer match the driver defined names. Update the supply names. Also fill out the missing voltage of SMPS 5. Fixes: 0b363f5b871c ("arm64: dts: qcom: qcs404: Add PMS405 RPM regulators") Reported-by: Nicolas Dechesne <nicolas.dechesne@linaro.org> Reviewed-by: Niklas Cassel <niklas.cassel@linaro.org> Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Andy Gross <andy.gross@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-15ARM: OMAP2+: pm33xx-core: Do not Turn OFF CEFUSE as PPA may be using itKabir Sahane
[ Upstream commit 72aff4ecf1cb85a3c6e6b42ccbda0bc631b090b3 ] This area is used to store keys by HSPPA in case of AM438x SOC. Leave it active. Signed-off-by: Kabir Sahane <x0153567@ti.com> Signed-off-by: Andrew F. Davis <afd@ti.com> Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-15ARM: dts: imx6qdl: Specify IMX6QDL_CLK_IPG as "ipg" clock to SDMAAndrey Smirnov
[ Upstream commit b14c872eebc501b9640b04f4a152df51d6eaf2fc ] Since 25aaa75df1e6 SDMA driver uses clock rates of "ipg" and "ahb" clock to determine if it needs to configure the IP block as operating at 1:1 or 1:2 clock ratio (ACR bit in SDMAARM_CONFIG). Specifying both clocks as IMX6QDL_CLK_SDMA results in driver incorrectly thinking that ratio is 1:1 which results in broken SDMA functionality (this at least breaks the RAVE SP serdev driver on RDU2). Fix the code to specify IMX6QDL_CLK_IPG as "ipg" clock for SDMA, to avoid detecting incorrect clock ratio. Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com> Reviewed-by: Lucas Stach <l.stach@pengutronix.de> Cc: Angus Ainslie (Purism) <angus@akkea.ca> Cc: Chris Healy <cphealy@gmail.com> Cc: Lucas Stach <l.stach@pengutronix.de> Cc: Fabio Estevam <fabio.estevam@nxp.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Tested-by: Adam Ford <aford173@gmail.com> Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
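This entry and the following i.MX SDMA entries correct the same device-tree pattern; the driver-side comparison they refer to looks roughly like the sketch below (a paraphrase of the behaviour described above, field names assumed), which is why two identical clock specifiers make the 1:1 ratio be chosen on 1:2 hardware:

	/* Sketch of the SDMA driver's ratio detection: with both DT clocks
	 * pointing at the SDMA clock the rates compare equal and the 1:1
	 * ratio (ACR bit in SDMAARM_CONFIG) is selected incorrectly. */
	if (clk_get_rate(sdma->clk_ahb) == clk_get_rate(sdma->clk_ipg))
		sdma->clk_ratio = 1;	/* field name assumed */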
2019-06-15ARM: dts: imx6sx: Specify IMX6SX_CLK_IPG as "ipg" clock to SDMAAndrey Smirnov
[ Upstream commit 8979117765c19edc3b01cc0ef853537bf93eea4b ] Since 25aaa75df1e6 SDMA driver uses clock rates of "ipg" and "ahb" clock to determine if it needs to configure the IP block as operating at 1:1 or 1:2 clock ratio (ACR bit in SDMAARM_CONFIG). Specifying both clocks as IMX6SX_CLK_SDMA results in driver incorrectly thinking that ratio is 1:1 which results in broken SDMA functionality. Fix the code to specify IMX6SX_CLK_IPG as "ipg" clock for SDMA, to avoid detecting incorrect clock ratio. Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com> Cc: Angus Ainslie (Purism) <angus@akkea.ca> Cc: Chris Healy <cphealy@gmail.com> Cc: Lucas Stach <l.stach@pengutronix.de> Cc: Fabio Estevam <fabio.estevam@nxp.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-15ARM: dts: imx6ul: Specify IMX6UL_CLK_IPG as "ipg" clock to SDMAAndrey Smirnov
[ Upstream commit 7b3132ecefdd1fcdf6b86e62021d0e55ea8034db ] Since 25aaa75df1e6 SDMA driver uses clock rates of "ipg" and "ahb" clock to determine if it needs to configure the IP block as operating at 1:1 or 1:2 clock ratio (ACR bit in SDMAARM_CONFIG). Specifying both clocks as IMX6UL_CLK_SDMA results in driver incorrectly thinking that ratio is 1:1 which results in broken SDMA functionality. Fix the code to specify IMX6UL_CLK_IPG as "ipg" clock for SDMA, to avoid detecting incorrect clock ratio. Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com> Cc: Angus Ainslie (Purism) <angus@akkea.ca> Cc: Chris Healy <cphealy@gmail.com> Cc: Lucas Stach <l.stach@pengutronix.de> Cc: Fabio Estevam <fabio.estevam@nxp.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-15ARM: dts: imx7d: Specify IMX7D_CLK_IPG as "ipg" clock to SDMAAndrey Smirnov
[ Upstream commit 412b032a1dc72fc9d1c258800355efa6671b6315 ] Since 25aaa75df1e6 SDMA driver uses clock rates of "ipg" and "ahb" clock to determine if it needs to configure the IP block as operating at 1:1 or 1:2 clock ratio (ACR bit in SDMAARM_CONFIG). Specifying both clocks as IMX7D_CLK_SDMA results in driver incorrectly thinking that ratio is 1:1 which results in broken SDMA functionality. Fix the code to specify IMX7D_CLK_IPG as "ipg" clock for SDMA, to avoid detecting incorrect clock ratio. Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com> Cc: Angus Ainslie (Purism) <angus@akkea.ca> Cc: Chris Healy <cphealy@gmail.com> Cc: Lucas Stach <l.stach@pengutronix.de> Cc: Fabio Estevam <fabio.estevam@nxp.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-15ARM: dts: imx6sll: Specify IMX6SLL_CLK_IPG as "ipg" clock to SDMAAndrey Smirnov
[ Upstream commit c5ed5daa65d5f665e666b76c3dbfa503066defde ] Since 25aaa75df1e6 SDMA driver uses clock rates of "ipg" and "ahb" clock to determine if it needs to configure the IP block as operating at 1:1 or 1:2 clock ratio (ACR bit in SDMAARM_CONFIG). Specifying both clocks as IMX6SLL_CLK_SDMA results in driver incorrectly thinking that ratio is 1:1 which results in broken SDMA functionality. Fix the code to specify IMX6SLL_CLK_IPG as "ipg" clock for SDMA, to avoid detecting incorrect clock ratio. Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com> Cc: Angus Ainslie (Purism) <angus@akkea.ca> Cc: Chris Healy <cphealy@gmail.com> Cc: Lucas Stach <l.stach@pengutronix.de> Cc: Fabio Estevam <fabio.estevam@nxp.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-15ARM: dts: imx6sx: Specify IMX6SX_CLK_IPG as "ahb" clock to SDMAAndrey Smirnov
[ Upstream commit cc839d0f8c284fcb7591780b568f13415bbb737c ] Since 25aaa75df1e6 SDMA driver uses clock rates of "ipg" and "ahb" clock to determine if it needs to configure the IP block as operating at 1:1 or 1:2 clock ratio (ACR bit in SDMAARM_CONFIG). Specifying both clocks as IMX6SL_CLK_SDMA results in driver incorrectly thinking that ratio is 1:1 which results in broken SDMA functionality. Fix the code to specify IMX6SL_CLK_AHB as "ahb" clock for SDMA, to avoid detecting incorrect clock ratio. Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com> Cc: Angus Ainslie (Purism) <angus@akkea.ca> Cc: Chris Healy <cphealy@gmail.com> Cc: Lucas Stach <l.stach@pengutronix.de> Cc: Fabio Estevam <fabio.estevam@nxp.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-15ARM: dts: imx53: Specify IMX5_CLK_IPG as "ahb" clock to SDMAAndrey Smirnov
[ Upstream commit 28c168018e0902c67eb9c60d0fc4c8aa166c4efe ] Since 25aaa75df1e6 SDMA driver uses clock rates of "ipg" and "ahb" clock to determine if it needs to configure the IP block as operating at 1:1 or 1:2 clock ratio (ACR bit in SDMAARM_CONFIG). Specifying both clocks as IMX5_CLK_SDMA results in driver incorrectly thinking that ratio is 1:1 which results in broken SDMA functionality. Fix the code to specify IMX5_CLK_AHB as "ahb" clock for SDMA, to avoid detecting incorrect clock ratio. Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com> Cc: Angus Ainslie (Purism) <angus@akkea.ca> Cc: Chris Healy <cphealy@gmail.com> Cc: Lucas Stach <l.stach@pengutronix.de> Cc: Fabio Estevam <fabio.estevam@nxp.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-15ARM: dts: imx50: Specify IMX5_CLK_IPG as "ahb" clock to SDMAAndrey Smirnov
[ Upstream commit b7b4fda2636296471e29b78c2aa9535d7bedb7a0 ] Since 25aaa75df1e6 SDMA driver uses clock rates of "ipg" and "ahb" clock to determine if it needs to configure the IP block as operating at 1:1 or 1:2 clock ratio (ACR bit in SDMAARM_CONFIG). Specifying both clocks as IMX5_CLK_SDMA results in driver incorrectly thinking that ratio is 1:1 which results in broken SDMA functionality. Fix the code to specify IMX5_CLK_AHB as "ahb" clock for SDMA, to avoid detecting incorrect clock ratio. Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com> Cc: Angus Ainslie (Purism) <angus@akkea.ca> Cc: Chris Healy <cphealy@gmail.com> Cc: Lucas Stach <l.stach@pengutronix.de> Cc: Fabio Estevam <fabio.estevam@nxp.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-15ARM: dts: imx51: Specify IMX5_CLK_IPG as "ahb" clock to SDMAAndrey Smirnov
[ Upstream commit 918bbde8085ae147a43dcb491953e0dd8f3e9d6a ] Since 25aaa75df1e6 SDMA driver uses clock rates of "ipg" and "ahb" clock to determine if it needs to configure the IP block as operating at 1:1 or 1:2 clock ratio (ACR bit in SDMAARM_CONFIG). Specifying both clocks as IMX5_CLK_SDMA results in driver incorrectly thinking that ratio is 1:1 which results in broken SDMA functionality. Fix the code to specify IMX5_CLK_AHB as "ahb" clock for SDMA, to avoid detecting incorrect clock ratio. Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com> Cc: Angus Ainslie (Purism) <angus@akkea.ca> Cc: Chris Healy <cphealy@gmail.com> Cc: Lucas Stach <l.stach@pengutronix.de> Cc: Fabio Estevam <fabio.estevam@nxp.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-15arm64: dts: imx8mq: Mark iomuxc_gpr as i.MX6Q compatibleAndrey Smirnov
[ Upstream commit beea0f22566cb32c35de89ab0980852b5bbc1c60 ] Mark iomuxc_gpr as compatible with "fsl,imx6q-iomuxc-gpr" in order to allow the i.MX6 PCIe driver to use it. Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com> Acked-by: Lucas Stach <l.stach@pengutronix.de> Reviewed-by: Fabio Estevam <festevam@gmail.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: Fabio Estevam <fabio.estevam@nxp.com> Cc: Chris Healy <cphealy@gmail.com> Cc: Lucas Stach <l.stach@pengutronix.de> Cc: Leonard Crestez <leonard.crestez@nxp.com> Cc: "A.s. Dong" <aisheng.dong@nxp.com> Cc: Richard Zhu <hongxing.zhu@nxp.com> Cc: linux-imx@nxp.com Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-15x86/PCI: Fix PCI IRQ routing table memory leakWenwen Wang
[ Upstream commit ea094d53580f40c2124cef3d072b73b2425e7bfd ] In pcibios_irq_init(), the PCI IRQ routing table 'pirq_table' is first found through pirq_find_routing_table(). If the table is not found and CONFIG_PCI_BIOS is defined, the table is then allocated in pcibios_get_irq_routing_table() using kmalloc(). Later, if the I/O APIC is used, this table is actually not used. In that case, the allocated table is not freed, which is a memory leak. Free the allocated table if it is not used. Signed-off-by: Wenwen Wang <wang6495@umn.edu> [bhelgaas: added Ingo's reviewed-by, since the only change since v1 was to use the irq_routing_table local variable name he suggested] Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Ingo Molnar <mingo@kernel.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-15arm64: defconfig: Update UFSHCD for Hi3660 socValentin Schneider
[ Upstream commit 7b3320e6b1795d68b7e30eb3fad0860f2664aedd ] Commit 7ee7ef24d02d ("scsi: arm64: defconfig: enable configs for Hisilicon ufs") set 'CONFIG_SCSI_UFS_HISI=y', but the configs it depends on (CONFIG_SCSI_HFSHCD_PLATFORM && CONFIG_SCSI_UFSHCD) were left to being built as modules. Commit 1f4fa50dd48f ("arm64: defconfig: Regenerate for v4.20") "fixed" that by reverting to 'CONFIG_SCSI_UFS_HISI=m'. Thing is, if the rootfs is stored in the on-board flash (which is the "canonical" way of doing things), we either need these drivers to be built-in, or we need to fiddle with an initramfs to access that flash and eventually load the modules installed over there. The former is the easiest, do that. Signed-off-by: Valentin Schneider <valentin.schneider@arm.com> Reviewed-by: Leo Yan <leo.yan@linaro.org> Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-15powerpc/pseries: Track LMB nid instead of using device treeNathan Fontenot
[ Upstream commit b2d3b5ee66f2a04a918cc043cec0c9ed3de58f40 ] When removing memory we need to remove the memory from the node it was added to instead of looking up the node it should be in in the device tree. During testing we have seen scenarios where the affinity for a LMB changes due to a partition migration or PRRN event. In these cases the node the LMB exists in may not match the node the device tree indicates it belongs in. This can lead to a system crash when trying to DLPAR remove the LMB after a migration or PRRN event. The current code looks up the node in the device tree to remove the LMB from, the crash occurs when we try to offline this node and it does not have any data, i.e. node_data[nid] == NULL. 36:mon> e cpu 0x36: Vector: 300 (Data Access) at [c0000001828b7810] pc: c00000000036d08c: try_offline_node+0x2c/0x1b0 lr: c0000000003a14ec: remove_memory+0xbc/0x110 sp: c0000001828b7a90 msr: 800000000280b033 dar: 9a28 dsisr: 40000000 current = 0xc0000006329c4c80 paca = 0xc000000007a55200 softe: 0 irq_happened: 0x01 pid = 76926, comm = kworker/u320:3 36:mon> t [link register ] c0000000003a14ec remove_memory+0xbc/0x110 [c0000001828b7a90] c00000000006a1cc arch_remove_memory+0x9c/0xd0 (unreliable) [c0000001828b7ad0] c0000000003a14e0 remove_memory+0xb0/0x110 [c0000001828b7b20] c0000000000c7db4 dlpar_remove_lmb+0x94/0x160 [c0000001828b7b60] c0000000000c8ef8 dlpar_memory+0x7e8/0xd10 [c0000001828b7bf0] c0000000000bf828 handle_dlpar_errorlog+0xf8/0x160 [c0000001828b7c60] c0000000000bf8cc pseries_hp_work_fn+0x3c/0xa0 [c0000001828b7c90] c000000000128cd8 process_one_work+0x298/0x5a0 [c0000001828b7d20] c000000000129068 worker_thread+0x88/0x620 [c0000001828b7dc0] c00000000013223c kthread+0x1ac/0x1c0 [c0000001828b7e30] c00000000000b45c ret_from_kernel_thread+0x5c/0x80 To resolve this we need to track the node a LMB belongs to when it is added to the system so we can remove it from that node instead of the node that the device tree indicates it should belong to. Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-15mips: Make sure dt memory regions are validSerge Semin
[ Upstream commit 93fa5b280761a4dbb14c5330f260380385ab2b49 ] There are situations when memory regions coming from dts may be too big for the platform physical address space. This especially concerns XPA-capable systems. Bootloader may determine more than 4GB memory available and pass it to the kernel over dts memory node, while kernel is built without XPA/64BIT support. In this case the region may either simply be truncated by add_memory_region() method or by u64->phys_addr_t type casting. But in worst case the method can even drop the memory region if it exceeds PHYS_ADDR_MAX size. So let's make sure the memory regions retrieved from dts are valid, and if some of them aren't, just manually truncate them with a warning printed out. Signed-off-by: Serge Semin <fancer.lancer@gmail.com> Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: James Hogan <jhogan@kernel.org> Cc: Mike Rapoport <rppt@linux.ibm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Thomas Bogendoerfer <tbogendoerfer@suse.de> Cc: Huacai Chen <chenhc@lemote.com> Cc: Stefan Agner <stefan@agner.ch> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Alexandre Belloni <alexandre.belloni@bootlin.com> Cc: Juergen Gross <jgross@suse.com> Cc: Serge Semin <Sergey.Semin@t-platforms.ru> Cc: linux-mips@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-15uml: fix a boot splat wrt use of cpu_all_maskMaciej Żenczykowski
[ Upstream commit 689a58605b63173acb0a8cf954af6a8f60440c93 ] Memory: 509108K/542612K available (3835K kernel code, 919K rwdata, 1028K rodata, 129K init, 211K bss, 33504K reserved, 0K cma-reserved) NR_IRQS: 15 clocksource: timer: mask: 0xffffffffffffffff max_cycles: 0x1cd42e205, max_idle_ns: 881590404426 ns ------------[ cut here ]------------ WARNING: CPU: 0 PID: 0 at kernel/time/clockevents.c:458 clockevents_register_device+0x72/0x140 posix-timer cpumask == cpu_all_mask, using cpu_possible_mask instead Modules linked in: CPU: 0 PID: 0 Comm: swapper Not tainted 5.1.0-rc4-00048-ged79cc87302b #4 Stack: 604ebda0 603c5370 604ebe20 6046fd17 00000000 6006fcbb 604ebdb0 603c53b5 604ebe10 6003bfc4 604ebdd0 9000001ca Call Trace: [<6006fcbb>] ? printk+0x0/0x94 [<60083160>] ? clockevents_register_device+0x72/0x140 [<6001f16e>] show_stack+0x13b/0x155 [<603c5370>] ? dump_stack_print_info+0xe2/0xeb [<6006fcbb>] ? printk+0x0/0x94 [<603c53b5>] dump_stack+0x2a/0x2c [<6003bfc4>] __warn+0x10e/0x13e [<60070320>] ? vprintk_func+0xc8/0xcf [<60030fd6>] ? block_signals+0x0/0x16 [<6006fcbb>] ? printk+0x0/0x94 [<6003c08b>] warn_slowpath_fmt+0x97/0x99 [<600311a1>] ? set_signals+0x0/0x3f [<6003bff4>] ? warn_slowpath_fmt+0x0/0x99 [<600842cb>] ? tick_oneshot_mode_active+0x44/0x4f [<60030fd6>] ? block_signals+0x0/0x16 [<6006fcbb>] ? printk+0x0/0x94 [<6007d2d5>] ? __clocksource_select+0x20/0x1b1 [<60030fd6>] ? block_signals+0x0/0x16 [<6006fcbb>] ? printk+0x0/0x94 [<60083160>] clockevents_register_device+0x72/0x140 [<60031192>] ? get_signals+0x0/0xf [<60030fd6>] ? block_signals+0x0/0x16 [<6006fcbb>] ? printk+0x0/0x94 [<60002eec>] um_timer_setup+0xc8/0xca [<60001b59>] start_kernel+0x47f/0x57e [<600035bc>] start_kernel_proc+0x49/0x4d [<6006c483>] ? kmsg_dump_register+0x82/0x8a [<6001de62>] new_thread_handler+0x81/0xb2 [<60003571>] ? kmsg_dumper_stdout_init+0x1a/0x1c [<60020c75>] uml_finishsetup+0x54/0x59 random: get_random_bytes called from init_oops_id+0x27/0x34 with crng_init=0 ---[ end trace 00173d0117a88acb ]--- Calibrating delay loop... 6941.90 BogoMIPS (lpj=34709504) Signed-off-by: Maciej Żenczykowski <maze@google.com> Cc: Jeff Dike <jdike@addtoit.com> Cc: Richard Weinberger <richard@nod.at> Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com> Cc: linux-um@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Richard Weinberger <richard@nod.at> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-15perf/x86/intel: Allow PEBS multi-entry in watermark modeStephane Eranian
[ Upstream commit c7a286577d7592720c2f179aadfb325a1ff48c95 ] This patch fixes a restriction/bug introduced by: 583feb08e7f7 ("perf/x86/intel: Fix handling of wakeup_events for multi-entry PEBS") The original patch prevented using multi-entry PEBS when wakeup_events != 0. However given that wakeup_events is part of a union with wakeup_watermark, it means that in watermark mode, PEBS multi-entry is also disabled which is not the intent. This patch fixes this by checking if watermark mode is enabled. Signed-off-by: Stephane Eranian <eranian@google.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: jolsa@redhat.com Cc: kan.liang@intel.com Cc: vincent.weaver@maine.edu Fixes: 583feb08e7f7 ("perf/x86/intel: Fix handling of wakeup_events for multi-entry PEBS") Link: http://lkml.kernel.org/r/20190514003400.224340-1-eranian@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
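A hedged sketch of the corrected condition (wakeup_events/watermark are the real perf_event_attr union members; the enclosing function and flag are assumptions): multi-entry PEBS stays enabled when the wakeup field is actually being used as a watermark.

	/* Only a non-zero wakeup_events value that is NOT a watermark should
	 * force single-entry PEBS. */
	if (!(event->attr.freq ||
	      (event->attr.wakeup_events && !event->attr.watermark)))
		event->hw.flags |= PERF_X86_EVENT_AUTO_RELOAD;	/* flag assumed */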
2019-06-15ARM: prevent tracing IPI_CPU_BACKTRACEArnd Bergmann
[ Upstream commit be167862ae7dd85c56d385209a4890678e1b0488 ] Patch series "compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING", v3. This patch (of 11): When function tracing for IPIs is enabled, we get a warning for an overflow of the ipi_types array with the IPI_CPU_BACKTRACE type as triggered by raise_nmi(): arch/arm/kernel/smp.c: In function 'raise_nmi': arch/arm/kernel/smp.c:489:2: error: array subscript is above array bounds [-Werror=array-bounds] trace_ipi_raise(target, ipi_types[ipinr]); This is a correct warning as we actually overflow the array here. This patch changes raise_nmi() to call __smp_cross_call() instead of smp_cross_call(), to avoid calling into ftrace. For clarification, I'm also adding two new code comments describing how this one is special. The warning appears to have shown up after commit e7273ff49acf ("ARM: 8488/1: Make IPI_CPU_BACKTRACE a "non-secure" SGI"), which changed the number assignment from '15' to '8', but as far as I can tell has existed since the IPI tracepoints were first introduced. If we decide to backport this patch to stable kernels, we probably need to backport e7273ff49acf as well. [yamada.masahiro@socionext.com: rebase on v5.1-rc1] Link: http://lkml.kernel.org/r/20190423034959.13525-2-yamada.masahiro@socionext.com Fixes: e7273ff49acf ("ARM: 8488/1: Make IPI_CPU_BACKTRACE a "non-secure" SGI") Fixes: 365ec7b17327 ("ARM: add IPI tracepoints") # v3.17 Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Christophe Leroy <christophe.leroy@c-s.fr> Cc: Mathieu Malaterre <malat@debian.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Stefan Agner <stefan@agner.ch> Cc: Boris Brezillon <bbrezillon@kernel.org> Cc: Miquel Raynal <miquel.raynal@bootlin.com> Cc: Richard Weinberger <richard@nod.at> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Brian Norris <computersforpeace@gmail.com> Cc: Marek Vasut <marek.vasut@gmail.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Borislav Petkov <bp@suse.de> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
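A sketch of the change as described in the message (bodies inferred, not copied from the patch): raise_nmi() bypasses the traced wrapper so IPI_CPU_BACKTRACE never indexes ipi_types[].

	static void raise_nmi(cpumask_t *mask)
	{
		/* Call the untraced helper directly: IPI_CPU_BACKTRACE has no
		 * entry in ipi_types[], so it must not reach trace_ipi_raise(). */
		__smp_cross_call(mask, IPI_CPU_BACKTRACE);
	}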
2019-06-11MIPS: pistachio: Build uImage.gz by defaultPaul Burton
commit e4f2d1af7163becb181419af9dece9206001e0a6 upstream. The pistachio platform uses the U-Boot bootloader & generally boots a kernel in the uImage format. As such it's useful to build one when building the kernel, but to do so currently requires the user to manually specify a uImage target on the make command line. Make uImage.gz the pistachio platform's default build target, so that the default is to build a kernel image that we can actually boot on a board such as the MIPS Creator Ci40. Marked for stable backport as far as v4.1 where pistachio support was introduced. This is primarily useful for CI systems such as kernelci.org which will benefit from us building a suitable image which can then be booted as part of automated testing, extending our test coverage to the affected stable branches. Signed-off-by: Paul Burton <paul.burton@mips.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Kevin Hilman <khilman@baylibre.com> Tested-by: Kevin Hilman <khilman@baylibre.com> URL: https://groups.io/g/kernelci/message/388 Cc: stable@vger.kernel.org # v4.1+ Cc: linux-mips@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-11MIPS: Bounds check virt_addr_validPaul Burton
commit 074a1e1167afd82c26f6d03a9a8b997d564bb241 upstream. The virt_addr_valid() function is meant to return true iff virt_to_page() will return a valid struct page reference. This is true iff the address provided is found within the unmapped address range between PAGE_OFFSET & MAP_BASE, but we don't currently check for that condition. Instead we simply mask the address to obtain what will be a physical address if the virtual address is indeed in the desired range, shift it to form a PFN & then call pfn_valid(). This can incorrectly return true if called with a virtual address which, after masking, happens to form a physical address corresponding to a valid PFN. For example we may vmalloc an address in the kernel mapped region starting at MAP_BASE & obtain the virtual address: addr = 0xc000000000002000 When masked by virt_to_phys(), which uses __pa() & in turn CPHYSADDR(), we obtain the following (bogus) physical address: addr = 0x2000 In a common system with PHYS_OFFSET=0 this will correspond to a valid struct page which should really be accessed by virtual address PAGE_OFFSET+0x2000, causing virt_addr_valid() to incorrectly return 1 indicating that the original address corresponds to a struct page. This is equivalent to the ARM64 change made in commit ca219452c6b8 ("arm64: Correctly bounds check virt_addr_valid"). This fixes fallout when hardened usercopy is enabled caused by the related commit 517e1fbeb65f ("mm/usercopy: Drop extra is_vmalloc_or_module() check") which removed a check for the vmalloc range that was present from the introduction of the hardened usercopy feature. Signed-off-by: Paul Burton <paul.burton@mips.com> References: ca219452c6b8 ("arm64: Correctly bounds check virt_addr_valid") References: 517e1fbeb65f ("mm/usercopy: Drop extra is_vmalloc_or_module() check") Reported-by: Julien Cristau <jcristau@debian.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Tested-by: YunQiang Su <ysu@wavecomp.com> URL: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=929366 Cc: stable@vger.kernel.org # v4.12+ Cc: linux-mips@vger.kernel.org Cc: Yunqiang Su <ysu@wavecomp.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
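A hedged sketch of the described bounds check, not necessarily the exact upstream code: only virtual addresses inside the unmapped kernel segment between PAGE_OFFSET and MAP_BASE may be converted with virt_to_phys() and passed to pfn_valid().

	int __virt_addr_valid(const volatile void *kaddr)
	{
		unsigned long vaddr = (unsigned long)kaddr;

		/* Reject vmalloc/module addresses (and anything outside the
		 * unmapped kernel range) before forming a PFN. */
		if (vaddr < PAGE_OFFSET || vaddr >= MAP_BASE)
			return 0;

		return pfn_valid(PFN_DOWN(virt_to_phys(kaddr)));
	}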
2019-06-11s390/mm: fix address space detection in exception handlingGerald Schaefer
commit 962f0af83c239c0aef05639631e871c874b00f99 upstream. Commit 0aaba41b58bc ("s390: remove all code using the access register mode") removed access register mode from the kernel, and also from the address space detection logic. However, user space could still switch to access register mode (trans_exc_code == 1), and exceptions in that mode would not be correctly assigned. Fix this by adding a check for trans_exc_code == 1 to get_fault_type(), and remove the wrong comment line before that function. Fixes: 0aaba41b58bc ("s390: remove all code using the access register mode") Reviewed-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: <stable@vger.kernel.org> # v4.15+ Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-11x86/insn-eval: Fix use-after-free access to LDT entryJann Horn
commit de9f869616dd95e95c00bdd6b0fcd3421e8a4323 upstream. get_desc() computes a pointer into the LDT while holding a lock that protects the LDT from being freed, but then drops the lock and returns the (now potentially dangling) pointer to its caller. Fix it by giving the caller a copy of the LDT entry instead. Fixes: 670f928ba09b ("x86/insn-eval: Add utility function to get segment descriptor") Cc: stable@vger.kernel.org Signed-off-by: Jann Horn <jannh@google.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
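A sketch of the pattern the fix describes, with locking and field names assumed rather than taken from the patch: the LDT entry is copied out while the LDT lock is held, so the caller never holds a pointer that can dangle.

	/* Copy the descriptor under the lock instead of returning a pointer
	 * into the LDT, which may be freed once the lock is dropped.
	 * (Lock, index handling and field names are assumptions.) */
	mutex_lock(&current->active_mm->context.lock);
	ldt = current->active_mm->context.ldt;
	if (ldt && idx < ldt->nr_entries) {
		*out = ldt->entries[idx];	/* copy, not a pointer */
		success = true;
	}
	mutex_unlock(&current->active_mm->context.lock);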
2019-06-11x86/power: Fix 'nosmt' vs hibernation triple fault during resumeJiri Kosina
commit ec527c318036a65a083ef68d8ba95789d2212246 upstream. As explained in 0cc3cd21657b ("cpu/hotplug: Boot HT siblings at least once") we always, no matter what, have to bring up x86 HT siblings during boot at least once in order to avoid first MCE bringing the system to its knees. That means that whenever 'nosmt' is supplied on the kernel command-line, all the HT siblings are as a result sitting in mwait or cpuidle after going through the online-offline cycle at least once. This causes a serious issue though when a kernel, which saw 'nosmt' on its commandline, is going to perform resume from hibernation: if the resume from the hibernated image is successful, cr3 is flipped in order to point to the address space of the kernel that is being resumed, which in turn means that all the HT siblings are all of a sudden mwaiting on an address which is no longer valid. That results in triple fault shortly after cr3 is switched, and machine reboots. Fix this by always waking up all the SMT siblings before initiating the 'restore from hibernation' process; this guarantees that all the HT siblings will be properly carried over to the resumed kernel waiting in resume_play_dead(), and acted upon accordingly afterwards, based on the target kernel configuration. Symmetrically, the resumed kernel has to push the SMT siblings to mwait again in case it has SMT disabled; this means it has to online all the siblings when resuming (so that they come out of hlt) and offline them again to let them reach mwait. Cc: 4.19+ <stable@vger.kernel.org> # v4.19+ Debugged-by: Thomas Gleixner <tglx@linutronix.de> Fixes: 0cc3cd21657b ("cpu/hotplug: Boot HT siblings at least once") Signed-off-by: Jiri Kosina <jkosina@suse.cz> Acked-by: Pavel Machek <pavel@ucw.cz> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-11parisc: Fix crash due alternative coding for NP iopdir_fdc bitHelge Deller
commit 527a1d1ede98479bf90c31a64822107ac7e6d276 upstream. According to the found documentation, data cache flushes and sync instructions are needed on the PCX-U+ (PA8200, e.g. C200/C240) platforms, while PCX-W (PA8500, e.g. C360) platforms apparently don't need those flushes when changing the IO PDIR data structures. We have no documentation for PCX-W+ (PA8600) and PCX-W2 (PA8700) CPUs, but Carlo Pisani reported that his C3600 machine (PA8600, PCX-W+) fails when the fdc instructions were removed. His firmware didn't set the NIOP bit, so one may assume it's a firmware bug since other C3750 machines had the bit set. Even if documentation (as mentioned above) states that PCX-W (PA8500, e.g. J5000) does not need fdc flushes, Sven could show that an Adaptec 29320A PCI-X SCSI controller reliably failed on a dd command during the first five minutes in his J5000 when fdc flushes were missing. Going forward, we will now NOT replace the fdc and sync assembler instructions by NOPS if: a) the NP iopdir_fdc bit was set by firmware, or b) we find a CPU up to and including a PCX-W+ (PA8600). This fixes the HPMC crashes on C240 and C36XX machines. For other machines we rely on the firmware to set the bit when needed. In case one finds HPMC issues, people could try to boot their machines with the "no-alternatives" kernel option to turn off any alternative patching. Reported-by: Sven Schnelle <svens@stackframe.org> Reported-by: Carlo Pisani <carlojpisani@gmail.com> Tested-by: Sven Schnelle <svens@stackframe.org> Fixes: 3847dab77421 ("parisc: Add alternative coding infrastructure") Signed-off-by: Helge Deller <deller@gmx.de> Cc: stable@vger.kernel.org # 5.0+ Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-11ARC: mm: SIGSEGV userspace trying to access kernel virtual memoryEugeniy Paltsev
commit a8c715b4dd73c26a81a9cc8dc792aa715d8b4bb2 upstream. As of today, if a userspace process tries to access a kernel virtual address (0x7000_0000 to 0x7ffff_ffff) such that a legit kernel mapping already exists, that process hangs instead of being killed with SIGSEGV. Fix that by ensuring that do_page_fault() handles kernel vaddr only if in kernel mode. And given this, we can also simplify the code a bit. Now a vmalloc fault implies kernel mode so its failure (for some reason) can reuse the @no_context label and we can remove @bad_area_nosemaphore. Reproduce user test for original problem: ------------------------>8----------------- #include <stdlib.h> #include <stdint.h> int main(int argc, char *argv[]) { volatile uint32_t temp; temp = *(uint32_t *)(0x70000000); } ------------------------>8----------------- Cc: <stable@vger.kernel.org> Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com> Signed-off-by: Vineet Gupta <vgupta@synopsys.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-09x86/kprobes: Set instruction page as executableNadav Amit
[ Upstream commit 7298e24f904224fa79eb8fd7e0fbd78950ccf2db ] Set the page as executable after allocation. This patch is a preparatory patch for a following patch that makes module allocated pages non-executable. While at it, do some small cleanup of what appears to be unnecessary masking. Signed-off-by: Nadav Amit <namit@vmware.com> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: <akpm@linux-foundation.org> Cc: <ard.biesheuvel@linaro.org> Cc: <deneen.t.dock@intel.com> Cc: <kernel-hardening@lists.openwall.com> Cc: <kristen@linux.intel.com> Cc: <linux_dti@icloud.com> Cc: <will.deacon@arm.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Rik van Riel <riel@surriel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20190426001143.4983-11-namit@vmware.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-06-09Revert "x86/build: Move _etext to actual end of .text"Greg Kroah-Hartman
This reverts commit 392bef709659abea614abfe53cf228e7a59876a4. It seems to cause lots of problems when using the gold linker, and no one really needs this at the moment, so just revert it from the stable trees. Cc: Sami Tolvanen <samitolvanen@google.com> Reported-by: Kees Cook <keescook@chromium.org> Cc: Borislav Petkov <bp@suse.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Reported-by: Alec Ari <neotheuser@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-09x86/ima: Check EFI_RUNTIME_SERVICES before usingScott Wood
commit 558b523d46289f111d53d7c42211069063be5985 upstream. Checking efi_enabled(EFI_BOOT) is not sufficient to ensure that EFI runtime services are available, e.g. if efi=noruntime is used. Without this, I get an oops on a PREEMPT_RT kernel where efi=noruntime is the default. Fixes: 399574c64eaf94e8 ("x86/ima: retry detecting secure boot mode") Cc: stable@vger.kernel.org (linux-5.0) Signed-off-by: Scott Wood <swood@redhat.com> Signed-off-by: Mimi Zohar <zohar@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
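A sketch of the guard described (its exact placement in the IMA secure-boot detection path is an assumption): bail out before touching EFI variables unless runtime services are actually available.

	/* efi=noruntime leaves EFI_BOOT set but no runtime services, so also
	 * require EFI_RUNTIME_SERVICES before calling into them. */
	if (!efi_enabled(EFI_BOOT) || !efi_enabled(EFI_RUNTIME_SERVICES))
		return;	/* exact return value in the real code is assumed */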
2019-06-09arm64: Fix the arm64_personality() syscall wrapper redirectionCatalin Marinas
commit 00377277166bac6939d8f72b429301369acaf2d8 upstream. Following commit 4378a7d4be30 ("arm64: implement syscall wrappers"), the syscall function names gained the '__arm64_' prefix. Ensure that we have the correct #define for redirecting a default syscall through a wrapper. Fixes: 4378a7d4be30 ("arm64: implement syscall wrappers") Cc: <stable@vger.kernel.org> # 4.19.x- Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
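The redirect the message refers to is a one-line #define; a before/after sketch (the exact macro spelling is an assumption based on the prefixes named above):

	/* before: no longer matches anything once syscalls gained __arm64_sys_ */
	#define sys_personality		__arm64_personality

	/* after: redirect the wrapper-prefixed name instead */
	#define __arm64_sys_personality	__arm64_personality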
2019-06-09signal/arm64: Use force_sig not force_sig_fault for SIGKILLEric W. Biederman
commit d76cac67db40c172791ce07948367b96a758e45b upstream. I don't think this is userspace visible but SIGKILL does not have any si_codes that use the fault member of the siginfo union. Correct this the simple way and call force_sig instead of force_sig_fault when the signal is SIGKILL. The two known places where synchronous SIGKILL is generated are do_bad_area and fpsimd_save. The call paths to force_sig_fault are: do_bad_area arm64_force_sig_fault force_sig_fault force_signal_inject arm64_notify_die arm64_force_sig_fault force_sig_fault Which means correcting this in arm64_force_sig_fault is enough to ensure the arm64 code is not misusing the generic code, which could lead to maintenance problems later. Cc: stable@vger.kernel.org Cc: Dave Martin <Dave.Martin@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will.deacon@arm.com> Fixes: af40ff687bc9 ("arm64: signal: Ensure si_code is valid for all fault signals") Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>