path: root/arch
2020-05-05  x86/entry: Convert SIMD coprocessor error exception to IDTENTRY  (Thomas Gleixner)

    Convert #XF to IDTENTRY_ERRORCODE:
      - Implement the C entry point with DEFINE_IDTENTRY
      - Emit the ASM stub with DECLARE_IDTENTRY
      - Handle INVD_BUG in C
      - Remove the ASM idtentry in 64bit
      - Remove the open coded ASM entry code in 32bit
      - Fixup the XEN/PV code
      - Remove the old prototypes
      - Remove the RCU warning as the new entry macro ensures correctness

    No functional change.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry: Convert Alignment check exception to IDTENTRY  (Thomas Gleixner)

    Convert #AC to IDTENTRY_ERRORCODE:
      - Implement the C entry point with DEFINE_IDTENTRY
      - Emit the ASM stub with DECLARE_IDTENTRY
      - Remove the ASM idtentry in 64bit
      - Remove the open coded ASM entry code in 32bit
      - Fixup the XEN/PV code
      - Remove the old prototypes
      - Remove the RCU warning as the new entry macro ensures correctness

    No functional change.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry: Convert Coprocessor error exception to IDTENTRY  (Thomas Gleixner)

    Convert #MF to IDTENTRY_ERRORCODE:
      - Implement the C entry point with DEFINE_IDTENTRY
      - Emit the ASM stub with DECLARE_IDTENTRY
      - Remove the ASM idtentry in 64bit
      - Remove the open coded ASM entry code in 32bit
      - Fixup the XEN/PV code
      - Remove the old prototypes
      - Remove the RCU warning as the new entry macro ensures correctness

    No functional change.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry: Convert Spurious interrupt bug exception to IDTENTRY  (Thomas Gleixner)

    Convert #SPURIOUS to IDTENTRY_ERRORCODE:
      - Implement the C entry point with DEFINE_IDTENTRY
      - Emit the ASM stub with DECLARE_IDTENTRY
      - Remove the ASM idtentry in 64bit
      - Remove the open coded ASM entry code in 32bit
      - Fixup the XEN/PV code
      - Remove the old prototypes

    No functional change.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry: Convert General protection exception to IDTENTRY  (Thomas Gleixner)

    Convert #GP to IDTENTRY_ERRORCODE:
      - Implement the C entry point with DEFINE_IDTENTRY
      - Emit the ASM stub with DECLARE_IDTENTRY
      - Remove the ASM idtentry in 64bit
      - Remove the open coded ASM entry code in 32bit
      - Fixup the XEN/PV code
      - Remove the old prototypes
      - Remove the RCU warning as the new entry macro ensures correctness

    No functional change.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry: Convert Stack segment exception to IDTENTRY  (Thomas Gleixner)

    Convert #SS to IDTENTRY_ERRORCODE:
      - Implement the C entry point with DEFINE_IDTENTRY
      - Emit the ASM stub with DECLARE_IDTENTRY
      - Remove the ASM idtentry in 64bit
      - Remove the open coded ASM entry code in 32bit
      - Fixup the XEN/PV code
      - Remove the old prototypes

    No functional change.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry: Convert Segment not present exception to IDTENTRY  (Thomas Gleixner)

    Convert #NP to IDTENTRY_ERRORCODE:
      - Implement the C entry point with DEFINE_IDTENTRY
      - Emit the ASM stub with DECLARE_IDTENTRY
      - Remove the ASM idtentry in 64bit
      - Remove the open coded ASM entry code in 32bit
      - Fixup the XEN/PV code
      - Remove the old prototypes

    No functional change.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry: Convert Invalid TSS exception to IDTENTRY  (Thomas Gleixner)

    Convert #TS to IDTENTRY_ERRORCODE:
      - Implement the C entry point with DEFINE_IDTENTRY
      - Emit the ASM stub with DECLARE_IDTENTRY
      - Remove the ASM idtentry in 64bit
      - Remove the open coded ASM entry code in 32bit
      - Fixup the XEN/PV code
      - Remove the old prototypes

    No functional change.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry: Provide IDTENTRY_ERRORCODE  (Thomas Gleixner)

    Same as IDTENTRY but the C entry point has an error code argument.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

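To illustrate what the error code variant adds, a rough sketch of a handler using it (the handler body shown is an illustrative example, not the exact code added in this series):

    /* The CPU-pushed error code is handed to the C body as a second
     * argument named error_code; plain DEFINE_IDTENTRY handlers only
     * see pt_regs. */
    DEFINE_IDTENTRY_ERRORCODE(exc_alignment_check)
    {
            /* illustrative body: both regs and error_code are in scope */
            do_trap(X86_TRAP_AC, SIGBUS, "alignment check", regs,
                    error_code, BUS_ADRALN, NULL);
    }
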
2020-05-05  x86/entry: Convert Coprocessor segment overrun exception to IDTENTRY  (Thomas Gleixner)

    Convert #OLD_MF to IDTENTRY:
      - Implement the C entry point with DEFINE_IDTENTRY
      - Emit the ASM stub with DECLARE_IDTENTRY
      - Remove the ASM idtentry in 64bit
      - Remove the open coded ASM entry code in 32bit
      - Fixup the XEN/PV code
      - Remove the old prototypes

    No functional change.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry: Convert Device not available exception to IDTENTRY  (Thomas Gleixner)

    Convert #NM to IDTENTRY:
      - Implement the C entry point with DEFINE_IDTENTRY
      - Emit the ASM stub with DECLARE_IDTENTRY
      - Remove the ASM idtentry in 64bit
      - Remove the open coded ASM entry code in 32bit
      - Fixup the XEN/PV code
      - Remove the old prototypes
      - Remove the RCU warning as the new entry macro ensures correctness

    No functional change.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry: Convert Invalid Opcode exception to IDTENTRY  (Thomas Gleixner)

    Convert #UD to IDTENTRY:
      - Implement the C entry point with DEFINE_IDTENTRY
      - Emit the ASM stub with DECLARE_IDTENTRY
      - Remove the ASM idtentry in 64bit
      - Remove the open coded ASM entry code in 32bit
      - Fixup the XEN/PV code
      - Fixup the F00F bug call in fault.c
      - Remove the old prototypes

    No functional change.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry: Convert Bounds exception to IDTENTRY  (Thomas Gleixner)

    Convert #BR to IDTENTRY:
      - Implement the C entry point with DEFINE_IDTENTRY
      - Emit the ASM stub with DECLARE_IDTENTRY
      - Remove the ASM idtentry in 64bit
      - Remove the open coded ASM entry code in 32bit
      - Fixup the XEN/PV code
      - Remove the old prototypes
      - Remove the RCU warning as the new entry macro ensures correctness

    No functional change.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry: Convert Overflow exception to IDTENTRY  (Thomas Gleixner)

    Convert #OF to IDTENTRY:
      - Implement the C entry point with DEFINE_IDTENTRY
      - Emit the ASM stub with DECLARE_IDTENTRY
      - Remove the ASM idtentry in 64bit
      - Remove the open coded ASM entry code in 32bit
      - Fixup the XEN/PV code
      - Remove the old prototypes

    No functional change.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry: Convert Divide Error to IDTENTRY  (Thomas Gleixner)

    Convert #DE to IDTENTRY:
      - Implement the C entry point with DEFINE_IDTENTRY
      - Emit the ASM stub with DECLARE_IDTENTRY
      - Remove the ASM idtentry in 64bit
      - Remove the open coded ASM entry code in 32bit
      - Fixup the XEN/PV code

    No functional change.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>

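A conversion like this one touches three places; condensed, the result looks roughly like the following (a sketch of the pattern with details hedged, not the literal diff):

    /* asm/idtentry.h: one declaration emits the C prototype, the ASM
     * stub prototype and, when included from ASM, the stub itself */
    DECLARE_IDTENTRY(X86_TRAP_DE, exc_divide_error);

    /* kernel/traps.c: the C entry point replacing do_divide_error() */
    DEFINE_IDTENTRY(exc_divide_error)
    {
            do_error_trap(regs, 0, "divide error", X86_TRAP_DE, SIGFPE,
                          FPE_INTDIV, error_get_trap_addr(regs));
    }

    /* kernel/idt.c: the IDT entry now points at the generated stub */
    INTG(X86_TRAP_DE, asm_exc_divide_error),
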
2020-05-05  x86/traps: Prepare for using DEFINE_IDTENTRY  (Thomas Gleixner)

    Prepare for using IDTENTRY to define the C exception/trap entry points. It would be possible to glue this into the existing macro maze, but it's simpler and more readable in the end to just make them distinct.

    Provide a trivial inline helper to read the trap address. The existing macros will be removed once all instances are converted.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>

2020-05-05  x86/entry/common: Provide idtentry_enter/exit()  (Thomas Gleixner)

    Provide functions which handle the low level entry and exit similar to enter/exit from user mode.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/idtentry: Provide macros to define/declare IDT entry points  (Thomas Gleixner)

    Provide DECLARE/DEFINE_IDTENTRY() macros.

    DEFINE_IDTENTRY() provides a wrapper which acts as the function definition. The exception handler body is just appended to it with curly brackets. The entry point is marked notrace/noprobe so that irq tracing and the enter_from_user_mode() can be moved into the C-entry point.

    As all C-entries use the same macro (or a later variant) the necessary entry handling can be implemented at one central place.

    DECLARE_IDTENTRY() provides the function prototypes:
      - The C entry point        cfunc
      - The ASM entry point      asm_cfunc
      - The XEN/PV entry point   xen_asm_cfunc

    They all follow the same naming convention.

    When included from ASM code DECLARE_IDTENTRY() is a macro which emits the low level entry point in assembly by instantiating idtentry.

    IDTENTRY is the simplest variant which just has a pt_regs argument. It's going to be used for all exceptions which have no error code.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>

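Schematically, the wrapper generated by DEFINE_IDTENTRY() looks roughly like this (a simplified sketch of the idea, not the literal macro; helper and attribute details are hedged):

    #define DEFINE_IDTENTRY(func)                                       \
    static __always_inline void __##func(struct pt_regs *regs);        \
                                                                        \
    /* the real macro also marks this notrace/noprobe */               \
    __visible void func(struct pt_regs *regs)                           \
    {                                                                   \
            idtentry_enter(regs);   /* shared low level entry work */  \
            __##func(regs);                                             \
            idtentry_exit(regs);    /* shared low level exit work  */  \
    }                                                                   \
                                                                        \
    /* the handler body is appended here with curly brackets */        \
    static __always_inline void __##func(struct pt_regs *regs)
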
2020-05-05  x86/entry/32: Provide macro to emit IDT entry stubs  (Thomas Gleixner)

    32-bit and 64-bit have unnecessarily different ways to populate the exception entry code. Provide an idtentry macro which allows consolidating all of that.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>

2020-05-05  x86/entry/64: Provide sane error entry/exit  (Thomas Gleixner)

    For gradual conversion provide a macro parameter and the required code which allows instrumentation and interrupt flags tracking to be handled in C.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry: Disentangle idtentry  (Thomas Gleixner)

    idtentry is a completely unreadable maze. Split it into distinct idtentry variants which only contain the minimal code:
      - idtentry for regular exceptions
      - idtentry_mce_debug for #MCE and #DB
      - idtentry_df for #DF

    The generated binary code is equivalent.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>

2020-05-05  x86/entry/64: Reorder idtentries  (Thomas Gleixner)

    Move them all together so verifying the cleanup patches for binary equivalence will be easier.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>
    Acked-by: Andy Lutomirski <luto@kernel.org>

2020-05-05  x86/traps: Split trap numbers out in a separate header  (Thomas Gleixner)

    So they can be used in ASM code. For this it is also necessary to convert them to defines. Will be used for the rework of the entry code.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Andy Lutomirski <luto@kernel.org>
    Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>

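The vector numbers themselves are fixed by the architecture, so the new header boils down to plain defines usable from both C and ASM; an illustrative excerpt:

    /* asm/trapnr.h (illustrative excerpt) */
    #define X86_TRAP_DE      0      /* Divide Error */
    #define X86_TRAP_DB      1      /* Debug */
    #define X86_TRAP_NMI     2      /* Non-maskable Interrupt */
    #define X86_TRAP_BP      3      /* Breakpoint */
    #define X86_TRAP_UD      6      /* Invalid Opcode */
    #define X86_TRAP_DF      8      /* Double Fault */
    #define X86_TRAP_GP     13      /* General Protection Fault */
    #define X86_TRAP_PF     14      /* Page Fault */
    #define X86_TRAP_MC     18      /* Machine Check */
    #define X86_TRAP_XF     19      /* SIMD Floating-Point Exception */
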
2020-05-05  x86/traps: Make interrupt enable/disable symmetric in C code  (Thomas Gleixner)

    Traps enable interrupts conditionally but rely on the ASM return code to disable them again. That results in redundant interrupt disable and trace calls.

    Make the trap handlers disable interrupts before returning to avoid that, which allows simplification of the ASM entry code.

    Originally-by: Peter Zijlstra <peterz@infradead.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

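The resulting shape of a converted handler is a symmetric enable/disable pair around the actual work, roughly as follows (a sketch; the handler name is a placeholder and the cond_local_irq_*() helpers follow the naming convention used in traps.c):

    static void handle_example_trap(struct pt_regs *regs, long error_code)
    {
            /* enable IRQs only if the interrupted context had them on */
            cond_local_irq_enable(regs);

            /* ... do the actual trap handling, e.g. send a signal ... */

            /* disable again so the ASM return path needs no extra work */
            cond_local_irq_disable(regs);
    }
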
2020-05-05  x86/entry: Disable interrupts for native_load_gs_index() in C code  (Thomas Gleixner)

    There is absolutely no point in doing this in ASM code. Move it to C.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

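Conceptually the change just moves the interrupt fencing from the ASM stub into a C wrapper, along these lines (a sketch, assuming the bare GS load remains a small ASM helper):

    static inline void native_load_gs_index(unsigned int selector)
    {
            unsigned long flags;

            local_irq_save(flags);          /* replaces the cli/sti in ASM */
            asm_load_gs_index(selector);    /* bare ASM helper, no IRQ work */
            local_irq_restore(flags);
    }
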
2020-05-05  x86/traps: Mark sync_regs() noinstr  (Thomas Gleixner)

    Replace the notrace and NOKPROBE annotations with noinstr.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/traps: Mark fixup_bad_iret() noinstr  (Thomas Gleixner)

    This is called from deep entry ASM in a situation where instrumentation will cause more harm than providing useful information.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/kvm/svm: Move guest enter/exit into .noinstr.text  [entry-v4-part2]  (Thomas Gleixner)

    Move the functions which are inside the RCU off region into the non-instrumentable text section.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Sean Christopherson <sean.j.christopherson@intel.com>

2020-05-05  x86/kvm/vmx: Move guest enter/exit into .noinstr.text  (Thomas Gleixner)

    Move the functions which are inside the RCU off region into the non-instrumentable text section.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Sean Christopherson <sean.j.christopherson@intel.com>

2020-05-05  x86/kvm/svm: Handle hardirqs properly on guest enter/exit  (Thomas Gleixner)

    Add hardirq tracing to guest enter/exit functions in the same way as it is done in the user mode enter/exit code.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Sean Christopherson <sean.j.christopherson@intel.com>

2020-05-05  x86/kvm/vmx: Add hardirq tracing to guest enter/exit  (Thomas Gleixner)

    Add hardirq tracing to guest enter/exit functions in the same way as it is done in the user mode enter/exit code.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Sean Christopherson <sean.j.christopherson@intel.com>

2020-05-05  x86/kvm: Move context tracking where it belongs  (Thomas Gleixner)

    Context tracking for KVM happens way too early in the vcpu_run() code. Anything after guest_enter_irqoff() and before guest_exit_irqoff() cannot use RCU and should not be instrumented either. The current way of doing this covers way too much code. Move it closer to the actual vmenter/exit code.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Sean Christopherson <sean.j.christopherson@intel.com>

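The effect is that the RCU-off, non-instrumentable window shrinks to the immediate surroundings of the VM entry, roughly as in this pseudocode sketch (only guest_enter_irqoff()/guest_exit_irqoff() are real names; the vmenter call is a placeholder for the VMX/SVM specific code):

    /* deep inside the vendor specific run loop, IRQs already disabled */
    guest_enter_irqoff();       /* context tracking: kernel -> guest */
    vendor_vmenter(vcpu);       /* placeholder for the actual VM entry */
    guest_exit_irqoff();        /* context tracking: guest -> kernel */
    /* from here on RCU may be used and the code may be instrumented */
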
2020-05-05  x86,objtool: Make entry_64_compat.S objtool clean  (Peter Zijlstra)

    Currently entry_64_compat is exempt from objtool, but with vmlinux mode there is no hiding it. Make the following changes to make it pass:
      - change entry_SYSENTER_compat to STT_NOTYPE; it's not a function and doesn't have function type stack setup.
      - mark all STT_NOTYPE symbols with UNWIND_HINT_EMPTY; so we do validate them and don't treat them as unreachable.
      - don't abuse RSP as a temp register, this confuses objtool mightily as it (rightfully) thinks we're doing unspeakable things to the stack.

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

2020-05-05  x86/entry/64: Mark ___preempt_schedule_notrace() thunk noinstr  (Thomas Gleixner)

    Code calling this from noinstr sections, e.g. entry code, has interrupts disabled, so the actual call into the scheduler code does not happen. The objtool section check complains nevertheless, so mark the call "safe".

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry/64: Check IF in __preempt_enable_notrace() thunk  (Thomas Gleixner)

    The preempt_enable_notrace() ASM thunk is called from tracing, entry code, RCU and other places which are already in or going to be in the noinstr section which protects sensitive code from being instrumented.

    Calls out of these sections happen with interrupts disabled, which is handled in C code, but the push regs, call, pop regs sequence can be completely avoided in this case.

    This is also a preparatory step for annotating the call from the thunk to preempt_enable_notrace() as safe from a noinstr section.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/speculation/mds: Mark mds_user_clear_cpu_buffers() __always_inline  (Thomas Gleixner)

    Prevent the compiler from uninlining and creating traceable/probeable functions as this is invoked _after_ context tracking switched to CONTEXT_USER and RCU went idle.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry: Move irq flags tracing to prepare_exit_to_usermode()  (Thomas Gleixner)

    This is another step towards more C-code and less convoluted ASM.

    Similar to the entry path, invoke the tracer before context tracking which might turn off RCU and invoke lockdep as the last step before going back to user space. Annotate the code sections in exit_to_user_mode() accordingly so objtool won't complain about the tracer invocation.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry: Move irq tracing on syscall entry to C-code  (Thomas Gleixner)

    Now that the C entry points are safe, move the irq flags tracing code into the entry helper:
      - Invoke lockdep before calling into context tracking
      - Use the safe trace_hardirqs_on_prepare() trace function after context tracking established state and RCU is watching.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry/common: Protect against instrumentation  (Thomas Gleixner)

    Mark the various syscall entries with noinstr to protect them against instrumentation and add the noinstr_begin()/end() annotations to mark the parts of the functions which are safe to call out into instrumentable code.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

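Using the annotation names from this changelog, an annotated entry point has roughly this shape (a sketch; the function name and the dispatch step are placeholders, and the exact annotation helpers may differ in the final code):

    __visible noinstr void do_syscall_64_example(unsigned long nr,
                                                 struct pt_regs *regs)
    {
            /* must not be instrumented: kernel state not yet established */
            enter_from_user_mode();

            noinstr_begin();        /* per the changelog: marks the part
                                       safe to call instrumentable code */
            /* ... regular syscall dispatch ... */
            noinstr_end();
    }
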
2020-05-05  x86/entry: Mark enter_from_user_mode() noinstr  (Thomas Gleixner)

    Both the callers in the low level ASM code and __context_tracking_exit(), which is invoked from enter_from_user_mode() via user_exit_irqoff(), are marked NOKPROBE. Allowing enter_from_user_mode() itself to be probed is inconsistent at best.

    Aside from that, while function tracing per se is safe, the function trace entry/exit points can also be used via BPF, which is not safe before context tracking has reached CONTEXT_KERNEL and adjusted RCU.

    Mark it noinstr, which moves it into the instrumentation-protected text section and includes notrace.

    Note that this needs further fixups in context tracking to ensure that the full call chain is protected. Will be addressed in follow up changes.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry/32: Move non entry code into .text section  (Thomas Gleixner)

    All ASM code which is not part of the entry functionality can move out into the .text section. No reason to keep it in the non-instrumentable entry section.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry/64: Move non entry code into .text section  (Thomas Gleixner)

    All ASM code which is not part of the entry functionality can move out into the .text section. No reason to keep it in the non-instrumentable entry section.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86: Replace ist_enter() with nmi_enter()  (Peter Zijlstra)

    A few exceptions (like #DB and #BP) can happen at any location in the code; this means that tracers should treat events from these exceptions as NMI-like. The interrupted context could be holding locks with interrupts disabled, for instance. Similarly, #MC is an actual NMI-like exception.

    All of them use ist_enter() which only concerns itself with RCU, but does not do any of the other setup that NMIs need. This means things like:

        printk()
          raw_spin_lock_irq(&logbuf_lock);
          <#DB/#BP/#MC>
            printk()
              raw_spin_lock_irq(&logbuf_lock);

    are entirely possible (well, not really, since printk tries hard to play nice, but the concept stands).

    So replace ist_enter() with nmi_enter(). Also observe that any nmi_enter() caller must be both notrace and NOKPROBE, or in the noinstr text section.

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

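Schematically, each affected handler changes like this (a sketch based on the breakpoint handler; the surrounding signature is the pre-IDTENTRY one):

    dotraplinkage void notrace do_int3(struct pt_regs *regs, long error_code)
    {
            nmi_enter();    /* was ist_enter(regs): now full NMI-style
                               bookkeeping, not just the RCU handling */
            /* ... breakpoint handling ... */
            nmi_exit();     /* was ist_exit(regs) */
    }
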
2020-05-05  x86,tracing: Robustify ftrace_nmi_enter()  (Peter Zijlstra)

        ftrace_nmi_enter()
          trace_hwlat_callback()
            trace_clock_local()
              sched_clock()
                paravirt_sched_clock()
                  native_sched_clock()

    None of this may be traced or kprobed; it will be called from do_debug() before the kprobe handler.

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

2020-05-05  sh/ftrace: Move arch_ftrace_nmi_{enter,exit} into nmi exception  (Peter Zijlstra)

    SuperH is the last remaining user of arch_ftrace_nmi_{enter,exit}(); remove it from the generic code and move it into the SuperH code.

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Rich Felker <dalias@libc.org>
    Cc: Yoshinori Sato <ysato@users.sourceforge.jp>

2020-05-05  x86/mce: Send #MC signal from task work  (Peter Zijlstra)

    Convert #MC over to using task_work_add(); it will run the same code slightly later, on the return to user path of the same exception.

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Frederic Weisbecker <frederic@kernel.org>

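The mechanism is the generic task_work one: the #MC handler queues a callback which then runs in task context on the way back to user space. Roughly (a sketch; the callback and the callback_head field are illustrative names, and task_work_add() is shown with the plain bool notify argument it had at the time):

    static void mce_kill_me_example(struct callback_head *cb)
    {
            /* runs on return to user of the same task, where it is
               safe to deliver the SIGBUS */
    }

    /* in the #MC handler, instead of forcing the signal directly: */
    init_task_work(&current->mce_work_example, mce_kill_me_example);
    task_work_add(current, &current->mce_work_example, true);
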
2020-05-05  hardirq/nmi: Allow nested nmi_enter()  (Peter Zijlstra)

    Since there are already a number of sites (ARM64, PowerPC) that effectively nest nmi_enter(), make the primitive support this before adding even more.

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Marc Zyngier <maz@kernel.org>
    Acked-by: Will Deacon <will@kernel.org>
    Cc: Michael Ellerman <mpe@ellerman.id.au>

2020-05-05  arm64: Prepare arch_nmi_enter() for recursion  (Frederic Weisbecker)

    Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Will Deacon <will@kernel.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>

2020-05-05  bug: Annotate WARN/BUG/stackfail as noinstr safe  (Thomas Gleixner)

    Warnings, bugs and stack protection fails from noinstr sections, e.g. low level and early entry code, are likely to be fatal.

    Mark them as "safe" to be invoked from noinstr protected code to avoid annotating all usage sites. Getting the information out is important.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2020-05-05  x86/entry: Exclude low level entry code from sanitizing  (Peter Zijlstra)

    The sanitizers are not really applicable to the fragile low level entry code. Entry code needs to carefully set up a normal 'runtime' environment.

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>