path: root/arch/x86/include/asm
2009-02-26  x86: apic: simplify secondary CPU wakeup methods  (Ingo Molnar)

    Impact: cleanup

    - rename apic->wakeup_cpu to apic->wakeup_secondary_cpu, to make it
      apparent that this is an SMP-only method
    - handle a NULL ->wakeup_secondary_cpu as meaning the default INIT
      wakeup sequence - this allows simplification of the APIC driver
      templates

    Cc: Yinghai Lu <yinghai@kernel.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-02-26  x86: remove update_apic from x86_quirks, fix  (Ingo Molnar)

    Impact: build fix

    wakeup_secondary_cpu_via_init(), the default platform method for
    booting a secondary CPU, is (via probe_32.c) always referenced on UP
    as well, if CONFIG_X86_LOCAL_APIC is enabled but SMP is off. So
    provide a UP wrapper inline as well.

    Cc: Yinghai Lu <yinghai@kernel.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-02-26  x86: don't compile vsmp_64 for 32bit  (Yinghai Lu)

    Impact: cleanup

    vsmp_64 is only needed when CONFIG_X86_VSMP is defined, which
    requires 64-bit. Also remove the dead PCI code, because
    CONFIG_X86_VSMP depends on PCI.

    Signed-off-by: Yinghai Lu <yinghai@kernel.org>
    Cc: Ravikiran Thirumalai <kiran@scalex86.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-02-26  x86: remove update_apic from x86_quirks  (Yinghai Lu)

    Impact: cleanup

    The x86_quirks->update_apic() calling convention is convoluted, so
    remove it:

    1. have every apic set its wakeup_cpu member directly
    2. split es7000_apic into es7000_apic_cluster
    3. use uv_wakeup_cpu directly

    Signed-off-by: Yinghai Lu <yinghai@kernel.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-02-26  Merge branches 'x86/apic', 'x86/defconfig', 'x86/memtest', 'x86/mm' and 'linus' into x86/core  (Ingo Molnar)

2009-02-26  Merge branches 'x86/urgent' and 'x86/pat' into x86/core  (Ingo Molnar)

    Conflicts:
        arch/x86/include/asm/pat.h

2009-02-26  Merge branch 'x86/urgent' into x86/core  (Ingo Molnar)

    Conflicts:
        arch/x86/mach-voyager/voyager_smp.c

2009-02-25  Merge branch 'x86/urgent' into x86/pat  (Ingo Molnar)

2009-02-25  gpu/drm, x86, PAT: PAT support for io_mapping_*  (Venkatesh Pallipadi)

    Make io_mapping_create_wc and io_mapping_free go through PAT to make
    sure that there are no memory type aliases.

    Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
    Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
    Cc: Dave Airlie <airlied@redhat.com>
    Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
    Cc: Eric Anholt <eric@anholt.net>
    Cc: Keith Packard <keithp@keithp.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-02-25  gpu/drm, x86, PAT: io_mapping_create_wc and resource_size_t  (Venkatesh Pallipadi)

    io_mapping_create_wc should take a resource_size_t parameter in place
    of unsigned long: with unsigned long there is no way to map an
    address above 4GB on i386/32-bit, and such addresses cannot be mapped
    on i386 without PAE anyway. Return an error for that case.

    The patch also adds a struct io_mapping that saves the base, size and
    type on HAVE_ATOMIC_IOMAP archs, which can be used to verify the
    offset on io_mapping_map calls.

    Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
    Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
    Cc: Dave Airlie <airlied@redhat.com>
    Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
    Cc: Eric Anholt <eric@anholt.net>
    Cc: Keith Packard <keithp@keithp.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

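    [A minimal sketch of the bookkeeping this entry describes - the field
    set follows the commit message, but the exact layout and types are
    assumptions, not the patch itself:]

        struct io_mapping {
                resource_size_t base;   /* physical base address */
                unsigned long size;     /* size of the mapping */
                pgprot_t prot;          /* memory type, e.g. write-combining */
        };

        struct io_mapping *
        io_mapping_create_wc(resource_size_t base, unsigned long size);
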
2009-02-25  gpu/drm, x86, PAT: routine to keep identity map in sync  (Venkatesh Pallipadi)

    Add a function to check and keep the identity maps in sync when
    changing any memory type. One of the follow-on patches will also use
    this routine.

    Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
    Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
    Cc: Dave Airlie <airlied@redhat.com>
    Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
    Cc: Eric Anholt <eric@anholt.net>
    Cc: Keith Packard <keithp@keithp.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-02-25  x86: usercopy: check for total size when deciding non-temporal cutoff  (Ingo Molnar)

    Impact: make more types of copies non-temporal

    This change makes the following simple fix:

      30d697f: x86: fix performance regression in write() syscall

    a bit more sophisticated: we check the 'total' number of bytes
    written to decide whether to copy in a cached or a non-temporal way.

    This will for example cause the tail (modulo 4096 bytes) chunk of a
    large write() to be non-temporal too - not just the page-sized
    chunks.

    Cc: Salman Qazi <sqazi@google.com>
    Cc: Nick Piggin <npiggin@suse.de>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

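    [An illustrative sketch of the heuristic this and the next entry
    combine to produce; the __copy_from_user_ll*() helper names and the
    exact cutoff expression are assumptions:]

        static __always_inline unsigned long
        __copy_from_user_nocache(void *to, const void __user *from,
                                 unsigned long n, unsigned long total)
        {
                /* A large total write bypasses the cache entirely -
                 * including its tail chunk - since the data is unlikely
                 * to be re-read soon. */
                if (total >= PAGE_SIZE)
                        return __copy_from_user_ll_nocache(to, from, n);
                return __copy_from_user_ll(to, from, n);
        }
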
2009-02-25  x86, mm: pass in 'total' to __copy_from_user_*nocache()  (Ingo Molnar)

    Impact: cleanup, enable future change

    Add a 'total bytes copied' parameter to __copy_from_user_*nocache()
    and update all the callsites. The parameter is not used yet -
    architecture code can use it to more intelligently decide whether the
    copy should be cached or non-temporal.

    Cc: Salman Qazi <sqazi@google.com>
    Cc: Nick Piggin <npiggin@suse.de>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-02-25  Merge branch 'x86/asm' into x86/mm  (Ingo Molnar)

2009-02-25  x86: convert cacheflush macros to inline functions  (Tejun Heo)

    Impact: cleanup

    Unused macro parameters cause spurious unused-variable warnings.
    Convert all cacheflush macros to inline functions to avoid the
    warnings and achieve better type checking.

    Signed-off-by: Tejun Heo <tj@kernel.org>

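    [For illustration, the before/after shape of such a conversion;
    flush_cache_mm() is one plausible example of the converted no-op
    macros, shown here as a sketch rather than the patch itself:]

        /* Before: 'mm' is never evaluated, so a caller's otherwise
         * unused variable triggers a warning and gets no type check. */
        #define flush_cache_mm(mm)      do { } while (0)

        /* After: the argument is type-checked and consumed. */
        static inline void flush_cache_mm(struct mm_struct *mm) { }
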
2009-02-24  Merge branch 'x86/core' into x86/mce2  (H. Peter Anvin)

2009-02-24  x86, mce, cmci: add CMCI support  (Andi Kleen)

    Impact: Major new feature

    Intel CMCI (Corrected Machine Check Interrupt) is a new feature on
    Nehalem CPUs. It allows the CPU to trigger interrupts on corrected
    events, which allows faster reaction to them than the traditional
    polling timer.

    Also use CMCI to discover shared banks. Machine check banks can be
    shared by CPU threads or even cores. Using the CMCI enable bit it is
    possible to detect that another CPU already saw a specific bank. Use
    this to assign shared banks to only one CPU, to avoid reporting
    duplicated events. On CPU hot unplug, bank sharing is rediscovered.
    This is done using a thread that cycles through all the CPUs.

    To avoid races between the poller and CMCI we only poll for banks
    that are not CMCI capable, and only check CMCI-owned banks on an
    interrupt. The shared-bank ownership information is currently only
    used for CMCI interrupts, not polled banks. The sharing discovery
    code follows the algorithm recommended in the IA32 SDM Vol3a
    14.5.2.1.

    The CMCI interrupt handler just calls the machine check poller to
    pick up the machine check event that caused the interrupt.

    I decided not to implement a separate threshold event like the AMD
    version has, because the threshold is always one currently and adding
    another event didn't seem to add any value.

    Some code was inspired by Yunhong Jiang's Xen implementation, which
    was in turn inspired by an earlier CMCI implementation by me.

    Signed-off-by: Andi Kleen <ak@linux.intel.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>

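    [A sketch of the ownership test described above, per the SDM
    algorithm; the function name is hypothetical and locking and
    threshold programming are omitted:]

        static int cmci_claim_bank(int bank)
        {
                u64 val;

                rdmsrl(MSR_IA32_MC0_CTL2 + bank, val);
                if (val & CMCI_EN)
                        return 0;       /* another CPU already owns this bank */
                val |= CMCI_EN;
                wrmsrl(MSR_IA32_MC0_CTL2 + bank, val);
                rdmsrl(MSR_IA32_MC0_CTL2 + bank, val);
                /* if the bit sticks, the bank is CMCI capable and ours */
                return (val & CMCI_EN) != 0;
        }
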
2009-02-24  x86, mce, cmci: define MSR names and fields for new CMCI registers  (Andi Kleen)

    Impact: New register definitions only

    CMCI means support for raising an interrupt on a corrected machine
    check event instead of having to poll for it. It's a new feature in
    Intel Nehalem CPUs, available on some machine check banks. For
    details see the IA32 SDM Vol3a 14.5.

    Define the registers for it as a preparation for further patches.

    Signed-off-by: Andi Kleen <ak@linux.intel.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>

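    [Illustrative definitions of the kind this entry adds; the register
    address and bit positions follow the IA32 SDM, but treat the exact
    macro names as assumptions:]

        #define MSR_IA32_MC0_CTL2       0x280           /* bank i's CTL2 is 0x280 + i */
        #define CMCI_EN                 (1ULL << 30)    /* CMCI enable / ownership bit */
        #define CMCI_THRESHOLD_MASK     0x7fffULL       /* corrected-count threshold, bits 14:0 */
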
2009-02-24  x86, mce, cmci: use polled banks bitmap in machine check poller  (Andi Kleen)

    Define a per-CPU bitmap that contains the banks polled by the machine
    check poller. This is needed for the CMCI code in the next patches to
    be able to disable polling on specific banks.

    The bitmap by default contains all banks, so there is no behaviour
    change. Only future code will remove some banks from the polling set.

    Signed-off-by: Andi Kleen <ak@linux.intel.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>

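    [One plausible shape for that bitmap - the type and variable names
    here are a sketch, not necessarily the patch's own:]

        typedef DECLARE_BITMAP(mce_banks_t, MAX_NR_BANKS);

        /* Poll all banks by default; CMCI code later clears the bits
         * for banks it owns. */
        static DEFINE_PER_CPU(mce_banks_t, mce_poll_banks) = {
                [0 ... BITS_TO_LONGS(MAX_NR_BANKS) - 1] = ~0UL
        };
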
2009-02-24  x86, mce, cmci: factor out threshold interrupt handler  (Andi Kleen)

    Impact: cleanup; preparation for feature

    The mce_amd_64 code has its own private MC threshold vector with its
    own interrupt handler. Since Intel needs a similar handler, it makes
    sense to share the vector, because the two cannot be active at the
    same time. I factored the common APIC handler code into a separate
    file which can be used by both the Intel and AMD MC code.

    This is needed for the next patch, which adds an Intel-specific CMCI
    handler. This patch should be a nop for AMD; it just moves some code
    around.

    Signed-off-by: Andi Kleen <ak@linux.intel.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>

2009-02-24  x86, mce, cmci: export MAX_NR_BANKS  (Andi Kleen)

    Impact: Cleanup (code movement)

    Move MAX_NR_BANKS into mce.h because it's needed there for followup
    patches.

    Signed-off-by: Andi Kleen <ak@linux.intel.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>

2009-02-24  Merge branch 'tj-percpu' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc into core/percpu  (Ingo Molnar)

    Conflicts:
        arch/x86/include/asm/pgtable.h

2009-02-24  Merge branches 'x86/acpi', 'x86/apic', 'x86/asm', 'x86/cleanups', 'x86/mm', 'x86/signal' and 'x86/urgent'; commit 'v2.6.29-rc6' into x86/core  (Ingo Molnar)

2009-02-24  x86: invalid_vm86_irq -- use predefined macros  (Cyrill Gorcunov)

    Impact: cleanup

    Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
    Cc: heukelum@fastmail.fm
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-02-24  x86: fix performance regression in write() syscall  (Salman Qazi)

    While the introduction of __copy_from_user_nocache (see commit
    0812a579c92fefa57506821fa08e90f47cb6dbdd) may have been an
    improvement for sufficiently large writes, there is evidence to show
    that it is detrimental for small writes. Unixbench's fstime test
    gives the following results for 256 byte writes with MAX_BLOCK of
    2000:

      2.6.29-rc6 (5 samples, each in KB/sec):
        283750, 295200, 294500, 293000, 293300

      2.6.29-rc6 + this patch (5 samples, each in KB/sec):
        313050, 3106750, 293350, 306300, 307900

      2.6.18:
        395700, 342000, 399100, 366050, 359850

    See w_test() in src/fstime.c in unixbench version 4.1.0. Basically,
    the above test consists of counting how much we can write in this
    manner:

        alarm(10);
        while (!sigalarm) {
                for (f_blocks = 0; f_blocks < 2000; ++f_blocks) {
                        write(f, buf, 256);
                }
                lseek(f, 0L, 0);
        }

    Note, there are other components to the write syscall regression that
    are not addressed here.

    Signed-off-by: Salman Qazi <sqazi@google.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-02-24  x86: update populate_extra_pte() and add populate_extra_pmd()  (Tejun Heo)

    Impact: minor change to populate_extra_pte() and addition of pmd flavor

    Update populate_extra_pte() to return a pointer to the pte_t for the
    specified address, and add populate_extra_pmd(), which only populates
    down to the pmd and returns a pointer to the pmd entry for the
    address.

    For 64-bit, the pud/pmd/pte fill functions are separated out from
    set_pte_vaddr[_pud]() and used for both set_pte_vaddr[_pud]() and
    populate_extra_{pte|pmd}().

    Signed-off-by: Tejun Heo <tj@kernel.org>

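    [The resulting interface, sketched from the description above:]

        pte_t *populate_extra_pte(unsigned long vaddr); /* fills down to the pte */
        pmd_t *populate_extra_pmd(unsigned long vaddr); /* stops at the pmd */
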
2009-02-24  bootmem: clean up arch-specific bootmem wrapping  (Tejun Heo)

    Impact: cleaner and consistent bootmem wrapping

    By setting CONFIG_HAVE_ARCH_BOOTMEM_NODE, archs can define
    arch-specific wrappers for bootmem allocation. However, this is done
    a bit strangely: only the high-level convenience macros can be
    changed, while the lower-level - but still exported - interface
    functions can't be wrapped. This is not only messy but also leads to
    the strange situation where alloc_bootmem() does what the arch wants
    it to do but the equivalent __alloc_bootmem() call doesn't, although
    they should be usable interchangeably.

    This patch updates bootmem so that archs override / wrap the backend
    function - alloc_bootmem_core() - instead of the high-level interface
    functions, allowing simpler and consistent wrapping. Also,
    HAVE_ARCH_BOOTMEM_NODE is renamed to HAVE_ARCH_BOOTMEM.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Cc: Johannes Weiner <hannes@saeurebad.de>

2009-02-23  Merge branch 'x86/urgent' into x86/mce2  (H. Peter Anvin)

2009-02-23  x86: refactor x86_quirks support  (Ingo Molnar)

    Impact: cleanup

    Make x86_quirks support more transparent. The high-level methods are
    now named:

        extern void x86_quirk_pre_intr_init(void);
        extern void x86_quirk_intr_init(void);
        extern void x86_quirk_trap_init(void);
        extern void x86_quirk_pre_time_init(void);
        extern void x86_quirk_time_init(void);

    This makes it clear that if some platform extension has to do
    something here, it is considered ... weird, and is discouraged.

    Also remove arch_hooks.h and move its contents into setup.h (and
    other header files where appropriate).

    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-02-23  x86: remove various unused subarch hooks  (Ingo Molnar)

    Impact: remove dead code

    Remove:

     - pre_setup_arch_hook()
     - mca_nmi_hook()

    If needed they can be added back via an x86_quirk handler.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-02-23  x86: remove the Voyager 32-bit subarch  (Ingo Molnar)

    Impact: remove unused/broken code

    The Voyager subarch last built successfully on the v2.6.26 kernel and
    has been stale since then; it does not build on the v2.6.27, v2.6.28
    and v2.6.29-rc5 kernels. No actual users beyond the maintainer
    reported this breakage. Patches were sent and most of the fixes were
    accepted, but the discussion around how to handle a few remaining
    issues cleanly fizzled out with no resolution, and the code remained
    broken.

    In the v2.6.30 x86 tree development cycle, 32-bit subarch support has
    been reworked and removed - and the Voyager code, beyond the build
    problems already known, needs serious and significant changes and
    probably a rewrite to support it. CONFIG_X86_VOYAGER was therefore
    marked BROKEN. The maintainer has been notified, but no patches have
    been sent so far to fix it.

    While all other subarchs have been converted to the new scheme,
    Voyager is still broken. We'd prefer to receive patches which clean
    up the current situation in a constructive way, but even in the case
    of removal there is no obstacle to adding that support back after the
    issues have been sorted out in a mutually acceptable fashion.

    So remove this inactive code for now.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-02-22  Merge branch 'linus' into x86/apic  (Ingo Molnar)

    Conflicts:
        arch/x86/mach-default/setup.c

    Semantic conflict resolution:
        arch/x86/kernel/setup.c

    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-02-22  x86: select x2apic ops in early apic probe only if x2apic mode is enabled  (Suresh Siddha)

    If the BIOS hands over control to the OS in legacy xapic mode, select
    the legacy xapic ops in the early apic probe and shift to the x2apic
    ops later in the boot sequence, only after enabling x2apic mode. If
    the BIOS hands over control in x2apic mode, select the x2apic ops in
    the early apic probe.

    This fixes the early boot panic where we were selecting the x2apic
    ops while the CPU was still in legacy xapic mode.

    Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
    Cc: Yinghai Lu <yinghai@kernel.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-02-21  x86, mm: rename TASK_SIZE64 => TASK_SIZE_MAX  (Ingo Molnar)

    Impact: cleanup

    Rename TASK_SIZE64 to TASK_SIZE_MAX, and provide the define on 32-bit
    too (mapped to TASK_SIZE). This allows 32-bit code to make use of the
    (former) TASK_SIZE64 symbol as well, in a clean way.

    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

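    [The resulting arrangement, sketched; the 64-bit value shown is the
    conventional former TASK_SIZE64 definition and is an assumption here:]

        #ifdef CONFIG_X86_64
        #define TASK_SIZE_MAX   ((1UL << 47) - PAGE_SIZE)
        #else
        #define TASK_SIZE_MAX   TASK_SIZE       /* 32-bit: same limit */
        #endif
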
2009-02-20  Merge branches 'x86/asm', 'x86/cleanups' and 'x86/headers' into x86/core  (Ingo Molnar)

2009-02-20  Merge branch 'x86/urgent' into x86/core  (Ingo Molnar)

2009-02-20  x86: convert to the new dynamic percpu allocator  (Tejun Heo)

    Impact: use new dynamic allocator, unified access to static/dynamic
    percpu memory

    Convert to the new dynamic percpu allocator:

    * implement populate_extra_pte() for both 32 and 64
    * update setup_per_cpu_areas() to use pcpu_setup_static()
    * define __addr_to_pcpu_ptr() and __pcpu_ptr_to_addr()
    * define config HAVE_DYNAMIC_PER_CPU_AREA

    Signed-off-by: Tejun Heo <tj@kernel.org>

2009-02-19  x86, mce: separate correct machine check poller and fatal exception handler  (Andi Kleen)

    Impact: cleanup, performance enhancement

    The machine check poller is diverging more and more from the fatal
    exception handler. Instead of adding more special cases, separate the
    code paths completely. The corrected poll path is actually quite
    simple, and this doesn't result in much code duplication.

    This makes both handlers much easier to read and results in cleaner
    code flow. The exception handler now only needs to care about
    uncorrected errors, which also simplifies the handling of multiple
    errors. The corrected poller also now always runs in standard
    interrupt context and does not need to do anything special to handle
    NMI context.

    Minor behaviour changes:
    - MCG status is now not cleared on polling.
    - Only the banks which had corrected errors get cleared on polling.
    - The exception handler only clears banks with errors now.

    v2: Forward port to new patch order. Add "uc" argument.

    Signed-off-by: Andi Kleen <ak@linux.intel.com>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>

2009-02-19  x86, mce: factor out duplicated struct mce setup into one function  (Andi Kleen)

    Impact: cleanup

    This merely factors out duplicated code that sets up the initial
    struct mce state into a single function.

    Signed-off-by: Andi Kleen <ak@linux.intel.com>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>

2009-02-19  Merge branch 'x86/untangle2' of git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen into x86/headers  (Ingo Molnar)

2009-02-19  x86: linkage - get rid of _X86 macros  (Cyrill Gorcunov)

    Impact: cleanup

    There was an attempt to bring build-time checking for missed
    ENTRY_X86/END_X86 and KPROBE... pairs. Using them would make the code
    messy, so just get rid of them. This commit can easily be restored if
    the need appears in the future.

    Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-02-19  x86: asm linkage - introduce GLOBAL macro  (Cyrill Gorcunov)

    If code is time-critical and an entry point is called from other
    places, we use ENTRY to have it globally defined and, especially,
    aligned. By contrast, some snippets are size-critical, so there we
    use a plain ".globl name; name:" directive. Introduce a GLOBAL macro
    for this.

    Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

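    [A minimal sketch of such a macro, built directly from the directive
    quoted above - no alignment, just a global label:]

        #define GLOBAL(name)    \
                .globl name;    \
                name:
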
2009-02-19  x86: syscalls.h: remove asmlinkage from declaration of sys_rt_sigreturn()  (Hiroshi Shimamoto)

    Impact: cleanup

    The asmlinkage for sys_rt_sigreturn() no longer exists in
    arch/x86/kernel/signal.c.

    Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-02-19  x86: include/asm/processor.h remove double declaration of print_cpu_info  (Jaswinder Singh Rajput)

    Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-02-18  mm: clean up for early_pfn_to_nid()  (KAMEZAWA Hiroyuki)

    What's happening is that the assertion in
    mm/page_alloc.c:move_freepages() is triggering:

        BUG_ON(page_zone(start_page) != page_zone(end_page));

    Once I knew this is what was happening, I added some annotations:

        if (unlikely(page_zone(start_page) != page_zone(end_page))) {
                printk(KERN_ERR "move_freepages: Bogus zones: "
                       "start_page[%p] end_page[%p] zone[%p]\n",
                       start_page, end_page, zone);
                printk(KERN_ERR "move_freepages: "
                       "start_zone[%p] end_zone[%p]\n",
                       page_zone(start_page), page_zone(end_page));
                printk(KERN_ERR "move_freepages: "
                       "start_pfn[0x%lx] end_pfn[0x%lx]\n",
                       page_to_pfn(start_page), page_to_pfn(end_page));
                printk(KERN_ERR "move_freepages: "
                       "start_nid[%d] end_nid[%d]\n",
                       page_to_nid(start_page), page_to_nid(end_page));
        ...

    And here's what I got:

        move_freepages: Bogus zones: start_page[2207d0000] end_page[2207dffc0] zone[fffff8103effcb00]
        move_freepages: start_zone[fffff8103effcb00] end_zone[fffff8003fffeb00]
        move_freepages: start_pfn[0x81f600] end_pfn[0x81f7ff]
        move_freepages: start_nid[1] end_nid[0]

    My memory layout on this box is:

        [ 0.000000] Zone PFN ranges:
        [ 0.000000]   Normal  0x00000000 -> 0x0081ff5d
        [ 0.000000] Movable zone start PFN for each node
        [ 0.000000] early_node_map[8] active PFN ranges
        [ 0.000000]   0: 0x00000000 -> 0x00020000
        [ 0.000000]   1: 0x00800000 -> 0x0081f7ff
        [ 0.000000]   1: 0x0081f800 -> 0x0081fe50
        [ 0.000000]   1: 0x0081fed1 -> 0x0081fed8
        [ 0.000000]   1: 0x0081feda -> 0x0081fedb
        [ 0.000000]   1: 0x0081fedd -> 0x0081fee5
        [ 0.000000]   1: 0x0081fee7 -> 0x0081ff51
        [ 0.000000]   1: 0x0081ff59 -> 0x0081ff5d

    So it's a block move in that 0x81f600-->0x81f7ff region which
    triggers the problem.

    This patch: the declaration of early_pfn_to_nid() is scattered over
    per-arch include files, and it is complicated to know when each
    declaration is used - which makes a fix for memmap init harder. This
    patch moves all declarations to include/linux/mm.h.

    After this:

     - if !CONFIG_NODES_POPULATES_NODE_MAP && !CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID
       -> use the static definition in include/linux/mm.h
     - else if !CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID
       -> use the generic definition in mm/page_alloc.c
     - else
       -> the per-arch back end function is called.

    Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
    Tested-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
    Reported-by: David Miller <davem@davemloft.net>
    Cc: Mel Gorman <mel@csn.ul.ie>
    Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
    Cc: <stable@kernel.org>  [2.6.25.x, 2.6.26.x, 2.6.27.x, 2.6.28.x]
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2009-02-17  x86, mce: don't disable machine checks during code patching  (Andi Kleen)

    Impact: low priority bug fix

    This removes part of a patch I added myself some time ago. After some
    consideration the patch was a bad idea. In particular it stopped
    machine check exceptions during code patching. To quote the comment:

        * MCEs only happen when something got corrupted and in this
        * case we must do something about the corruption.
        * Ignoring it is worse than an unlikely patching race.
        * Also machine checks tend to be broadcast and if one CPU
        * goes into machine check the others follow quickly, so we don't
        * expect a machine check to cause undue problems during code
        * patching.

    So undo the machine check related parts of
    8f4e956b313dcccbc7be6f10808952345e3b638c. NMIs are still disabled.

    This only removes code; the only addition is the new comment.

    Signed-off-by: Andi Kleen <ak@linux.intel.com>
    Acked-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>

2009-02-17  Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds)

    * 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
        x86, vm86: fix preemption bug
        x86, olpc: fix model detection without OFW
        x86, hpet: fix for LS21 + HPET = boot hang
        x86: CPA avoid repeated lazy mmu flush
        x86: warn if arch_flush_lazy_mmu_cpu is called in preemptible context
        x86/paravirt: make arch_flush_lazy_mmu/cpu disable preemption
        x86, pat: fix warn_on_once() while mapping 0-1MB range with /dev/mem
        x86/cpa: make sure cpa is safe to call in lazy mmu mode
        x86, ptrace, mm: fix double-free on race

2009-02-17  x86: truncate ISA addresses to unsigned int  (H. Peter Anvin)

    Impact: Cleanup; fix inappropriate macro use

    ISA addresses on x86 are mapped 1:1 with the physical address space.
    Since the ISA address space is only 24 bits (32 for VLB or LPC) it
    will always fit in an unsigned int, and at least in the aha1542
    driver using a wider type would cause an undesirable promotion. Hence
    explicitly cast the ISA bus addresses to unsigned int.

    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    Cc: James Bottomley <James.Bottomley@hansenpartnership.com>

2009-02-17  x86, apic: rename 'genapic' to 'apic'  (Ingo Molnar)

    Impact: cleanup

    Now that all APIC code is consolidated there's nothing 'gen' about
    apics anymore - so rename 'struct genapic' to 'struct apic'. This
    shortens the code and is nicer to read as well.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>

2009-02-17  x86, apic: remove ->store_NMI_vector()  (Ingo Molnar)

    Impact: cleanup

    It's not used by anything anymore.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>