path: root/drivers/vfio
Age    Commit message    Author
2021-01-09vfio/pci: Move dummy_resources_list init in vfio_pci_probe()Eric Auger
[ Upstream commit 16b8fe4caf499ae8e12d2ab1b1324497e36a7b83 ] In case an error occurs in vfio_pci_enable() before the call to vfio_pci_probe_mmaps(), vfio_pci_disable() will try to iterate on an uninitialized list and cause a kernel panic. Let's move the initialization to vfio_pci_probe() to fix the issue. Signed-off-by: Eric Auger <eric.auger@redhat.com> Fixes: 05f0c03fbac1 ("vfio-pci: Allow to mmap sub-page MMIO BARs if the mmio page is exclusive") CC: Stable <stable@vger.kernel.org> # v4.7+ Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
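A minimal sketch of the pattern this fix describes, with illustrative structure and function names rather than the actual driver code: the list head is initialized once in the probe path, so a failure early in the enable path can no longer leave the disable path walking an uninitialized list.

```c
#include <linux/list.h>

struct vfio_pci_device_sketch {
	struct list_head dummy_resources_list;
	/* ... other per-device state ... */
};

static int vfio_pci_probe_sketch(struct vfio_pci_device_sketch *vdev)
{
	/* Initialize at probe time, before vfio_pci_enable() can fail. */
	INIT_LIST_HEAD(&vdev->dummy_resources_list);
	return 0;
}

static void vfio_pci_disable_sketch(struct vfio_pci_device_sketch *vdev)
{
	struct list_head *pos, *tmp;

	/* Safe even if vfio_pci_enable() bailed out before adding entries. */
	list_for_each_safe(pos, tmp, &vdev->dummy_resources_list)
		list_del(pos);
}
```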
2020-12-29vfio-pci: Use io_remap_pfn_range() for PCI IO memoryJason Gunthorpe
[ Upstream commit 7b06a56d468b756ad6bb43ac21b11e474ebc54a0 ] commit f8f6ae5d077a ("mm: always have io_remap_pfn_range() set pgprot_decrypted()") allows drivers using mmap to put PCI memory mapped BAR space into userspace to work correctly on AMD SME systems that default to all memory encrypted. Since vfio_pci_mmap_fault() is working with PCI memory mapped BAR space it should be calling io_remap_pfn_range() otherwise it will not work on SME systems. Fixes: 11c4cd07ba11 ("vfio-pci: Fault mmaps to enable vma tracking") Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Acked-by: Peter Xu <peterx@redhat.com> Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
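A hedged sketch of the idea, written against the current vm_fault_t-returning fault signature with illustrative function names: the BAR mapping is populated from the fault handler with io_remap_pfn_range(), which applies pgprot_decrypted() on SME systems, instead of remap_pfn_range().

```c
#include <linux/mm.h>

static vm_fault_t bar_mmap_fault_sketch(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;

	/* BAR space is PCI I/O memory, so the io_ variant must be used. */
	if (io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
			       vma->vm_end - vma->vm_start,
			       vma->vm_page_prot))
		return VM_FAULT_SIGBUS;

	return VM_FAULT_NOPAGE;
}
```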
2020-10-29vfio iommu type1: Fix memory leak in vfio_iommu_type1_pin_pagesXiaoyang Xu
[ Upstream commit 2e6cfd496f5b57034cf2aec738799571b5a52124 ] pfn is not added to pfn_list when vfio_add_to_pfn_list fails. vfio_unpin_page_external will exit directly without calling vfio_iova_put_vfio_pfn. This will lead to a memory leak. Fixes: a54eb55045ae ("vfio iommu type1: Add support for mediated devices") Signed-off-by: Xiaoyang Xu <xuxiaoyang2@huawei.com> [aw: simplified logic, add Fixes] Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-10-29vfio/pci: Clear token on bypass registration failureAlex Williamson
[ Upstream commit 852b1beecb6ff9326f7ca4bc0fe69ae860ebdb9e ] The eventfd context is used as our irqbypass token, therefore if an eventfd is re-used, our token is the same. The irqbypass code will return an -EBUSY in this case, but we'll still attempt to unregister the producer, where if that duplicate token still exists, results in removing the wrong object. Clear the token of failed producers so that they harmlessly fall out when unregistered. Fixes: 6d7425f109d2 ("vfio: Register/unregister irq_bypass_producer") Reported-by: guomin chen <guomin_chen@sina.com> Tested-by: guomin chen <guomin_chen@sina.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-10-01vfio/pci: fix racy on error and request eventfd ctxZeng Tao
[ Upstream commit b872d0640840018669032b20b6375a478ed1f923 ] The vfio_pci_release call will free and clear the error and request eventfd ctx while these ctx could be in use at the same time in the function like vfio_pci_request, and it's expected to protect them under the vdev->igate mutex, which is missing in vfio_pci_release. This issue is introduced since commit 1518ac272e78 ("vfio/pci: fix memory leaks of eventfd ctx"),and since commit 5c5866c593bb ("vfio/pci: Clear error and request eventfd ctx after releasing"), it's very easily to trigger the kernel panic like this: [ 9513.904346] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000008 [ 9513.913091] Mem abort info: [ 9513.915871] ESR = 0x96000006 [ 9513.918912] EC = 0x25: DABT (current EL), IL = 32 bits [ 9513.924198] SET = 0, FnV = 0 [ 9513.927238] EA = 0, S1PTW = 0 [ 9513.930364] Data abort info: [ 9513.933231] ISV = 0, ISS = 0x00000006 [ 9513.937048] CM = 0, WnR = 0 [ 9513.940003] user pgtable: 4k pages, 48-bit VAs, pgdp=0000007ec7d12000 [ 9513.946414] [0000000000000008] pgd=0000007ec7d13003, p4d=0000007ec7d13003, pud=0000007ec728c003, pmd=0000000000000000 [ 9513.956975] Internal error: Oops: 96000006 [#1] PREEMPT SMP [ 9513.962521] Modules linked in: vfio_pci vfio_virqfd vfio_iommu_type1 vfio hclge hns3 hnae3 [last unloaded: vfio_pci] [ 9513.972998] CPU: 4 PID: 1327 Comm: bash Tainted: G W 5.8.0-rc4+ #3 [ 9513.980443] Hardware name: Huawei TaiShan 2280 V2/BC82AMDC, BIOS 2280-V2 CS V3.B270.01 05/08/2020 [ 9513.989274] pstate: 80400089 (Nzcv daIf +PAN -UAO BTYPE=--) [ 9513.994827] pc : _raw_spin_lock_irqsave+0x48/0x88 [ 9513.999515] lr : eventfd_signal+0x6c/0x1b0 [ 9514.003591] sp : ffff800038a0b960 [ 9514.006889] x29: ffff800038a0b960 x28: ffff007ef7f4da10 [ 9514.012175] x27: ffff207eefbbfc80 x26: ffffbb7903457000 [ 9514.017462] x25: ffffbb7912191000 x24: ffff007ef7f4d400 [ 9514.022747] x23: ffff20be6e0e4c00 x22: 0000000000000008 [ 9514.028033] x21: 0000000000000000 x20: 0000000000000000 [ 9514.033321] x19: 0000000000000008 x18: 0000000000000000 [ 9514.038606] x17: 0000000000000000 x16: ffffbb7910029328 [ 9514.043893] x15: 0000000000000000 x14: 0000000000000001 [ 9514.049179] x13: 0000000000000000 x12: 0000000000000002 [ 9514.054466] x11: 0000000000000000 x10: 0000000000000a00 [ 9514.059752] x9 : ffff800038a0b840 x8 : ffff007ef7f4de60 [ 9514.065038] x7 : ffff007fffc96690 x6 : fffffe01faffb748 [ 9514.070324] x5 : 0000000000000000 x4 : 0000000000000000 [ 9514.075609] x3 : 0000000000000000 x2 : 0000000000000001 [ 9514.080895] x1 : ffff007ef7f4d400 x0 : 0000000000000000 [ 9514.086181] Call trace: [ 9514.088618] _raw_spin_lock_irqsave+0x48/0x88 [ 9514.092954] eventfd_signal+0x6c/0x1b0 [ 9514.096691] vfio_pci_request+0x84/0xd0 [vfio_pci] [ 9514.101464] vfio_del_group_dev+0x150/0x290 [vfio] [ 9514.106234] vfio_pci_remove+0x30/0x128 [vfio_pci] [ 9514.111007] pci_device_remove+0x48/0x108 [ 9514.115001] device_release_driver_internal+0x100/0x1b8 [ 9514.120200] device_release_driver+0x28/0x38 [ 9514.124452] pci_stop_bus_device+0x68/0xa8 [ 9514.128528] pci_stop_and_remove_bus_device+0x20/0x38 [ 9514.133557] pci_iov_remove_virtfn+0xb4/0x128 [ 9514.137893] sriov_disable+0x3c/0x108 [ 9514.141538] pci_disable_sriov+0x28/0x38 [ 9514.145445] hns3_pci_sriov_configure+0x48/0xb8 [hns3] [ 9514.150558] sriov_numvfs_store+0x110/0x198 [ 9514.154724] dev_attr_store+0x44/0x60 [ 9514.158373] sysfs_kf_write+0x5c/0x78 [ 9514.162018] kernfs_fop_write+0x104/0x210 [ 9514.166010] __vfs_write+0x48/0x90 [ 9514.169395] 
vfs_write+0xbc/0x1c0 [ 9514.172694] ksys_write+0x74/0x100 [ 9514.176079] __arm64_sys_write+0x24/0x30 [ 9514.179987] el0_svc_common.constprop.4+0x110/0x200 [ 9514.184842] do_el0_svc+0x34/0x98 [ 9514.188144] el0_svc+0x14/0x40 [ 9514.191185] el0_sync_handler+0xb0/0x2d0 [ 9514.195088] el0_sync+0x140/0x180 [ 9514.198389] Code: b9001020 d2800000 52800022 f9800271 (885ffe61) [ 9514.204455] ---[ end trace 648de00c8406465f ]--- [ 9514.212308] note: bash[1327] exited with preempt_count 1 Cc: Qian Cai <cai@lca.pw> Cc: Alex Williamson <alex.williamson@redhat.com> Fixes: 1518ac272e78 ("vfio/pci: fix memory leaks of eventfd ctx") Signed-off-by: Zeng Tao <prime.zeng@hisilicon.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
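A sketch of the locking this entry describes, assuming the field names used elsewhere in vfio-pci (err_trigger, req_trigger, igate); it is illustrative, not the actual patch: the eventfd contexts are released and cleared under vdev->igate so concurrent users such as vfio_pci_request() cannot signal a freed context.

```c
	mutex_lock(&vdev->igate);

	if (vdev->err_trigger) {
		eventfd_ctx_put(vdev->err_trigger);
		vdev->err_trigger = NULL;
	}
	if (vdev->req_trigger) {
		eventfd_ctx_put(vdev->req_trigger);
		vdev->req_trigger = NULL;
	}

	mutex_unlock(&vdev->igate);
```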
2020-10-01vfio/pci: Clear error and request eventfd ctx after releasingAlex Williamson
[ Upstream commit 5c5866c593bbd444d0339ede6a8fb5f14ff66d72 ] The next use of the device will generate an underflow from the stale reference. Cc: Qian Cai <cai@lca.pw> Fixes: 1518ac272e78 ("vfio/pci: fix memory leaks of eventfd ctx") Reported-by: Daniel Wagner <dwagner@suse.de> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Tested-by: Daniel Wagner <dwagner@suse.de> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-10-01vfio/pci: fix memory leaks of eventfd ctxQian Cai
[ Upstream commit 1518ac272e789cae8c555d69951b032a275b7602 ] Finished a qemu-kvm (-device vfio-pci,host=0001:01:00.0) triggers a few memory leaks after a while because vfio_pci_set_ctx_trigger_single() calls eventfd_ctx_fdget() without the matching eventfd_ctx_put() later. Fix it by calling eventfd_ctx_put() for those memory in vfio_pci_release() before vfio_device_release(). unreferenced object 0xebff008981cc2b00 (size 128): comm "qemu-kvm", pid 4043, jiffies 4294994816 (age 9796.310s) hex dump (first 32 bytes): 01 00 00 00 6b 6b 6b 6b 00 00 00 00 ad 4e ad de ....kkkk.....N.. ff ff ff ff 6b 6b 6b 6b ff ff ff ff ff ff ff ff ....kkkk........ backtrace: [<00000000917e8f8d>] slab_post_alloc_hook+0x74/0x9c [<00000000df0f2aa2>] kmem_cache_alloc_trace+0x2b4/0x3d4 [<000000005fcec025>] do_eventfd+0x54/0x1ac [<0000000082791a69>] __arm64_sys_eventfd2+0x34/0x44 [<00000000b819758c>] do_el0_svc+0x128/0x1dc [<00000000b244e810>] el0_sync_handler+0xd0/0x268 [<00000000d495ef94>] el0_sync+0x164/0x180 unreferenced object 0x29ff008981cc4180 (size 128): comm "qemu-kvm", pid 4043, jiffies 4294994818 (age 9796.290s) hex dump (first 32 bytes): 01 00 00 00 6b 6b 6b 6b 00 00 00 00 ad 4e ad de ....kkkk.....N.. ff ff ff ff 6b 6b 6b 6b ff ff ff ff ff ff ff ff ....kkkk........ backtrace: [<00000000917e8f8d>] slab_post_alloc_hook+0x74/0x9c [<00000000df0f2aa2>] kmem_cache_alloc_trace+0x2b4/0x3d4 [<000000005fcec025>] do_eventfd+0x54/0x1ac [<0000000082791a69>] __arm64_sys_eventfd2+0x34/0x44 [<00000000b819758c>] do_el0_svc+0x128/0x1dc [<00000000b244e810>] el0_sync_handler+0xd0/0x268 [<00000000d495ef94>] el0_sync+0x164/0x180 Signed-off-by: Qian Cai <cai@lca.pw> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-09-12vfio/pci: Fix SR-IOV VF handling with MMIO blockingAlex Williamson
commit ebfa440ce38b7e2e04c3124aa89c8a9f4094cf21 upstream. SR-IOV VFs do not implement the memory enable bit of the command register, therefore this bit is not set in config space after pci_enable_device(). This leads to an unintended difference between PF and VF in hand-off state to the user. We can correct this by setting the initial value of the memory enable bit in our virtualized config space. There's really no need, however, to ever fault a user on a VF, as this would only indicate an error in the user's management of the enable bit, versus a PF where the same access could trigger hardware faults. Fixes: abafbc551fdd ("vfio-pci: Invalidate mmaps and block MMIO access on disabled memory") Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-09-12vfio-pci: Invalidate mmaps and block MMIO access on disabled memoryAlex Williamson
commit abafbc551fddede3e0a08dee1dcde08fc0eb8476 upstream. Accessing the disabled memory space of a PCI device would typically result in a master abort response on conventional PCI, or an unsupported request on PCI express. The user would generally see these as a -1 response for the read return data and the write would be silently discarded, possibly with an uncorrected, non-fatal AER error triggered on the host. Some systems however take it upon themselves to bring down the entire system when they see something that might indicate a loss of data, such as this discarded write to a disabled memory space. To avoid this, we want to try to block the user from accessing memory spaces while they're disabled. We start with a semaphore around the memory enable bit, where writers modify the memory enable state and must be serialized, while readers make use of the memory region and can access in parallel. Writers include both direct manipulation via the command register, as well as any reset path where the internal mechanics of the reset may both explicitly and implicitly disable memory access, and manipulation of the MSI-X configuration, where the MSI-X vector table resides in MMIO space of the device. Readers include the read and write file ops to access the vfio device fd offsets as well as memory mapped access. In the latter case, we make use of our new vma list support to zap, or invalidate, those memory mappings in order to force them to be faulted back in on access. Our semaphore usage will stall user access to MMIO spaces across internal operations like reset, but the user might experience new behavior when trying to access the MMIO space while disabled via the PCI command register. Access via read or write while disabled will return -EIO and access via memory maps will result in a SIGBUS. This is expected to be compatible with known use cases and potentially provides better error handling capabilities than present in the hardware, while avoiding the more readily accessible and severe platform error responses that might otherwise occur. Fixes: CVE-2020-12888 Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> [Ajay: Regenerated the patch for v4.14] Signed-off-by: Ajay Kaher <akaher@vmware.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
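A rough sketch of the reader side of the scheme described above; the memory_lock rw_semaphore name follows the commit message and the helpers are illustrative assumptions: accesses to BAR space take the semaphore shared and fail with -EIO if memory decoding is disabled, while command-register writes, resets and MSI-X updates take it exclusively.

```c
	ssize_t ret;

	down_read(&vdev->memory_lock);
	if (!memory_decoding_enabled(vdev)) {	/* illustrative helper */
		up_read(&vdev->memory_lock);
		return -EIO;	/* mmap accesses get a SIGBUS instead */
	}

	ret = do_mmio_access(vdev, buf, count, ppos);	/* illustrative */

	up_read(&vdev->memory_lock);
	return ret;
```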
2020-09-12vfio-pci: Fault mmaps to enable vma trackingAlex Williamson
commit 11c4cd07ba111a09f49625f9e4c851d83daf0a22 upstream. Rather than calling remap_pfn_range() when a region is mmap'd, setup a vm_ops handler to support dynamic faulting of the range on access. This allows us to manage a list of vmas actively mapping the area that we can later use to invalidate those mappings. The open callback invalidates the vma range so that all tracking is inserted in the fault handler and removed in the close handler. Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> [Ajay: Regenerated the patch for v4.14] Signed-off-by: Ajay Kaher <akaher@vmware.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
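A compact sketch of the technique under stated assumptions (the per-device vma list the commit describes is only referenced in comments, and the current vm_fault_t signature is used): the mmap callback installs only vm_ops, and the fault handler both records the vma for later invalidation and maps the range.

```c
static void tracking_vma_open(struct vm_area_struct *vma)
{
	/* illustrative: zap/re-track so forked or split vmas fault again */
}

static void tracking_vma_close(struct vm_area_struct *vma)
{
	/* illustrative: remove this vma from the tracking list */
}

static vm_fault_t tracking_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;

	/* illustrative: record vma on a list here so it can later be zapped */

	if (remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
			    vma->vm_end - vma->vm_start, vma->vm_page_prot))
		return VM_FAULT_SIGBUS;

	return VM_FAULT_NOPAGE;
}

static const struct vm_operations_struct tracking_vm_ops = {
	.open	= tracking_vma_open,
	.close	= tracking_vma_close,
	.fault	= tracking_fault,
};
```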
2020-09-12vfio/type1: Support faulting PFNMAP vmasAlex Williamson
commit 41311242221e3482b20bfed10fa4d9db98d87016 upstream. With conversion to follow_pfn(), DMA mapping a PFNMAP range depends on the range being faulted into the vma. Add support to manually provide that, in the same way as done on KVM with hva_to_pfn_remapped(). Reviewed-by: Peter Xu <peterx@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> [Ajay: Regenerated the patch for v4.14] Signed-off-by: Ajay Kaher <akaher@vmware.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-08-26vfio/type1: Add proper error unwind for vfio_iommu_replay()Alex Williamson
[ Upstream commit aae7a75a821a793ed6b8ad502a5890fb8e8f172d ] The vfio_iommu_replay() function does not currently unwind on error, yet it does pin pages, perform IOMMU mapping, and modify the vfio_dma structure to indicate IOMMU mapping. The IOMMU mappings are torn down when the domain is destroyed, but the other actions go on to cause trouble later. For example, the iommu->domain_list can be empty if we only have a non-IOMMU backed mdev attached. We don't currently check if the list is empty before getting the first entry in the list, which leads to a bogus domain pointer. If a vfio_dma entry is erroneously marked as iommu_mapped, we'll attempt to use that bogus pointer to retrieve the existing physical page addresses. This is the scenario that uncovered this issue, attempting to hot-add a vfio-pci device to a container with an existing mdev device and DMA mappings, one of which could not be pinned, causing a failure adding the new group to the existing container and setting the conditions for a subsequent attempt to explode. To resolve this, we can first check if the domain_list is empty so that we can reject replay of a bogus domain, should we ever encounter this inconsistent state again in the future. The real fix though is to add the necessary unwind support, which means cleaning up the current pinning if an IOMMU mapping fails, then walking back through the r-b tree of DMA entries, reading from the IOMMU which ranges are mapped, and unmapping and unpinning those ranges. To be able to do this, we also defer marking the DMA entry as IOMMU mapped until all entries are processed, in order to allow the unwind to know the disposition of each entry. Fixes: a54eb55045ae ("vfio iommu type1: Add support for mediated devices") Reported-by: Zhiyi Guo <zhguo@redhat.com> Tested-by: Zhiyi Guo <zhguo@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-25vfio/mdev: Fix reference count leak in add_mdev_supported_typeQiushi Wu
[ Upstream commit aa8ba13cae3134b8ef1c1b6879f66372531da738 ] kobject_init_and_add() takes reference even when it fails. If this function returns an error, kobject_put() must be called to properly clean up the memory associated with the object. Thus, replace kfree() by kobject_put() to fix this issue. Previous commit "b8eb718348b8" fixed a similar problem. Fixes: 7b96953bc640 ("vfio: Mediated device Core driver") Signed-off-by: Qiushi Wu <wu000273@umn.edu> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Kirti Wankhede <kwankhede@nvidia.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
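The general pattern the fix applies, sketched rather than quoted from the mdev code (the object and ktype names are assumptions): kobject_init_and_add() takes a reference even when it fails, so the error path must drop it with kobject_put(), which ends up in the ktype's release function, instead of calling kfree() directly.

```c
	ret = kobject_init_and_add(&type->kobj, &mdev_type_ktype_sketch,
				   parent_kobj, "%s", name);
	if (ret) {
		kobject_put(&type->kobj);	/* frees via ->release, not kfree() */
		return ERR_PTR(ret);
	}
```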
2020-06-25vfio-pci: Mask cap zeroAlex Williamson
[ Upstream commit bc138db1b96264b9c1779cf18d5a3b186aa90066 ] The PCI Code and ID Assignment Specification changed capability ID 0 from reserved to a NULL capability in the v1.1 revision. The NULL capability is defined to include only the 16-bit capability header, ie. only the ID and next pointer. Unfortunately vfio-pci creates a map of config space, where ID 0 is used to reserve the standard type 0 header. Finding an actual capability with this ID therefore results in a bogus range marked in that map and conflicts with subsequent capabilities. As this seems to be a dummy capability anyway and we already support dropping capabilities, let's hide this one rather than delving into the potentially subtle dependencies within our map. Seen on an NVIDIA Tesla T4. Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-06-25vfio/pci: fix memory leaks in alloc_perm_bits()Qian Cai
[ Upstream commit 3e63b94b6274324ff2e7d8615df31586de827c4e ] vfio_pci_disable() calls vfio_config_free() but forgets to call free_perm_bits() resulting in memory leaks, unreferenced object 0xc000000c4db2dee0 (size 16): comm "qemu-kvm", pid 4305, jiffies 4295020272 (age 3463.780s) hex dump (first 16 bytes): 00 00 ff 00 ff ff ff ff ff ff ff ff ff ff 00 00 ................ backtrace: [<00000000a6a4552d>] alloc_perm_bits+0x58/0xe0 [vfio_pci] [<00000000ac990549>] vfio_config_init+0xdf0/0x11b0 [vfio_pci] init_pci_cap_msi_perm at drivers/vfio/pci/vfio_pci_config.c:1125 (inlined by) vfio_msi_cap_len at drivers/vfio/pci/vfio_pci_config.c:1180 (inlined by) vfio_cap_len at drivers/vfio/pci/vfio_pci_config.c:1241 (inlined by) vfio_cap_init at drivers/vfio/pci/vfio_pci_config.c:1468 (inlined by) vfio_config_init at drivers/vfio/pci/vfio_pci_config.c:1707 [<000000006db873a1>] vfio_pci_open+0x234/0x700 [vfio_pci] [<00000000630e1906>] vfio_group_fops_unl_ioctl+0x8e0/0xb84 [vfio] [<000000009e34c54f>] ksys_ioctl+0xd8/0x130 [<000000006577923d>] sys_ioctl+0x28/0x40 [<000000006d7b1cf2>] system_call_exception+0x114/0x1e0 [<0000000008ea7dd5>] system_call_common+0xf0/0x278 unreferenced object 0xc000000c4db2e330 (size 16): comm "qemu-kvm", pid 4305, jiffies 4295020272 (age 3463.780s) hex dump (first 16 bytes): 00 ff ff 00 ff ff ff ff ff ff ff ff ff ff 00 00 ................ backtrace: [<000000004c71914f>] alloc_perm_bits+0x44/0xe0 [vfio_pci] [<00000000ac990549>] vfio_config_init+0xdf0/0x11b0 [vfio_pci] [<000000006db873a1>] vfio_pci_open+0x234/0x700 [vfio_pci] [<00000000630e1906>] vfio_group_fops_unl_ioctl+0x8e0/0xb84 [vfio] [<000000009e34c54f>] ksys_ioctl+0xd8/0x130 [<000000006577923d>] sys_ioctl+0x28/0x40 [<000000006d7b1cf2>] system_call_exception+0x114/0x1e0 [<0000000008ea7dd5>] system_call_common+0xf0/0x278 Fixes: 89e1f7d4c66d ("vfio: Add PCI device driver") Signed-off-by: Qian Cai <cai@lca.pw> [aw: rolled in follow-up patch] Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-05-05vfio/type1: Fix VA->PA translation for PFNMAP VMAs in vaddr_get_pfn()Sean Christopherson
commit 5cbf3264bc715e9eb384e2b68601f8c02bb9a61d upstream. Use follow_pfn() to get the PFN of a PFNMAP VMA instead of assuming that vma->vm_pgoff holds the base PFN of the VMA. This fixes a bug where attempting to do VFIO_IOMMU_MAP_DMA on an arbitrary PFNMAP'd region of memory calculates garbage for the PFN. Hilariously, this only got detected because the first "PFN" calculated by vaddr_get_pfn() is PFN 0 (vma->vm_pgoff==0), and iommu_iova_to_phys() uses PA==0 as an error, which triggers a WARN in vfio_unmap_unpin() because the translation "failed". PFN 0 is now unconditionally reserved on x86 in order to mitigate L1TF, which causes is_invalid_reserved_pfn() to return true and in turns results in vaddr_get_pfn() returning success for PFN 0. Eventually the bogus calculation runs into PFNs that aren't reserved and leads to failure in vfio_pin_map_dma(). The subsequent call to vfio_remove_dma() attempts to unmap PFN 0 and WARNs. WARNING: CPU: 8 PID: 5130 at drivers/vfio/vfio_iommu_type1.c:750 vfio_unmap_unpin+0x2e1/0x310 [vfio_iommu_type1] Modules linked in: vfio_pci vfio_virqfd vfio_iommu_type1 vfio ... CPU: 8 PID: 5130 Comm: sgx Tainted: G W 5.6.0-rc5-705d787c7fee-vfio+ #3 Hardware name: Intel Corporation Mehlow UP Server Platform/Moss Beach Server, BIOS CNLSE2R1.D00.X119.B49.1803010910 03/01/2018 RIP: 0010:vfio_unmap_unpin+0x2e1/0x310 [vfio_iommu_type1] Code: <0f> 0b 49 81 c5 00 10 00 00 e9 c5 fe ff ff bb 00 10 00 00 e9 3d fe RSP: 0018:ffffbeb5039ebda8 EFLAGS: 00010246 RAX: 0000000000000000 RBX: ffff9a55cbf8d480 RCX: 0000000000000000 RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff9a52b771c200 RBP: 0000000000000000 R08: 0000000000000040 R09: 00000000fffffff2 R10: 0000000000000001 R11: ffff9a51fa896000 R12: 0000000184010000 R13: 0000000184000000 R14: 0000000000010000 R15: ffff9a55cb66ea08 FS: 00007f15d3830b40(0000) GS:ffff9a55d5600000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000561cf39429e0 CR3: 000000084f75f005 CR4: 00000000003626e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: vfio_remove_dma+0x17/0x70 [vfio_iommu_type1] vfio_iommu_type1_ioctl+0x9e3/0xa7b [vfio_iommu_type1] ksys_ioctl+0x92/0xb0 __x64_sys_ioctl+0x16/0x20 do_syscall_64+0x4c/0x180 entry_SYSCALL_64_after_hwframe+0x44/0xa9 RIP: 0033:0x7f15d04c75d7 Code: <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 81 48 2d 00 f7 d8 64 89 01 48 Fixes: 73fa0d10d077 ("vfio: Type1 IOMMU implementation") Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
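A sketch of the corrected PFN lookup; is_invalid_reserved_pfn() is vfio's own helper and the surrounding variables are assumed: the PFN is resolved through the page tables with follow_pfn() rather than taken from vma->vm_pgoff, which only happens to equal the base PFN for some PFNMAP vmas.

```c
	vma = find_vma_intersection(mm, vaddr, vaddr + 1);

	if (vma && (vma->vm_flags & VM_PFNMAP)) {
		if (!follow_pfn(vma, vaddr, pfn) &&
		    is_invalid_reserved_pfn(*pfn))
			ret = 0;
	}
```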
2020-05-05vfio: avoid possible overflow in vfio_iommu_type1_pin_pagesYan Zhao
commit 0ea971f8dcd6dee78a9a30ea70227cf305f11ff7 upstream. add parentheses to avoid possible vaddr overflow. Fixes: a54eb55045ae ("vfio iommu type1: Add support for mediated devices") Signed-off-by: Yan Zhao <yan.y.zhao@intel.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
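The arithmetic at issue, sketched with variable names assumed from the surrounding pin_pages context: without the parentheses the intermediate sum can wrap an unsigned long before the base IOVA is subtracted.

```c
	/* before: dma->vaddr + iova may overflow before dma->iova is subtracted */
	remote_vaddr = dma->vaddr + iova - dma->iova;

	/* after: compute the offset first, then add it to the base vaddr */
	remote_vaddr = dma->vaddr + (iova - dma->iova);
```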
2020-01-27vfio/mdev: Fix aborting mdev child device removal if one failsParav Pandit
[ Upstream commit 6093e348a5e2475c5bb2e571346460f939998670 ] device_for_each_child() stops executing the callback function for the remaining child devices if the callback hits an error. Each child mdev device is independent of the others. While unregistering the parent device, the mdev core must remove all child mdev devices. Therefore, mdev_device_remove_cb() always returns success so that device_for_each_child() doesn't abort if one child removal hits an error. While at it, simplify the remove and unregister functions. There is no need to pass a forced flag pointer during mdev parent removal, which invokes mdev_device_remove(), so simplify that flow. mdev_device_remove() is called from two paths: 1. mdev_unregister_driver() -> mdev_device_remove_cb() -> mdev_device_remove() 2. remove_store() -> mdev_device_remove() Fixes: 7b96953bc640 ("vfio: Mediated device Core driver") Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com> Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-01-27vfio/mdev: Avoid release parent reference during error pathParav Pandit
[ Upstream commit 60e7f2c3fe9919cee9534b422865eed49f4efb15 ] During mdev parent registration in mdev_register_device(), if parent device is duplicate, it releases the reference of existing parent device. This is incorrect. Existing parent device should not be touched. Fixes: 7b96953bc640 ("vfio: Mediated device Core driver") Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Kirti Wankhede <kwankhede@nvidia.com> Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com> Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-01-27vfio_pci: Enable memory accesses before calling pci_map_romEric Auger
[ Upstream commit 0cfd027be1d6def4a462cdc180c055143af24069 ] pci_map_rom()/pci_get_rom_size() perform memory accesses in the ROM. If Memory Space accesses are disabled, readw() is likely to trigger a synchronous external abort on some platforms. In that case, re-enable memory accesses before the call and disable them again just after. Fixes: 89e1f7d4c66d ("vfio: Add PCI device driver") Signed-off-by: Eric Auger <eric.auger@redhat.com> Suggested-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
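A sketch of the sequence using standard PCI config accessors, with error handling trimmed; treat the exact placement as an assumption: memory decoding is enabled around the ROM mapping and the original command register value is put back afterwards.

```c
	u16 cmd;
	size_t size;
	void __iomem *io;

	pci_read_config_word(pdev, PCI_COMMAND, &cmd);
	if (!(cmd & PCI_COMMAND_MEMORY))
		pci_write_config_word(pdev, PCI_COMMAND,
				      cmd | PCI_COMMAND_MEMORY);

	io = pci_map_rom(pdev, &size);		/* performs reads in the ROM BAR */
	if (io)
		pci_unmap_rom(pdev, io);

	if (!(cmd & PCI_COMMAND_MEMORY))
		pci_write_config_word(pdev, PCI_COMMAND, cmd);
```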
2019-12-21vfio/pci: call irq_bypass_unregister_producer() before freeing irqJiang Yi
commit d567fb8819162099035e546b11a736e29c2af0ea upstream. Since irq_bypass_register_producer() is called after request_irq(), we should do tear-down in reverse order: irq_bypass_unregister_producer() then free_irq(). Specifically free_irq() may release resources required by the irqbypass del_producer() callback. Notably an example provided by Marc Zyngier on arm64 with GICv4 that he indicates has the potential to wedge the hardware: free_irq(irq) __free_irq(irq) irq_domain_deactivate_irq(irq) its_irq_domain_deactivate() [unmap the VLPI from the ITS] kvm_arch_irq_bypass_del_producer(cons, prod) kvm_vgic_v4_unset_forwarding(kvm, irq, ...) its_unmap_vlpi(irq) [Unmap the VLPI from the ITS (again), remap the original LPI] Signed-off-by: Jiang Yi <giangyi@amazon.com> Cc: stable@vger.kernel.org # v4.4+ Fixes: 6d7425f109d26 ("vfio: Register/unregister irq_bypass_producer") Link: https://lore.kernel.org/kvm/20191127164910.15888-1-giangyi@amazon.com Reviewed-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> [aw: commit log] Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
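The teardown ordering the fix establishes, shown as a sketch; ctx[vector].producer/trigger follow vfio-pci's naming but are assumptions here: mirror the setup order by unregistering the bypass producer before free_irq(), since free_irq() may tear down resources the del_producer() callback still relies on.

```c
	/* setup:    request_irq();  irq_bypass_register_producer();   */
	/* teardown: unregister the producer first, then free the irq. */
	irq_bypass_unregister_producer(&vdev->ctx[vector].producer);
	free_irq(irq, vdev->ctx[vector].trigger);
```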
2019-12-05vfio/spapr_tce: Get rid of possible infinite loopAlexey Kardashevskiy
[ Upstream commit 517ad4ae8aa93dccdb9a88c27257ecb421c9e848 ] As a part of cleanup, the SPAPR TCE IOMMU subdriver releases preregistered memory. If there is a bug in memory release, the loop in tce_iommu_release() becomes infinite; this actually happened to me. This makes the loop finite and prints a warning on every failure so such bugs are easier to spot. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Acked-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-11-20vfio/pci: Mask buggy SR-IOV VF INTx supportAlex Williamson
[ Upstream commit db04264fe9bc0f2b62e036629f9afb530324b693 ] The SR-IOV spec requires that VFs must report zero for the INTx pin register as VFs are precluded from INTx support. It's much easier for the host kernel to understand whether a device is a VF and therefore whether a non-zero pin register value is bogus than it is to do the same in userspace. Override the INTx count for such devices and virtualize the pin register to provide a consistent view of the device to the user. As this is clearly a spec violation, warn about it to support hardware validation, but also provide a known whitelist as it doesn't do much good to continue complaining if the hardware vendor doesn't plan to fix it. Known devices with this issue: 8086:270c Tested-by: Gage Eads <gage.eads@intel.com> Reviewed-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-11-20vfio/pci: Fix potential memory leak in vfio_msi_cap_lenLi Qiang
[ Upstream commit 30ea32ab1951c80c6113f300fce2c70cd12659e4 ] Free allocated vdev->msi_perm in error path. Signed-off-by: Li Qiang <liq3ea@gmail.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-10-07vfio_pci: Restore original state on releasehexin
[ Upstream commit 92c8026854c25093946e0d7fe536fd9eac440f06 ] vfio_pci_enable() saves the device's initial configuration information with the intent that it is restored in vfio_pci_disable(). However, the commit referenced in Fixes: below replaced the call to __pci_reset_function_locked(), which is not wrapped in a state save and restore, with pci_try_reset_function(), which overwrites the restored device state with the current state before applying it to the device. Reinstate use of __pci_reset_function_locked() to return to the desired behavior. Fixes: 890ed578df82 ("vfio-pci: Use pci "try" reset interface") Signed-off-by: hexin <hexin15@baidu.com> Signed-off-by: Liu Qi <liuqi16@baidu.com> Signed-off-by: Zhang Yu <zhangyu31@baidu.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
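A sketch of the release path as described; vdev->pci_saved_state is assumed to hold the state captured in vfio_pci_enable(): the saved state is restored and the locked reset variant is used, since pci_try_reset_function() would first save the current, already-modified state and then restore that over what was just put back.

```c
	if (!pci_load_and_free_saved_state(pdev, &vdev->pci_saved_state))
		pci_restore_state(pdev);	/* back to the state saved at open */

	/* does not save/restore state itself, so the restore above sticks */
	__pci_reset_function_locked(pdev);
```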
2019-06-15vfio: Fix WARNING "do not call blocking ops when !TASK_RUNNING"Farhan Ali
[ Upstream commit 41be3e2618174fdf3361e49e64f2bf530f40c6b0 ] vfio_dev_present() which is the condition to wait_event_interruptible_timeout(), will call vfio_group_get_device and try to acquire the mutex group->device_lock. wait_event_interruptible_timeout() will set the state of the current task to TASK_INTERRUPTIBLE, before doing the condition check. This means that we will try to acquire the mutex while already in a sleeping state. The scheduler warns us by giving the following warning: [ 4050.264464] ------------[ cut here ]------------ [ 4050.264508] do not call blocking ops when !TASK_RUNNING; state=1 set at [<00000000b33c00e2>] prepare_to_wait_event+0x14a/0x188 [ 4050.264529] WARNING: CPU: 12 PID: 35924 at kernel/sched/core.c:6112 __might_sleep+0x76/0x90 .... 4050.264756] Call Trace: [ 4050.264765] ([<000000000017bbaa>] __might_sleep+0x72/0x90) [ 4050.264774] [<0000000000b97edc>] __mutex_lock+0x44/0x8c0 [ 4050.264782] [<0000000000b9878a>] mutex_lock_nested+0x32/0x40 [ 4050.264793] [<000003ff800d7abe>] vfio_group_get_device+0x36/0xa8 [vfio] [ 4050.264803] [<000003ff800d87c0>] vfio_del_group_dev+0x238/0x378 [vfio] [ 4050.264813] [<000003ff8015f67c>] mdev_remove+0x3c/0x68 [mdev] [ 4050.264825] [<00000000008e01b0>] device_release_driver_internal+0x168/0x268 [ 4050.264834] [<00000000008de692>] bus_remove_device+0x162/0x190 [ 4050.264843] [<00000000008daf42>] device_del+0x1e2/0x368 [ 4050.264851] [<00000000008db12c>] device_unregister+0x64/0x88 [ 4050.264862] [<000003ff8015ed84>] mdev_device_remove+0xec/0x130 [mdev] [ 4050.264872] [<000003ff8015f074>] remove_store+0x6c/0xa8 [mdev] [ 4050.264881] [<000000000046f494>] kernfs_fop_write+0x14c/0x1f8 [ 4050.264890] [<00000000003c1530>] __vfs_write+0x38/0x1a8 [ 4050.264899] [<00000000003c187c>] vfs_write+0xb4/0x198 [ 4050.264908] [<00000000003c1af2>] ksys_write+0x5a/0xb0 [ 4050.264916] [<0000000000b9e270>] system_call+0xdc/0x2d8 [ 4050.264925] 4 locks held by sh/35924: [ 4050.264933] #0: 000000001ef90325 (sb_writers#4){.+.+}, at: vfs_write+0x9e/0x198 [ 4050.264948] #1: 000000005c1ab0b3 (&of->mutex){+.+.}, at: kernfs_fop_write+0x1cc/0x1f8 [ 4050.264963] #2: 0000000034831ab8 (kn->count#297){++++}, at: kernfs_remove_self+0x12e/0x150 [ 4050.264979] #3: 00000000e152484f (&dev->mutex){....}, at: device_release_driver_internal+0x5c/0x268 [ 4050.264993] Last Breaking-Event-Address: [ 4050.265002] [<000000000017bbaa>] __might_sleep+0x72/0x90 [ 4050.265010] irq event stamp: 7039 [ 4050.265020] hardirqs last enabled at (7047): [<00000000001cee7a>] console_unlock+0x6d2/0x740 [ 4050.265029] hardirqs last disabled at (7054): [<00000000001ce87e>] console_unlock+0xd6/0x740 [ 4050.265040] softirqs last enabled at (6416): [<0000000000b8fe26>] __udelay+0xb6/0x100 [ 4050.265049] softirqs last disabled at (6415): [<0000000000b8fe06>] __udelay+0x96/0x100 [ 4050.265057] ---[ end trace d04a07d39d99a9f9 ]--- Let's fix this as described in the article https://lwn.net/Articles/628628/. Signed-off-by: Farhan Ali <alifm@linux.ibm.com> [remove now redundant vfio_dev_present()] Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
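The general pattern recommended in the LWN article cited above, sketched with an illustrative wait queue and flag: the condition checked by wait_event_interruptible_timeout() is evaluated after the task state is set to TASK_INTERRUPTIBLE, so it must only test a flag the waker sets, not call a helper that takes a mutex.

```c
	/* waker side: set the flag, then wake_up(&waitq); */
	ret = wait_event_interruptible_timeout(waitq,
					       READ_ONCE(device_gone),
					       msecs_to_jiffies(10000));
	if (!ret)
		pr_warn("device still in use after timeout\n");
```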
2019-05-08vfio/pci: use correct format charactersLouis Taylor
[ Upstream commit 426b046b748d1f47e096e05bdcc6fb4172791307 ] When compiling with -Wformat, clang emits the following warnings: drivers/vfio/pci/vfio_pci.c:1601:5: warning: format specifies type 'unsigned short' but the argument has type 'unsigned int' [-Wformat] vendor, device, subvendor, subdevice, ^~~~~~ drivers/vfio/pci/vfio_pci.c:1601:13: warning: format specifies type 'unsigned short' but the argument has type 'unsigned int' [-Wformat] vendor, device, subvendor, subdevice, ^~~~~~ drivers/vfio/pci/vfio_pci.c:1601:21: warning: format specifies type 'unsigned short' but the argument has type 'unsigned int' [-Wformat] vendor, device, subvendor, subdevice, ^~~~~~~~~ drivers/vfio/pci/vfio_pci.c:1601:32: warning: format specifies type 'unsigned short' but the argument has type 'unsigned int' [-Wformat] vendor, device, subvendor, subdevice, ^~~~~~~~~ drivers/vfio/pci/vfio_pci.c:1605:5: warning: format specifies type 'unsigned short' but the argument has type 'unsigned int' [-Wformat] vendor, device, subvendor, subdevice, ^~~~~~ drivers/vfio/pci/vfio_pci.c:1605:13: warning: format specifies type 'unsigned short' but the argument has type 'unsigned int' [-Wformat] vendor, device, subvendor, subdevice, ^~~~~~ drivers/vfio/pci/vfio_pci.c:1605:21: warning: format specifies type 'unsigned short' but the argument has type 'unsigned int' [-Wformat] vendor, device, subvendor, subdevice, ^~~~~~~~~ drivers/vfio/pci/vfio_pci.c:1605:32: warning: format specifies type 'unsigned short' but the argument has type 'unsigned int' [-Wformat] vendor, device, subvendor, subdevice, ^~~~~~~~~ The types of these arguments are unconditionally defined, so this patch updates the format character to the correct ones for unsigned ints. Link: https://github.com/ClangBuiltLinux/linux/issues/378 Signed-off-by: Louis Taylor <louis@kragniz.eu> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-05-02vfio/type1: Limit DMA mappings per containerAlex Williamson
commit 492855939bdb59c6f947b0b5b44af9ad82b7e38c upstream. Memory backed DMA mappings are accounted against a user's locked memory limit, including multiple mappings of the same memory. This accounting bounds the number of such mappings that a user can create. However, DMA mappings that are not backed by memory, such as DMA mappings of device MMIO via mmaps, do not make use of page pinning and therefore do not count against the user's locked memory limit. These mappings still consume memory, but the memory is not well associated to the process for the purpose of oom killing a task. To add bounding on this use case, we introduce a limit to the total number of concurrent DMA mappings that a user is allowed to create. This limit is exposed as a tunable module option where the default value of 64K is expected to be well in excess of any reasonable use case (a large virtual machine configuration would typically only make use of tens of concurrent mappings). This fixes CVE-2019-3882. Reviewed-by: Eric Auger <eric.auger@redhat.com> Tested-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
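A sketch of the mechanism described; the field and parameter names follow the commit message but are assumptions about the implementation: a module parameter caps the number of concurrent DMA mappings per container, consumed on MAP_DMA and returned on unmap.

```c
static unsigned int dma_entry_limit __read_mostly = 65535;
module_param_named(dma_entry_limit, dma_entry_limit, uint, 0644);
MODULE_PARM_DESC(dma_entry_limit,
		 "Maximum number of user DMA mappings per container (65535)");

	/* in the MAP_DMA path, under the container lock: */
	if (!iommu->dma_avail)
		return -ENOSPC;		/* mapping budget exhausted */
	iommu->dma_avail--;

	/* in the unmap/remove path: */
	iommu->dma_avail++;
```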
2018-08-03vfio/type1: Fix task tracking for QEMU vCPU hotplugAlex Williamson
[ Upstream commit 48d8476b41eed63567dd2f0ad125c895b9ac648a ] MAP_DMA ioctls might be called from various threads within a process; for example, when using QEMU the vCPU threads often generate these calls, and we therefore take a reference to that vCPU task. However, QEMU also supports vCPU hotplug on some machines, and the task that called MAP_DMA may have exited by the time UNMAP_DMA is called, resulting in the mm_struct pointer being NULL and thus a failure to match against the existing mapping. To resolve this, we instead take a reference to the thread group_leader, which has the same mm_struct and resource limits, but is less likely to exit, at least in the QEMU case. A difficulty here is guaranteeing that the capabilities of the group_leader match those of the calling thread, which we resolve by tracking CAP_IPC_LOCK at the time of calling rather than at an indeterminate time in the future. Potentially this also results in better efficiency as this is now recorded once per MAP_DMA ioctl. Reported-by: Xu Yandong <xuyandong2@huawei.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-03vfio/mdev: Check globally for duplicate devicesAlex Williamson
[ Upstream commit 002fe996f67f4f46d8917b14cfb6e4313c20685a ] When we create an mdev device, we check for duplicates against the parent device and return -EEXIST if found, but the mdev device namespace is global since we'll link all devices from the bus. We do catch this later in sysfs_do_create_link_sd() to return -EEXIST, but with it comes a kernel warning and stack trace for trying to create duplicate sysfs links, which makes it an undesirable response. Therefore we should really be looking for duplicates across all mdev parent devices, or as implemented here, against our mdev device list. Using mdev_list to prevent duplicates means that we can remove mdev_parent.lock, but in order not to serialize mdev device creation and removal globally, we add mdev_device.active which allows UUIDs to be reserved such that we can drop the mdev_list_lock before the mdev device is fully in place. Two behavioral notes; first, mdev_parent.lock had the side-effect of serializing mdev create and remove ops per parent device. This was an implementation detail, not an intentional guarantee provided to the mdev vendor drivers. Vendor drivers can trivially provide this serialization internally if necessary. Second, review comments note the new -EAGAIN behavior when the device, and in particular the remove attribute, becomes visible in sysfs. If a remove is triggered prior to completion of mdev_device_create() the user will see a -EAGAIN error. While the errno is different, receiving an error during this period is not, the previous implementation returned -ENODEV for the same condition. Furthermore, the consistency to the user is improved in the case where mdev_device_remove_ops() returns error. Previously concurrent calls to mdev_device_remove() could see the device disappear with -ENODEV and return in the case of error. Now a user would see -EAGAIN while the device is in this transitory state. Reviewed-by: Kirti Wankhede <kwankhede@nvidia.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Acked-by: Halil Pasic <pasic@linux.ibm.com> Acked-by: Zhenyu Wang <zhenyuw@linux.intel.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-03vfio: platform: Fix reset module leak in error pathGeert Uytterhoeven
[ Upstream commit 28a68387888997e8a7fa57940ea5d55f2e16b594 ] If the IOMMU group setup fails, the reset module is not released. Fixes: b5add544d677d363 ("vfio, platform: make reset driver a requirement by default") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Simon Horman <horms+renesas@verge.net.au> Acked-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-07-28KVM: PPC: Check if IOMMU page is contained in the pinned physical pageAlexey Kardashevskiy
commit 76fa4975f3ed12d15762bc979ca44078598ed8ee upstream. A VM which has: - a DMA capable device passed through to it (eg. network card); - running a malicious kernel that ignores H_PUT_TCE failure; - capability of using IOMMU pages bigger that physical pages can create an IOMMU mapping that exposes (for example) 16MB of the host physical memory to the device when only 64K was allocated to the VM. The remaining 16MB - 64K will be some other content of host memory, possibly including pages of the VM, but also pages of host kernel memory, host programs or other VMs. The attacking VM does not control the location of the page it can map, and is only allowed to map as many pages as it has pages of RAM. We already have a check in drivers/vfio/vfio_iommu_spapr_tce.c that an IOMMU page is contained in the physical page so the PCI hardware won't get access to unassigned host memory; however this check is missing in the KVM fastpath (H_PUT_TCE accelerated code). We were lucky so far and did not hit this yet as the very first time when the mapping happens we do not have tbl::it_userspace allocated yet and fall back to the userspace which in turn calls VFIO IOMMU driver, this fails and the guest does not retry, This stores the smallest preregistered page size in the preregistered region descriptor and changes the mm_iommu_xxx API to check this against the IOMMU page size. This calculates maximum page size as a minimum of the natural region alignment and compound page size. For the page shift this uses the shift returned by find_linux_pte() which indicates how the page is mapped to the current userspace - if the page is huge and this is not a zero, then it is a leaf pte and the page is mapped within the range. Fixes: 121f80ba68f1 ("KVM: PPC: VFIO: Add in-kernel acceleration for VFIO") Cc: stable@vger.kernel.org # v4.12+ Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-07-25vfio/spapr: Use IOMMU pageshift rather than pagesizeAlexey Kardashevskiy
commit 1463edca6734d42ab4406fa2896e20b45478ea36 upstream. The size is always equal to 1 page so let's use this. Later on this will be used for other checks which use page shifts to check the granularity of access. This should cause no behavioral change. Cc: stable@vger.kernel.org # v4.12+ Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Acked-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-07-25vfio/pci: Fix potential Spectre v1Gustavo A. R. Silva
commit 0e714d27786ce1fb3efa9aac58abc096e68b1c2a upstream. info.index can be indirectly controlled by user-space, hence leading to a potential exploitation of the Spectre variant 1 vulnerability. This issue was detected with the help of Smatch: drivers/vfio/pci/vfio_pci.c:734 vfio_pci_ioctl() warn: potential spectre issue 'vdev->region' Fix this by sanitizing info.index before indirectly using it to index vdev->region Notice that given that speculation windows are large, the policy is to kill the speculation on the first load and not worry if it can be completed with a dependent load/store [1]. [1] https://marc.info/?l=linux-kernel&m=152449131114778&w=2 Cc: stable@vger.kernel.org Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
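The standard mitigation pattern, sketched against the fields named in the Smatch warning; the bound shown is an assumption about the surrounding ioctl code: the user-controlled index is clamped with array_index_nospec() after the bounds check and before it is used to index vdev->region.

```c
#include <linux/nospec.h>

	if (info.index >= VFIO_PCI_NUM_REGIONS + vdev->num_regions)
		return -EINVAL;

	info.index = array_index_nospec(info.index,
					VFIO_PCI_NUM_REGIONS +
					vdev->num_regions);
```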
2018-07-11vfio: Use get_user_pages_longterm correctlyJason Gunthorpe
commit bb94b55af3461e26b32f0e23d455abeae0cfca5d upstream. The patch noted in the fixes below converted get_user_pages_fast() to get_user_pages_longterm(), however the two calls differ in a few ways. First _fast() is documented to not require the mmap_sem, while _longterm() is documented to need it. Hold the mmap sem as required. Second, _fast accepts an 'int write' while _longterm uses 'unsigned int gup_flags', so the expression '!!(prot & IOMMU_WRITE)' is only working by luck as FOLL_WRITE is currently == 0x1. Use the expected FOLL_WRITE constant instead. Fixes: 94db151dc892 ("vfio: disable filesystem-dax page pinning") Cc: <stable@vger.kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Acked-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
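A sketch of the corrected call as it would look on the v4.14-era kernels this backport targets (mmap_sem was later renamed mmap_lock; the surrounding variables are assumed): the semaphore is held as get_user_pages_longterm() requires, and FOLL_WRITE is passed explicitly instead of relying on '!!(prot & IOMMU_WRITE)' happening to equal the flag's value.

```c
	unsigned int flags = 0;
	long ret;

	if (prot & IOMMU_WRITE)
		flags |= FOLL_WRITE;

	down_read(&current->mm->mmap_sem);
	ret = get_user_pages_longterm(vaddr, 1, flags, pages, NULL);
	up_read(&current->mm->mmap_sem);
```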
2018-04-24vfio/pci: Virtualize Maximum Read Request SizeAlex Williamson
commit cf0d53ba4947aad6e471491d5b20a567cbe92e56 upstream. MRRS defines the maximum read request size a device is allowed to make. Drivers will often increase this to allow more data transfer with a single request. Completions to this request are bound by the MPS setting for the bus. Aside from device quirks (none known), it doesn't seem to make sense to set an MRRS value less than MPS, yet this is a likely scenario given that user drivers do not have a system-wide view of the PCI topology. Virtualize MRRS such that the user can set MRRS >= MPS, but use MPS as the floor value that we'll write to hardware. Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-03-08vfio: disable filesystem-dax page pinningDan Williams
commit 94db151dc89262bfa82922c44e8320cea2334667 upstream. Filesystem-DAX is incompatible with 'longterm' page pinning. Without page cache indirection a DAX mapping maps filesystem blocks directly. This means that the filesystem must not modify a file's block map while any page in a mapping is pinned. In order to prevent the situation of userspace holding of filesystem operations indefinitely, disallow 'longterm' Filesystem-DAX mappings. RDMA has the same conflict and the plan there is to add a 'with lease' mechanism to allow the kernel to notify userspace that the mapping is being torn down for block-map maintenance. Perhaps something similar can be put in place for vfio. Note that xfs and ext4 still report: "DAX enabled. Warning: EXPERIMENTAL, use at your own risk" ...at mount time, and resolving the dax-dma-vs-truncate problem is one of the last hurdles to remove that designation. Acked-by: Alex Williamson <alex.williamson@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: kvm@vger.kernel.org Cc: <stable@vger.kernel.org> Reported-by: Haozhong Zhang <haozhong.zhang@intel.com> Tested-by: Haozhong Zhang <haozhong.zhang@intel.com> Fixes: d475c6346a38 ("dax,ext2: replace XIP read and write with DAX I/O") Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-12-25vfio/pci: Virtualize Maximum Payload SizeAlex Williamson
[ Upstream commit 523184972b282cd9ca17a76f6ca4742394856818 ] With virtual PCI-Express chipsets, we now see userspace/guest drivers trying to match the physical MPS setting to a virtual downstream port. Of course a lone physical device surrounded by virtual interconnects cannot make a correct decision for a proper MPS setting. Instead, let's virtualize the MPS control register so that writes through to hardware are disallowed. Userspace drivers like QEMU assume they can write anything to the device and we'll filter out anything dangerous. Since mismatched MPS can lead to AER and other faults, let's add it to the kernel side rather than relying on userspace virtualization to handle it. Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-02License cleanup: add SPDX GPL-2.0 license identifier to files with no licenseGreg Kroah-Hartman
Many source files in the tree are missing licensing information, which makes it harder for compliance tools to determine the correct license. By default all files without license information are under the default license of the kernel, which is GPL version 2. Update the files which contain no license information with the 'GPL-2.0' SPDX license identifier. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boiler plate text. This patch is based on work done by Thomas Gleixner and Kate Stewart and Philippe Ombredanne. How this work was done: Patches were generated and checked against linux-4.14-rc6 for a subset of the use cases: - file had no licensing information it it. - file was a */uapi/* one with no licensing information in it, - file was a */uapi/* one with existing licensing information, Further patches will be generated in subsequent months to fix up cases where non-standard license headers were used, and references to license had to be inferred by heuristics based on keywords. The analysis to determine which SPDX License Identifier to be applied to a file was done in a spreadsheet of side by side results from of the output of two independent scanners (ScanCode & Windriver) producing SPDX tag:value files created by Philippe Ombredanne. Philippe prepared the base worksheet, and did an initial spot review of a few 1000 files. The 4.13 kernel was the starting point of the analysis with 60,537 files assessed. Kate Stewart did a file by file comparison of the scanner results in the spreadsheet to determine which SPDX license identifier(s) to be applied to the file. She confirmed any determination that was not immediately clear with lawyers working with the Linux Foundation. Criteria used to select files for SPDX license identifier tagging was: - Files considered eligible had to be source code files. - Make and config files were included as candidates if they contained >5 lines of source - File already had some variant of a license header in it (even if <5 lines). All documentation files were explicitly excluded. The following heuristics were used to determine which SPDX license identifiers to apply. - when both scanners couldn't find any license traces, file was considered to have no license information in it, and the top level COPYING file license applied. For non */uapi/* files that summary was: SPDX license identifier # files ---------------------------------------------------|------- GPL-2.0 11139 and resulted in the first patch in this series. If that file was a */uapi/* path one, it was "GPL-2.0 WITH Linux-syscall-note" otherwise it was "GPL-2.0". Results of that was: SPDX license identifier # files ---------------------------------------------------|------- GPL-2.0 WITH Linux-syscall-note 930 and resulted in the second patch in this series. - if a file had some form of licensing information in it, and was one of the */uapi/* ones, it was denoted with the Linux-syscall-note if any GPL family license was found in the file or had no licensing in it (per prior point). 
Results summary: SPDX license identifier # files ---------------------------------------------------|------ GPL-2.0 WITH Linux-syscall-note 270 GPL-2.0+ WITH Linux-syscall-note 169 ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21 ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17 LGPL-2.1+ WITH Linux-syscall-note 15 GPL-1.0+ WITH Linux-syscall-note 14 ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5 LGPL-2.0+ WITH Linux-syscall-note 4 LGPL-2.1 WITH Linux-syscall-note 3 ((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3 ((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1 and that resulted in the third patch in this series. - when the two scanners agreed on the detected license(s), that became the concluded license(s). - when there was disagreement between the two scanners (one detected a license but the other didn't, or they both detected different licenses) a manual inspection of the file occurred. - In most cases a manual inspection of the information in the file resulted in a clear resolution of the license that should apply (and which scanner probably needed to revisit its heuristics). - When it was not immediately clear, the license identifier was confirmed with lawyers working with the Linux Foundation. - If there was any question as to the appropriate license identifier, the file was flagged for further research and to be revisited later in time. In total, over 70 hours of logged manual review was done on the spreadsheet to determine the SPDX license identifiers to apply to the source files by Kate, Philippe, Thomas and, in some cases, confirmation by lawyers working with the Linux Foundation. Kate also obtained a third independent scan of the 4.13 code base from FOSSology, and compared selected files where the other two scanners disagreed against that SPDX file, to see if there was new insights. The Windriver scanner is based on an older version of FOSSology in part, so they are related. Thomas did random spot checks in about 500 files from the spreadsheets for the uapi headers and agreed with SPDX license identifier in the files he inspected. For the non-uapi files Thomas did random spot checks in about 15000 files. In initial set of patches against 4.14-rc6, 3 files were found to have copy/paste license identifier errors, and have been fixed to reflect the correct identifier. Additionally Philippe spent 10 hours this week doing a detailed manual inspection and review of the 12,461 patched files from the initial patch version early this week with: - a full scancode scan run, collecting the matched texts, detected license ids and scores - reviewing anything where there was a license detected (about 500+ files) to ensure that the applied SPDX license was correct - reviewing anything where there was no detection but the patch license was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied SPDX license was correct This produced a worksheet with 20 files needing minor correction. This worksheet was then exported into 3 different .csv files for the different types of files to be modified. These .csv files were then reviewed by Greg. Thomas wrote a script to parse the csv files and add the proper SPDX tag to the file, in the format that the file expected. This script was further refined by Greg based on the output to detect more types of files automatically and to distinguish between header and source .c files (which need different comment types.) Finally Greg ran the script using the .csv files to generate the patches. 
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org> Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-08-30vfio: platform: constify amba_idArvind Yadav
amba_id entries are not supposed to change at runtime, and all functions that use them already work with const amba_id, so mark the non-const structs as const. Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2017-08-30vfio: Stall vfio_del_group_dev() for container group detachAlex Williamson
When the user unbinds the last device of a group from a vfio bus driver, the devices within that group should be available for other purposes. We currently have a race that makes this generally, but not always true. The device can be unbound from the vfio bus driver, but remaining IOMMU context of the group attached to the container can result in errors as the next driver configures DMA for the device. Wait for the group to be detached from the IOMMU backend before allowing the bus driver remove callback to complete. Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2017-08-30vfio: fix noiommu vfio_iommu_group_get reference countEric Auger
In vfio_iommu_group_get() we want to increase the reference count of the iommu group. In the noiommu case, the group does not exist yet and is allocated. iommu_group_add_device() increases the group ref count, but we then call iommu_group_put(), which decrements it again. This leads to a "refcount_t: underflow" WARN_ON. Only decrement the ref count when iommu_group_add_device() fails. Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
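A minimal sketch of the fixed noiommu path, assuming the allocation reference only needs dropping on failure (demo_* names are illustrative, not the real vfio code):

    #include <linux/iommu.h>
    #include <linux/err.h>

    static struct iommu_group *demo_iommu_group_get(struct device *dev)
    {
        struct iommu_group *group = iommu_group_get(dev);  /* takes a ref */
        int ret;

        if (group)
            return group;               /* normal iommu case */

        group = iommu_group_alloc();    /* noiommu: fabricate a group */
        if (IS_ERR(group))
            return NULL;

        ret = iommu_group_add_device(group, dev);   /* takes its own ref */
        if (ret) {
            iommu_group_put(group);     /* drop the ref only on failure */
            return NULL;
        }

        return group;                   /* success: keep the allocation ref */
    }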
2017-08-10vfio/type1: Give hardware MSI regions precedenceRobin Murphy
If the IOMMU driver advertises 'real' reserved regions for MSIs, but still includes the software-managed region as well, we are currently blind to the former and will configure the IOMMU domain to map MSIs into the latter, which is unlikely to work as expected. Since it would take a ridiculous hardware topology for both regions to be valid (which would be rather difficult to support in general), we should be safe to assume that the presence of any hardware regions makes the software region irrelevant. However, the IOMMU driver might still advertise the software region by default, particularly if the hardware regions are filled in elsewhere by generic code, so it might not be fair for VFIO to be super-strict about not mixing them. To that end, make vfio_iommu_has_sw_msi() robust against the presence of both region types at once, so that we end up doing what is almost certainly right, rather than what is almost certainly wrong. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
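A minimal sketch of the precedence rule described here, iterating a reserved-region list (demo_has_sw_msi is an illustrative helper, not the exact vfio_iommu_has_sw_msi implementation):

    #include <linux/iommu.h>
    #include <linux/list.h>

    static bool demo_has_sw_msi(struct list_head *resv_regions, phys_addr_t *base)
    {
        struct iommu_resv_region *region;
        bool ret = false;

        list_for_each_entry(region, resv_regions, list) {
            if (region->type == IOMMU_RESV_MSI)
                return false;           /* hardware MSI region wins */
            if (region->type == IOMMU_RESV_SW_MSI) {
                *base = region->start;
                ret = true;             /* keep scanning for a hw region */
            }
        }
        return ret;
    }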
2017-08-10vfio/type1: Cope with hardware MSI reserved regionsRobin Murphy
For ARM-based systems with a GICv3 ITS to provide interrupt isolation, but with hardware limitations that are worked around by having MSIs bypass SMMU translation (e.g. HiSilicon Hip06/Hip07), VFIO neglects to check for the IRQ_DOMAIN_FLAG_MSI_REMAP capability (and thus erroneously demands unsafe_interrupts) if a software-managed MSI region is absent. Fix this by always checking for isolation capability at both the IRQ domain and IOMMU domain levels, rather than predicating that on whether MSIs require an IOMMU mapping (which was always slightly tenuous logic). Signed-off-by: Robin Murphy <robin.murphy@arm.com> Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
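A minimal sketch of the combined isolation check, assuming the caller has the device's bus_type at hand (demo_msi_isolated is illustrative, not the actual vfio code path):

    #include <linux/iommu.h>
    #include <linux/irqdomain.h>

    static bool demo_msi_isolated(struct bus_type *bus)
    {
        /* isolation may come from the IRQ domain (e.g. a GICv3 ITS) or
         * from the IOMMU's own interrupt remapping capability */
        return irq_domain_check_msi_remap() ||
               iommu_capable(bus, IOMMU_CAP_INTR_REMAP);
    }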
2017-07-27vfio/pci: Fix handling of RC integrated endpoint PCIe capability sizeAlex Williamson
Root complex integrated endpoints do not have a link and therefore may use a smaller PCIe capability in config space than we expect when building our config map. Add a case for these to avoid reporting an erroneous overlap. Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
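A minimal sketch of the extra case, with DEMO_RC_END_CAP_SIZE standing in for the smaller, link-less capability length (the constant and helper names are assumptions, not the vfio config-map code):

    #include <linux/pci.h>

    #define DEMO_RC_END_CAP_SIZE    0x2c    /* placeholder: no link registers */

    static u16 demo_pcie_cap_len(struct pci_dev *pdev, u16 full_len)
    {
        /* root complex integrated endpoints have no link, so the express
         * capability they expose in config space is shorter */
        if (pci_pcie_type(pdev) == PCI_EXP_TYPE_RC_END)
            return DEMO_RC_END_CAP_SIZE;

        return full_len;
    }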
2017-07-26vfio/pci: Use pci_try_reset_function() on initial openAlex Williamson
Device lock bites again; if a device .remove() callback races a user calling ioctl(VFIO_GROUP_GET_DEVICE_FD), the unbind request will hold the device lock, but the user ioctl may have already taken a vfio_device reference. In the case of a PCI device, the initial open will attempt to reset the device, which again attempts to get the device lock, resulting in deadlock. Use the trylock PCI reset interface and return error on the open path if reset fails due to lock contention. Link: https://lkml.org/lkml/2017/7/25/381 Reported-by: Wen Congyang <wencongyang2@huawei.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
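A minimal sketch of the open-path change, assuming a hypothetical demo_open_reset() helper rather than the real vfio_pci_enable() flow:

    #include <linux/pci.h>

    static int demo_open_reset(struct pci_dev *pdev)
    {
        /*
         * pci_reset_function() would sleep on the device lock and can
         * deadlock against a concurrent .remove(); the trylock variant
         * returns an error under contention, which the open path can
         * simply hand back to userspace.
         */
        return pci_try_reset_function(pdev);
    }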
2017-07-13Merge tag 'vfio-v4.13-rc1' of git://github.com/awilliam/linux-vfioLinus Torvalds
Pull VFIO updates from Alex Williamson:

 - Include Intel XXV710 in INTx workaround (Alex Williamson)
 - Make use of ERR_CAST() for error return (Dan Carpenter)
 - Fix vfio_group release deadlock from iommu notifier (Alex Williamson)
 - Unset KVM-VFIO attributes only on group match (Alex Williamson)
 - Fix release path group/file matching with KVM-VFIO (Alex Williamson)
 - Remove unnecessary lock uses triggering lockdep splat (Alex Williamson)

* tag 'vfio-v4.13-rc1' of git://github.com/awilliam/linux-vfio:
  vfio: Remove unnecessary uses of vfio_container.group_lock
  vfio: New external user group/file match
  kvm-vfio: Decouple only when we match a group
  vfio: Fix group release deadlock
  vfio: Use ERR_CAST() instead of open coding it
  vfio/pci: Add Intel XXV710 to hidden INTx devices
2017-07-07vfio: Remove unnecessary uses of vfio_container.group_lockAlex Williamson
The original intent of vfio_container.group_lock is to protect vfio_container.group_list, however over time it's become a crutch to prevent changes in container composition any time we call into the iommu driver backend. This introduces problems when we start to have more complex interactions, for example when a user's DMA unmap request triggers a notification to an mdev vendor driver, who responds by attempting to unpin mappings within that request, re-entering the iommu backend. We incorrectly assume that the use of read-locks here allow for this nested locking behavior, but a poorly timed write-lock could in fact trigger a deadlock. The current use of group_lock seems to fall into the trap of locking code, not data. Correct that by removing uses of group_lock that are not directly related to group_list. Note that the vfio type1 iommu backend has its own mutex, vfio_iommu.lock, which it uses to protect itself for each of these interfaces anyway. The group_lock appears to be a redundancy for these interfaces and type1 even goes so far as to release its mutex to allow for exactly the re-entrant code path above. Reported-by: Chuanxiao Dong <chuanxiao.dong@intel.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Acked-by: Alexey Kardashevskiy <aik@ozlabs.ru> Cc: stable@vger.kernel.org # v4.10+
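A minimal sketch of the re-entrant locking pattern described here, with illustrative names; on an rwsem, a writer queued between the two read acquisitions turns the nested down_read() into a deadlock:

    #include <linux/rwsem.h>

    static DECLARE_RWSEM(demo_group_lock);

    static void demo_dma_unmap_path(void)
    {
        down_read(&demo_group_lock);    /* ioctl enters the iommu backend */

        /*
         * ... a notifier calls into an mdev vendor driver, which unpins
         * pages and re-enters the backend on the same lock ...
         */
        down_read(&demo_group_lock);    /* blocks if a writer is queued */
        up_read(&demo_group_lock);

        up_read(&demo_group_lock);
    }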
2017-06-28vfio: New external user group/file matchAlex Williamson
At the point where the kvm-vfio pseudo device wants to release its vfio group reference, we can't always acquire a new reference to make that happen. The group can be in a state where we wouldn't allow a new reference to be added. This new helper function allows a caller to match a file to a group to facilitate this. Given a file and group, report if they match. Thus the caller needs to already have a group reference to match to the file. This allows the deletion of a group without acquiring a new reference. Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Tested-by: Eric Auger <eric.auger@redhat.com> Cc: stable@vger.kernel.org
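A minimal sketch of a match helper of this shape, with demo_* stand-ins for the vfio group type and file operations:

    #include <linux/fs.h>

    struct demo_group { int id; };
    static const struct file_operations demo_group_fops;

    static bool demo_group_match_file(struct demo_group *group, struct file *filep)
    {
        /* no new reference is taken: just confirm the file is one of ours
         * and that its private_data is the group the caller already holds */
        return filep->f_op == &demo_group_fops &&
               filep->private_data == group;
    }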
2017-06-28vfio: Fix group release deadlockAlex Williamson
If vfio_iommu_group_notifier() acquires a group reference and that reference becomes the last reference to the group, then vfio_group_put introduces a deadlock code path where we're trying to unregister from the iommu notifier chain from within a callout of that chain. Use a work_struct to release this reference asynchronously. Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Tested-by: Eric Auger <eric.auger@redhat.com> Cc: stable@vger.kernel.org
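A minimal sketch of deferring the final reference drop to a workqueue, with demo_* names standing in for the real vfio group structures:

    #include <linux/workqueue.h>
    #include <linux/kref.h>
    #include <linux/slab.h>

    struct demo_group {
        struct kref kref;
    };

    struct demo_group_put_work {
        struct work_struct work;
        struct demo_group *group;
    };

    static void demo_group_release(struct kref *kref)
    {
        /* final teardown, including notifier unregistration, runs here,
         * safely outside the notifier callout that dropped the ref */
    }

    static void demo_group_put_bg(struct work_struct *work)
    {
        struct demo_group_put_work *do_work =
            container_of(work, struct demo_group_put_work, work);

        kref_put(&do_work->group->kref, demo_group_release);
        kfree(do_work);
    }

    /* called from contexts that must not run the final release inline */
    static void demo_group_schedule_put(struct demo_group *group)
    {
        struct demo_group_put_work *do_work;

        do_work = kzalloc(sizeof(*do_work), GFP_KERNEL);
        if (!do_work)
            return;     /* illustration only; real code needs a fallback */

        INIT_WORK(&do_work->work, demo_group_put_bg);
        do_work->group = group;
        schedule_work(&do_work->work);
    }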