author: Jens Axboe <axboe@kernel.dk> 2022-09-22 11:41:51 -0600
committer: Jens Axboe <axboe@kernel.dk> 2022-09-30 07:49:11 -0600
commit: 851eb780decb7180bcf09fad0035cba9aae669df (patch)
tree: d089afb600e24f9fdca9c0e2bbe2d2871b832b12 /drivers/nvme
parent: c0a7ba77e81b8440d10f38559a5e1d219ff7e87c (diff)
nvme: enable batched completions of passthrough IO
Now that the normal passthrough end_io path no longer needs the request, we can drop the explicit blk_mq_free_request() call and return RQ_END_IO_FREE instead. This lets the batched completion path free requests in batches, bringing passthrough IO performance at least on par with bdev-based O_DIRECT with io_uring. Together with batched allocations, peak performance goes from 110M IOPS to 122M IOPS. For IRQ-based completions, passthrough is now also about 10% faster than before, going from ~61M to ~67M IOPS.

Reviewed-by: Anuj Gupta <anuj20.g@samsung.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Co-developed-by: Stefan Roesch <shr@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'drivers/nvme')
-rw-r--r-- drivers/nvme/host/ioctl.c | 3 +-
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index f9d1f7e4d6d1..914b142b6f2b 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -430,8 +430,7 @@ static enum rq_end_io_ret nvme_uring_cmd_end_io(struct request *req,
else
io_uring_cmd_complete_in_task(ioucmd, nvme_uring_task_cb);
- blk_mq_free_request(req);
- return RQ_END_IO_NONE;
+ return RQ_END_IO_FREE;
}
static enum rq_end_io_ret nvme_uring_cmd_end_io_meta(struct request *req,