author	Pavel Begunkov <asml.silence@gmail.com>	2024-01-17 00:57:26 +0000
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2024-01-25 15:45:30 -0800
commite24bf5b47a5788a8f9c74012c808f5a9bc149a7f (patch)
treee95ab299c6b020d42dbb895c440c9ed7965c18fe /io_uring
parentc149cc7c88cadf956111bd85cd03c5c11618c0b7 (diff)
io_uring: adjust defer tw counting
[ Upstream commit dc12d1799ce710fd90abbe0ced71e7e1ae0894fc ]

The UINT_MAX work item counting bias in io_req_local_work_add() in the
!IOU_F_TWQ_LAZY_WAKE case works in the sense that we will not miss a
wake up, but it is still fragile. In particular, if we add a lazy work
item after a non-lazy one, we increment the counter and get nr_tw == 0,
and subsequent adds may then issue unnecessary wake ups of the task,
although that is unlikely in real workloads.

Halve the bias. It is still larger than any valid ->cq_wait_nr, which
is capped by IORING_MAX_CQ_ENTRIES, yet it leaves plenty of headroom
before the counter can overflow.

Fixes: 8751d15426a31 ("io_uring: reduce scheduling due to tw")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/108b971e958deaf7048342930c341ba90f75d806.1705438669.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
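To make the wraparound concrete, here is a minimal userspace sketch of
the counting scheme described above; it is not the kernel code. nr_tw
and nr_wait stand in for req->nr_tw and ctx->cq_wait_nr, and the 65536
cap on nr_wait is an assumption corresponding to IORING_MAX_CQ_ENTRIES.

	#include <assert.h>
	#include <limits.h>
	#include <stdio.h>

	int main(void)
	{
		/* Assumed cap on ->cq_wait_nr (IORING_MAX_CQ_ENTRIES). */
		unsigned int nr_wait = 65536;
		unsigned int nr_tw;

		/* Old bias: a non-lazy add stores -1U (UINT_MAX). */
		nr_tw = -1U;
		/* A lazy add queued behind it computes nr_tw_prev + 1,
		 * which wraps the unsigned counter around to zero. */
		nr_tw = nr_tw + 1;
		printf("old bias after a lazy add: nr_tw = %u\n", nr_tw);
		/* nr_tw == 0 looks as if nothing wake-worthy is queued,
		 * so a later add can wake the task again even though
		 * the non-lazy item already did. */

		/* New bias: INT_MAX still dominates any valid nr_wait... */
		nr_tw = INT_MAX;
		assert(nr_tw > nr_wait);
		/* ...and an increment leaves ~2^31 of headroom, no wrap. */
		nr_tw = nr_tw + 1;
		assert(nr_tw != 0 && nr_tw > nr_wait);
		printf("new bias after a lazy add: nr_tw = %u\n", nr_tw);
		return 0;
	}

Compiled with any C99 compiler, the first printf shows nr_tw wrapping
to 0 under the old bias, while both assertions hold for the new one.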
Diffstat (limited to 'io_uring')
-rw-r--r--	io_uring/io_uring.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 7f62f5990152..59f5791c90c3 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1346,7 +1346,7 @@ static inline void io_req_local_work_add(struct io_kiocb *req, unsigned flags)
 		nr_tw = nr_tw_prev + 1;
 		/* Large enough to fail the nr_wait comparison below */
 		if (!(flags & IOU_F_TWQ_LAZY_WAKE))
-			nr_tw = -1U;
+			nr_tw = INT_MAX;
 
 		req->nr_tw = nr_tw;
 		req->io_task_work.node.next = first;
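For completeness, the two properties the halved bias must keep can be
checked at compile time. A hedged userspace sketch, assuming
IORING_MAX_CQ_ENTRIES == 2 * IORING_MAX_ENTRIES (2 * 32768) as in the
kernel sources of this era; the ASSUMED_ macro is hypothetical,
introduced only for this check.

	#include <limits.h>

	/* Assumption, not part of this patch: IORING_MAX_CQ_ENTRIES. */
	#define ASSUMED_IORING_MAX_CQ_ENTRIES 65536u

	/* 1. The halved bias still exceeds every valid ->cq_wait_nr,
	 *    so it is "large enough to fail the nr_wait comparison". */
	_Static_assert(INT_MAX > ASSUMED_IORING_MAX_CQ_ENTRIES,
		       "bias must exceed any valid cq_wait_nr");

	/* 2. Incrementing the biased counter once (a lazy add queued
	 *    behind a non-lazy one) no longer wraps it to zero. */
	_Static_assert((unsigned int)INT_MAX + 1u > ASSUMED_IORING_MAX_CQ_ENTRIES,
		       "one extra lazy add must not defeat the comparison");

	int main(void) { return 0; }

With the old -1U bias the second property fails: UINT_MAX + 1 wraps to
zero, which is below every possible nr_wait.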