UPSTREAM: io_uring: lock overflowing for IOPOLL
commit 544d163d659d45a206d8929370d5a2984e546cb7 upstream.
syzbot reports an issue with overflow filling for IOPOLL:
WARNING: CPU: 0 PID: 28 at io_uring/io_uring.c:734 io_cqring_event_overflow+0x1c0/0x230 io_uring/io_uring.c:734
CPU: 0 PID: 28 Comm: kworker/u4:1 Not tainted 6.2.0-rc3-syzkaller-16369-g358a161a6a9e #0
Workqueue: events_unbound io_ring_exit_work
Call trace:
io_cqring_event_overflow+0x1c0/0x230 io_uring/io_uring.c:734
io_req_cqe_overflow+0x5c/0x70 io_uring/io_uring.c:773
io_fill_cqe_req io_uring/io_uring.h:168 [inline]
io_do_iopoll+0x474/0x62c io_uring/rw.c:1065
io_iopoll_try_reap_events+0x6c/0x108 io_uring/io_uring.c:1513
io_uring_try_cancel_requests+0x13c/0x258 io_uring/io_uring.c:3056
io_ring_exit_work+0xec/0x390 io_uring/io_uring.c:2869
process_one_work+0x2d8/0x504 kernel/workqueue.c:2289
worker_thread+0x340/0x610 kernel/workqueue.c:2436
kthread+0x12c/0x158 kernel/kthread.c:376
ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:863
There is no real problem for normal IOPOLL, as the flush is also called
with uring_lock taken; but it gets more complicated for IOPOLL|SQPOLL,
for which __io_cqring_overflow_flush() happens from the CQ waiting path
without uring_lock held. Fix it by taking ->completion_lock around
io_cqring_event_overflow() on the IOPOLL completion side, matching the
lock the flush path relies on.
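
As an aside for reviewers, the locking rule can be modelled in plain
userspace C. This is only an illustrative sketch: every name in it
(overflow_entry, overflow_add(), flush_thread(), the loop counts) is
invented for the example and does not exist in the kernel sources.

  /*
   * Userspace model only: a producer appends to an overflow list and
   * a concurrent flusher drains it. The flusher holds completion_lock
   * and nothing else, so the producer must take the same lock.
   */
  #include <pthread.h>
  #include <stdio.h>
  #include <stdlib.h>

  struct overflow_entry {
  	unsigned long long user_data;
  	struct overflow_entry *next;
  };

  /* Stand-in for ctx->completion_lock; guards overflow_list. */
  static pthread_mutex_t completion_lock = PTHREAD_MUTEX_INITIALIZER;
  static struct overflow_entry *overflow_list;

  /* Producer side, modelling io_cqring_event_overflow(): the append
   * must happen under completion_lock, because the flusher below may
   * run without uring_lock when IOPOLL|SQPOLL is used. */
  static void overflow_add(unsigned long long user_data)
  {
  	struct overflow_entry *e = malloc(sizeof(*e));

  	if (!e)
  		abort();
  	e->user_data = user_data;
  	pthread_mutex_lock(&completion_lock);
  	e->next = overflow_list;
  	overflow_list = e;
  	pthread_mutex_unlock(&completion_lock);
  }

  /* Consumer side, modelling __io_cqring_overflow_flush() called from
   * the CQ waiting path: it takes only completion_lock. */
  static void *flush_thread(void *arg)
  {
  	(void)arg;
  	for (int i = 0; i < 100000; i++) {
  		pthread_mutex_lock(&completion_lock);
  		while (overflow_list) {
  			struct overflow_entry *e = overflow_list;

  			overflow_list = e->next;
  			free(e);
  		}
  		pthread_mutex_unlock(&completion_lock);
  	}
  	return NULL;
  }

  int main(void)
  {
  	pthread_t t;

  	pthread_create(&t, NULL, flush_thread, NULL);
  	for (unsigned long long i = 0; i < 100000; i++)
  		overflow_add(i);
  	pthread_join(t, NULL);
  	/* Drain whatever the flusher did not get to before exiting. */
  	while (overflow_list) {
  		struct overflow_entry *e = overflow_list;

  		overflow_list = e->next;
  		free(e);
  	}
  	puts("producer and flusher agreed on completion_lock");
  	return 0;
  }

This mirrors the design choice in the hunk below: the fast path writes
the CQE directly via io_get_cqe()/WRITE_ONCE(), and only the overflow
fallback pays for ->completion_lock, keeping the common IOPOLL reap
path lock-free. (Build the sketch with cc -pthread to run it.)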
Reported-and-tested-by: [email protected]
Cc: [email protected] # 5.10+
Change-Id: I3449b2ea1b71ff2f04f119741751b42870386923
Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Bug: 268174392
(cherry picked from commit de77faee280163ff03b7ab64af6c9d779a43d4c4)
Signed-off-by: Greg Kroah-Hartman <[email protected]>
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 2309fdc..35b877b 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2492,12 +2492,26 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
 
 	io_init_req_batch(&rb);
 	while (!list_empty(done)) {
+		struct io_uring_cqe *cqe;
+		unsigned cflags;
+
 		req = list_first_entry(done, struct io_kiocb, inflight_entry);
 		list_del(&req->inflight_entry);
-
-		io_fill_cqe_req(req, req->result, io_put_rw_kbuf(req));
+		cflags = io_put_rw_kbuf(req);
 		(*nr_events)++;
 
+		cqe = io_get_cqe(ctx);
+		if (cqe) {
+			WRITE_ONCE(cqe->user_data, req->user_data);
+			WRITE_ONCE(cqe->res, req->result);
+			WRITE_ONCE(cqe->flags, cflags);
+		} else {
+			spin_lock(&ctx->completion_lock);
+			io_cqring_event_overflow(ctx, req->user_data,
+						 req->result, cflags);
+			spin_unlock(&ctx->completion_lock);
+		}
+
 		if (req_ref_put_and_test(req))
 			io_req_free_batch(&rb, req, &ctx->submit_state);
 	}