CVE-2026-23113 (GCVE-0-2026-23113)
Vulnerability from cvelistv5 – Published: 2026-02-14 15:09 – Updated: 2026-02-14 15:09
Title
io_uring/io-wq: check IO_WQ_BIT_EXIT inside work run loop
Summary
In the Linux kernel, the following vulnerability has been resolved:
io_uring/io-wq: check IO_WQ_BIT_EXIT inside work run loop
Currently this is checked before running the pending work. Normally this
is quite fine, as work items either end up blocking (which will create a
new worker for other items), or they complete fairly quickly. But syzbot
reports an issue where io-wq takes seemingly forever to exit, and with a
bit of debugging, this turns out to be because it queues a bunch of big
(2GB - 4096b) reads with a /dev/msr* file. Since this file type doesn't
support ->read_iter(), loop_rw_iter() ends up handling them. Each read
returns 16MB of data read, which takes 20 (!!) seconds. With a bunch of
these pending, processing the whole chain can take a long time. Easily
longer than the syzbot uninterruptible sleep timeout of 140 seconds.
This then triggers a complaint off the io-wq exit path:
INFO: task syz.4.135:6326 blocked for more than 143 seconds.
Not tainted syzkaller #0
Blocked by coredump.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.135 state:D stack:26824 pid:6326 tgid:6324 ppid:5957 task_flags:0x400548 flags:0x00080000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5256 [inline]
__schedule+0x1139/0x6150 kernel/sched/core.c:6863
__schedule_loop kernel/sched/core.c:6945 [inline]
schedule+0xe7/0x3a0 kernel/sched/core.c:6960
schedule_timeout+0x257/0x290 kernel/time/sleep_timeout.c:75
do_wait_for_common kernel/sched/completion.c:100 [inline]
__wait_for_common+0x2fc/0x4e0 kernel/sched/completion.c:121
io_wq_exit_workers io_uring/io-wq.c:1328 [inline]
io_wq_put_and_exit+0x271/0x8a0 io_uring/io-wq.c:1356
io_uring_clean_tctx+0x10d/0x190 io_uring/tctx.c:203
io_uring_cancel_generic+0x69c/0x9a0 io_uring/cancel.c:651
io_uring_files_cancel include/linux/io_uring.h:19 [inline]
do_exit+0x2ce/0x2bd0 kernel/exit.c:911
do_group_exit+0xd3/0x2a0 kernel/exit.c:1112
get_signal+0x2671/0x26d0 kernel/signal.c:3034
arch_do_signal_or_restart+0x8f/0x7e0 arch/x86/kernel/signal.c:337
__exit_to_user_mode_loop kernel/entry/common.c:41 [inline]
exit_to_user_mode_loop+0x8c/0x540 kernel/entry/common.c:75
__exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]
syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]
do_syscall_64+0x4ee/0xf80 arch/x86/entry/syscall_64.c:100
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fa02738f749
RSP: 002b:00007fa0281ae0e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
RAX: fffffffffffffe00 RBX: 00007fa0275e6098 RCX: 00007fa02738f749
RDX: 0000000000000000 RSI: 0000000000000080 RDI: 00007fa0275e6098
RBP: 00007fa0275e6090 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fa0275e6128 R14: 00007fff14e4fcb0 R15: 00007fff14e4fd98
There's really nothing wrong here, outside of processing these reads
will take a LONG time. However, we can speed up the exit by checking the
IO_WQ_BIT_EXIT inside the io_worker_handle_work() loop, as syzbot will
exit the ring after queueing up all of these reads. Then once the first
item is processed, io-wq will simply cancel the rest. That should avoid
syzbot running into this complaint again.
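Note that, per the numbers above, each pending read takes roughly 20 seconds on its own, so a chain of seven or eight of them (~140-160 seconds) is already enough to cross the ~140-second hung-task threshold. The sketch below illustrates the general pattern of the fix described in the commit message; it is not the kernel code, and all names (work_queue, work_item, exiting, etc.) are hypothetical stand-ins for io-wq internals such as IO_WQ_BIT_EXIT and io_worker_handle_work().

```c
/*
 * Minimal userspace sketch of the pattern described above, assuming it
 * captures the gist of the fix. All names here are hypothetical; the
 * real code lives in io_uring/io-wq.c and tests IO_WQ_BIT_EXIT inside
 * io_worker_handle_work().
 */
#include <stdbool.h>
#include <stddef.h>

struct work_item {
	struct work_item *next;
	void (*run)(struct work_item *);	/* may take a long time */
};

struct work_queue {
	struct work_item *head;
	bool exiting;		/* stand-in for the IO_WQ_BIT_EXIT state bit */
};

/* Before the fix: exit is checked once, ahead of the run loop, so a
 * chain of slow items must fully drain before the worker notices that
 * an exit was requested. */
static void handle_work_old(struct work_queue *wq)
{
	if (wq->exiting)
		return;
	for (struct work_item *w = wq->head; w; w = w->next)
		w->run(w);
}

/* After the fix: the exit bit is re-checked on every iteration, so the
 * remaining chain can be abandoned as soon as the ring is torn down. */
static void handle_work_new(struct work_queue *wq)
{
	for (struct work_item *w = wq->head; w; w = w->next) {
		if (wq->exiting)
			break;	/* leave the rest to be cancelled */
		w->run(w);
	}
}
```

In the real io-wq, breaking out of the loop lets the exit path cancel the remaining queued items rather than waiting for each slow read to complete, which is why only the first in-flight item still runs after the ring exits.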
Severity
No CVSS data available.
Assigner
Linux (416baaa9-dc9f-4396-8d5f-8c081fb06d67)
References
- https://git.kernel.org/stable/c/85eb83694a91c89d9abe615d717c0053c3efa714
- https://git.kernel.org/stable/c/2e8ca1078b14142db2ce51cbd18ff9971560046b
- https://git.kernel.org/stable/c/bdf0bf73006ea8af9327cdb85cfdff4c23a5f966
- https://git.kernel.org/stable/c/10dc959398175736e495f71c771f8641e1ca1907
Impacted products
| Vendor | Product | Version |
|---|---|---|
| Linux | Linux | Affected: 1da177e4c3f41524e886b7f1b8a0c1fc7321cac2, < 85eb83694a91c89d9abe615d717c0053c3efa714 (git) |
| Linux | Linux | Affected: 1da177e4c3f41524e886b7f1b8a0c1fc7321cac2, < 2e8ca1078b14142db2ce51cbd18ff9971560046b (git) |
| Linux | Linux | Affected: 1da177e4c3f41524e886b7f1b8a0c1fc7321cac2, < bdf0bf73006ea8af9327cdb85cfdff4c23a5f966 (git) |
| Linux | Linux | Affected: 1da177e4c3f41524e886b7f1b8a0c1fc7321cac2, < 10dc959398175736e495f71c771f8641e1ca1907 (git) |
{
"containers": {
"cna": {
"affected": [
{
"defaultStatus": "unaffected",
"product": "Linux",
"programFiles": [
"io_uring/io-wq.c"
],
"repo": "https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git",
"vendor": "Linux",
"versions": [
{
"lessThan": "85eb83694a91c89d9abe615d717c0053c3efa714",
"status": "affected",
"version": "1da177e4c3f41524e886b7f1b8a0c1fc7321cac2",
"versionType": "git"
},
{
"lessThan": "2e8ca1078b14142db2ce51cbd18ff9971560046b",
"status": "affected",
"version": "1da177e4c3f41524e886b7f1b8a0c1fc7321cac2",
"versionType": "git"
},
{
"lessThan": "bdf0bf73006ea8af9327cdb85cfdff4c23a5f966",
"status": "affected",
"version": "1da177e4c3f41524e886b7f1b8a0c1fc7321cac2",
"versionType": "git"
},
{
"lessThan": "10dc959398175736e495f71c771f8641e1ca1907",
"status": "affected",
"version": "1da177e4c3f41524e886b7f1b8a0c1fc7321cac2",
"versionType": "git"
}
]
},
{
"defaultStatus": "affected",
"product": "Linux",
"programFiles": [
"io_uring/io-wq.c"
],
"repo": "https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git",
"vendor": "Linux",
"versions": [
{
"lessThanOrEqual": "6.6.*",
"status": "unaffected",
"version": "6.6.122",
"versionType": "semver"
},
{
"lessThanOrEqual": "6.12.*",
"status": "unaffected",
"version": "6.12.68",
"versionType": "semver"
},
{
"lessThanOrEqual": "6.18.*",
"status": "unaffected",
"version": "6.18.8",
"versionType": "semver"
},
{
"lessThanOrEqual": "*",
"status": "unaffected",
"version": "6.19",
"versionType": "original_commit_for_fix"
}
]
}
],
"cpeApplicability": [
{
"nodes": [
{
"cpeMatch": [
{
"criteria": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"versionEndExcluding": "6.6.122",
"vulnerable": true
},
{
"criteria": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"versionEndExcluding": "6.12.68",
"vulnerable": true
},
{
"criteria": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"versionEndExcluding": "6.18.8",
"vulnerable": true
},
{
"criteria": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
"versionEndExcluding": "6.19",
"vulnerable": true
}
],
"negate": false,
"operator": "OR"
}
]
}
],
"descriptions": [
{
"lang": "en",
"value": "In the Linux kernel, the following vulnerability has been resolved:\n\nio_uring/io-wq: check IO_WQ_BIT_EXIT inside work run loop\n\nCurrently this is checked before running the pending work. Normally this\nis quite fine, as work items either end up blocking (which will create a\nnew worker for other items), or they complete fairly quickly. But syzbot\nreports an issue where io-wq takes seemingly forever to exit, and with a\nbit of debugging, this turns out to be because it queues a bunch of big\n(2GB - 4096b) reads with a /dev/msr* file. Since this file type doesn\u0027t\nsupport -\u003eread_iter(), loop_rw_iter() ends up handling them. Each read\nreturns 16MB of data read, which takes 20 (!!) seconds. With a bunch of\nthese pending, processing the whole chain can take a long time. Easily\nlonger than the syzbot uninterruptible sleep timeout of 140 seconds.\nThis then triggers a complaint off the io-wq exit path:\n\nINFO: task syz.4.135:6326 blocked for more than 143 seconds.\n Not tainted syzkaller #0\n Blocked by coredump.\n\"echo 0 \u003e /proc/sys/kernel/hung_task_timeout_secs\" disables this message.\ntask:syz.4.135 state:D stack:26824 pid:6326 tgid:6324 ppid:5957 task_flags:0x400548 flags:0x00080000\nCall Trace:\n \u003cTASK\u003e\n context_switch kernel/sched/core.c:5256 [inline]\n __schedule+0x1139/0x6150 kernel/sched/core.c:6863\n __schedule_loop kernel/sched/core.c:6945 [inline]\n schedule+0xe7/0x3a0 kernel/sched/core.c:6960\n schedule_timeout+0x257/0x290 kernel/time/sleep_timeout.c:75\n do_wait_for_common kernel/sched/completion.c:100 [inline]\n __wait_for_common+0x2fc/0x4e0 kernel/sched/completion.c:121\n io_wq_exit_workers io_uring/io-wq.c:1328 [inline]\n io_wq_put_and_exit+0x271/0x8a0 io_uring/io-wq.c:1356\n io_uring_clean_tctx+0x10d/0x190 io_uring/tctx.c:203\n io_uring_cancel_generic+0x69c/0x9a0 io_uring/cancel.c:651\n io_uring_files_cancel include/linux/io_uring.h:19 [inline]\n do_exit+0x2ce/0x2bd0 kernel/exit.c:911\n do_group_exit+0xd3/0x2a0 kernel/exit.c:1112\n get_signal+0x2671/0x26d0 kernel/signal.c:3034\n arch_do_signal_or_restart+0x8f/0x7e0 arch/x86/kernel/signal.c:337\n __exit_to_user_mode_loop kernel/entry/common.c:41 [inline]\n exit_to_user_mode_loop+0x8c/0x540 kernel/entry/common.c:75\n __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]\n syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]\n syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]\n syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]\n do_syscall_64+0x4ee/0xf80 arch/x86/entry/syscall_64.c:100\n entry_SYSCALL_64_after_hwframe+0x77/0x7f\nRIP: 0033:0x7fa02738f749\nRSP: 002b:00007fa0281ae0e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca\nRAX: fffffffffffffe00 RBX: 00007fa0275e6098 RCX: 00007fa02738f749\nRDX: 0000000000000000 RSI: 0000000000000080 RDI: 00007fa0275e6098\nRBP: 00007fa0275e6090 R08: 0000000000000000 R09: 0000000000000000\nR10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000\nR13: 00007fa0275e6128 R14: 00007fff14e4fcb0 R15: 00007fff14e4fd98\n\nThere\u0027s really nothing wrong here, outside of processing these reads\nwill take a LONG time. However, we can speed up the exit by checking the\nIO_WQ_BIT_EXIT inside the io_worker_handle_work() loop, as syzbot will\nexit the ring after queueing up all of these reads. Then once the first\nitem is processed, io-wq will simply cancel the rest. That should avoid\nsyzbot running into this complaint again."
}
],
"providerMetadata": {
"dateUpdated": "2026-02-14T15:09:46.379Z",
"orgId": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
"shortName": "Linux"
},
"references": [
{
"url": "https://git.kernel.org/stable/c/85eb83694a91c89d9abe615d717c0053c3efa714"
},
{
"url": "https://git.kernel.org/stable/c/2e8ca1078b14142db2ce51cbd18ff9971560046b"
},
{
"url": "https://git.kernel.org/stable/c/bdf0bf73006ea8af9327cdb85cfdff4c23a5f966"
},
{
"url": "https://git.kernel.org/stable/c/10dc959398175736e495f71c771f8641e1ca1907"
}
],
"title": "io_uring/io-wq: check IO_WQ_BIT_EXIT inside work run loop",
"x_generator": {
"engine": "bippy-1.2.0"
}
}
},
"cveMetadata": {
"assignerOrgId": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
"assignerShortName": "Linux",
"cveId": "CVE-2026-23113",
"datePublished": "2026-02-14T15:09:46.379Z",
"dateReserved": "2026-01-13T15:37:45.968Z",
"dateUpdated": "2026-02-14T15:09:46.379Z",
"state": "PUBLISHED"
},
"dataType": "CVE_RECORD",
"dataVersion": "5.2",
"vulnerability-lookup:meta": {
"nvd": "{\"cve\":{\"id\":\"CVE-2026-23113\",\"sourceIdentifier\":\"416baaa9-dc9f-4396-8d5f-8c081fb06d67\",\"published\":\"2026-02-14T15:16:06.380\",\"lastModified\":\"2026-02-14T15:16:06.380\",\"vulnStatus\":\"Received\",\"cveTags\":[],\"descriptions\":[{\"lang\":\"en\",\"value\":\"In the Linux kernel, the following vulnerability has been resolved:\\n\\nio_uring/io-wq: check IO_WQ_BIT_EXIT inside work run loop\\n\\nCurrently this is checked before running the pending work. Normally this\\nis quite fine, as work items either end up blocking (which will create a\\nnew worker for other items), or they complete fairly quickly. But syzbot\\nreports an issue where io-wq takes seemingly forever to exit, and with a\\nbit of debugging, this turns out to be because it queues a bunch of big\\n(2GB - 4096b) reads with a /dev/msr* file. Since this file type doesn\u0027t\\nsupport -\u003eread_iter(), loop_rw_iter() ends up handling them. Each read\\nreturns 16MB of data read, which takes 20 (!!) seconds. With a bunch of\\nthese pending, processing the whole chain can take a long time. Easily\\nlonger than the syzbot uninterruptible sleep timeout of 140 seconds.\\nThis then triggers a complaint off the io-wq exit path:\\n\\nINFO: task syz.4.135:6326 blocked for more than 143 seconds.\\n Not tainted syzkaller #0\\n Blocked by coredump.\\n\\\"echo 0 \u003e /proc/sys/kernel/hung_task_timeout_secs\\\" disables this message.\\ntask:syz.4.135 state:D stack:26824 pid:6326 tgid:6324 ppid:5957 task_flags:0x400548 flags:0x00080000\\nCall Trace:\\n \u003cTASK\u003e\\n context_switch kernel/sched/core.c:5256 [inline]\\n __schedule+0x1139/0x6150 kernel/sched/core.c:6863\\n __schedule_loop kernel/sched/core.c:6945 [inline]\\n schedule+0xe7/0x3a0 kernel/sched/core.c:6960\\n schedule_timeout+0x257/0x290 kernel/time/sleep_timeout.c:75\\n do_wait_for_common kernel/sched/completion.c:100 [inline]\\n __wait_for_common+0x2fc/0x4e0 kernel/sched/completion.c:121\\n io_wq_exit_workers io_uring/io-wq.c:1328 [inline]\\n io_wq_put_and_exit+0x271/0x8a0 io_uring/io-wq.c:1356\\n io_uring_clean_tctx+0x10d/0x190 io_uring/tctx.c:203\\n io_uring_cancel_generic+0x69c/0x9a0 io_uring/cancel.c:651\\n io_uring_files_cancel include/linux/io_uring.h:19 [inline]\\n do_exit+0x2ce/0x2bd0 kernel/exit.c:911\\n do_group_exit+0xd3/0x2a0 kernel/exit.c:1112\\n get_signal+0x2671/0x26d0 kernel/signal.c:3034\\n arch_do_signal_or_restart+0x8f/0x7e0 arch/x86/kernel/signal.c:337\\n __exit_to_user_mode_loop kernel/entry/common.c:41 [inline]\\n exit_to_user_mode_loop+0x8c/0x540 kernel/entry/common.c:75\\n __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]\\n syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]\\n syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]\\n syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]\\n do_syscall_64+0x4ee/0xf80 arch/x86/entry/syscall_64.c:100\\n entry_SYSCALL_64_after_hwframe+0x77/0x7f\\nRIP: 0033:0x7fa02738f749\\nRSP: 002b:00007fa0281ae0e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca\\nRAX: fffffffffffffe00 RBX: 00007fa0275e6098 RCX: 00007fa02738f749\\nRDX: 0000000000000000 RSI: 0000000000000080 RDI: 00007fa0275e6098\\nRBP: 00007fa0275e6090 R08: 0000000000000000 R09: 0000000000000000\\nR10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000\\nR13: 00007fa0275e6128 R14: 00007fff14e4fcb0 R15: 00007fff14e4fd98\\n\\nThere\u0027s really nothing wrong here, outside of processing these reads\\nwill take a LONG time. 
However, we can speed up the exit by checking the\\nIO_WQ_BIT_EXIT inside the io_worker_handle_work() loop, as syzbot will\\nexit the ring after queueing up all of these reads. Then once the first\\nitem is processed, io-wq will simply cancel the rest. That should avoid\\nsyzbot running into this complaint again.\"}],\"metrics\":{},\"references\":[{\"url\":\"https://git.kernel.org/stable/c/10dc959398175736e495f71c771f8641e1ca1907\",\"source\":\"416baaa9-dc9f-4396-8d5f-8c081fb06d67\"},{\"url\":\"https://git.kernel.org/stable/c/2e8ca1078b14142db2ce51cbd18ff9971560046b\",\"source\":\"416baaa9-dc9f-4396-8d5f-8c081fb06d67\"},{\"url\":\"https://git.kernel.org/stable/c/85eb83694a91c89d9abe615d717c0053c3efa714\",\"source\":\"416baaa9-dc9f-4396-8d5f-8c081fb06d67\"},{\"url\":\"https://git.kernel.org/stable/c/bdf0bf73006ea8af9327cdb85cfdff4c23a5f966\",\"source\":\"416baaa9-dc9f-4396-8d5f-8c081fb06d67\"}]}}"
}
}
Sightings
| Author | Source | Type | Date |
|---|---|---|---|
Nomenclature
- Seen: The vulnerability was mentioned, discussed, or observed by the user.
- Confirmed: The vulnerability has been validated from an analyst's perspective.
- Published Proof of Concept: A public proof of concept is available for this vulnerability.
- Exploited: The vulnerability was observed as exploited by the user who reported the sighting.
- Patched: The vulnerability was observed as successfully patched by the user who reported the sighting.
- Not exploited: The vulnerability was not observed as exploited by the user who reported the sighting.
- Not confirmed: The user expressed doubt about the validity of the vulnerability.
- Not patched: The vulnerability was not observed as successfully patched by the user who reported the sighting.