GHSA-p247-4v6m-f77m
Vulnerability from GitHub
In the Linux kernel, the following vulnerability has been resolved:
mm/slub: add missing TID updates on slab deactivation
The fastpath in slab_alloc_node() assumes that c->slab is stable as long as the TID stays the same. However, two places in __slab_alloc() currently don't update the TID when deactivating the CPU slab.
If multiple operations race the right way, this could lead to an object getting lost; or, in an even more unlikely situation, it could even lead to an object being freed onto the wrong slab's freelist, messing up the `inuse` counter and eventually causing a page to be freed to the page allocator while it still contains slab objects.
(I haven't actually tested these cases though, this is just based on looking at the code. Writing testcases for this stuff seems like it'd be a pain...)
The race leading to state inconsistency is (all operations on the same CPU and kmem_cache):

- task A: begin do_slab_free():
  - read TID
  - read pcpu freelist (==NULL)
  - check `slab == c->slab` (true)
- [PREEMPT A->B]
- task B: begin slab_alloc_node():
  - fastpath fails (`c->freelist` is NULL)
  - enter __slab_alloc()
  - slub_get_cpu_ptr() (disables preemption)
  - enter ___slab_alloc()
    - take local_lock_irqsave()
    - read c->freelist as NULL
    - get_freelist() returns NULL
    - write `c->slab = NULL`
    - drop local_unlock_irqrestore()
    - goto new_slab
    - slub_percpu_partial() is NULL
    - get_partial() returns NULL
  - slub_put_cpu_ptr() (enables preemption)
- [PREEMPT B->A]
- task A: finish do_slab_free():
  - this_cpu_cmpxchg_double() succeeds
  - [CORRUPT STATE: c->slab==NULL, c->freelist!=NULL]
From there, the object on c->freelist will get lost if task B is allowed to continue from here: It will proceed to the retry_load_slab label, set c->slab, then jump to load_freelist, which clobbers c->freelist.
But if we instead continue as follows, we get worse corruption:
- task A: run __slab_free() on object from other struct slab:
- CPU_PARTIAL_FREE case (slab was on no list, is now on pcpu partial)
- task A: run slab_alloc_node() with NUMA node constraint:
- fastpath fails (c->slab is NULL)
- call __slab_alloc()
- slub_get_cpu_ptr() (disables preemption)
- enter ___slab_alloc()
- c->slab is NULL: goto new_slab
- slub_percpu_partial() is non-NULL
- set c->slab to slub_percpu_partial(c)
- [CORRUPT STATE: c->slab points to slab-1, c->freelist has objects from slab-2]
- goto redo
- node_match() fails
- goto deactivate_slab
- existing c->freelist is passed into deactivate_slab()
- inuse count of slab-1 is decremented to account for object from slab-2
At this point, the inuse count of slab-1 is 1 lower than it should be. This means that if we free all allocated objects in slab-1 except for one, SLUB will think that slab-1 is completely unused, and may free its page, leading to use-after-free.
Details

- ID: GHSA-p247-4v6m-f77m
- Aliases: CVE-2022-49700
- CWE: CWE-416 (Use After Free)
- Severity: HIGH (CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H)
- GitHub reviewed: no
- NVD published: 2025-02-26T07:01:44Z
- Published: 2025-02-27T21:32:15Z
- Modified: 2025-02-27T21:32:15Z

References

- ADVISORY: https://nvd.nist.gov/vuln/detail/CVE-2022-49700
- WEB: https://git.kernel.org/stable/c/0515cc9b6b24877f59b222ade704bfaa42caa2a6
- WEB: https://git.kernel.org/stable/c/197e257da473c725dfe47759c3ee02f2398d8ea5
- WEB: https://git.kernel.org/stable/c/308c6d0e1f200fd26c71270c6e6bfcf0fc6ff082
- WEB: https://git.kernel.org/stable/c/6c32496964da0dc230cea763a0e934b2e02dabd5
- WEB: https://git.kernel.org/stable/c/d6a597450e686d4c6388bd3cdcb17224b4dae7f0
- WEB: https://git.kernel.org/stable/c/e2b2f0e2e34d71ae6c2a1114fd3c525930e84bc7
- WEB: https://git.kernel.org/stable/c/e7e3e90d671078455a3a08189f89d85b3da2de9e
- WEB: https://git.kernel.org/stable/c/eeaa345e128515135ccb864c04482180c08e3259
Sightings

Author | Source | Type | Date
---|---|---|---
Nomenclature
- Seen: The vulnerability was mentioned, discussed, or seen somewhere by the user.
- Confirmed: The vulnerability is confirmed from an analyst perspective.
- Exploited: This vulnerability was exploited and seen by the user reporting the sighting.
- Patched: This vulnerability was successfully patched by the user reporting the sighting.
- Not exploited: This vulnerability was not exploited or seen by the user reporting the sighting.
- Not confirmed: The user expresses doubt about the veracity of the vulnerability.
- Not patched: This vulnerability was not successfully patched by the user reporting the sighting.