fkie_cve-2025-39844
Vulnerability from fkie_nvd
Published: 2025-09-19 16:15
Modified: 2025-09-22 21:23
Severity: not yet assigned (awaiting analysis)
Summary
In the Linux kernel, the following vulnerability has been resolved:
mm: move page table sync declarations to linux/pgtable.h
During our internal testing, we started observing intermittent boot
failures when the machine uses 4-level paging and has a large amount of
persistent memory:
BUG: unable to handle page fault for address: ffffe70000000034
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
PGD 0 P4D 0
Oops: 0002 [#1] SMP NOPTI
RIP: 0010:__init_single_page+0x9/0x6d
Call Trace:
<TASK>
__init_zone_device_page+0x17/0x5d
memmap_init_zone_device+0x154/0x1bb
pagemap_range+0x2e0/0x40f
memremap_pages+0x10b/0x2f0
devm_memremap_pages+0x1e/0x60
dev_dax_probe+0xce/0x2ec [device_dax]
dax_bus_probe+0x6d/0xc9
[... snip ...]
</TASK>
It turns out that the kernel panics while initializing vmemmap (struct
page array) when the vmemmap region spans two PGD entries, because the new
PGD entry is only installed in init_mm.pgd, but not in the page tables of
other tasks.
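For context: on x86-64, each mm has its own PGD page whose kernel half is copied from init_mm when the mm is created, so a kernel PGD entry installed later must be propagated to every other PGD by hand. Below is a simplified, hedged sketch of that propagation step, modeled loosely on sync_global_pgds() in arch/x86/mm/init_64.c (locking, paravirt, and 5-level paging details omitted):

/*
 * Sketch only: copy newly installed kernel PGD entries from init_mm
 * into every other pgd in the system (x86 links them via pgd_list),
 * so that all tasks can see the new mapping.
 */
static void sync_global_pgds_sketch(unsigned long start, unsigned long end)
{
        unsigned long addr;

        for (addr = start; addr <= end; addr = ALIGN(addr + 1, PGDIR_SIZE)) {
                pgd_t *pgd_ref = pgd_offset_k(addr);    /* entry in init_mm */
                struct page *page;

                if (pgd_none(*pgd_ref))
                        continue;

                list_for_each_entry(page, &pgd_list, lru) {
                        pgd_t *pgd = (pgd_t *)page_address(page) + pgd_index(addr);

                        if (pgd_none(*pgd))
                                set_pgd(pgd, *pgd_ref); /* propagate */
                }
        }
}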
And looking at __populate_section_memmap():
  if (vmemmap_can_optimize(altmap, pgmap))
          // does not sync top level page tables
          r = vmemmap_populate_compound_pages(pfn, start, end, nid, pgmap);
  else
          // sync top level page tables in x86
          r = vmemmap_populate(start, end, nid, altmap);
In the normal path, vmemmap_populate() in arch/x86/mm/init_64.c
synchronizes the top level page table (See commit 9b861528a801 ("x86-64,
mem: Update all PGDs for direct mapping and vmemmap mapping changes")) so
that all tasks in the system can see the new vmemmap area.
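Concretely, the x86 path ends with that sync step; here is a paraphrased sketch of vmemmap_populate() from arch/x86/mm/init_64.c (error handling and some branches elided):

/* Paraphrased sketch: after populating the vmemmap range, x86
 * explicitly syncs the top-level entries into all page tables. */
int __meminit vmemmap_populate(unsigned long start, unsigned long end,
                               int node, struct vmem_altmap *altmap)
{
        int err;

        if (boot_cpu_has(X86_FEATURE_PSE))
                err = vmemmap_populate_hugepages(start, end, node, altmap);
        else
                err = vmemmap_populate_basepages(start, end, node, NULL);

        if (!err)
                sync_global_pgds(start, end - 1);       /* the crucial step */
        return err;
}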
However, when vmemmap_can_optimize() returns true, the optimized path
skips synchronization of top-level page tables. This is because
vmemmap_populate_compound_pages() is implemented in core MM code, which
does not handle synchronization of the top-level page tables. Instead,
the core MM has historically relied on each architecture to perform this
synchronization manually.
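The PGD-installing helper on that optimized path looks roughly like this (paraphrased from vmemmap_pgd_populate() in mm/sparse-vmemmap.c); note that nothing here propagates the new entry beyond init_mm:

/* Paraphrased sketch: the entry lands only in init_mm.pgd; no
 * architecture-specific synchronization happens at this point. */
pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
{
        pgd_t *pgd = pgd_offset_k(addr);

        if (pgd_none(*pgd)) {
                void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);

                if (!p)
                        return NULL;
                pgd_populate(&init_mm, pgd, p); /* init_mm only */
        }
        return pgd;
}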
We're not the first party to encounter a crash caused by unsynchronized
top-level page tables: earlier this year, Gwan-gyeong Mun attempted to address
the issue [1] [2] after hitting a kernel panic when x86 code accessed the
vmemmap area before the corresponding top-level entries were synced. At
that time, the issue was believed to be triggered only when struct page
was enlarged for debugging purposes, and the patch did not get further
updates.
It turns out that the current approach of relying on each arch to handle the
page table sync manually is fragile, because 1) it's easy to forget to sync
the top-level page table, and 2) it's also easy to overlook that the
kernel must not access the vmemmap and direct-mapping areas before the
sync.
# The solution: make the page table sync code more robust and harder to miss
To address this, Dave Hansen suggested [3] [4] introducing
{pgd,p4d}_populate_kernel() for updating the kernel portion of the page
tables, allowing each architecture to explicitly perform synchronization
when installing top-level entries. With this approach, we no longer need
to worry about missing the sync step, reducing the risk of future
regressions.
The new interface reuses the existing ARCH_PAGE_TABLE_SYNC_MASK,
PGTBL_P*D_MODIFIED, and arch_sync_kernel_mappings() facilities that
vmalloc and ioremap already use to synchronize page tables.
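An architecture opts in by defining ARCH_PAGE_TABLE_SYNC_MASK and implementing arch_sync_kernel_mappings(). A hedged sketch of what the x86-64 side could look like under this scheme follows; the mask value here is an assumption, since the description does not show the arch-specific hunks:

/* Sketch only: the mask declares which page-table levels need a sync
 * when modified; the callback performs the actual propagation. */
#define ARCH_PAGE_TABLE_SYNC_MASK       (PGTBL_PGD_MODIFIED | PGTBL_P4D_MODIFIED)

void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
{
        sync_global_pgds(start, end);   /* as sketched earlier */
}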
pgd_populate_kernel() looks like this:
static inline void pgd_populate_kernel(unsigned long addr, pgd_t *pgd,
                                       p4d_t *p4d)
{
        pgd_populate(&init_mm, pgd, p4d);
        if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)
                arch_sync_kernel_mappings(addr, addr);
}
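The p4d-level counterpart presumably follows the same pattern; this sketch simply mirrors the pgd variant above (on 4-level kernels the p4d level is folded, so the PGD-level hook is the one that matters):

/* Sketch mirroring pgd_populate_kernel() one level down. */
static inline void p4d_populate_kernel(unsigned long addr, p4d_t *p4d,
                                       pud_t *pud)
{
        p4d_populate(&init_mm, p4d, pud);
        if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_P4D_MODIFIED)
                arch_sync_kernel_mappings(addr, addr);
}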
It is worth noting that vmalloc() and apply_to_range() carefully
synchronize page tables by calling p*d_alloc_track() and
arch_sync_kernel_mappings(), and thus they are not affected by
---truncated---
References
- https://git.kernel.org/stable/c/469f9d22751472b81eaaf8a27fcdb5a70741c342
- https://git.kernel.org/stable/c/4f7537772011fad832f83d6848f8eab282545bef
- https://git.kernel.org/stable/c/6797a8b3f71b2cb558b8771a03450dc3e004e453
- https://git.kernel.org/stable/c/732e62212f49d549c91071b4da7942ee3058f7a2
- https://git.kernel.org/stable/c/7cc183f2e67d19b03ee5c13a6664b8c6cc37ff9d
- https://git.kernel.org/stable/c/eceb44e1f94bd641b2a4e8c09b64c797c4eabc15
Impacted products
No product data is listed yet (the entry is awaiting analysis).
{ "cveTags": [], "descriptions": [ { "lang": "en", "value": "In the Linux kernel, the following vulnerability has been resolved:\n\nmm: move page table sync declarations to linux/pgtable.h\n\nDuring our internal testing, we started observing intermittent boot\nfailures when the machine uses 4-level paging and has a large amount of\npersistent memory:\n\n BUG: unable to handle page fault for address: ffffe70000000034\n #PF: supervisor write access in kernel mode\n #PF: error_code(0x0002) - not-present page\n PGD 0 P4D 0 \n Oops: 0002 [#1] SMP NOPTI\n RIP: 0010:__init_single_page+0x9/0x6d\n Call Trace:\n \u003cTASK\u003e\n __init_zone_device_page+0x17/0x5d\n memmap_init_zone_device+0x154/0x1bb\n pagemap_range+0x2e0/0x40f\n memremap_pages+0x10b/0x2f0\n devm_memremap_pages+0x1e/0x60\n dev_dax_probe+0xce/0x2ec [device_dax]\n dax_bus_probe+0x6d/0xc9\n [... snip ...]\n \u003c/TASK\u003e\n\nIt turns out that the kernel panics while initializing vmemmap (struct\npage array) when the vmemmap region spans two PGD entries, because the new\nPGD entry is only installed in init_mm.pgd, but not in the page tables of\nother tasks.\n\nAnd looking at __populate_section_memmap():\n if (vmemmap_can_optimize(altmap, pgmap)) \n // does not sync top level page tables\n r = vmemmap_populate_compound_pages(pfn, start, end, nid, pgmap);\n else \n // sync top level page tables in x86\n r = vmemmap_populate(start, end, nid, altmap);\n\nIn the normal path, vmemmap_populate() in arch/x86/mm/init_64.c\nsynchronizes the top level page table (See commit 9b861528a801 (\"x86-64,\nmem: Update all PGDs for direct mapping and vmemmap mapping changes\")) so\nthat all tasks in the system can see the new vmemmap area.\n\nHowever, when vmemmap_can_optimize() returns true, the optimized path\nskips synchronization of top-level page tables. This is because\nvmemmap_populate_compound_pages() is implemented in core MM code, which\ndoes not handle synchronization of the top-level page tables. Instead,\nthe core MM has historically relied on each architecture to perform this\nsynchronization manually.\n\nWe\u0027re not the first party to encounter a crash caused by not-sync\u0027d top\nlevel page tables: earlier this year, Gwan-gyeong Mun attempted to address\nthe issue [1] [2] after hitting a kernel panic when x86 code accessed the\nvmemmap area before the corresponding top-level entries were synced. At\nthat time, the issue was believed to be triggered only when struct page\nwas enlarged for debugging purposes, and the patch did not get further\nupdates.\n\nIt turns out that current approach of relying on each arch to handle the\npage table sync manually is fragile because 1) it\u0027s easy to forget to sync\nthe top level page table, and 2) it\u0027s also easy to overlook that the\nkernel should not access the vmemmap and direct mapping areas before the\nsync.\n\n# The solution: Make page table sync more code robust and harder to miss\n\nTo address this, Dave Hansen suggested [3] [4] introducing\n{pgd,p4d}_populate_kernel() for updating kernel portion of the page tables\nand allow each architecture to explicitly perform synchronization when\ninstalling top-level entries. 
With this approach, we no longer need to\nworry about missing the sync step, reducing the risk of future\nregressions.\n\nThe new interface reuses existing ARCH_PAGE_TABLE_SYNC_MASK,\nPGTBL_P*D_MODIFIED and arch_sync_kernel_mappings() facility used by\nvmalloc and ioremap to synchronize page tables.\n\npgd_populate_kernel() looks like this:\nstatic inline void pgd_populate_kernel(unsigned long addr, pgd_t *pgd,\n p4d_t *p4d)\n{\n pgd_populate(\u0026init_mm, pgd, p4d);\n if (ARCH_PAGE_TABLE_SYNC_MASK \u0026 PGTBL_PGD_MODIFIED)\n arch_sync_kernel_mappings(addr, addr);\n}\n\nIt is worth noting that vmalloc() and apply_to_range() carefully\nsynchronizes page tables by calling p*d_alloc_track() and\narch_sync_kernel_mappings(), and thus they are not affected by\n---truncated---" } ], "id": "CVE-2025-39844", "lastModified": "2025-09-22T21:23:01.543", "metrics": {}, "published": "2025-09-19T16:15:43.160", "references": [ { "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67", "url": "https://git.kernel.org/stable/c/469f9d22751472b81eaaf8a27fcdb5a70741c342" }, { "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67", "url": "https://git.kernel.org/stable/c/4f7537772011fad832f83d6848f8eab282545bef" }, { "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67", "url": "https://git.kernel.org/stable/c/6797a8b3f71b2cb558b8771a03450dc3e004e453" }, { "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67", "url": "https://git.kernel.org/stable/c/732e62212f49d549c91071b4da7942ee3058f7a2" }, { "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67", "url": "https://git.kernel.org/stable/c/7cc183f2e67d19b03ee5c13a6664b8c6cc37ff9d" }, { "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67", "url": "https://git.kernel.org/stable/c/eceb44e1f94bd641b2a4e8c09b64c797c4eabc15" } ], "sourceIdentifier": "416baaa9-dc9f-4396-8d5f-8c081fb06d67", "vulnStatus": "Awaiting Analysis" }