Bug 212167

Summary: KASAN: don't proceed with invalid page_alloc and large kmalloc frees
Product: Memory Management
Reporter: Andrey Konovalov (andreyknvl)
Component: Sanitizers
Assignee: MM/Sanitizers virtual assignee (mm_sanitizers)
Status: NEW
Severity: normal
CC: kasan-dev
Priority: P1
Hardware: All
OS: Linux
Kernel Version: upstream
Regression: No

Description Andrey Konovalov 2021-03-09 13:40:30 UTC
For slab allocations, if a double-free/invalid-free is detected, KASAN doesn't proceed with freeing the object. It might be a good idea to do the same for page_alloc.
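
For reference, the slab-side behavior this refers to follows roughly the pattern below (simplified sketch, not verbatim kernel code; slab_free_one() is a made-up stand-in for the slab free hook, and the kasan_slab_free() argument list is abbreviated). kasan_slab_free() returns true when the object must not be reused, either because it went into the KASAN quarantine or because a double-free/invalid-free was just reported, and the slab allocator honors that verdict. page_alloc currently has no equivalent check.

/*
 * Simplified sketch of the existing slab-side pattern (illustrative only).
 */
static void slab_free_one(struct kmem_cache *s, void *object, bool init)
{
        if (kasan_slab_free(s, object, init))
                return;         /* bad or quarantined free: don't touch the freelist */

        /* ... put the object back on the freelist as usual ... */
}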
Comment 1 Andrey Konovalov 2021-03-09 13:48:47 UTC
A part of this change would be checking that the memory is accessible in kasan_free_pages(), which might be useful on its own.
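
A rough sketch of what such a check could look like (illustrative only: pages_accessible() is a made-up helper, and kasan_byte_accessible()/KASAN_GRANULE_SIZE are internal KASAN names whose details vary by mode):

/*
 * Sketch: before poisoning, verify the whole page range is currently
 * accessible; if any granule is already poisoned, this free is a
 * double-free or invalid-free and should be reported and rejected.
 */
static bool pages_accessible(struct page *page, unsigned int order)
{
        const void *addr = page_address(page);
        size_t size = PAGE_SIZE << order;
        size_t i;

        for (i = 0; i < size; i += KASAN_GRANULE_SIZE)
                if (!kasan_byte_accessible(addr + i))
                        return false;

        return true;
}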
Comment 2 Andrey Konovalov 2023-12-19 22:27:30 UTC
We also need to implement this for large kmalloc allocations (the ones that fall back onto page_alloc).
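
For context, the fallback path in question looks roughly like this (simplified sketch; exact code differs between kernel versions): kfree() notices that the memory is not backed by a slab and frees it through page_alloc, and kasan_kfree_large() is called on the way but currently cannot stop the free.

/* Simplified sketch of the large-kmalloc free path. */
static void free_large_kmalloc(struct folio *folio, void *object)
{
        kasan_kfree_large(object);      /* returns void today: no way to reject the free */
        __free_pages(folio_page(folio, 0), folio_order(folio));
}

void kfree(const void *object)
{
        struct folio *folio = virt_to_folio(object);

        if (!folio_test_slab(folio)) {
                /* large kmalloc: backed directly by page_alloc */
                free_large_kmalloc(folio, (void *)object);
                return;
        }

        /* ... regular slab free path ... */
}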
Comment 3 Andrey Konovalov 2024-01-10 02:57:14 UTC
Double-free and invalid-free checks have been implemented for page_alloc-backed mempool allocations (both page and large kmalloc) in the series [1].

We still need to add double-free/invalid-free detection to kasan_poison_pages, propagate the return values from kasan_kfree_large and kasan_poison_pages to slab/page_alloc, and make slab/page_alloc not reuse buggy objects.
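
Illustrative sketch of the missing propagation (the bool return values below do not exist in mainline yet, and the convention of true meaning "safe to proceed" is only an assumption here):

/* page_alloc side, e.g. somewhere in free_pages_prepare(): */
        if (!kasan_poison_pages(page, order, init))
                return false;   /* buggy page: don't put it on the free lists */

/* large kmalloc side, e.g. in free_large_kmalloc(): */
        if (!kasan_kfree_large(object))
                return;         /* buggy object: don't hand the pages back to page_alloc */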

Internally, for the page_alloc part, we should move the kasan_mempool_poison_pages implementation into kasan_poison_pages (except for the sampling check, which the page_alloc code already performs) and then reuse kasan_poison_pages in kasan_mempool_poison_pages.
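
Roughly, the refactor could look like this (sketch only: check_page_allocation() stands in for whatever internal helper performs the double-free/invalid-free check, argument lists are abbreviated, and the bool return is the change discussed above):

/* The check and the poisoning live in kasan_poison_pages()... */
bool __kasan_poison_pages(struct page *page, unsigned int order, bool init)
{
        void *ptr = page_address(page);

        if (!check_page_allocation(ptr, _RET_IP_))
                return false;   /* double-free/invalid-free reported; don't poison */

        kasan_poison(ptr, PAGE_SIZE << order, KASAN_PAGE_FREE, init);
        return true;
}

/*
 * ...and kasan_mempool_poison_pages() becomes a thin wrapper. The sampling
 * check is intentionally left out of kasan_poison_pages(), since page_alloc
 * already applies sampling before calling in.
 */
bool __kasan_mempool_poison_pages(struct page *page, unsigned int order)
{
        return __kasan_poison_pages(page, order, false);
}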

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=280ec6ccb6422aa4a04f9ac4216ddcf055acc95d