HW_TAGS KASAN currently uses the STG instruction to set memory tags for all allocations. STG only tags a single 16-byte MTE granule per instruction, so using a bulk-tagging instruction such as STGM when possible (e.g. when a whole page is being poisoned) should be faster.
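For context, a minimal sketch of the per-granule STG loop this targets (set_tags_stg is a hypothetical name used for illustration; the real loop lives in mte_set_mem_tag_range() in arch/arm64/include/asm/mte-kasan.h):

static inline void set_tags_stg(u64 curr, u64 end)
{
	/* STG writes the allocation tag of one 16-byte granule at [curr]. */
	do {
		asm volatile(__MTE_PREAMBLE "stg %0, [%0]"
			     :
			     : "r" (curr)
			     : "memory");

		curr += MTE_GRANULE_SIZE;
	} while (curr != end);
}

Poisoning a 4K page this way takes 4096 / 16 = 256 STG instructions.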
> For MTE, we could look at optimising the poisoning code for page size to
> use STGM or DC GZVA but I don't think we can make it unnoticeable for
> large systems (especially with DC GZVA, that's like zeroing the whole
> RAM at boot).

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
From Catalin:

"""
A quick hack here if you can give it a try. It can be made more optimal,
maybe calling the set_mem_tag_page directly from kasan:

diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
index 7ab500e2ad17..b9b9ca1976eb 100644
--- a/arch/arm64/include/asm/mte-kasan.h
+++ b/arch/arm64/include/asm/mte-kasan.h
@@ -48,6 +48,20 @@ static inline u8 mte_get_random_tag(void)
 	return mte_get_ptr_tag(addr);
 }
 
+static inline void __mte_set_mem_tag_page(u64 curr, u64 end)
+{
+	u64 bs = 4 << (read_cpuid(DCZID_EL0) & 0xf);
+
+	do {
+		asm volatile(__MTE_PREAMBLE "dc gva, %0"
+			     :
+			     : "r" (curr)
+			     : "memory");
+
+		curr += bs;
+	} while (curr != end);
+}
+
 /*
  * Assign allocation tags for a region of memory based on the pointer tag.
  * Note: The address must be non-NULL and MTE_GRANULE_SIZE aligned and
@@ -63,6 +77,11 @@ static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
 	curr = (u64)__tag_set(addr, tag);
 	end = curr + size;
 
+	if (IS_ALIGNED((unsigned long)addr, PAGE_SIZE) && size == PAGE_SIZE) {
+		__mte_set_mem_tag_page(curr, end);
+		return;
+	}
+
 	do {
 		/*
 		 * 'asm volatile' is required to prevent the compiler to move
"""
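A note on the block size computed in __mte_set_mem_tag_page() above: DC GVA tags one DC ZVA-sized block per operation, and DCZID_EL0[3:0] encodes that block size as log2 of the size in 4-byte words, hence bs = 4 << (DCZID_EL0 & 0xf) bytes. On a typical implementation with BS = 4 that is 64 bytes, so tagging a 4K page takes 4096 / 64 = 64 DC GVA operations instead of 256 STGs.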
Implemented with commit 3d0cca0b02ac ("kasan: speed up mte_set_mem_tag_range").