Bug 211817 - KASAN (hw-tags): optimize setting tags for large allocations
Summary: KASAN (hw-tags): optimize setting tags for large allocations
Status: RESOLVED CODE_FIX
Alias: None
Product: Memory Management
Classification: Unclassified
Component: Sanitizers
Hardware: All
OS: Linux
Importance: P1 normal
Assignee: MM/Sanitizers virtual assignee
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2021-02-17 21:03 UTC by Andrey Konovalov
Modified: 2021-08-10 22:48 UTC
CC List: 1 user

See Also:
Kernel Version: upstream
Subsystem:
Regression: No
Bisected commit-id:


Description Andrey Konovalov 2021-02-17 21:03:00 UTC
HW_TAGS KASAN currently uses the STG instruction to set memory tags for all allocations. Using STGM when possible (e.g. when a whole page is being poisoned) should be faster.
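
For context, the per-granule loop this bug targets: a sketch of mte_set_mem_tag_range() as it looked upstream at the time, reconstructed from the context lines of the diff quoted in comment 2. Each STG tags a single 16-byte MTE granule, so the loop runs size / 16 times regardless of how large the region is:

static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
{
	u64 curr, end;

	if (!size)
		return;

	curr = (u64)__tag_set(addr, tag);
	end = curr + size;

	do {
		/*
		 * 'asm volatile' is required to prevent the compiler from
		 * moving the STG outside of the loop.
		 */
		asm volatile(__MTE_PREAMBLE "stg %0, [%0]"
			     :
			     : "r" (curr)
			     : "memory");

		curr += MTE_GRANULE_SIZE;
	} while (curr != end);
}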
Comment 1 Andrey Konovalov 2021-02-18 19:53:48 UTC
> For MTE, we could look at optimising the poisoning code for page size to
> use STGM or DC GZVA but I don't think we can make it unnoticeable for
> large systems (especially with DC GZVA, that's like zeroing the whole
> RAM at boot).

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Comment 2 Andrey Konovalov 2021-02-26 01:34:39 UTC
From Catalin:

"""
A quick hack here if you can give it a try. It can be made more optimal,
maybe calling the set_mem_tag_page directly from kasan:

diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
index 7ab500e2ad17..b9b9ca1976eb 100644
--- a/arch/arm64/include/asm/mte-kasan.h
+++ b/arch/arm64/include/asm/mte-kasan.h
@@ -48,6 +48,20 @@ static inline u8 mte_get_random_tag(void)
 	return mte_get_ptr_tag(addr);
 }
 
+static inline void __mte_set_mem_tag_page(u64 curr, u64 end)
+{
+	u64 bs = 4 << (read_cpuid(DCZID_EL0) & 0xf);
+
+	do {
+		asm volatile(__MTE_PREAMBLE "dc gva, %0"
+			     :
+			     : "r" (curr)
+			     : "memory");
+
+		curr += bs;
+	} while (curr != end);
+}
+
 /*
  * Assign allocation tags for a region of memory based on the pointer tag.
  * Note: The address must be non-NULL and MTE_GRANULE_SIZE aligned and
@@ -63,6 +77,11 @@ static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
 	curr = (u64)__tag_set(addr, tag);
 	end = curr + size;
 
+	if (IS_ALIGNED((unsigned long)addr, PAGE_SIZE) && size == PAGE_SIZE) {
+		__mte_set_mem_tag_page(curr, end);
+		return;
+	}
+
 	do {
 		/*
 		 * 'asm volatile' is required to prevent the compiler to move
"""
Comment 3 Andrey Konovalov 2021-08-10 22:48:24 UTC
Implemented by commit 3d0cca0b02ac ("kasan: speed up mte_set_mem_tag_range").
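
For readers following along, a minimal sketch of how the page-only fast path above generalizes to arbitrary large ranges (an illustration only, not the code merged in 3d0cca0b02ac): tag the unaligned head and tail per granule with STG, and the block-aligned middle with one DC GVA per block. ALIGN(), ALIGN_DOWN() and min() are the usual kernel helpers; the sketch assumes the DC GVA block size is at least MTE_GRANULE_SIZE, which holds on real implementations.

static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
{
	u64 curr, end, blk, blk_start, blk_end;

	if (!size)
		return;

	curr = (u64)__tag_set(addr, tag);
	end = curr + size;

	/* DC GVA block size in bytes, from DCZID_EL0[3:0]. */
	blk = 4 << (read_cpuid(DCZID_EL0) & 0xf);
	blk_start = ALIGN(curr, blk);
	blk_end = ALIGN_DOWN(end, blk);

	/* Head: per-granule STG up to the first block boundary. */
	for (; curr < min(blk_start, end); curr += MTE_GRANULE_SIZE)
		asm volatile(__MTE_PREAMBLE "stg %0, [%0]"
			     : : "r" (curr) : "memory");

	/* Middle: one DC GVA tags a whole block at once. */
	for (; curr < blk_end; curr += blk)
		asm volatile(__MTE_PREAMBLE "dc gva, %0"
			     : : "r" (curr) : "memory");

	/* Tail: per-granule STG for what is left. */
	for (; curr < end; curr += MTE_GRANULE_SIZE)
		asm volatile(__MTE_PREAMBLE "stg %0, [%0]"
			     : : "r" (curr) : "memory");
}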
