Bug 198661
| Summary: | KASAN: add checks to DMA transfers | | |
|---|---|---|---|
| Product: | Memory Management | Reporter: | Dmitry Vyukov (dvyukov) |
| Component: | Sanitizers | Assignee: | Dmitry Vyukov (dvyukov) |
| Status: | NEW --- | | |
| Severity: | enhancement | CC: | andreyknvl, arnd, dvyukov, kasan-dev |
| Priority: | P1 | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Kernel Version: | ALL | Subsystem: | |
| Regression: | No | Bisected commit-id: | |
Description
Dmitry Vyukov
2018-02-04 11:07:43 UTC
There was also a related request from kernel people during LPC: allow temporarily marking certain ranges of memory as inaccessible to the kernel, because only the device is supposed to access those ranges via DMA. Technically this can already be done via kasan_un/poison_pages or other related functions, but we might want to provide some DMA-specific wrappers. We also need to document this use case for the interface.

+Arnd, do you know where exactly DMA memory needs to be poisoned and unpoisoned for KASAN? Or do you know who may know?

The functions that need an annotation are the eight "streaming mapping" ones in kernel/dma/mapping.c:

- dma_map_page_attrs()
- dma_unmap_page_attrs()
- __dma_map_sg_attrs()
- dma_unmap_sg_attrs()
- dma_sync_single_for_cpu()
- dma_sync_single_for_device()
- dma_sync_sg_for_cpu()
- dma_sync_sg_for_device()

In all cases, the "map" and "_for_device" functions transfer ownership to the device and would poison the memory, while the "unmap" and "_for_cpu" functions transfer buffer ownership back and need to unpoison the buffers. Any access from the CPU to the data between the calls is a bug, and so is leaving the calls unpaired.

It appears that we already have kmsan hooks in there from 7ade4f10779c ("dma: kmsan: unpoison DMA mappings"), but I suspect these are wrong, because they mix up the "direction" bits with the ownership, and they only unpoison but never poison the buffers.

The size of the poisoned area should *probably* extend to full cache lines for short buffers, rounding the address down to ARCH_DMA_MINALIGN and rounding the size up to the next ARCH_DMA_MINALIGN boundary. When unpoisoning, the area should only cover the actual buffer that the DMA wrote to in the DMA_FROM_DEVICE and DMA_BIDIRECTIONAL cases; any unaligned data around that buffer is technically undefined after the DMA has completed, so it makes sense to treat it as still poisoned. For DMA_TO_DEVICE transfers, the partial cache lines around the buffer remain valid after the transfer, but writing to them while DMA is ongoing corrupts the data inside the buffer as seen by the device.

Thanks Arnd, this is useful. I see that dma_sync_single_for_device() accepts a dma_addr_t, and the comment says:

```c
/*
 * A dma_addr_t can hold any valid DMA address, i.e., any address returned
 * by the DMA API.
 *
 * If the DMA API only uses 32-bit addresses, dma_addr_t need only be 32
 * bits wide. Bus addresses, e.g., PCI BARs, may be wider than 32 bits,
 * but drivers do memory-mapped I/O to ioremapped kernel virtual addresses,
 * so they don't care about the size of the actual bus addresses.
 */
typedef u64 dma_addr_t;
```

So these are not the actual physical/virtual addresses that the kernel itself will use for that memory, right? This looks problematic because we need to poison/unpoison the kernel physical/virtual addresses, right?

I see that the "sg" versions accept a scatterlist, and the scatterlist has the kernel address info IIUC (it looks like page_link is a struct page * with mangled low bits, so at least we can infer physical addresses for these):

```c
struct scatterlist {
	unsigned long	page_link;
	unsigned int	offset;
	unsigned int	length;
	/* ... */
};
```

Right, this is annoying, so it can't be done at the top level of the interface but has to be done separately based on the implementation (direct or iommu). On most architectures this means dma_direct_sync_single_for_{device,cpu}() and iommu_dma_sync_single_for_{device,cpu}(), which convert the DMA address into a physical address in different ways.
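To make that concrete, here is a minimal sketch of what such a hook could look like at the dma-direct level, once the physical address is known. kasan_poison_range()/kasan_unpoison_range() are assumed wrapper names (the existing kasan_un/poison_pages or related primitives would back them), and the exact call sites are still an open question; this only illustrates the ARCH_DMA_MINALIGN rounding and the ownership rules described above, it is not existing kernel code:

```c
#include <linux/align.h>
#include <linux/cache.h>		/* ARCH_DMA_MINALIGN (where defined) */
#include <linux/dma-direction.h>
#include <linux/io.h>			/* phys_to_virt() */
#include <linux/types.h>

/* Assumed wrappers around KASAN's poisoning primitives, not existing API. */
void kasan_poison_range(const void *addr, size_t size);
void kasan_unpoison_range(const void *addr, size_t size);

/* map / sync_for_device: the device takes ownership of the buffer. */
void kasan_dma_map(phys_addr_t phys, size_t size)
{
	/*
	 * Round out to whole ARCH_DMA_MINALIGN lines: the partial lines
	 * around a short buffer must not be touched by the CPU while the
	 * DMA is in flight either.
	 */
	phys_addr_t start = ALIGN_DOWN(phys, ARCH_DMA_MINALIGN);
	phys_addr_t end = ALIGN(phys + size, ARCH_DMA_MINALIGN);

	kasan_poison_range(phys_to_virt(start), end - start);
}

/* unmap / sync_for_cpu: ownership returns to the CPU. */
void kasan_dma_unmap(phys_addr_t phys, size_t size,
		     enum dma_data_direction dir)
{
	if (dir == DMA_TO_DEVICE) {
		/* The padding around the buffer is still valid afterwards. */
		phys_addr_t start = ALIGN_DOWN(phys, ARCH_DMA_MINALIGN);
		phys_addr_t end = ALIGN(phys + size, ARCH_DMA_MINALIGN);

		kasan_unpoison_range(phys_to_virt(start), end - start);
	} else {
		/*
		 * DMA_FROM_DEVICE / DMA_BIDIRECTIONAL: only the buffer
		 * itself is defined after the transfer, so keep the
		 * unaligned bytes around it poisoned.
		 */
		kasan_unpoison_range(phys_to_virt(phys), size);
	}
}
```

The asymmetry between poison and unpoison follows the reasoning above: the rounded-out padding stays poisoned after a DMA_FROM_DEVICE transfer because its contents are undefined, while after a DMA_TO_DEVICE transfer the padding is still valid.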
On a couple of architectures (alpha, arm, mips, parisc, powerpc, sparc and x86), there are additional custom dma_map_ops, but most of these don't do KASAN or (for s390 and um) don't support the dma_mapping interface. To try things out, it's probably sufficient to ignore the custom operations.
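For the scatterlist-based variants, the existing accessors in <linux/scatterlist.h> already decode page_link, so the physical (and, for lowmem, the kernel virtual) address can be recovered without stripping the mangled low bits by hand. A sketch, reusing the assumed kasan_dma_map() helper from above:

```c
#include <linux/scatterlist.h>
#include <linux/types.h>

/* Assumed helper from the earlier sketch, not existing API. */
void kasan_dma_map(phys_addr_t phys, size_t size);

static void kasan_dma_map_sg(struct scatterlist *sgl, int nents)
{
	struct scatterlist *sg;
	int i;

	for_each_sg(sgl, sg, nents, i) {
		/*
		 * sg_phys() is page_to_phys(sg_page(sg)) + sg->offset;
		 * sg_virt() would give the kernel virtual address for
		 * pages in the linear map.
		 */
		kasan_dma_map(sg_phys(sg), sg->length);
	}
}
```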
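For context, this is the ownership rule the hooks would enforce, as seen from a driver using the streaming API; the device, buffer, and the start/wait helpers below are placeholders, not real functions:

```c
#include <linux/dma-mapping.h>
#include <linux/errno.h>

/* Placeholders for real driver code that programs and waits for the DMA. */
void start_device_dma(struct device *dev, dma_addr_t handle, size_t len);
void wait_for_device_dma(struct device *dev);

static int example_rx(struct device *dev, void *buf, size_t len)
{
	dma_addr_t handle;

	handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;

	/*
	 * The device owns the buffer from here on: any CPU load or store
	 * to buf is a bug that the proposed poisoning would report.
	 */
	start_device_dma(dev, handle, len);
	wait_for_device_dma(dev);

	dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);

	/* Ownership is back with the CPU; buf may be accessed again. */
	return 0;
}
```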