Bug 214425
Summary: | [drm][amdgpu][TTM] Page pool memory never gets freed | |
---|---|---|---
Product: | Drivers | Reporter: | Martin Doucha (doucha)
Component: | Video(DRI - non Intel) | Assignee: | drivers_video-dri
Status: | NEW --- | |
Severity: | normal | CC: | auxsvr, rafael.ristovski, sh200105
Priority: | P1 | |
Hardware: | x86-64 | |
OS: | Linux | |
Kernel Version: | 5.14.3 | Subsystem: |
Regression: | No | Bisected commit-id: |
Description
Martin Doucha
2021-09-15 21:09:55 UTC
Comment 1 (Rafael Ristovski)

According to amdgpu devs, this is a feature where the allocated pages are kept around in case they are needed later on. TTM is able to release the memory in case the memory pressure increases. See the comment here: https://gitlab.freedesktop.org/drm/amd/-/issues/1942#note_1311016

Comment 2 (Martin Doucha)

(In reply to Rafael Ristovski from comment #1)
> According to amdgpu devs, this is a feature where the allocated pages are
> kept around in case they are needed later on. TTM is able to release the
> memory in case the memory pressure increases.

I understand the logic behind keeping idle buffers allocated for a while, but it does not make sense to keep them for hours after their last use, and the release mechanism on increased memory pressure does not seem to be working.

When I run a large compilation overnight, starting from a fresh reboot and shutting down all graphics software including the X server, I'll often come back in the morning to find that 70% of all RAM is allocated in idle TTM buffers and GCC is stuck swapping for hours. The TTM buffers were likely allocated by some GPU-accelerated build computation halfway through the night, but this is harder to reproduce than with the games I mentioned in the initial bug report.

Comment 3 (Rafael Ristovski)

(In reply to Martin Doucha from comment #2)

Indeed, I too run into situations where, even if I purposefully trigger an OOM situation just to get the TTM "cache" to evict itself through memory pressure, it still does not end up releasing all of the memory.

There are also the following two debugfs files; simply reading them triggers an eviction of VRAM and GTT respectively:

> cat /sys/kernel/debug/dri/0/amdgpu_evict_vram
> cat /sys/kernel/debug/dri/0/amdgpu_evict_gtt

This can be confirmed to work with tools like `radeontop`/`nvtop`. However, it once again does not release the TTM buffers.

As you can see in the issue I linked, I never got a reply about a mechanism to manually release TTM memory. I will try to coerce an answer on IRC; perhaps I will have better luck asking directly there.

For what it's worth, the following horrible incantation managed to release 2+ GB of TTM buffers on one of my machines, after I purposefully ran a VRAM-intensive game:

> for i in {1..1000}; do cat /sys/kernel/debug/ttm/page_pool_shrink; done

This seems to be the only debugfs mechanism that causes the memory to get released, and as of now I am not aware of a better and, above all, "cleaner" alternative.
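For anyone who wants to watch this happen, here is a small sketch of how the pool can be inspected while it is drained. It assumes there is a read-only `page_pool` statistics file next to `page_pool_shrink` and that each read of `page_pool_shrink` prints the number of pages it just freed; the file names and output format may differ between kernel versions, so treat this as untested:

> # Run as root with debugfs mounted. Show the cached pool pages first.
> cat /sys/kernel/debug/ttm/page_pool
> # Each read of page_pool_shrink frees one batch of cached pages and prints
> # how many pages were released, so repeat until it reports 0.
> until [ "$(cat /sys/kernel/debug/ttm/page_pool_shrink)" -eq 0 ]; do :; done
> # The pool statistics should now show little or nothing cached.
> cat /sys/kernel/debug/ttm/page_pool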
Newer kernel versions seem to feature https://www.kernel.org/doc/html/next/admin-guide/mm/shrinker_debugfs.html, which might be a better alternative, but I have not tested it yet and its usage is not exactly clear to me.
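In case it helps anyone else experimenting with it: from skimming that document, the interface would be used roughly as sketched below. This is untested; the shrinker directory name (`drm-ttm_pool-<id>`), the scan syntax, and the object count of 500000 are assumptions taken from the documentation, not something I have verified on this machine:

> # Requires a kernel built with CONFIG_SHRINKER_DEBUG and debugfs mounted.
> ls /sys/kernel/debug/shrinker/            # one directory per registered shrinker
> cat /sys/kernel/debug/shrinker/drm-ttm_pool-*/count   # freeable objects per memcg/node
> # Per the documentation, the scan file takes
> # "<cgroup inode id> <numa node id> <number of objects to scan>".
> # Reuse the cgroup inode id from the first column of the count output:
> ino=$(awk 'NR==1 {print $1}' /sys/kernel/debug/shrinker/drm-ttm_pool-*/count)
> echo "$ino 0 500000" > /sys/kernel/debug/shrinker/drm-ttm_pool-*/scan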