Created attachment 296939 [details]
config file that works otherwise
CPU: Ryzen 7 3700x
MB: Asrock X470 Taichi bios P4.70
Have been unable to boot 5.13 rc kernels but bisected the issue to this commit:
a799c2bd29d19c565f37fa038b31a0a1d44d0e4d is the first bad commit
Author: Mike Rapoport <email@example.com>
Date: Tue Mar 2 12:04:05 2021 +0200
x86/setup: Consolidate early memory reservations
When trying to boot, the system hangs with a black screen indefinitely.
I thought it might be an issue specific to my custom (and quite/overly lean) .config so I'll attach that too. If it's at all relevant, these options in it are the same as Ubuntu's defaults:
(Since the commit mentioned "x86/setup", I assume this is x86 specific so I hope I've filed this bug in the correct place!)
Can you please post the log of the latest working kernel with 'memblock=debug' added to the kernel command line?
Created attachment 296941 [details]
info from Ubuntu kernel 5.11.0-18
'dmesg | grep mem' info from Ubuntu kernel 5.11.0-18
This is probably the more helpful kernel as it still has some debugging stuff in it. (Also has hibernation support as the key difference in memory mapping).
Created attachment 296943 [details]
info from customised kernel 5.12 final
This kernel has a lot of debugging pulled out and no hibernation support etc.
Created attachment 296945 [details]
Full dmesg from ubuntu kernel 5.11.0-18
Created attachment 296947 [details]
I don't see anything that could give a clue in the logs.
The only idea I have for now is to partially revert a799c2bd29d1 and see if it helps. What I'm after is to find what of the reservations moved around causes the regression.
Interestingly perhaps, I double-checked that the patch was definitely applied successfully - looks like the partial revert of a799c2bd29d1 causes the issue too!
or rather, it doesn't solve the problem*
It seems I have some more digging to do(!) because I just tried the ubuntu mainline kernel 5.13rc2 and that works fine.
As I feared, something in my custom config in particular is no longer kosher after the reorganisation in the commit for memory reservations.
I thought it might be because I didn't have memory hotplug enabled, but even with it enabled
to match Ubuntu again, it's still a no-go. It's going to be a good while trying to narrow it down - any suggestions what kernel options may simply need to be turned on now, based on my 5.12 "config that works otherwise"? I'll report back once I solve the mystery ;)
Created attachment 296949 [details]
full dmesg of ubuntu mainline 5.13rc2 - working!
I don't think memory hotplug or hibernation options have anything to do with the hang. I'd rather look for EFI/BIOS related options.
Did you try completely reverting a799c2bd29d19c565f37fa038b31a0a1d44d0e4d from v5.13-rc2?
(There should be no conflicts if you first revert c361e5d4d07d63768880e1994c7ed999b3a94cd9)
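For anyone following along, the suggested revert order would look something like this (a sketch, assuming a v5.13-rc2 checkout; reverting the follow-up commit first lets the consolidation commit revert cleanly):

```
# In a v5.13-rc2 tree: revert the follow-up first, then the
# consolidation commit, so neither revert conflicts.
git revert c361e5d4d07d63768880e1994c7ed999b3a94cd9
git revert a799c2bd29d19c565f37fa038b31a0a1d44d0e4d
```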
Cause for celebration! - I've managed to narrow it down to the precise kernel config change which causes the inability to boot on the latest kernels.
So before the kernel commit a799c2bd29d19c565f37fa038b31a0a1d44d0e4d, booting worked fine with CONFIG_X86_RESERVE_LOW=640.
However, by default this is set to 64 (also the default for the Ubuntu kernels). Setting this back to 64, I can boot normally again!
I think in the past, I had set that to 640 to avoid any potential headaches, in line with the guidance for it: "If you know your BIOS [to] have problems beyond the
default 64K area, you can set this to 640 to avoid using the entire low memory range." I thought it would be better to give it all the breathing room it needed since there was plenty of memory to spare.
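For reference, the option in question is CONFIG_X86_RESERVE_LOW (under "Processor type and features"); a .config fragment with the default value, which boots fine here, would read:

```
# Amount of low memory, in kilobytes, reserved for the BIOS.
# 64 is the default; I had raised it to 640, which triggers the hang.
CONFIG_X86_RESERVE_LOW=64
```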
Perhaps this is reproducible on other hardware with it set to 640? If not, maybe similar effects can be seen on hardware setups like mine. An unintended consequence of the reservation reorganisation?
I have to imagine this is still a bug because some setups will necessitate having that set to 640 and may no longer be able to boot. Hopefully you can see a solution, I'm just happy to be able to test out the next couple of weeks' release candidates again!
I could reproduce it with qemu simply by setting reservelow=640k in the command line.
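For anyone else wanting to reproduce this without the affected hardware, a qemu invocation along these lines should do it (the bzImage path and console options here are placeholders, not from the original report; only reservelow=640k is the relevant part):

```
# Boot a post-a799c2bd29d1 kernel with the whole low range reserved;
# adjust the bzImage path to your build.
qemu-system-x86_64 -kernel arch/x86/boot/bzImage \
    -append "reservelow=640k console=ttyS0" -nographic
```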
The reason for the failure is that when we reserve all 640k very early, there is not enough memory left to allocate the real-mode trampoline.
We could move the reservation of the range between 64k and 640k until after the real-mode trampoline setup, but in a sense this contradicts CONFIG_X86_RESERVE_LOW=640 (or reservelow=640k). If the user requested to reserve the entire low memory range and we allocate the real-mode trampoline there anyway, there is a danger that the BIOS will corrupt the trampoline memory. OTOH, if we respect the user's request to keep the first 640k reserved, there is no room for the real-mode trampoline.
I think the best way to move forward is to lower the upper limit of CONFIG_X86_RESERVE_LOW to, say, 512K.
@Boris, what do you think?
They say the best bugs are the ones that get squashed. Thanks for your help, hopefully I can be somewhat useful again with the next bug haha
Hmm, probably it's because of the debug info. I think if you disable it in "Kernel Hacking" -> "Compile-time checks and compiler options" -> "Compile the kernel with debug info" in kernel configuration (e.g. make menuconfig) the size of the rpm will be much smaller.
(In reply to Mike Rapoport from comment #14)
> Hmm, probably it's because of the debug info. I think if you disable it in
> "Kernel Hacking" -> "Compile-time checks and compiler options" -> "Compile
> the kernel with debug info" in kernel configuration (e.g. make menuconfig)
> the size of the rpm will be much smaller.
sorry, this was posted to the wrong bug :)
5.13-rc5 should have the fix, can you test it to confirm please?
Confirming, all good with patch f1d4d47c5851b348b7713007e152bc68b94d728b for the use of CONFIG_X86_RESERVE_LOW=640.
5.13-rc5 is a 'go'!