Created attachment 301825 [details] Configuration file used to build kernel using gcc 12.2.0 I initially thought this was related to KVM and had created ticket 216388 for the 5.19 kernel. However, in time it appeared to affect all sorts of different parts of the kernel. Further, it was not present in 5.18.19 and it was not present in the very first release of 6.0 up through 6.0rc4. I had it running on 28 different machines with hardware ranging from i7-6700k, i7-6850k, and i7-9700k CPU physical hosts to various KVM guests on those machines. And it ran perfectly for a week. Now 6.0rc5 is back to being very problematic just like 5.19 was. This is built with gcc 12.2. All of these occurred within minutes to hours after boot, after rc4 had run totally clean for a week. This is from a Fedora machine that is running as a KVM-Qemu guest, the physical host is an i7-6700k with 64GB of RAM and an Asus motherboard. [ 60.464791] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 5-.... } 3 jiffies s: 57 root: 0x20/. [ 60.464801] rcu: blocking rcu_node structures (internal RCU debug): [ 60.464805] Task dump for CPU 5: [ 60.464806] task:modprobe state:R running task stack: 0 pid: 674 ppid: 673 flags:0x0000400a [ 60.464814] Call Trace: [ 60.464817] <TASK> [ 60.464821] ? kmem_cache_alloc_lru+0x1ac/0x370 [ 60.464826] ? __wake_up_common_lock+0x88/0xc0 [ 60.464829] ? sysvec_apic_timer_interrupt+0x90/0xa0 [ 60.464833] ? asm_sysvec_apic_timer_interrupt+0x1b/0x20 [ 60.464836] ? load_new_mm_cr3+0x7f/0xe0 [ 60.464839] ? switch_mm_irqs_off+0x19e/0x570 [ 60.464841] ? x2apic_send_IPI_allbutself+0x2f/0x40 [ 60.464843] ? do_sync_core+0x2a/0x30 [ 60.464847] ? smp_call_function_many_cond+0xf5/0x3c0 [ 60.464850] ? text_poke_memset+0x20/0x20 [ 60.464853] ? on_each_cpu_cond_mask+0x1d/0x30 [ 60.464855] ? text_poke_bp_batch+0x1fb/0x210 [ 60.464859] ? xs_nospace+0x7c/0xb0 [sunrpc] [ 60.464909] ? __traceiter_rpc_socket_shutdown+0x50/0x50 [sunrpc] [ 60.464948] ? xs_nospace+0x7b/0xb0 [sunrpc] [ 60.465063] ? xs_tcp_shutdown+0xcd/0xe0 [sunrpc] [ 60.465109] ? text_poke_bp+0x59/0x80 [ 60.465112] ? __static_call_transform+0x7f/0x120 [ 60.465115] ? arch_static_call_transform+0x87/0xa0 [ 60.465118] ? __static_call_init+0x167/0x210 [ 60.465122] ? static_call_module_notify+0x13e/0x1a0 [ 60.465124] ? blocking_notifier_call_chain_robust+0x6f/0xd0 [ 60.465128] ? load_module+0x1f6b/0x2340 [ 60.465133] ? __do_sys_finit_module+0xbd/0x130 [ 60.465135] ? __do_sys_finit_module+0xbd/0x130 [ 60.465137] ? __x64_sys_finit_module+0x18/0x20 [ 60.465139] ? do_syscall_64+0x58/0x80 [ 60.465141] ? syscall_exit_to_user_mode+0x1b/0x40 [ 60.465143] ? do_syscall_64+0x67/0x80 [ 60.465144] ? switch_fpu_return+0x4e/0xc0 [ 60.465146] ? exit_to_user_mode_prepare+0x184/0x1e0 [ 60.465149] ? syscall_exit_to_user_mode+0x1b/0x40 [ 60.465151] ? do_syscall_64+0x67/0x80 [ 60.465152] ? entry_SYSCALL_64_after_hwframe+0x63/0xcd This is from a physical host i7-6700k based machine, it's not the same machine the above guest is running on but it is identical architecture running Ubuntu 22.04. [ 612.979888] INFO: task jbd2/md0p2-8:916 blocked for more than 122 seconds. [ 612.979894] Not tainted 6.0.0-rc5 #1 [ 612.979895] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 612.979896] task:jbd2/md0p2-8 state:D stack: 0 pid: 916 ppid: 2 flags:0x00004000 [ 612.979900] Call Trace: [ 612.979901] <TASK> [ 612.979904] __schedule+0x31f/0x1400 [ 612.979911] ? md_submit_bio+0x61/0xa0 [ 612.979913] ? __submit_bio+0x6d/0x170 [ 612.979916] ? 
submit_bio_noacct_nocheck+0xe5/0x2b0 [ 612.979919] schedule+0x5d/0xf0 [ 612.979921] io_schedule+0x46/0x80 [ 612.979924] bit_wait_io+0x11/0x80 [ 612.979926] __wait_on_bit+0x4e/0x120 [ 612.979929] ? out_of_line_wait_on_bit_timeout+0xc0/0xc0 [ 612.979932] out_of_line_wait_on_bit+0x8d/0xb0 [ 612.979935] ? group_init+0x190/0x190 [ 612.979938] __wait_on_buffer+0x29/0x30 [ 612.979941] jbd2_journal_commit_transaction+0x13d7/0x17e0 [ 612.979946] kjournald2+0xac/0x270 [ 612.979948] ? destroy_sched_domains_rcu+0x30/0x30 [ 612.979950] ? load_superblock.part.0+0xc0/0xc0 [ 612.979952] kthread+0xca/0xf0 [ 612.979954] ? kthread_complete_and_exit+0x20/0x20 [ 612.979957] ret_from_fork+0x1f/0x30 [ 612.979960] </TASK> [ 612.979962] INFO: task jbd2/md0p3-8:923 blocked for more than 122 seconds. [ 612.979963] Not tainted 6.0.0-rc5 #1 [ 612.979964] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 612.979965] task:jbd2/md0p3-8 state:D stack: 0 pid: 923 ppid: 2 flags:0x00004000 [ 612.979968] Call Trace: [ 612.979969] <TASK> [ 612.979970] __schedule+0x31f/0x1400 [ 612.979973] ? md_submit_bio+0x61/0xa0 [ 612.979974] ? __submit_bio+0x6d/0x170 [ 612.979976] ? submit_bio_noacct_nocheck+0xe5/0x2b0 [ 612.979979] schedule+0x5d/0xf0 [ 612.979981] io_schedule+0x46/0x80 [ 612.979984] bit_wait_io+0x11/0x80 [ 612.979986] __wait_on_bit+0x4e/0x120 [ 612.979989] ? out_of_line_wait_on_bit_timeout+0xc0/0xc0 [ 612.979991] out_of_line_wait_on_bit+0x8d/0xb0 [ 612.979994] ? group_init+0x190/0x190 [ 612.979996] __wait_on_buffer+0x29/0x30 [ 612.979998] jbd2_journal_commit_transaction+0x13d7/0x17e0 [ 612.980003] kjournald2+0xac/0x270 [ 612.980005] ? destroy_sched_domains_rcu+0x30/0x30 [ 612.980007] ? load_superblock.part.0+0xc0/0xc0 [ 612.980009] kthread+0xca/0xf0 [ 612.980010] ? kthread_complete_and_exit+0x20/0x20 [ 612.980013] ret_from_fork+0x1f/0x30 [ 612.980016] </TASK>[ 612.980073] INFO: task worker:4047 blocked for more than 122 seconds. [ 612.980075] Not tainted 6.0.0-rc5 #1 [ 612.980076] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 612.980077] task:worker state:D stack: 0 pid: 4047 ppid: 1 flags:0x00004002 [ 612.980079] Call Trace: [ 612.980080] <TASK> [ 612.980081] __schedule+0x31f/0x1400 [ 612.980084] ? timerqueue_del+0x31/0x50 [ 612.980087] schedule+0x5d/0xf0 [ 612.980090] io_schedule+0x46/0x80 [ 612.980092] folio_wait_bit_common+0x13e/0x380 [ 612.980096] ? filemap_invalidate_unlock_two+0x50/0x50 [ 612.980099] folio_wait_bit+0x18/0x20 [ 612.980101] folio_wait_writeback+0x2c/0x80 [ 612.980103] wait_on_page_writeback+0x18/0x50 [ 612.980105] __filemap_fdatawait_range+0x98/0x140 [ 612.980108] file_write_and_wait_range+0x7b/0x90 [ 612.980111] ext4_sync_file+0xf6/0x320 [ 612.980114] __x64_sys_fdatasync+0x4a/0x90 [ 612.980117] ? syscall_enter_from_user_mode+0x52/0x80 [ 612.980119] do_syscall_64+0x59/0x90 [ 612.980122] ? do_syscall_64+0x69/0x90 [ 612.980124] ? do_syscall_64+0x69/0x90 [ 612.980126] ? do_syscall_64+0x69/0x90 [ 612.980129] ? 
sysvec_call_function_single+0x4e/0x90 [ 612.980130] entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 612.980133] RIP: 0033:0x7f8e1151bb1b [ 612.980135] RSP: 002b:00007f8b9fffe5d0 EFLAGS: 00000293 ORIG_RAX: 000000000000004b [ 612.980138] RAX: ffffffffffffffda RBX: 00005633c09e2d70 RCX: 00007f8e1151bb1b [ 612.980139] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000000000000000b [ 612.980141] RBP: 00005633c0771c40 R08: 0000000000000000 R09: 0000000000000000 [ 612.980142] R10: 00007f8b9fffe5f0 R11: 0000000000000293 R12: 0000000000000000 [ 612.980143] R13: 00005633c0771ca8 R14: 00005633c161a470 R15: 00007f8bbcff83c0 [ 612.980146] </TASK> This is from a KVM-Qemu guest running Ubuntu with an i7-6700k host both the host and the guest running Ubuntu 22.04. [ 80.384043] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 4-.... } 3 jiffies s: 93 root: 0x10/. [ 80.384055] rcu: blocking rcu_node structures (internal RCU debug): [ 80.384059] Task dump for CPU 4: [ 80.384061] task:modprobe state:R running task stack: 0 pid: 736 ppid: 484 flags:0x0000400a [ 80.384068] Call Trace: [ 80.384076] <TASK> [ 80.384078] ? __alloc_pages+0x1da/0x12c0 [ 80.384082] ? kmem_cache_alloc_lru+0x18a/0x380 [ 80.384085] ? sysvec_apic_timer_interrupt+0x4e/0x90 [ 80.384087] ? asm_sysvec_apic_timer_interrupt+0x1b/0x20 [ 80.384090] ? __x2apic_send_IPI_mask+0x77/0x100 [ 80.384093] ? x2apic_send_IPI_mask+0x13/0x20 [ 80.384095] ? do_sync_core+0x2a/0x30 [ 80.384098] ? smp_call_function_many_cond+0x102/0x3d0 [ 80.384100] ? text_poke_memset+0x20/0x20 [ 80.384103] ? on_each_cpu_cond_mask+0x1d/0x30 [ 80.384104] ? text_poke_bp_batch+0x1df/0x210 [ 80.384108] ? xs_nospace+0x80/0xc0 [sunrpc] [ 80.384145] ? __traceiter_rpc_socket_shutdown+0x50/0x50 [sunrpc] [ 80.384172] ? xs_nospace+0x7f/0xc0 [sunrpc] [ 80.384196] ? text_poke_bp+0x49/0x70 [ 80.384199] ? __static_call_transform+0x7f/0x120 [ 80.384200] ? arch_static_call_transform+0x83/0xa0 [ 80.384203] ? xs_nospace+0x7f/0xc0 [sunrpc] [ 80.384226] ? __static_call_init.part.0+0x166/0x210 [ 80.384229] ? static_call_module_notify+0x13c/0x190 [ 80.384231] ? blocking_notifier_call_chain_robust+0x70/0xe0 [ 80.384234] ? load_module+0x1ca4/0x2230 [ 80.384239] ? __do_sys_finit_module+0xc8/0x140 [ 80.384240] ? __do_sys_finit_module+0xc8/0x140 [ 80.384243] ? __x64_sys_finit_module+0x18/0x20 [ 80.384245] ? do_syscall_64+0x59/0x90 [ 80.384248] ? ksys_mmap_pgoff+0x1 15/0x260 [ 80.384251] ? exit_to_user_mode_prepare+0x41/0x1e0 [ 80.384252] ? syscall_exit_to_user_mode+0x1c/0x40 [ 80.384254] ? do_syscall_64+0x69/0x90 [ 80.384256] ? do_syscall_64+0x69/0x90 [ 80.384258] ? irqentry_exit+0x1d/0x30 [ 80.384259] ? exc_page_fault+0x87/0x160 [ 80.384261] ? entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 80.384264] </TASK> This is from another KVM-QEMU guest on the same physical host as the above, like the previous guest and host, this is also running Ubuntu 22.04: [ 6.304512] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 0-.... } 3 jiffies s: 101 root: 0x1/. [ 6.304520] rcu: blocking rcu_node structures (internal RCU debug): [ 6.304524] Task dump for CPU 0: [ 6.304525] task:apparmor_parser state:R running task stack: 0 pid: 550 ppid: 534 flags:0x00000008 [ 6.304533] Call Trace: [ 6.304541] <TASK> [ 6.304544] ? longest_match+0xaa/0x230 [ 6.304550] ? deflate_slow+0x415/0x580 [ 6.304551] ? zlib_deflate+0xe1/0x570 [ 6.304553] ? aa_unpack+0x1169/0x1e50 [ 6.304557] ? aa_replace_profiles+0x98/0xfc0 [ 6.304559] ? policy_update+0xcc/0x160 [ 6.304561] ? profile_replace+0xac/0x130 [ 6.304562] ? 
vfs_write+0xc3/0x380 [ 6.304564] ? putname+0x59/0x70 [ 6.304567] ? ksys_write+0x67/0xf0 [ 6.304568] ? __x64_sys_write+0x19/0x20 [ 6.304570] ? do_syscall_64+0x59/0x90 [ 6.304572] ? exit_to_user_mode_prepare+0x184/0x1e0 [ 6.304574] ? syscall_exit_to_user_mode+0x1c/0x40 [ 6.304576] ? do_syscall_64+0x69/0x90 [ 6.304578] ? entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 6.304581] </TASK> These are all from another Ubuntu 22.04 KVM-Qemu guest on the same host: [ 12.937690] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 9-.... } 3 jiffies s: 61 root: 0x200/. [ 12.938442] rcu: blocking rcu_node structures (internal RCU debug): [ 12.938647] Task dump for CPU 9: [ 12.938649] task:systemd-udevd state:R running task stack: 0 pid: 533 ppid: 514 flags:0x0000400a [ 12.938656] Call Trace: [ 12.938662] <TASK> [ 12.939313] ? smp_call_function_many_cond+0x12e/0x3d0 [ 12.939475] ? text_poke_memset+0x20/0x20 [ 12.939517] ? enter_smm.constprop.0+0x555/0xa90 [kvm] [ 12.939986] ? on_each_cpu_cond_mask+0x1d/0x30 [ 12.939988] ? text_poke_bp_batch+0xae/0x210 [ 12.939991] ? vmx_set_cr0+0x16d0/0x16d0 [kvm_intel] [ 12.940923] ? enter_smm.constprop.0+0x555/0xa90 [kvm] [ 12.941105] ? enter_smm.constprop.0+0x555/0xa90 [kvm] [ 12.941455] ? text_poke_bp+0x49/0x70 [ 12.941459] ? __static_call_transform+0x7f/0x120 [ 12.941461] ? arch_static_call_transform+0x83/0xa0 [ 12.941464] ? __static_call_update+0x175/0x220 [ 12.941468] ? find_next_bit+0x17/0x20 [kvm_intel] [ 12.941477] ? vmx_set_cr0+0x16d0/0x16d0 [kvm_intel] [ 12.941567] ? kvm_arch_hardware_setup+0x449/0x18a0 [kvm] [ 12.942381] ? kvm_init+0xa7/0x450 [kvm] [ 12.942553] ? hardware_setup+0x9c1/0x9c1 [kvm_intel] [ 12.942782] ? vmx_init+0xfe/0x254 [kvm_intel] [ 12.942795] ? hardware_setup+0x9c1/0x9c1 [kvm_intel] [ 12.942802] ? do_one_initcall+0x48/0x1e0 [ 12.942805] ? kmem_cache_alloc_trace+0x149/0x2c0 [ 12.942810] ? do_init_module+0x52/0x200 [ 12.942813] ? load_module+0x1e1d/0x2230 [ 12.942815] ? security_kernel_post_read_file+0x5c/0x70 [ 12.942819] ? __do_sys_finit_module+0xc8/0x140 [ 12.942820] ? __do_sys_finit_module+0xc8/0x140 [ 12.942823] ? __x64_sys_finit_module+0x18/0x20 [ 12.942824] ? do_syscall_64+0x5c/0x90 [ 12.942847] ? ksys_mmap_pgoff+0x115/0x260 [ 12.942850] ? syscall_trace_enter.constprop.0+0xa3/0x190 [ 12.942855] ? exit_to_user_mode_prepare+0x41/0x1e0 [ 12.942856] ? syscall_exit_to_user_mode+0x1c/0x40 [ 12.942858] ? do_syscall_64+0x69/0x90 [ 12.942860] ? do_syscall_64+0x69/0x90 [ 12.942862] ? do_syscall_64+0x69/0x90 [ 12.942915] ? entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 12.942919] </TASK> [ 13.051083] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 9-.... } 15 jiffies s: 61 root: 0x200/. [ 13.051093] rcu: blocking rcu_node structures (internal RCU debug): [ 13.051095] Task dump for CPU 9: [ 13.051096] task:systemd-udevd state:R running task stack: 0 pid: 533 ppid: 514 flags:0x0000400a [ 13.051101] Call Trace: [ 13.051103] <TASK> [ 13.051105] ? switch_mm_irqs_off+0x194/0x560 [ 13.051111] ? x2apic_send_IPI_allbutself+0x2f/0x40 [ 13.051118] do_sync_core+0x2a/0x30 [ 13.051123] ? smp_call_function_many_cond+0x102/0x3d0 [ 13.051126] ? text_poke_memset+0x20/0x20 [ 13.051129] ? on_each_cpu_cond_mask+0x1d/0x30 [ 13.051131] ? text_poke_bp_batch+0x1fd/0x210 [ 13.051135] ? kvm_get_linear_rip+0x74/0xf0 [kvm] [ 13.051189] ? vmx_get_cpl+0x30/0x30 [kvm_intel] [ 13.051206] ? kvm_get_linear_rip+0x73/0xf0 [kvm] [ 13.051245] ? kvm_get_linear_rip+0x73/0xf0 [kvm] [ 13.051284] ? text_poke_bp+0x49/0x70 [ 13.051288] ? 
__static_call_transform+0x7f/0x120 [ 13.051290] ? arch_static_call_transform+0x83/0xa0 [ 13.051294] ? __static_call_update+0x175/0x220 [ 13.051298] ? find_next_bit+0x17/0x20 [kvm_intel] [ 13.051308] ? vmx_get_cpl+0x30/0x30 [kvm_intel] [ 13.051318] ? kvm_arch_hardware_setup+0x46c/0x18a0 [kvm] [ 13.051358] ? kvm_init+0xa7/0x450 [kvm] [ 13.051393] ? hardware_setup+0x9c1/0x9c1 [kvm_intel] [ 13.051404] ? vmx_init+0xfe/0x254 [kvm_intel] [ 13.051412] ? hardware_setup+0x9c1/0x9c1 [kvm_intel] [ 13.051420] ? do_one_initcall+0x48/0x1e0 [ 13.051423] ? kmem_cache_alloc_trace+0x149/0x2c0 [ 13.051428] ? do_init_module+0x52/0x200 [ 13.051431] ? load_module+0x1e1d/0x2230 [ 13.051433] ? security_kernel_post_read_file+0x5c/0x70 [ 13.051437] ? __do_sys_finit_module+0xc8/0x140 [ 13.051439] ? __do_sys_finit_module+0xc8/0x140 [ 13.051442] ? __x64_sys_finit_module+0x18/0x20 [ 13.051444] ? do_syscall_64+0x5c/0x90 [ 13.051448] ? ksys_mmap_pgoff+0x115/0x260 [ 13.051452] ? syscall_trace_enter.constprop.0+0xa3/0x190 [ 13.051456] ? exit_to_user_mode_prepare+0x41/0x1e0 [ 13.051458] ? syscall_exit_to_user_mode+0x1c/0x40 [ 13.051460] ? do_syscall_64+0x69/0x90 [ 13.051463] ? do_syscall_64+0x69/0x90 [ 13.051465] ? do_syscall_6[ 13.087635] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 2-.... 11-.... } 3 jiffies s: 69 root: 0x804/. [ 13.087647] rcu: blocking rcu_node structures (internal RCU debug): [ 13.087649] Task dump for CPU 2: [ 13.087651] task:modprobe state:R running task stack: 0 pid: 674 ppid: 298 flags:0x0000400a [ 13.087656] Call Trace: [ 13.087658] <TASK> [ 13.087661] ? text_poke_memset+0x20/0x20 [ 13.087671] ? on_each_cpu_cond_mask+0x1d/0x30 [ 13.087674] ? text_poke_bp_batch+0x1fd/0x210 [ 13.087682] ? call_status+0x1de/0x230 [sunrpc] [ 13.087786] ? __traceiter_xprt_retransmit+0x40/0x40 [sunrpc] [ 13.087906] ? call_status+0x1dd/0x230 [sunrpc] [ 13.087934] ? text_poke_bp+0x49/0x70 [ 13.087937] ? __static_call_transform+0x7f/0x120 [ 13.087939] ? arch_static_call_transform+0x83/0xa0 [ 13.087942] ? call_status+0x1dd/0x230 [sunrpc] [ 13.087970] ? __static_call_init.part.0+0x166/0x210 [ 13.087974] ? static_call_module_notify+0x13c/0x190 [ 13.087976] ? blocking_notifier_call_chain_robust+0x73/0xe0 [ 13.087979] ? load_module+0x1ca4/0x2230 [ 13.087983] ? __do_sys_finit_module+0xc8/0x140 [ 13.087985] ? __do_sys_finit_module+0xc8/0x140 [ 13.087987] ? __x64_sys_finit_module+0x18/0x20 [ 13.087989] ? do_syscall_64+0x5c/0x90 [ 13.087991] ? exit_to_user_mode_prepare+0x41/0x1e0 [ 13.087993] ? syscall_exit_to_user_mode+0x1c/0x40 [ 13.087995] ? do_syscall_64+0x69/0x90 [ 13.087997] ? do_syscall_64+0x69/0x90 [ 13.087999] ? irqentry_exit_to_user_mode+0x9/0x30 [ 13.088001] ? irqentry_exit+0x1d/0x30 [ 13.088003] ? sysvec_call_function+0x4e/0x90 [ 13.088004] ? entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 13.088008] </TASK> 4+0x69/0x90 [ 13.051467] ? entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 13.051472] </TASK>[ 13.088009] Task dump for CPU 11: [ 13.088010] task:apparmor_parser state:R running task stack: 0 pid: 690 ppid: 672 flags:0x00000008 [ 13.088014] Call Trace: [ 13.088015] <TASK> [ 13.088015] ? asm_sysvec_apic_timer_interrupt+0x1b/0x20 [ 13.088018] ? compress_block+0xb9/0x6d0 [ 13.088022] ? zlib_tr_flush_block+0x57b/0xb40 [ 13.088024] ? deflate_slow+0x49e/0x580 [ 13.088127] ? audit_policy+0xb0/0xb0 [ 13.088130] ? __wake_up+0x13/0x20 [ 13.088134] ? audit_log_end+0x60/0x110 [ 13.088138] ? common_lsm_audit+0x14f/0x900 [ 13.088140] ? __d_instantiate+0x39/0xf0 [ 13.088142] ? d_instantiate+0x44/0x60 [ 13.088144] ? 
aa_audit_msg+0x23/0x30 [ 13.088145] ? audit_policy+0x8d/0xb0 [ 13.088147] ? aa_replace_profiles+0xb35/0xfc0 [ 13.088150] ? policy_update+0xcc/0x160 [ 13.088151] ? profile_replace+0xac/0x130 [ 13.088153] ? vfs_write+0xc6/0x380 [ 13.088155] ? do_sys_openat2+0x8b/0x160 [ 13.088158] ? exit_to_user_mode_prepare+0x41/0x1e0 [ 13.088160] ? ksys_write+0x67/0xf0 [ 13.088162] ? __x64_sys_write+0x19/0x20 [ 13.088163] ? do_syscall_64+0x5c/0x90 [ 13.088166] ? ksys_read+0x67/0xf0 [ 13.088167] ? exit_to_user_mode_prepare+0x41/0x1e0 [ 13.088169] ? syscall_exit_to_user_mode+0x1c/0x40 [ 13.088170] ? do_syscall_64+0x69/0x90 [ 13.088172] ? irqentry_exit+0x1d/0x30 [ 13.088174] ? exc_page_fault+0x87/0x160 [ 13.088175] ? entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 13.088178] </TASK> [ 13.187758] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 2-.... } 13 jiffies s: 69 root: 0x4/. [ 13.187770] rcu: blocking rcu_node structures (internal RCU debug): [ 13.187772] Task dump for CPU 2: [ 13.187774] task:modprobe state:R running task stack: 0 pid: 674 ppid: 298 flags:0x0000400a [ 13.187778] Call Trace: [ 13.187781] <TASK> [ 13.187782] ? native_send_call_func_single_ipi+0x1e/0x30 [ 13.187789] ? send_call_function_single_ipi+0x70/0xd0 [ 13.187793] ? __smp_call_single_queue+0x59/0x90 [ 13.187797] ? ttwu_queue_wakelist+0x131/0x1c0 [ 13.187801] ? on_each_cpu_cond_mask+0x1d/0x30 [ 13.187803] ? try_to_wake_up+0x1d4/0x5d0 [ 13.187808] ? xprt_transmit+0x32b/0x410 [sunrpc] [ 13.187856] ? wake_up_q+0x50/0x90 [ 13.187859] ? __mutex_unlock_slowpath.constprop.0+0x8d/0x120 [ 13.187862] ? mutex_unlock+0x25/0x30 [ 13.187864] ? static_call_module_notify+0x4d/0x190 [ 13.187868] ? blocking_notifier_call_chain_robust+0x73/0xe0 [ 13.187872] ? load_module+0x1ca4/0x2230 [ 13.187877] ? __do_sys_finit_module+0xc8/0x140 [ 13.187879] ? __do_sys_finit_module+0xc8/0x140 [ 13.187881] ? __x64_sys_finit_module+0x18/0x20 [ 13.187883] ? do_syscall_64+0x5c/0x90 [ 13.187887] ? exit_to_user_mode_prepare+0x41/0x1e0 [ 13.187890] ? syscall_exit_to_user_mode+0x1c/0x40 [ 13.187892] ? do_syscall_64+0x69/0x90 [ 13.187895] ? do_syscall_64+0x69/0x90 [ 13.187897] ? irqentry_exit_to_user_mode+0x9/0x30 [ 13.187899] ? irqentry_exit+0x1d/0x30 [ 13.187901] ? sysvec_call_function+0x4e/0x90[ 13.237768] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 1-.... } 4 jiffies s: 73 root: 0x2/. [ 13.237778] rcu: blocking rcu_node structures (internal RCU debug): [ 13.237780] Task dump for CPU 1: [ 13.237781] task:apparmor_parser state:R running task stack: 0 pid: 688 ppid: 672 flags:0x00000008 [ 13.237785] Call Trace: [ 13.237787] <TASK> [ 13.237789] ? asm_sysvec_call_function+0x1b/0x20 [ 13.237795] ? build_tree+0x5c5/0xab0 [ 13.237798] ? asm_sysvec_call_function+0x1b/0x20 [ 13.237800] ? fill_window+0x3b5/0x480 [ 13.237803] ? deflate_slow+0x294/0x580 [ 13.237805] ? zlib_deflate+0xe4/0x570 [ 13.237807] ? aa_unpack+0x1169/0x1e50 [ 13.237811] ? aa_replace_profiles+0x98/0xfc0 [ 13.237813] ? policy_update+0xcc/0x160 [ 13.237815] ? profile_replace+0xac/0x130 [ 13.237816] ? vfs_write+0xc6/0x380 [ 13.237819] ? exit_to_user_mode_prepare+0x41/0x1e0 [ 13.237821] ? ksys_write+0x67/0xf0 [ 13.237823] ? __x64_sys_write+0x19/0x20 [ 13.237824] ? do_syscall_64+0x5c/0x90 [ 13.237828] ? do_syscall_64+0x69/0x90 [ 13.237830] ? irqentry_exit_to_user_mode+0x9/0x30 [ 13.237832] ? irqentry_exit+0x1d/0x30 [ 13.237834] ? exc_page_fault+0x87/0x160 [ 13.237835] ? 
entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 13.237838] </TASK> [ 13.317586] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 8-.... } 4 jiffies s: 77 root: 0x100/. [ 13.317599] rcu: blocking rcu_node structures (internal RCU debug): [ 13.317601] Task dump for CPU 8: [ 13.317604] task:systemd-udevd state:R running task stack: 0 pid: 533 ppid: 514 flags:0x0000400a [ 13.317609] Call Trace: [ 13.317612] <TASK> [ 13.317614] ? switch_mm_irqs_off+0x194/0x560 [ 13.317620] ? x2apic_send_IPI_allbutself+0x2f/0x40 [ 13.317624] ? native_send_call_func_ipi+0xcf/0x100 [ 13.317627] ? smp_call_function_many_cond+0x313/0x3d0 [ 13.317634] ? text_poke_memset+0x20/0x20 [ 13.317638] ? on_each_cpu_cond_mask+0x1d/0x30 [ 13.317640] ? text_poke_bp_batch+0x1df/0x210 [ 13.317644] ? handle_vmxon+0xec/0x240 [kvm_intel] [ 13.317667] ? ept_save_pdptrs+0x420/0x420 [kvm_intel] [ 13.317679] ? handle_vmxon+0xeb/0x240 [kvm_intel] [ 13.317692] ? handle_vmxon+0xeb/0x240 [kvm_intel] [ 13.317702] ? text_poke_bp+0x49/0x70 [ 13.317706] ? __static_call_transform+0x7f/0x120 [ 13.317708] ? arch_static_call_transform+0x83/0xa0 [ 13.317712] ? __static_call_update+0x175/0x220 [ 13.317715] ? find_next_bit+0x17/0x20 [kvm_intel] [ 13.317724] ? ept_save_pdptrs+0x420/0x420 [kvm_intel] [ 13.317734] ? kvm_arch_hardware_setup+0x607/0x18a0 [kvm] [ 13.317784] ? kvm_init+0xa7/0x450 [kvm] [ 13.317818] ? hardware_setup+0x9c1/0x9c1 [kvm_intel] [ 13.317828] ? vmx_init+0xfe/0x254 [kvm_intel] [ 13.317837] ? hardware_setup+0x9c1/0x9c1 [kvm_intel] [ 13.317845] ? do_one_initcall+0x48/0x1e0 [ 13.317848] ? kmem_cache_alloc_trace+0x149/0x2c0 [ 13.317851] ? do_init_module+0x52/0x200 [ 13.317854] ? load_module+0x1e1d/0x2230 [ 13.317857] ? security_kernel_post_read_file+0x5c/0x70 [ 13.317861] ? __do_sys_finit_module+0xc8/0x140 [ 13.317863] ? __do_sys_finit_module+0xc8/0x140 [ 13.317866] ? __x64_sys_finit_module+0x18/0x20 [ 13.317868] ? do_syscall_64+0x5c/0x90 [ 13.317872] ? ksys_mmap_pgoff+0x115/0x260 [ 13.317876] ? syscall_trace_enter.constprop.0+0xa3/0x190 [ 13.317880] ? exit_to_user_mode_prepare+0x41/0x1e0 [ 13.317882] ? syscall_exit_to_user_mode+0x1c/0x40 [ 13.317884] ? do_syscall_64+0x69/0x90 [ 13.317886] ? do_syscall_64+0x69/0x90 [ 13.317889] ? do_syscall_64+0x69/0x90 [ 13.317891] ? entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 13.317895] </TASK> [ 13.418515] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 8-.... } 14 jiffies s: 77 root: 0x100/. [ 13.418525] rcu: blocking rcu_node structures (internal RCU debug): [ 13.418528] Task dump for CPU 8: [ 13.418529] task:systemd-udevd state:R running task stack: 0 pid: 533 ppid: 514 flags:0x0000400a [ 13.418534] Call Trace: [ 13.418536] <TASK> [ 13.418539] ? switch_mm_irqs_off+0x194/0x560 [ 13.418544] ? x2apic_send_IPI_allbutself+0x2f/0x40 [ 13.418548] ? native_send_call_func_ipi+0xcf/0x100 [ 13.418552] ? smp_call_function_many_cond+0x313/0x3d0 [ 13.418555] ? text_poke_memset+0x20/0x20 [ 13.418562] ? kvm_emulate_hypercall.part.0+0x206/0x590 [kvm] [ 13.418617] ? on_each_cpu_cond_mask+0x1d/0x30 [ 13.418619] ? text_poke_bp_batch+0xae/0x210 [ 13.418623] ? skip_emulated_instruction+0x240/0x240 [kvm_intel] [ 13.418639] ? kvm_emulate_hypercall.part.0+0x206/0x590 [kvm] [ 13.418678] ? kvm_emulate_hypercall.part.0+0x206/0x590 [kvm] [ 13.418717] ? text_poke_bp+0x49/0x70 [ 13.418721] ? __static_call_transform+0x7f/0x120 [ 13.418723] ? arch_static_call_transform+0x83/0xa0 [ 13.418726] ? __static_call_update+0x175/0x220 [ 13.418730] ? find_next_bit+0x17/0x20 [kvm_intel] [ 13.418740] ? 
skip_emulated_instruction+0x240/0x240 [kvm_intel] [ 13.418750] ? kvm_arch_hardware_setup+0x7bc/0x18a0 [kvm] [ 13.418789] ? kvm_init+0xa7/0x450 [kvm] [ 13.418824] ? hardware_setup+0x9c1/0x9c1 [kvm_intel] [ 13.418835] ? vmx_init+0xfe/0x254 [kvm_intel] [ 13.418843] ? hardware_setup+0x9c1/0x9c1 [kvm_intel] [ 13.418851] ? do_one_initcall+0x48/0x1e0 [ 13.418855] ? kmem_cache_alloc_trace+0x149/0x2c0 [ 13.418858] ? do_init_module+0x52/0x200 [ 13.418861] ? load_module+0x1e1d/0x2230 [ 13.418863] ? security_kernel_post_read_file+0x5c/0x70 [ 13.418867] ? __do_sys_finit_module+0xc8/0x140 [ 13.418869] ? __do_sys_finit_module+0xc8/0x140 [ 13.418871] ? __x64_sys_finit_module+0x18/0x20 [ 13.418873] ? do_syscall_64+0x5c/0x90 [ 13.418877] ? ksys_mmap_pgoff+0x115/0x260 [ 13.418881] ? syscall_trace_enter.constprop.0+0xa3/0x190 [ 13.418886] ? exit_to_user_mode_prepare+0x41/0x1e0 [ 13.418888] ? syscall_exit_to_user_mode+0x1c/0x40 [ 13.418890] ? do_syscall_64+0x69/0x90 [ 13.418892] ? do_syscall_64+0x69/0x90 [ 13.418895] ? do_syscall_64+0x69/0x90 [ 13.418897] ? entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 13.418901] </TASK> [ 13.527959] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 8-.... } 24 jiffies s: 77 root: 0x100/. [ 13.527993] rcu: blocking rcu_node structures (internal RCU debug): [ 13.528022] Task dump for CPU 8: [ 13.528023] task:systemd-udevd state:R running task stack: 0 pid: 533 ppid: 514 flags:0x0000400a [ 13.528075] Call Trace: [ 13.528077] <TASK> [ 13.528082] ? smp_call_function_many_cond+0x12e/0x3d0 [ 13.528124] ? text_poke_memset+0x20/0x20 [ 13.528173] ? on_each_cpu_cond_mask+0x1d/0x30 [ 13.528191] ? text_poke_bp_batch+0x1df/0x210 [ 13.528282] ? kvm_vcpu_ioctl_x86_set_vcpu_events+0x26f/0x2e0 [kvm] [ 13.528433] ? vmwrite_error+0x1d0/0x1d0 [kvm_intel] [ 13.528509] ? kvm_vcpu_ioctl_x86_set_vcpu_events+0x26e/0x2e0 [kvm] [ 13.528627] ? kvm_vcpu_ioctl_x86_set_vcpu_events+0x26e/0x2e0 [kvm] [ 13.528772] ? text_poke_bp+0x49/0x70 [ 13.528776] ? __static_call_transform+0x7f/0x120 [ 13.528778] ? arch_static_call_transform+0x83/0xa0 [ 13.528804] ? __static_call_update+0x175/0x220 [ 13.528807] ? find_next_bit+0x17/0x20 [kvm_intel] [ 13.528816] ? vmwrite_error+0x1d0/0x1d0 [kvm_intel] [ 13.528849] ? kvm_arch_hardware_setup+0x7f9/0x18a0 [kvm] [ 13.528980] ? kvm_init+0xa7/0x450 [kvm] [ 13.529062] ? hardware_setup+0x9c1/0x9c1 [kvm_intel] [ 13.529093] ? vmx_init+0xfe/0x254 [kvm_intel] [ 13.529131] ? hardware_setup+0x9c1/0x9c1 [kvm_intel] [ 13.529139] ? do_one_initcall+0x48/0x1e0 [ 13.529142] ? kmem_cache_alloc_trace+0x149/0x2c0 [ 13.529180] ? do_init_module+0x52/0x200 [ 13.529212] ? load_module+0x1e1d/0x2230 [ 13.529217] ? security_kernel_post_read_file+0x5c/0x70 [ 13.529221] ? __do_sys_finit_module+0xc8/0x140 [ 13.529222] ? __do_sys_finit_module+0xc8/0x140 [ 13.529225] ? __x64_sys_finit_module+0x18/0x20 [ 13.529227] ? do_syscall_64+0x5c/0x90 [ 13.529232] ? ksys_mmap_pgoff+0x115/0x260 [ 13.529260] ? syscall_trace_enter.constprop.0+0xa3/0x190 [ 13.529283] ? exit_to_user_mode_prepare+0x41/0x1e0 [ 13.529292] ? syscall_exit_to_user_mode+0x1c/0x40 [ 13.529296] ? do_syscall_64+0x69/0x90 [ 13.529301] ? do_syscall_64+0x69/0x90 [ 13.529323] ? do_syscall_64+0x69/0x90 [ 13.529326] ? entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 13.529330] </TASK> [ 13.617678] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 8-.... } 34 jiffies s: 77 root: 0x100/. 
[ 13.617690] rcu: blocking rcu_node structures (internal RCU debug): [ 13.617692] Task dump for CPU 8: [ 13.617693] task:systemd-udevd state:R running task stack: 0 pid: 533 ppid: 514 flags:0x0000400a [ 13.617696] Call Trace: [ 13.617698] <TASK> [ 13.617701] ? switch_mm_irqs_off+0x194/0x560 [ 13.617706] ? x2apic_send_IPI_allbutself+0x2f/0x40 [ 13.617710] ? do_sync_core+0x2a/0x30 [ 13.617714] ? smp_call_function_many_cond+0x102/0x3d0 [ 13.617717] ? text_poke_memset+0x20/0x20 [ 13.617719] ? on_each_cpu_cond_mask+0x1d/0x30 [ 13.617721] ? text_poke_bp_batch+0x1df/0x210 [ 13.617724] ? __SCT__kvm_x86_pmu_pmc_idx_to_pmc+0x1/0x8 [kvm] [ 13.617820] ? intel_pmu_init+0x1f0/0x1f0 [kvm_intel] [ 13.617836] ? __SCT__kvm_x86_pmu_pmc_is_enabled+0x8/0x8 [kvm] [ 13.617881] ? text_poke_bp+0x49/0x70 [ 13.617884] ? __static_call_transform+0x7f/0x120 [ 13.617886] ? arch_static_call_transform+0x57/0xa0 [ 13.617888] ? intel_pmu_init+0x1f0/0x1f0 [kvm_intel] [ 13.617897] ? __SCT__kvm_x86_pmu_pmc_is_enabled+0x8/0x8 [kvm] [ 13.617938] ? __bpf_core_types_match+0x4c0/0x4c0 [ 13.617941] ? __static_call_update+0x62/0x220 [ 13.617943] ? intel_pmu_init+0x1f0/0x1f0 [kvm_intel] [ 13.617952] ? __bpf_core_types_match+0x4c0/0x4c0 [ 13.617954] ? kvm_pmu_ops_update+0x104/0x2d0 [kvm] [ 13.617991] ? kvm_arch_hardware_setup+0x104a/0x18a0 [kvm] [ 13.618030] ? kvm_init+0xa7/0x450 [kvm] [ 13.618059] ? hardware_setup+0x9c1/0x9c1 [kvm_intel] [ 13.618069] ? vmx_init+0xfe/0x254 [kvm_intel] [ 13.618076] ? hardware_setup+0x9c1/0x9c1 [kvm_intel] [ 13.618083] ? do_one_initcall+0x48/0x1e0 [ 13.618087] ? kmem_cache_alloc_trace+0x149/0x2c0 [ 13.618090] ? do_init_module+0x52/0x200 [ 13.618093] ? load_module+0x1e1d/0x2230 [ 13.618095] ? security_kernel_post_read_file+0x5c/0x70 [ 13.618098] ? __do_sys_finit_module+0xc8/0x140 [ 13.618100] ? __do_sys_finit_module+0xc8/0x140 [ 13.618102] ? __x64_sys_finit_module+0x18/0x20 [ 13.618103] ? do_syscall_64+0x5c/0x90 [ 13.618106] ? ksys_mmap_pgoff+0x115/0x260 [ 13.618110] ? syscall_trace_enter.constprop.0+0xa3/0x190 [ 13.618113] ? exit_to_user_mode_prepare+0x41/0x1e0 [ 13.618115] ? syscall_exit_to_user_mode+0x1c/0x40 [ 13.618116] ? do_syscall_64+0x69/0x90 [ 13.618118] ? do_syscall_64+0x69/0x90 [ 13.618120] ? do_syscall_64+0x69/0x90 [ 13.618123] ? entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 13.618126] </TASK> The thing that is weird is the machine with the MOST CPU stalls is the LEAST busy of all the servers, it's pretty much sitting idle, where was the webserver that is busy as all get out has NONE. All of these ran totally clean under rc4 but these all occurred within minutes to hours after booting rc5.
Unfortunately STILL borked in 6.0.0-rc6 These are from our web server, a KVM-QEMU virtual machine running under a i7-6850 based host. Both host and guest running this kernel. [ 7.967701] task:modprobe state:R running task stack: 0 pid: 1574 ppid: 651 flags:0x0000400a [ 7.967708] Call Trace: [ 7.967713] <TASK> [ 7.967715] ? __x2apic_send_IPI_mask+0x77/0x100 [ 7.967724] ? load_new_mm_cr3+0x7f/0xe0 [ 7.967728] ? switch_mm_irqs_off+0x19e/0x570 [ 7.967730] ? x2apic_send_IPI_allbutself+0x2f/0x40 [ 7.967737] ? native_send_call_func_ipi+0xd2/0x100 [ 7.967743] ? netfs_get_request+0x6f/0xa0 [netfs] [ 7.967752] ? smp_call_function_many_cond+0x2f6/0x3b0 [ 7.967756] ? text_poke_memset+0x20/0x20 [ 7.967760] ? on_each_cpu_cond_mask+0x1d/0x30 [ 7.967762] ? text_poke_bp_batch+0x1fb/0x210 [ 7.967764] ? netfs_get_request+0x70/0xa0 [netfs] [ 7.967769] ? __traceiter_netfs_failure+0x60/0x60 [netfs] [ 7.967833] ? netfs_get_request+0x6f/0xa0 [netfs] [ 7.967838] ? netfs_get_request+0x6f/0xa0 [netfs] [ 7.967842] ? text_poke_bp+0x49/0x70 [ 7.967845] ? __static_call_transform+0x7f/0x120 [ 7.967847] ? arch_static_call_transform+0x87/0xa0 [ 7.967849] ? __static_call_init+0x167/0x210 [ 7.967853] ? static_call_module_notify+0x13e/0x1a0 [ 7.967856] ? blocking_notifier_call_chain_robust+0x72/0xd0 [ 7.967860] ? load_module+0x1f0d/0x22e0 [ 7.967865] ? __do_sys_finit_module+0xbd/0x130 [ 7.967867] ? __do_sys_finit_module+0xbd/0x130 [ 7.967869] ? __x64_sys_finit_module+0x18/0x20 [ 7.967871] ? do_syscall_64+0x5b/0x80 [ 7.967874] ? syscall_exit_to_user_mode+0x1b/0x40 [ 7.967877] ? do_syscall_64+0x67/0x80 [ 7.967879] ? syscall_exit_to_user_mode+0x1b/0x40 [ 7.967882] ? do_syscall_64+0x67/0x80 [ 7.967884] ? entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 7.967888] </TASK> [ 9.474823] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 0-.... 3-.... 7-.... 11-.... } 4 jiffies s: 373 root: 0x889/. [ 9.474836] rcu: blocking rcu_node structures (internal RCU debug): [ 9.474837] Task dump for CPU 0: [ 9.474839] task:mariadbd state:R running task stack: 0 pid: 1455 ppid: 1 flags:0x00020008 [ 9.474843] Call Trace: [ 9.474845] <TASK> [ 9.474849] ? get_partial_node.part.0+0xca/0x200 [ 9.474855] ? __folio_alloc+0x1b/0x50 [ 9.474858] ? vma_alloc_folio+0x2cc/0x370 [ 9.474862] ? do_huge_pmd_anonymous_page+0xbd/0x780 [ 9.474866] ? virtqueue_add_split+0x5b5/0x680 [ 9.474869] ? __handle_mm_fault+0xbdb/0xfd0 [ 9.474873] ? get_page_from_freelist+0x1370/0x1770 [ 9.474876] ? handle_mm_fault+0xac/0x280 [ 9.474885] ? __get_user_pages+0x1e6/0x670 [ 9.474888] ? get_user_pages_unlocked+0xcb/0x300 [ 9.474890] ? internal_get_user_pages_fast+0x101d/0x11a0 [ 9.474893] ? blk_mq_sched_bio_merge+0x36/0x180 [ 9.474898] ? get_user_pages_fast+0x24/0x50 [ 9.474900] ? __iov_iter_get_pages_alloc+0x118/0x860 [ 9.474904] ? mempool_alloc_slab+0x15/0x20 [ 9.474908] ? mempool_alloc+0x56/0x180 [ 9.474911] ? iov_iter_get_pages2+0x1e/0x30 [ 9.474913] ? bio_iov_iter_get_pages+0x6e/0x410 [ 9.474917] ? iomap_dio_bio_iter+0x2e2/0x520 [ 9.474921] ? __iomap_dio_rw+0x521/0x8a0 [ 9.474925] ? iomap_dio_rw+0x11/0x40 [ 9.474928] ? ext4_file_read_iter+0x115/0x1b0 [ 9.474931] ? io_read+0xd4/0x4c0 [ 9.474935] ? ext4_file_read_iter+0xc9/0x1b0 [ 9.474939] ? io_issue_sqe+0x68/0x430 [ 9.474943] ? io_submit_sqes+0x211/0x600 [ 9.474946] ? __do_sys_io_uring_enter+0x4d4/0x900 [ 9.474950] ? io_submit_sqes+0x211/0x600 [ 9.474953] ? __x64_sys_io_uring_enter+0x22/0x30 [ 9.474955] ? do_syscall_64+0x5b/0x80 [ 9.474959] ? syscall_exit_to_user_mode+0x1b/0x40 [ 9.474962] ? 
do_syscall_64+0x67/0x80 [ 9.474964] ? syscall_exit_to_user_mode+0x1b/0x40 [ 9.474966] ? do_syscall_64+0x67/0x80 [ 9.474969] ? exit_to_user_mode_prepare+0x41/0x1e0 [ 9.474974] ? syscall_exit_to_user_mode+0x1b/0x40 [ 9.474976] ? do_syscall_64+0x67/0x80 [ 9.474978] ? do_syscall_64+0x67/0x80 [ 9.474981] ? entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 9.474985] </TASK> [ 9.474986] Task dump for CPU 3: [ 9.474987] task:GC Thread#5 state:R running task stack: 0 pid: 1697 ppid: 1 flags:0x00000008 [ 9.474991] Call Trace: [ 9.474992] <TASK> [ 9.474993] ? __alloc_pages+0x1e1/0x12e0 [ 9.474995] __schedule+0x35c/0x1420 [ 9.474999] schedule+0x58/0xf0 [ 9.475002] ? __folio_alloc+0x1b/0x50 [ 9.475004] ? vma_alloc_folio+0x2cc/0x370 [ 9.475007] ? do_huge_pmd_anonymous_page+0xbd/0x780 [ 9.475010] ? __handle_mm_fault+0xbdb/0xfd0 [ 9.475014] ? handle_mm_fault+0xac/0x280 [ 9.475016] ? do_user_addr_fault+0x1a7/0x630 [ 9.475020] ? exc_page_fault+0x75/0x160 [ 9.475023] ? asm_exc_page_fault+0x27/0x30 [ 9.475026] </TASK> [ 9.475027] Task dump for CPU 7: [ 9.475028] task:openssl state:R running task stack: 0 pid: 1862 ppid: 1734 flags:0x00000008 [ 9.475031] Call Trace: [ 9.475032] <TASK> [ 9.475033] ? exc_page_fault+0x75/0x160 [ 9.475036] asm_exc_page_fault+0x27/0x30 [ 9.475038] RIP: 0033:0x7fc081084340 [ 9.475041] RSP: 002b:00007ffe68225f40 EFLAGS: 00010206 [ 9.475048] RAX: 00007fc080c55460 RBX: 00007fc080ab0ed8 RCX: 00007fc080a0a6d8 [ 9.475050] RDX: 00007fc080e15008 RSI: 0000000000000000 RDI: 00007fc080ab0f98 [ 9.475051] RBP: 00007ffe68226040 R08: 00007fc080ab1c28 R09: 0000000000000000 [ 9.475052] R10: 00007fc080a00000 R11: 00007fc081071690 R12: 00007fc080a84958 [ 9.475054] R13: 0000000000000000 R14: 00007fc081071690 R15: 00007fc080a00000 [ 9.475056] </TASK> [ 9.664864] Call Trace: [ 9.664866] <TASK> [ 9.664869] ? __folio_alloc+0x1b/0x50 [ 9.664874] ? vma_alloc_folio+0x2cc/0x370 [ 9.664877] ? rmqueue_bulk+0x28e/0xd60 [ 9.664878] ? do_huge_pmd_anonymous_page+0xbd/0x780 [ 9.664881] ? __handle_mm_fault+0xbdb/0xfd0 [ 9.664885] ? handle_mm_fault+0xac/0x280 [ 9.664886] ? __get_user_pages+0x1e6/0x670 [ 9.664888] ? get_user_pages_unlocked+0xcb/0x300 [ 9.664889] ? internal_get_user_pages_fast+0x101d/0x11a0 [ 9.664891] ? get_user_pages_fast+0x24/0x50 [ 9.664892] ? __iov_iter_get_pages_alloc+0x118/0x860 [ 9.664896] ? mempool_alloc_slab+0x15/0x20 [ 9.664899] ? mempool_alloc+0x56/0x180 [ 9.664901] ? iov_iter_get_pages2+0x1e/0x30 [ 9.664902] ? bio_iov_iter_get_pages+0x6e/0x410 [ 9.664906] ? iomap_dio_bio_iter+0x2e2/0x520 [ 9.664909] ? __iomap_dio_rw+0x521/0x8a0 [ 9.664912] ? iomap_dio_rw+0x11/0x40 [ 9.664913] ? ext4_file_read_iter+0x115/0x1b0 [ 9.664916] ? io_read+0xd4/0x4c0 [ 9.664918] ? ext4_file_read_iter+0xc9/0x1b0 [ 9.664920] ? io_read+0xd4/0x4c0 [ 9.664922] ? io_issue_sqe+0x68/0x430 [ 9.664925] ? io_submit_sqes+0x211/0x600 [ 9.664927] ? __do_sys_io_uring_enter+0x4d4/0x900 [ 9.664929] ? io_submit_sqes+0x211/0x600 [ 9.664931] ? __x64_sys_io_uring_enter+0x22/0x30 [ 9.664933] ? do_syscall_64+0x5b/0x80 [ 9.664936] ? exit_to_user_mode_prepare+0x41/0x1e0 [ 9.664939] ? syscall_exit_to_user_mode+0x1b/0x40 [ 9.664941] ? exit_to_user_mode_prepare+0x41/0x1e0 [ 9.664943] ? syscall_exit_to_user_mode+0x1b/0x40 [ 9.664945] ? do_syscall_64+0x67/0x80 [ 9.664946] ? do_syscall_64+0x67/0x80 [ 9.664948] ? sysvec_reschedule_ipi+0x77/0xe0 [ 9.664949] ? entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 9.664952] </TASK> This is from our client mail server. [ 12.719943] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 3-.... 
} 4 jiffies s: 97 root: 0x8/. [ 12.719951] rcu: blocking rcu_node structures (internal RCU debug): [ 12.719953] Task dump for CPU 3: [ 12.719954] task:systemd-udevd state:R running task stack: 0 pid: 442 ppid: 441 flags:0x0000400a [ 12.719964] Call Trace: [ 12.719966] <TASK> [ 12.719967] ? x2apic_send_IPI_allbutself+0x2f/0x40 [ 12.719974] ? native_send_call_func_ipi+0xcf/0x100 [ 12.719978] do_sync_core+0x2a/0x30 [ 12.719982] ? smp_call_function_many_cond+0xf2/0x3b0 [ 12.719985] ? text_poke_memset+0x20/0x20 [ 12.719988] ? on_each_cpu_cond_mask+0x1d/0x30 [ 12.719990] ? text_poke_bp_batch+0xaf/0x210 [ 12.719994] ? handle_apic_write+0x120/0x120 [kvm_intel] [ 12.720010] ? init_emulate_ctxt+0x3d/0x130 [kvm] [ 12.720074] ? text_poke_bp+0x49/0x70 [ 12.720077] ? __static_call_transform+0x7f/0x120 [ 12.720079] ? arch_static_call_transform+0x87/0xa0 [ 12.720081] ? init_emulate_ctxt+0x3d/0x130 [kvm] [ 12.720128] ? __static_call_update+0x16e/0x220 [ 12.720132] ? handle_apic_write+0x120/0x120 [kvm_intel] [ 12.720148] ? kvm_arch_hardware_setup+0x627/0x18c0 [kvm] [ 12.720194] ? __kmalloc_node+0x16c/0x370 [ 12.720207] ? kvm_init+0xa7/0x450 [kvm] [ 12.720276] ? _raw_spin_unlock_irqrestore+0xe/0x30 [ 12.720280] ? hardware_setup+0x94f/0x987 [kvm_intel] [ 12.720292] ? vmx_init+0xfe/0x20a [kvm_intel] [ 12.720301] ? hardware_setup+0x987/0x987 [kvm_intel] [ 12.720311] ? do_one_initcall+0x47/0x1e0 [ 12.720319] ? kmem_cache_alloc_trace+0x16a/0x2b0 [ 12.720321] ? do_init_module+0x50/0x1f0 [ 12.720324] ? load_module+0x215d/0x22e0 [ 12.720327] ? __do_sys_finit_module+0xbd/0x130 [ 12.720329] ? __do_sys_finit_module+0xbd/0x130 [ 12.720333] ? __x64_sys_finit_module+0x18/0x20 [ 12.720335] ? do_syscall_64+0x58/0x80 [ 12.720341] ? do_syscall_64+0x67/0x80 [ 12.720343] ? do_syscall_64+0x67/0x80 [ 12.720345] ? entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 12.720348] </TASK> I'm reverting back to 5.18.19 since 5.19.x is borked and 6.0.0rc4 and earlier which did work are no longer available.
This seems to be an issue with my configuration. My config was originally derived from Ubuntu's configuration back around 12.04, and as each new release came out I mostly left any new options at their defaults and only maintained the handful of settings I wanted changed. Well, I decided to take a current Ubuntu kernel config as a base, apply the changes I want, and that build seems to be stable. When I compare the old and new configs there are literally hundreds of differences, so my chances of isolating which single option is causing this are essentially nil.
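(For reference, a minimal sketch of that rebase-and-compare workflow, assuming the distro config is installed as /boot/config-$(uname -r) and using the diffconfig helper that ships in the kernel source tree; paths are illustrative:

  cp /boot/config-$(uname -r) .config              # start from the distro's config
  make olddefconfig                                # take defaults for options new to this tree
  scripts/diffconfig /path/to/old.config .config   # summarize what actually differs

diffconfig prints one line per changed symbol, which at least makes the "hundreds of changes" enumerable, even if not easily bisectable.)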
Well, I missed some things in my initial re-configuration, and once I got everything back in it's back to failing again. But since I've only changed a few things this time, I've at least got the potential to isolate exactly what broke it, so I'm changing one thing at a time now, trying to narrow it down.
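(A sketch of the one-option-at-a-time loop, using the scripts/config helper from the kernel tree; SOME_OPTION is a placeholder for whichever symbol is being tested:

  cp .config config.before                   # keep a copy of the known state
  scripts/config --disable SOME_OPTION       # flip just the one suspect option
  make olddefconfig                          # let Kconfig re-resolve dependent symbols
  scripts/diffconfig config.before .config   # confirm nothing else changed before building

The olddefconfig step matters because disabling one symbol can silently change others via dependencies, which would muddy the bisection.)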
Well, it still failed; it just took longer. 5.19 had the same issue for me, so I'm going to grab Ubuntu 22.10, which is supposed to ship with 5.19, see how they configured it, and check whether I see the same issue with their configuration. If I don't, then I know I've got something configured wrong; if I do, then the kernel itself is broken.
Same behaviour on my system with last 5.19.11 kernel (on Ubuntu 20.04) - CPU AMD Ryzen 5500U: --- [ 4.475429] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 8-... } 7 jiffies s: 33 root: 0x100/. [ 4.475483] rcu: blocking rcu_node structures (internal RCU debug): [ 4.475506] Task dump for CPU 8: [ 4.475508] task:systemd-udevd state:R running task stack: 0 pid: 485 ppid: 430 flags:0x00004008 [ 4.475516] Call Trace: [ 4.475519] <TASK> [ 4.475528] ? kallsyms_lookup+0x14/0x20 [ 4.475541] ? test_for_valid_rec.isra.0+0x35/0x80 [ 4.475549] ? change_page_attr_set_clr+0x12e/0x180 [ 4.475558] ? ftrace_module_enable+0xba/0x3d0 [ 4.475564] ? load_module+0x1846/0x22f0 [ 4.475574] ? __do_sys_finit_module+0xc8/0x140 [ 4.475578] ? __do_sys_finit_module+0xc8/0x140 [ 4.475583] ? __x64_sys_finit_module+0x1a/0x20 [ 4.475587] ? do_syscall_64+0x5c/0x90 [ 4.475593] ? do_syscall_64+0x69/0x90 [ 4.475596] ? do_syscall_64+0x69/0x90 [ 4.475599] ? do_syscall_64+0x69/0x90 [ 4.475601] ? sysvec_call_function+0x46/0x90 [ 4.475606] ? entry_SYSCALL_64_after_hwframe+0x4b/0xb5 [ 4.475615] </TASK> [ 4.551415] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 8-... } 26 jiffies s: 33 root: 0x100/. [ 4.551460] rcu: blocking rcu_node structures (internal RCU debug): [ 4.551479] Task dump for CPU 8: [ 4.551480] task:systemd-udevd state:R running task stack: 0 pid: 485 ppid: 430 flags:0x00004008 [ 4.551487] Call Trace: [ 4.551490] <TASK> [ 4.551496] ? kallsyms_lookup+0x14/0x20 [ 4.551504] ? test_for_valid_rec.isra.0+0x35/0x80 [ 4.551510] ? change_page_attr_set_clr+0x12e/0x180 [ 4.551517] ? ftrace_module_enable+0xba/0x3d0 [ 4.551522] ? load_module+0x1846/0x22f0 [ 4.551530] ? __do_sys_finit_module+0xc8/0x140 [ 4.551534] ? __do_sys_finit_module+0xc8/0x140 [ 4.551539] ? __x64_sys_finit_module+0x1a/0x20 [ 4.551542] ? do_syscall_64+0x5c/0x90 [ 4.551546] ? do_syscall_64+0x69/0x90 [ 4.551549] ? do_syscall_64+0x69/0x90 [ 4.551551] ? do_syscall_64+0x69/0x90 [ 4.551553] ? sysvec_call_function+0x46/0x90 [ 4.551557] ? entry_SYSCALL_64_after_hwframe+0x4b/0xb5 [ 4.551564] </TASK> [ 4.627421] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 8-... } 45 jiffies s: 33 root: 0x100/. [ 4.627432] rcu: blocking rcu_node structures (internal RCU debug): [ 4.627434] Task dump for CPU 8: [ 4.627435] task:systemd-udevd state:R running task stack: 0 pid: 485 ppid: 430 flags:0x00004008 [ 4.627442] Call Trace: [ 4.627445] <TASK> [ 4.627452] ? kallsyms_lookup+0x14/0x20 [ 4.627461] ? test_for_valid_rec.isra.0+0x35/0x80 [ 4.627468] ? change_page_attr_set_clr+0x12e/0x180 [ 4.627474] ? ftrace_module_enable+0xba/0x3d0 [ 4.627479] ? load_module+0x1846/0x22f0 [ 4.627486] ? __do_sys_finit_module+0xc8/0x140 [ 4.627489] ? __do_sys_finit_module+0xc8/0x140 [ 4.627494] ? __x64_sys_finit_module+0x1a/0x20 [ 4.627497] ? do_syscall_64+0x5c/0x90 [ 4.627501] ? do_syscall_64+0x69/0x90 [ 4.627503] ? do_syscall_64+0x69/0x90 [ 4.627506] ? do_syscall_64+0x69/0x90 [ 4.627508] ? sysvec_call_function+0x46/0x90 [ 4.627511] ? entry_SYSCALL_64_after_hwframe+0x4b/0xb5 [ 4.627519] </TASK> [ 4.703414] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 8-... } 64 jiffies s: 33 root: 0x100/. [ 4.703471] rcu: blocking rcu_node structures (internal RCU debug): [ 4.703495] Task dump for CPU 8: [ 4.703497] task:systemd-udevd state:R running task stack: 0 pid: 485 ppid: 430 flags:0x00004008 [ 4.703503] Call Trace: [ 4.703506] <TASK> [ 4.703512] ? kallsyms_lookup+0x14/0x20 [ 4.703521] ? test_for_valid_rec.isra.0+0x35/0x80 [ 4.703526] ? 
change_page_attr_set_clr+0x12e/0x180 [ 4.703531] ? ftrace_module_enable+0xba/0x3d0 [ 4.703535] ? load_module+0x1846/0x22f0 [ 4.703542] ? __do_sys_finit_module+0xc8/0x140 [ 4.703545] ? __do_sys_finit_module+0xc8/0x140 [ 4.703550] ? __x64_sys_finit_module+0x1a/0x20 [ 4.703552] ? do_syscall_64+0x5c/0x90 [ 4.703556] ? do_syscall_64+0x69/0x90 [ 4.703558] ? do_syscall_64+0x69/0x90 [ 4.703560] ? do_syscall_64+0x69/0x90 [ 4.703561] ? sysvec_call_function+0x46/0x90 [ 4.703565] ? entry_SYSCALL_64_after_hwframe+0x4b/0xb5 [ 4.703570] </TASK> [ 4.783696] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 8-... } 84 jiffies s: 33 root: 0x100/. [ 4.783757] rcu: blocking rcu_node structures (internal RCU debug): [ 4.783783] Task dump for CPU 8: [ 4.783785] task:systemd-udevd state:R running task stack: 0 pid: 485 ppid: 430 flags:0x00004008 [ 4.783793] Call Trace: [ 4.783797] <TASK> [ 4.783804] ? kallsyms_lookup+0x14/0x20 [ 4.783814] ? test_for_valid_rec.isra.0+0x35/0x80 [ 4.783822] ? change_page_attr_set_clr+0x12e/0x180 [ 4.783831] ? ftrace_module_enable+0xba/0x3d0 [ 4.783838] ? load_module+0x1846/0x22f0 [ 4.783848] ? __do_sys_finit_module+0xc8/0x140 [ 4.783853] ? __do_sys_finit_module+0xc8/0x140 [ 4.783861] ? __x64_sys_finit_module+0x1a/0x20 [ 4.783865] ? do_syscall_64+0x5c/0x90 [ 4.783870] ? do_syscall_64+0x69/0x90 [ 4.783873] ? do_syscall_64+0x69/0x90 [ 4.783876] ? do_syscall_64+0x69/0x90 [ 4.783880] ? sysvec_call_function+0x46/0x90 [ 4.783885] ? entry_SYSCALL_64_after_hwframe+0x4b/0xb5 [ 4.783893] </TASK> [ 4.859710] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 8-... } 103 jiffies s: 33 root: 0x100/. [ 4.859778] rcu: blocking rcu_node structures (internal RCU debug): [ 4.859811] Task dump for CPU 8: [ 4.859813] task:systemd-udevd state:R running task stack: 0 pid: 485 ppid: 430 flags:0x00004008 [ 4.859823] Call Trace: [ 4.859825] <TASK> [ 4.859831] ? kallsyms_lookup+0x14/0x20 [ 4.859841] ? test_for_valid_rec.isra.0+0x35/0x80 [ 4.859850] ? change_page_attr_set_clr+0x12e/0x180 [ 4.859860] ? ftrace_module_enable+0xba/0x3d0 [ 4.859868] ? load_module+0x1846/0x22f0 [ 4.859881] ? __do_sys_finit_module+0xc8/0x140 [ 4.859886] ? __do_sys_finit_module+0xc8/0x140 [ 4.859896] ? __x64_sys_finit_module+0x1a/0x20 [ 4.859901] ? do_syscall_64+0x5c/0x90 [ 4.859907] ? do_syscall_64+0x69/0x90 [ 4.859911] ? do_syscall_64+0x69/0x90 [ 4.859915] ? do_syscall_64+0x69/0x90 [ 4.859919] ? sysvec_call_function+0x46/0x90 [ 4.859925] ? entry_SYSCALL_64_after_hwframe+0x4b/0xb5 [ 4.859936] </TASK> [ 4.939471] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 8-... } 123 jiffies s: 33 root: 0x100/. [ 4.939538] rcu: blocking rcu_node structures (internal RCU debug): [ 4.939571] Task dump for CPU 8: [ 4.939573] task:systemd-udevd state:R running task stack: 0 pid: 485 ppid: 430 flags:0x00004008 [ 4.939582] Call Trace: [ 4.939585] <TASK> [ 4.939591] ? kallsyms_lookup+0x14/0x20 [ 4.939600] ? test_for_valid_rec.isra.0+0x35/0x80 [ 4.939609] ? change_page_attr_set_clr+0x12e/0x180 [ 4.939618] ? ftrace_module_enable+0xba/0x3d0 [ 4.939627] ? load_module+0x1846/0x22f0 [ 4.939639] ? __do_sys_finit_module+0xc8/0x140 [ 4.939645] ? __do_sys_finit_module+0xc8/0x140 [ 4.939654] ? __x64_sys_finit_module+0x1a/0x20 [ 4.939660] ? do_syscall_64+0x5c/0x90 [ 4.939665] ? do_syscall_64+0x69/0x90 [ 4.939669] ? do_syscall_64+0x69/0x90 [ 4.939673] ? do_syscall_64+0x69/0x90 [ 4.939677] ? sysvec_call_function+0x46/0x90 [ 4.939684] ? 
entry_SYSCALL_64_after_hwframe+0x4b/0xb5 [ 4.939694] </TASK> [ 5.032512] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 8-... } 146 jiffies s: 33 root: 0x100/. [ 5.032581] rcu: blocking rcu_node structures (internal RCU debug): [ 5.032615] Task dump for CPU 8: [ 5.032617] task:systemd-udevd state:R running task stack: 0 pid: 485 ppid: 430 flags:0x00004008 [ 5.032626] Call Trace: [ 5.032629] <TASK> [ 5.032635] ? kallsyms_lookup+0x14/0x20 [ 5.032644] ? test_for_valid_rec.isra.0+0x35/0x80 [ 5.032653] ? change_page_attr_set_clr+0x12e/0x180 [ 5.032663] ? ftrace_module_enable+0xba/0x3d0 [ 5.032671] ? load_module+0x1846/0x22f0 [ 5.032684] ? __do_sys_finit_module+0xc8/0x140 [ 5.032689] ? __do_sys_finit_module+0xc8/0x140 [ 5.032699] ? __x64_sys_finit_module+0x1a/0x20 [ 5.032704] ? do_syscall_64+0x5c/0x90 [ 5.032709] ? do_syscall_64+0x69/0x90 [ 5.032713] ? do_syscall_64+0x69/0x90 [ 5.032717] ? do_syscall_64+0x69/0x90 [ 5.032721] ? sysvec_call_function+0x46/0x90 [ 5.032728] ? entry_SYSCALL_64_after_hwframe+0x4b/0xb5 [ 5.032738] </TASK> [ 5.111486] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 8-... } 166 jiffies s: 33 root: 0x100/. [ 5.111557] rcu: blocking rcu_node structures (internal RCU debug): [ 5.111590] Task dump for CPU 8: [ 5.111592] task:systemd-udevd state:R running task stack: 0 pid: 485 ppid: 430 flags:0x00004008 [ 5.111602] Call Trace: [ 5.111604] <TASK> [ 5.111610] ? kallsyms_lookup+0x14/0x20 [ 5.111620] ? test_for_valid_rec.isra.0+0x35/0x80 [ 5.111629] ? change_page_attr_set_clr+0x12e/0x180 [ 5.111639] ? ftrace_module_enable+0xba/0x3d0 [ 5.111647] ? load_module+0x1846/0x22f0 [ 5.111660] ? __do_sys_finit_module+0xc8/0x140 [ 5.111665] ? __do_sys_finit_module+0xc8/0x140 [ 5.111675] ? __x64_sys_finit_module+0x1a/0x20 [ 5.111680] ? do_syscall_64+0x5c/0x90 [ 5.111685] ? do_syscall_64+0x69/0x90 [ 5.111689] ? do_syscall_64+0x69/0x90 [ 5.111693] ? do_syscall_64+0x69/0x90 [ 5.111697] ? sysvec_call_function+0x46/0x90 [ 5.111703] ? entry_SYSCALL_64_after_hwframe+0x4b/0xb5 [ 5.111714] </TASK> [ 5.191491] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 8-... } 186 jiffies s: 33 root: 0x100/. [ 5.191560] rcu: blocking rcu_node structures (internal RCU debug): [ 5.191592] Task dump for CPU 8: [ 5.191595] task:systemd-udevd state:R running task stack: 0 pid: 485 ppid: 430 flags:0x00004008 [ 5.191604] Call Trace: [ 5.191607] <TASK> [ 5.191612] ? kallsyms_lookup+0x14/0x20 [ 5.191622] ? test_for_valid_rec.isra.0+0x35/0x80 [ 5.191631] ? change_page_attr_set_clr+0x12e/0x180 [ 5.191640] ? ftrace_module_enable+0xba/0x3d0 [ 5.191649] ? load_module+0x1846/0x22f0 [ 5.191661] ? __do_sys_finit_module+0xc8/0x140 [ 5.191667] ? __do_sys_finit_module+0xc8/0x140 [ 5.191676] ? __x64_sys_finit_module+0x1a/0x20 [ 5.191682] ? do_syscall_64+0x5c/0x90 [ 5.191687] ? do_syscall_64+0x69/0x90 [ 5.191691] ? do_syscall_64+0x69/0x90 [ 5.191695] ? do_syscall_64+0x69/0x90 [ 5.191699] ? sysvec_call_function+0x46/0x90 [ 5.191705] ? entry_SYSCALL_64_after_hwframe+0x4b/0xb5 [ 5.191716] </TASK> [ 5.267725] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 8-... } 205 jiffies s: 33 root: 0x100/. [ 5.267791] rcu: blocking rcu_node structures (internal RCU debug): [ 5.267823] Task dump for CPU 8: [ 5.267826] task:systemd-udevd state:R running task stack: 0 pid: 485 ppid: 430 flags:0x00004008 [ 5.267835] Call Trace: [ 5.267838] <TASK> [ 5.267843] ? kallsyms_lookup+0x14/0x20 [ 5.267853] ? test_for_valid_rec.isra.0+0x35/0x80 [ 5.267862] ? 
change_page_attr_set_clr+0x12e/0x180 [ 5.267871] ? ftrace_module_enable+0xba/0x3d0 [ 5.267879] ? load_module+0x1846/0x22f0 [ 5.267892] ? __do_sys_finit_module+0xc8/0x140 [ 5.267897] ? __do_sys_finit_module+0xc8/0x140 [ 5.267906] ? __x64_sys_finit_module+0x1a/0x20 [ 5.267912] ? do_syscall_64+0x5c/0x90 [ 5.267917] ? do_syscall_64+0x69/0x90 [ 5.267921] ? do_syscall_64+0x69/0x90 [ 5.267925] ? do_syscall_64+0x69/0x90 [ 5.267929] ? sysvec_call_function+0x46/0x90 [ 5.267935] ? entry_SYSCALL_64_after_hwframe+0x4b/0xb5 [ 5.267946] </TASK> [ 5.343719] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 8-... } 224 jiffies s: 33 root: 0x100/. [ 5.343790] rcu: blocking rcu_node structures (internal RCU debug): [ 5.343822] Task dump for CPU 8: [ 5.343825] task:systemd-udevd state:R running task stack: 0 pid: 485 ppid: 430 flags:0x00004008 [ 5.343834] Call Trace: [ 5.343837] <TASK> [ 5.343843] ? kallsyms_lookup+0x14/0x20 [ 5.343853] ? test_for_valid_rec.isra.0+0x35/0x80 [ 5.343862] ? change_page_attr_set_clr+0x12e/0x180 [ 5.343871] ? ftrace_module_enable+0xba/0x3d0 [ 5.343880] ? load_module+0x1846/0x22f0 [ 5.343892] ? __do_sys_finit_module+0xc8/0x140 [ 5.343898] ? __do_sys_finit_module+0xc8/0x140 [ 5.343907] ? __x64_sys_finit_module+0x1a/0x20 [ 5.343913] ? do_syscall_64+0x5c/0x90 [ 5.343918] ? do_syscall_64+0x69/0x90 [ 5.343922] ? do_syscall_64+0x69/0x90 [ 5.343926] ? do_syscall_64+0x69/0x90 [ 5.343930] ? sysvec_call_function+0x46/0x90 [ 5.343936] ? entry_SYSCALL_64_after_hwframe+0x4b/0xb5 [ 5.343947] </TASK> [ 5.419696] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 8-... } 243 jiffies s: 33 root: 0x100/. [ 5.419767] rcu: blocking rcu_node structures (internal RCU debug): [ 5.419800] Task dump for CPU 8: [ 5.419803] task:systemd-udevd state:R running task stack: 0 pid: 485 ppid: 430 flags:0x00004008 [ 5.419812] Call Trace: [ 5.419814] <TASK> [ 5.419820] ? kallsyms_lookup+0x14/0x20 [ 5.419830] ? test_for_valid_rec.isra.0+0x35/0x80 [ 5.419839] ? change_page_attr_set_clr+0x12e/0x180 [ 5.419849] ? ftrace_module_enable+0xba/0x3d0 [ 5.419857] ? load_module+0x1846/0x22f0 [ 5.419869] ? __do_sys_finit_module+0xc8/0x140 [ 5.419875] ? __do_sys_finit_module+0xc8/0x140 [ 5.419884] ? __x64_sys_finit_module+0x1a/0x20 [ 5.419890] ? do_syscall_64+0x5c/0x90 [ 5.419895] ? do_syscall_64+0x69/0x90 [ 5.419899] ? do_syscall_64+0x69/0x90 [ 5.419903] ? do_syscall_64+0x69/0x90 [ 5.419907] ? sysvec_call_function+0x46/0x90 [ 5.419913] ? entry_SYSCALL_64_after_hwframe+0x4b/0xb5 [ 5.419924] </TASK> [ 5.495690] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 8-... } 262 jiffies s: 33 root: 0x100/. [ 5.495757] rcu: blocking rcu_node structures (internal RCU debug): [ 5.495790] Task dump for CPU 8: [ 5.495793] task:systemd-udevd state:R running task stack: 0 pid: 485 ppid: 430 flags:0x00004008 [ 5.495802] Call Trace: [ 5.495805] <TASK> [ 5.495811] ? kallsyms_lookup+0x14/0x20 [ 5.495820] ? test_for_valid_rec.isra.0+0x35/0x80 [ 5.495829] ? change_page_attr_set_clr+0x12e/0x180 [ 5.495839] ? ftrace_module_enable+0xba/0x3d0 [ 5.495847] ? load_module+0x1846/0x22f0 [ 5.495860] ? __do_sys_finit_module+0xc8/0x140 [ 5.495866] ? __do_sys_finit_module+0xc8/0x140 [ 5.495875] ? __x64_sys_finit_module+0x1a/0x20 [ 5.495881] ? do_syscall_64+0x5c/0x90 [ 5.495885] ? do_syscall_64+0x69/0x90 [ 5.495890] ? do_syscall_64+0x69/0x90 [ 5.495894] ? do_syscall_64+0x69/0x90 [ 5.495898] ? sysvec_call_function+0x46/0x90 [ 5.495904] ? 
entry_SYSCALL_64_after_hwframe+0x4b/0xb5 [ 5.495915] </TASK> [ 5.575859] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 8-... } 282 jiffies s: 33 root: 0x100/. [ 5.575926] rcu: blocking rcu_node structures (internal RCU debug): [ 5.575958] Task dump for CPU 8: [ 5.575960] task:systemd-udevd state:R running task stack: 0 pid: 485 ppid: 430 flags:0x00004008 [ 5.575969] Call Trace: [ 5.575972] <TASK> [ 5.575978] ? kallsyms_lookup+0x14/0x20 [ 5.575987] ? test_for_valid_rec.isra.0+0x35/0x80 [ 5.575996] ? change_page_attr_set_clr+0x12e/0x180 [ 5.576006] ? ftrace_module_enable+0xba/0x3d0 [ 5.576014] ? load_module+0x1846/0x22f0 [ 5.576026] ? __do_sys_finit_module+0xc8/0x140 [ 5.576032] ? __do_sys_finit_module+0xc8/0x140 [ 5.576041] ? __x64_sys_finit_module+0x1a/0x20 [ 5.576047] ? do_syscall_64+0x5c/0x90 [ 5.576052] ? do_syscall_64+0x69/0x90 [ 5.576056] ? do_syscall_64+0x69/0x90 [ 5.576060] ? do_syscall_64+0x69/0x90 [ 5.576064] ? sysvec_call_function+0x46/0x90 [ 5.576071] ? entry_SYSCALL_64_after_hwframe+0x4b/0xb5 [ 5.576081] </TASK> [ 5.655724] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 8-... } 302 jiffies s: 33 root: 0x100/. [ 5.655792] rcu: blocking rcu_node structures (internal RCU debug): [ 5.655824] Task dump for CPU 8: [ 5.655827] task:systemd-udevd state:R running task stack: 0 pid: 485 ppid: 430 flags:0x00004008 [ 5.655836] Call Trace: [ 5.655838] <TASK> [ 5.655844] ? kallsyms_lookup+0x14/0x20 [ 5.655854] ? test_for_valid_rec.isra.0+0x35/0x80 [ 5.655863] ? change_page_attr_set_clr+0x12e/0x180 [ 5.655872] ? ftrace_module_enable+0xba/0x3d0 [ 5.655880] ? load_module+0x1846/0x22f0 [ 5.655893] ? __do_sys_finit_module+0xc8/0x140 [ 5.655899] ? __do_sys_finit_module+0xc8/0x140 [ 5.655908] ? __x64_sys_finit_module+0x1a/0x20 [ 5.655913] ? do_syscall_64+0x5c/0x90 [ 5.655918] ? do_syscall_64+0x69/0x90 [ 5.655922] ? do_syscall_64+0x69/0x90 [ 5.655926] ? do_syscall_64+0x69/0x90 [ 5.655930] ? sysvec_call_function+0x46/0x90 [ 5.655937] ? entry_SYSCALL_64_after_hwframe+0x4b/0xb5 [ 5.655947] </TASK> [ 5.731719] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 8-... } 321 jiffies s: 33 root: 0x100/. [ 5.731789] rcu: blocking rcu_node structures (internal RCU debug): [ 5.731822] Task dump for CPU 8: [ 5.731824] task:systemd-udevd state:R running task stack: 0 pid: 485 ppid: 430 flags:0x00004008 [ 5.731833] Call Trace: [ 5.731836] <TASK> [ 5.731841] ? kallsyms_lookup+0x14/0x20 [ 5.731851] ? test_for_valid_rec.isra.0+0x35/0x80 [ 5.731860] ? change_page_attr_set_clr+0x12e/0x180 [ 5.731869] ? ftrace_module_enable+0xba/0x3d0 [ 5.731877] ? load_module+0x1846/0x22f0 [ 5.731890] ? __do_sys_finit_module+0xc8/0x140 [ 5.731895] ? __do_sys_finit_module+0xc8/0x140 [ 5.731905] ? __x64_sys_finit_module+0x1a/0x20 [ 5.731910] ? do_syscall_64+0x5c/0x90 [ 5.731915] ? do_syscall_64+0x69/0x90 [ 5.731919] ? do_syscall_64+0x69/0x90 [ 5.731923] ? do_syscall_64+0x69/0x90 [ 5.731927] ? sysvec_call_function+0x46/0x90 [ 5.731933] ? entry_SYSCALL_64_after_hwframe+0x4b/0xb5 [ 5.731944] </TASK>
It seems to me that this bug has been fixed in 6.0-rc7
I am trying rc7 now. We'll see how it goes.
No issues on the 6.0 kernel, released today.
6.0-rc7 ran clean on my workstation, but that was a preemptible kernel; the problems tended to show up more in kernels compiled non-preemptible and tickless. Before I could try it on the servers, the official 6.0 release came out, so I built that instead. I've got it running on about a dozen machines so far, and so far it's running clean and with substantially better performance than 5.15.
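For anyone comparing behaviour between the two build flavours, a quick way to confirm which preemption and tick model a given machine is actually running is to inspect the installed config. A minimal sketch, assuming the distro-style /boot/config-* layout used on these Ubuntu/Fedora hosts:

```
# Show the preemption model and tick (NO_HZ) settings of the running kernel.
# Assumes the build config was installed as /boot/config-$(uname -r).
grep -E 'CONFIG_PREEMPT|CONFIG_NO_HZ' "/boot/config-$(uname -r)"

# The preemption model is usually also echoed in the kernel version banner.
uname -v
```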
Sadly, just got another preemptive RCU CPU Stall on 6.0: [ 9.064676] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 7-.... } 4 jiffies s: 41 root: 0x80/. [ 9.064685] rcu: blocking rcu_node structures (internal RCU debug): [ 9.064688] Task dump for CPU 7: [ 9.064689] task:modprobe state:R running task stack: 0 pid: 1358 ppid: 503 flags:0x00004008 [ 9.064695] Call Trace: [ 9.064699] <TASK> [ 9.064701] ? __d_alloc+0x2e/0x1e0 [ 9.064710] ? __d_alloc+0x2e/0x1e0 [ 9.064712] ? kmem_cache_alloc_lru+0x1ac/0x370 [ 9.064721] ? __wake_up_common_lock+0x87/0xc0 [ 9.064726] ? sysvec_apic_timer_interrupt+0x90/0xa0 [ 9.064738] ? asm_sysvec_apic_timer_interrupt+0x1b/0x20 [ 9.064747] ? smp_call_function_many_cond+0x120/0x3d0 [ 9.064754] ? text_poke_memset+0x20/0x20 [ 9.064758] ? on_each_cpu_cond_mask+0x1d/0x30 [ 9.064760] ? text_poke_bp_batch+0x231/0x290 [ 9.064763] ? __traceiter_netfs_rreq_ref+0x60/0x60 [netfs] [ 9.064770] ? netfs_get_subrequest+0x70/0xa0 [netfs] [ 9.064789] ? netfs_get_subrequest+0x70/0xa0 [netfs] [ 9.064794] ? text_poke_bp+0x49/0x70 [ 9.064797] ? __static_call_transform+0x7f/0x120 [ 9.064802] ? arch_static_call_transform+0x87/0xa0 [ 9.064818] ? __static_call_init+0x167/0x210 [ 9.064824] ? static_call_module_notify+0x13e/0x1a0 [ 9.064830] ? blocking_notifier_call_chain_robust+0x6f/0xd0 [ 9.064836] ? load_module+0x1f8e/0x23b0 [ 9.064843] ? __do_sys_finit_module+0xbd/0x130 [ 9.064845] ? __do_sys_finit_module+0xbd/0x130 [ 9.064847] ? __x64_sys_finit_module+0x18/0x20 [ 9.064862] ? do_syscall_64+0x58/0x80 [ 9.064868] ? exc_page_fault+0x86/0x160 [ 9.064869] ? entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 9.064873] </TASK>
Still seeing occasionally on 6.0.0, mostly at boot time. This splat is from a Centos7 machine. My conf is slightly different in this both because it is 6.0.0 and not 6.0rc5, but also because bpfilters are not enabled since Centos7 has no userland support for them. But the add file option seems missing from Bugzilla today. [ 59.154322] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 4-.... } 4 jiffies s: 41 root: 0x10/. [ 59.154338] rcu: blocking rcu_node structures (internal RCU debug): [ 59.154342] Task dump for CPU 4: [ 59.154344] task:modprobe state:R running task stack: 0 pid: 1374 ppid: 9 flags:0x00004008 [ 59.154350] Call Trace: [ 59.154354] <TASK> [ 59.154358] ? number+0x329/0x450 [ 59.154363] ? switch_mm_irqs_off+0x19e/0x570 [ 59.154366] ? text_poke_loc_init+0x181/0x1a0 [ 59.154368] ? smp_call_function_many_cond+0xf3/0x3d0 [ 59.154372] ? vsnprintf+0x2b6/0x530 [ 59.154374] ? sysvec_apic_timer_interrupt+0x90/0xa0 [ 59.154378] ? asm_sysvec_apic_timer_interrupt+0x1b/0x20 [ 59.154382] ? trace_event_eval_update+0x3c7/0x4b0 [ 59.154385] ? trace_event_eval_update+0x14a/0x4b0 [ 59.154388] ? trace_module_notify+0x4c/0x60 [ 59.154391] ? blocking_notifier_call_chain_robust+0x6f/0xd0 [ 59.154395] ? load_module+0x1f8e/0x23b0 [ 59.154399] ? __do_sys_finit_module+0xbd/0x130 [ 59.154402] ? __do_sys_finit_module+0xbd/0x130 [ 59.154405] ? __x64_sys_finit_module+0x18/0x20 [ 59.154407] ? do_syscall_64+0x58/0x80 [ 59.154409] ? __do_sys_newfstat+0x58/0x60 [ 59.154413] ? __audit_syscall_exit+0xb1/0xe0 [ 59.154416] ? exit_to_user_mode_prepare+0x41/0x200 [ 59.154420] ? syscall_exit_to_user_mode+0x1b/0x40 [ 59.154422] ? do_syscall_64+0x67/0x80 [ 59.154424] ? do_syscall_64+0x67/0x80 [ 59.154427] ? do_syscall_64+0x67/0x80 [ 59.154429] ? entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 59.154433] </TASK>
This splat is from a Scientific-7 machine. My conf is slightly different in this both because it is 6.0.0 and not 6.0rc5, but also because bpfilters are not enabled since Scientific7 has no userland support for them. This is the same configuration as used on the above Centos7 machine for the same reason and like Centos7 this occurred at boot time. [ 71.393385] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 6-.... } 3 jiffies s: 45 root: 0x40/. [ 71.393396] rcu: blocking rcu_node structures (internal RCU debug): [ 71.393400] Task dump for CPU 6: [ 71.393401] task:modprobe state:R running task stack: 0 pid: 1376 ppid: 111 flags:0x00004008 [ 71.393407] Call Trace: [ 71.393411] <TASK> [ 71.393414] ? __d_alloc+0x2e/0x1e0 [ 71.393421] ? __d_alloc+0x2e/0x1e0 [ 71.393425] ? kmem_cache_alloc_lru+0x1ac/0x370 [ 71.393428] ? sysvec_apic_timer_interrupt+0x90/0xa0 [ 71.393430] ? asm_sysvec_apic_timer_interrupt+0x1b/0x20 [ 71.393433] ? insn_get_prefixes+0x1f1/0x440 [ 71.393440] ? put_dec+0x1c/0xb0 [ 71.393442] ? number+0x329/0x450 [ 71.393445] ? switch_mm_irqs_off+0x19e/0x570 [ 71.393447] ? text_poke_loc_init+0x181/0x1a0 [ 71.393450] ? smp_call_function_many_cond+0xf3/0x3d0 [ 71.393453] ? vsnprintf+0x2b6/0x530 [ 71.393454] ? asm_sysvec_call_function_single+0x1b/0x20 [ 71.393459] ? sysvec_apic_timer_interrupt+0x90/0xa0 [ 71.393462] ? asm_sysvec_apic_timer_interrupt+0x1b/0x20 [ 71.393465] ? trace_event_eval_update+0x429/0x4b0 [ 71.393467] ? trace_event_eval_update+0x14a/0x4b0 [ 71.393470] ? trace_module_notify+0x4c/0x60 [ 71.393473] ? blocking_notifier_call_chain_robust+0x6f/0xd0 [ 71.393476] ? load_module+0x1f8e/0x23b0 [ 71.393480] ? __do_sys_finit_module+0xbd/0x130 [ 71.393482] ? __do_sys_finit_module+0xbd/0x130 [ 71.393485] ? __x64_sys_finit_module+0x18/0x20 [ 71.393487] ? do_syscall_64+0x58/0x80 [ 71.393489] ? syscall_exit_to_user_mode+0x1b/0x40 [ 71.393491] ? do_syscall_64+0x67/0x80 [ 71.393493] ? do_syscall_64+0x67/0x80 [ 71.393495] ? do_syscall_64+0x67/0x80 [ 71.393497] ? do_syscall_64+0x67/0x80 [ 71.393499] ? do_syscall_64+0x67/0x80 [ 71.393500] ? syscall_exit_to_user_mode+0x1b/0x40 [ 71.393502] ? do_syscall_64+0x67/0x80 [ 71.393504] ? entry_SYSCALL_64_after_hwframe+0x63/0xcd [ 71.393507] </TASK>
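As an aside, one quick way to confirm that the bpfilter option really is the only delta between the two build configurations is the diffconfig helper shipped in the kernel source tree. A minimal sketch; the two config file names below are placeholders for the attached configs:

```
# From the root of a Linux source tree: show only the options that differ
# between the standard config and the CentOS 7 / Scientific Linux 7 variant.
./scripts/diffconfig config-6.0-standard config-6.0-centos7
```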
Please contact the maintainers by sending an email to the addresses below, and referencing this report. They are listed in the MAINTAINERS file [1]:

```
M: "Paul E. McKenney" <paulmck@kernel.org>
M: Frederic Weisbecker <frederic@kernel.org> (kernel/rcu/tree_nocb.h)
M: Neeraj Upadhyay <quic_neeraju@quicinc.com> (kernel/rcu/tasks.h)
M: Josh Triplett <josh@joshtriplett.org>
L: rcu@vger.kernel.org
```

PS: As this is a regression, and it is reproducible in a virtual machine (making testing cycles fast), it’d be great if you could bisect the issue.

[1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/MAINTAINERS
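If it helps, the same list can be regenerated from a kernel source checkout with the tree's own get_maintainer.pl script; a minimal sketch (the file path is just an example of an RCU core source file):

```
# From the root of a Linux source tree: list the maintainers and mailing
# lists for the RCU core, suitable for the To:/Cc: lines of the report email.
./scripts/get_maintainer.pl -f kernel/rcu/tree.c
```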
I've not done this before, so please forgive my lack of knowledge and help me out here. I found instructions at https://docs.kernel.org/admin-guide/bug-bisect.html, but my knowledge of git is somewhat limited. I know how to pull source and build a kernel from it, but I don't understand the instructions because I'm not familiar with this feature of git. The instructions start with marking the broken changeset:

$ git bisect bad [commit]

I assume that [commit] is something other than literally that; is it an arbitrary label, or something else? Going back to a known good point is also problematic, because between 5.15 and rc6 it was awful, 6.0-rc7 greatly reduced it, and 6.0 seems to have reduced it to where it only fails on the CentOS 7 derived machines, and then only at boot time. If anybody can recommend further reading on how to use git, particularly in this context, I'd appreciate it. Thanks.
So what was the last version that worked, and what was the first version that did not? (From your report, I read that 5.18.19 and 6.0-rc4 worked fine?) Assuming 6.0-rc4 had *no* problems and 6.0-rc5 started to have problems, then you would do:

$ git bisect start v6.0-rc5 v6.0-rc4

(bad revision first, then the known good one; in the mainline tree the release tags carry a `v` prefix). Each time, you build and test the commit git checks out. If it is still good, you execute `git bisect good`, and if it is bad, then `git bisect bad`. PS: I believe, Michele’s issue was a different issue.
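A minimal sketch of the full workflow, assuming a clone of the mainline tree and reusing the .config attached to this report; the build/install steps are just the usual ones and may differ from your setup:

```
# Bad revision first, then the last known good one.
git bisect start v6.0-rc5 v6.0-rc4

# Repeat for each commit git checks out:
cp /path/to/attached/.config .config   # hypothetical path to your config
make olddefconfig
make -j"$(nproc)"
sudo make modules_install install
sudo reboot

# After testing the rebooted kernel, record the verdict:
git bisect good    # or: git bisect bad

# When git reports the first bad commit:
git bisect log > bisect.log            # keep a record for the report
git bisect reset                       # return to the original branch
```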
To complicate things, I believe I've got two different issues that are difficult to separate.

I've got four physical machines and 26 virtual machines. The physical machines consist of two i7-6700k based servers, one i7-6850k based server, and one i7-9700k based workstation. The servers are all running Ubuntu 22.04; the workstation is running Ubuntu 22.10. The virtual machines are running a wide variety of distributions, including CentOS 6, CentOS 7, Fedora Rawhide, CentOS Stream, Scientific Linux, Ubuntu 22.04, Debian 11 Bullseye, Manjaro, MX Linux (which is based on Debian 11), Mint 20.03, and Zorin 16.01.

Initially I was seeing these expedited RCU stall splats happening in code across the board and on all the distros here, on both the virtual and physical machines, and not just a few: dozens, even after a machine was booted and running stable. They were there right up to the last 5.19 release. Then the first 6.0 release candidates came out and ran absolutely clean, and it remained that way through rc4; then rc5 and rc6 were bad, just like 5.19, across all distros and machines here. So I went back to 5.18 and got one splat while it was running, even though I had seen none previously, so I loaded 5.15 and it ran clean.

Someone here posted they were not seeing it with 6.0-rc7, but when I went to grab that from kernel.org it was gone and the official 6.0 release was out. So I downloaded and built it and put it on all the machines. Now I am only seeing problems with two machines: the one based on CentOS 7 and the one on Scientific Linux 7, both derivatives of the same version of Red Hat and nearly identical. The rest of the machines are all running clean. Further, these two are only doing it at boot time and at no other time once they get up and running. The earlier splats pointed at code all over the place, but now it's just one place, as in the Scientific Linux 7 example posted above; something similar happens on CentOS 7. The only difference between the kernels running on the CentOS 7 derived machines and the others is that bpfilter is not enabled there, since that distro has no userland support for it. Otherwise the configurations are identical.

So you can see why I am a bit confused now and not sure how to proceed, or whether I even should. It does not seem to be service affecting. The splat that happened under 5.18 was also very infrequent, and I no longer have it, so I'm not sure whether it was in the same location as the current one; but the splats that were happening across all the distros and machines, throughout the uptime, are not happening anymore. The earlier problems were seriously impacting services and things would just lock up and stall for customers; that is no longer happening. They were all over the place, and that is no longer happening either. Now it's just one distro family, at boot time only, and it's only the internal RCU debug splat with no other code showing up.

So I suspect it is an entirely unrelated problem, BUT because it is so similar it could have been obscured by the worse problem in 5.19, 6.0-rc5, and 6.0-rc6, and with the sheer number of splats I would not have noticed it. So hopefully now you can see the source of my dilemma.
I see. For the original problem, you could test the releases between 5.15 and 5.18, that is, 5.16 and 5.17, and go on from there. But if it looks to be fixed in 6.0, I suggest closing this issue. For the second issue (CentOS 7, …), you could create a new issue with the description and attach all the logs and your configuration. Then, as mentioned in comment #13, contact the maintainers by email and reference the issue.
OK, sounds like a plan, I'll do that. Closing this. I wasn't sure whether to mark it code-fix or obsolete, since it's fixed in the newer version; feel free to change it if that's inappropriate.