Bug 212231 - Interrupting gfs2 mount hangs
Summary: Interrupting gfs2 mount hangs
Status: RESOLVED CODE_FIX
Alias: None
Product: File System
Classification: Unclassified
Component: gfs2
Hardware: All
OS: Linux
Importance: P1 normal
Assignee: GFS2 Virtual Assignee
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2021-03-11 11:11 UTC by Andy Price
Modified: 2021-04-12 15:28 UTC
CC List: 0 users

See Also:
Kernel Version: 5.12.0-rc2
Subsystem:
Regression: No
Bisected commit-id:


Description Andy Price 2021-03-11 11:11:44 UTC
Hitting ctrl-c just after 'mount /dev/vdc /mnt/test' causes mount to hang:

# cat /proc/`pidof mount`/stack
[<0>] flush_workqueue+0x204/0x770
[<0>] gfs2_gl_hash_clear+0xa3/0x1c0
[<0>] gfs2_fill_super+0x10d1/0x11b0
[<0>] get_tree_bdev+0x216/0x350
[<0>] gfs2_get_tree+0x2e/0xf0
[<0>] vfs_get_tree+0x41/0xf0
[<0>] path_mount+0x904/0xd40
[<0>] __x64_sys_mount+0x170/0x1a0
[<0>] do_syscall_64+0x2d/0x40
[<0>] entry_SYSCALL_64_after_hwframe+0x44/0xae


[   68.501973] gfs2: fsid=vdc: Trying to join cluster "lock_nolock", "vdc"
[   68.503212] gfs2: fsid=vdc: Now mounting FS (format 1802)...
[   68.522342] gfs2: fsid=vdc.0: journal 0 mapped with 1 extents in 0ms
[   68.524292] gfs2: fsid=vdc.0: jid=0, already locked for use
[   68.525455] gfs2: fsid=vdc.0: jid=0: Looking at journal...
[   68.854142] gfs2: fsid=vdc.0: jid=0: Journal head lookup took 328ms
[   68.855520] gfs2: fsid=vdc.0: jid=0: Done
[   68.856590] gfs2: fsid=vdc.0: first mount done, others may mount
[   68.864579] gfs2: fsid=vdc.0: can't start logd thread: -4
[   68.865585] gfs2: fsid=vdc.0: can't make FS RW: -4
[  181.333186] INFO: task kworker/1:1H:89 blocked for more than 90 seconds.
[  181.338436]       Not tainted 5.12.0-rc2-00271-g05a59d79793d #110
[  181.347271] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  181.353578] task:kworker/1:1H    state:D stack:    0 pid:   89 ppid:     2 flags:0x00004000
[  181.358185] Workqueue: glock_workqueue glock_work_func
[  181.360452] Call Trace:
[  181.361856]  __schedule+0x4f1/0xde0
[  181.363126]  ? __sched_text_start+0x8/0x8
[  181.364521]  ? _raw_spin_lock_bh+0xb0/0xb0
[  181.365951]  schedule+0x6f/0x120
[  181.367106]  rwsem_down_write_slowpath+0x48e/0xa70
[  181.368506]  ? ret_from_fork+0x1f/0x30
[  181.369736]  ? rwsem_mark_wake+0x460/0x460
[  181.371106]  ? elv_rqhash_del+0x53/0x90
[  181.372376]  ? deadline_remove_request+0xba/0x180
[  181.373682]  ? kvm_sched_clock_read+0xd/0x20
[  181.374909]  ? sched_clock+0x5/0x10
[  181.375896]  down_write+0xd9/0xe0
[  181.376826]  ? down_read_killable+0x190/0x190
[  181.377845]  ? psi_group_change+0x2db/0x320
[  181.378978]  freeze_super+0x41/0x180
[  181.379830]  ? psi_flags_change+0x53/0x70
[  181.380710]  freeze_go_sync+0xd2/0x150
[  181.381565]  do_xmote+0x160/0x560
[  181.382432]  glock_work_func+0x8c/0x1e0
[  181.383290]  process_one_work+0x3b9/0x690
[  181.384237]  worker_thread+0x2d6/0x6d0
[  181.385127]  ? process_one_work+0x690/0x690
[  181.386091]  kthread+0x1cb/0x1f0
[  181.386747]  ? __kthread_bind_mask+0x70/0x70
[  181.387694]  ret_from_fork+0x1f/0x30
[  181.388457] INFO: task mount:381 blocked for more than 90 seconds.
[  181.389814]       Not tainted 5.12.0-rc2-00271-g05a59d79793d #110
[  181.391042] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  181.392831] task:mount           state:D stack:    0 pid:  381 ppid:   354 flags:0x00004004
[  181.394526] Call Trace:
[  181.395127]  __schedule+0x4f1/0xde0
[  181.395813]  ? __sched_text_start+0x8/0x8
[  181.396565]  ? sched_clock+0x5/0x10
[  181.397194]  ? sched_clock_cpu+0x18/0x100
[  181.398286]  schedule+0x6f/0x120
[  181.399003]  schedule_timeout+0x182/0x1c0
[  181.399541]  ? console_conditional_schedule+0x20/0x20
[  181.400732]  ? __list_add_valid+0x2f/0x60
[  181.401648]  ? __prepare_to_swait+0xa0/0xb0
[  181.402722]  wait_for_completion+0xda/0x150
[  181.403857]  ? bit_wait_io_timeout+0xc0/0xc0
[  181.404706]  ? flush_workqueue_prep_pwqs+0x1c8/0x200
[  181.405653]  flush_workqueue+0x204/0x770
[  181.406304]  ? gfs2_glock_complete+0x130/0x130
[  181.407319]  ? queue_rcu_work+0x50/0x50
[  181.408148]  ? __queue_work+0x349/0x630
[  181.408872]  gfs2_gl_hash_clear+0xa3/0x1c0
[  181.409598]  ? gfs2_delete_work_queued+0x30/0x30
[  181.410453]  ? _raw_spin_lock_irqsave+0x70/0xb0
[  181.411204]  ? _raw_spin_lock_irq+0xb0/0xb0
[  181.412010]  ? swake_up_all_locked+0x1e/0xb0
[  181.413222]  gfs2_fill_super+0x10d1/0x11b0
[  181.414138]  ? gfs2_reconfigure+0x6e0/0x6e0
[  181.415067]  ? ptr_to_hashval+0x40/0x40
[  181.415857]  ? snprintf+0x88/0xa0
[  181.416538]  ? set_blocksize+0x95/0x190
[  181.417425]  get_tree_bdev+0x216/0x350
[  181.418202]  ? gfs2_reconfigure+0x6e0/0x6e0
[  181.419021]  gfs2_get_tree+0x2e/0xf0
[  181.419729]  vfs_get_tree+0x41/0xf0
[  181.420433]  path_mount+0x904/0xd40
[  181.421143]  ? mark_mounts_for_expiry+0x290/0x290
[  181.422056]  ? _copy_from_user+0x3a/0x70
[  181.422786]  ? user_path_at_empty+0x5a/0x90
[  181.423586]  __x64_sys_mount+0x170/0x1a0
[  181.424386]  ? copy_mnt_ns+0x530/0x530
[  181.425110]  do_syscall_64+0x2d/0x40
[  181.425754]  entry_SYSCALL_64_after_hwframe+0x44/0xae
[  181.426686] RIP: 0033:0x7f1401a5a01e
[  181.427374] RSP: 002b:00007ffe072dc7e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
[  181.428562] RAX: ffffffffffffffda RBX: 00007ffe072dc960 RCX: 00007f1401a5a01e
[  181.429922] RDX: 000055b323d59300 RSI: 000055b323d521b0 RDI: 000055b323d52190
[  181.431649] RBP: 000055b323d51f60 R08: 0000000000000000 R09: 00007f1401b1ca60
[  181.433338] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[  181.434776] R13: 000055b323d52190 R14: 000055b323d59300 R15: 000055b323d51f60
Comment 1 Andy Price 2021-03-15 12:38:43 UTC
Patch posted to cluster-devel: https://listman.redhat.com/archives/cluster-devel/2021-March/msg00057.html
Comment 2 Andy Price 2021-04-12 15:28:57 UTC
Fixed in 62dd0f98a0e5668424270b47a0c2e973795faba7
