Bug 20472 - [2.6.35.5 -> .7] INFO: task rpcbind:14163 blocked for more than 120 seconds.
Summary: [2.6.35.5 -> .7] INFO: task rpcbind:14163 blocked for more than 120 seconds.
Status: RESOLVED OBSOLETE
Alias: None
Product: File System
Classification: Unclassified
Component: NFS
Hardware: All Linux
Importance: P1 normal
Assignee: Trond Myklebust
URL:
Keywords:
Depends on:
Blocks: 16055
Reported: 2010-10-17 18:37 UTC by Maciej Rutecki
Modified: 2012-08-14 11:15 UTC
CC List: 5 users

See Also:
Kernel Version: 2.6.35.7
Subsystem:
Regression: Yes
Bisected commit-id:


Attachments

Description Maciej Rutecki 2010-10-17 18:37:59 UTC
Subject    : [2.6.35.5 -> .7] INFO: task rpcbind:14163 blocked for more than 120 seconds.
Submitter  : Pawel Sikora <pluto@agmk.net>
Date       : 2010-10-15 15:02
Message-ID : 201010151702.15675.pluto@agmk.net
References : http://marc.info/?l=linux-kernel&m=128715683507549&w=2

This entry is being used for tracking a regression from 2.6.35. Please don't
close it until the problem is fixed in the mainline.
Comment 1 Trond Myklebust 2010-10-17 20:23:24 UTC
So, while a blocked rpcbind will indeed cause NFS mounts to hang, I don't
see anything to indicate that NFS is actually at fault here. All the trace
shows is that a page is being faulted in and that is, for reasons unknown,
taking a long time.

Could we therefore please have a bit more information about what is going on?

A full sysrq-t output would probably help, as would information about the
mounts and filesystems in use on this system.
Comment 2 Rafael J. Wysocki 2010-10-18 21:37:31 UTC
On Monday, October 18, 2010, Paweł Sikora wrote:
> On Sunday 17 October 2010 22:21:48 Rafael J. Wysocki wrote:
> > This message has been generated automatically as a part of a summary report
> > of recent regressions.
> > 
> > The following bug entry is on the current list of known regressions
> > from 2.6.35.  Please verify if it still should be listed and let the
> > tracking team know (either way).
> 
> hi,
> 
> hot news from the front: recent tests on the pld-linux.org vendor kernel
> show that the random task blocking in 2.6.35.7 during medium/heavy load
> (~4..25) is related to the grsecurity patch. we'll forward a problem
> report to the grsec maintainer asap...
Comment 3 Rafael J. Wysocki 2010-10-19 20:30:56 UTC
On Tuesday, October 19, 2010, Paweł Sikora wrote:
> On Monday 18 October 2010 23:36:51 Rafael J. Wysocki wrote:
> > On Monday, October 18, 2010, Paweł Sikora wrote:
> > > On Sunday 17 October 2010 22:21:48 Rafael J. Wysocki wrote:
> > > > This message has been generated automatically as a part of a summary
> > > > report of recent regressions.
> > > > 
> > > > The following bug entry is on the current list of known regressions
> > > > from 2.6.35.  Please verify if it still should be listed and let the
> > > > tracking team know (either way).
> > > 
> > > hi,
> > > 
> > > hot news from the front: recent tests on the pld-linux.org vendor kernel
> > > show that the random task blocking in 2.6.35.7 during medium/heavy load
> > > (~4..25) is related to the grsecurity patch. we'll forward a problem
> > > report to the grsec maintainer asap...
> > 
> > So, to make things clear, this is not a mainline kernel issue, is it?
> 
> i thought that was a grsecurity issue but now i have a testcase that
> causes task blocking on my machine with the vanilla 2.6.35.7 kernel:
> 
> steps to reproduce:
> 
> on console 1:
> - run 'make -j32' in a kernel tree and wait several seconds for the make
>   forks and a medium system load (~15).
> 
> on console 2:
> - run 'sync' from root account to bump load more.
> 
> on console 3:
> - observe 'tail -f /var/log/kernel' for blocking issues.
> 
> [  360.517917] INFO: task sync:6712 blocked for more than 120 seconds.
> [  360.517920] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  360.517922] sync          D 00000000ffff4c7c     0  6712   2770 0x00000000
> [  360.517926]  ffff88021ac71d18 0000000000000086 ffff880200000000 0000000000004000
> [  360.517930]  0000000000013700 0000000000013700 ffff88021ac71fd8 ffff88021ac71fd8
> [  360.517933]  ffff880205593020 0000000000013700 ffff88021ac71fd8 0000000000004000
> [  360.517936] Call Trace:
> [  360.517944]  [<ffffffff813b8dbd>] schedule_timeout+0x20d/0x2f0
> [  360.517949]  [<ffffffff8103baba>] ? enqueue_entity+0xea/0x170
> [  360.517951]  [<ffffffff8103bc09>] ? enqueue_task_fair+0x49/0x50
> [  360.517954]  [<ffffffff813b8945>] wait_for_common+0xc5/0x150
> [  360.517957]  [<ffffffff81040430>] ? default_wake_function+0x0/0x10
> [  360.517959]  [<ffffffff813b8a78>] wait_for_completion+0x18/0x20
> [  360.517963]  [<ffffffff8113b343>] sync_inodes_sb+0x83/0x170
> [  360.517967]  [<ffffffff8113f890>] ? sync_one_sb+0x0/0x20
> [  360.517969]  [<ffffffff8113f880>] __sync_filesystem+0x80/0x90
> [  360.517972]  [<ffffffff8113f8ab>] sync_one_sb+0x1b/0x20
> [  360.517975]  [<ffffffff8111d2c7>] iterate_supers+0x77/0xc0
> [  360.517978]  [<ffffffff8113f7bb>] sync_filesystems+0x1b/0x20
> [  360.517980]  [<ffffffff8113f92c>] sys_sync+0x1c/0x40
> [  360.517984]  [<ffffffff81002d6b>] system_call_fastpath+0x16/0x1b
> 
> the 'make' is running on an ext4 filesystem with underlying mdadm (raid10)
> created from 2 Caviar RAID Edition disks. the cpu is an intel quad core
> Q9300 with 8GB of ddr2 ram.
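
For illustration of the reproduction recipe above, here is a minimal userspace C sketch of the same kind of load: several child processes dirty a lot of page cache with buffered writes while the parent calls sync() and times how long it blocks. Everything in it (file names, sizes, writer count, sleep) is an arbitrary assumption for the sketch, not taken from the report.

  /* hung-task-repro.c: dirty lots of page cache, then time sync().
   * Illustrative only; names, sizes and writer count are arbitrary. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/wait.h>
  #include <time.h>
  #include <unistd.h>

  #define WRITERS 8              /* parallel dirtiers, akin to 'make -j' load */
  #define CHUNK   (1 << 20)      /* 1 MiB per write()                         */
  #define CHUNKS  512            /* 512 MiB of dirty data per writer          */

  static void dirty_pages(int id)
  {
      char path[64];
      char *buf = malloc(CHUNK);
      int fd, i;

      snprintf(path, sizeof(path), "dirty-%d.tmp", id);
      fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0600);
      if (fd < 0 || !buf)
          _exit(1);
      memset(buf, 0xab, CHUNK);

      /* Buffered writes only: the pages stay dirty until writeback flushes them. */
      for (i = 0; i < CHUNKS; i++)
          if (write(fd, buf, CHUNK) != CHUNK)
              break;

      close(fd);
      unlink(path);
      _exit(0);
  }

  int main(void)
  {
      struct timespec t0, t1;
      int i;

      for (i = 0; i < WRITERS; i++)
          if (fork() == 0)
              dirty_pages(i);

      sleep(2);                  /* let dirty data pile up first              */

      clock_gettime(CLOCK_MONOTONIC, &t0);
      sync();                    /* the call seen blocking in the trace above */
      clock_gettime(CLOCK_MONOTONIC, &t1);

      printf("sync() blocked for %.1f seconds\n",
             (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);

      while (wait(NULL) > 0)
          ;
      return 0;
  }

Built with something like 'gcc -O2 hung-task-repro.c -o hung-task-repro', this only shows that sync() cannot return before writeback of the accumulated dirty data finishes; a long wait by itself does not prove a kernel bug, only that the workload keeps the flusher busy.
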
Comment 4 Rafael J. Wysocki 2010-10-20 21:37:35 UTC
On Wednesday, October 20, 2010, Paweł Sikora wrote:
> On Tuesday 19 October 2010 22:30:22 Rafael J. Wysocki wrote:
> > On Tuesday, October 19, 2010, Paweł Sikora wrote:
> > > On Monday 18 October 2010 23:36:51 Rafael J. Wysocki wrote:
> > > > On Monday, October 18, 2010, Paweł Sikora wrote:
> > > > > On Sunday 17 October 2010 22:21:48 Rafael J. Wysocki wrote:
> > > > > > This message has been generated automatically as a part of a
> > > > > > summary report of recent regressions.
> > > > > > 
> > > > > > The following bug entry is on the current list of known regressions
> > > > > > from 2.6.35.  Please verify if it still should be listed and let
> > > > > > the tracking team know (either way).
> > > > > 
> > > > > hi,
> > > > > 
> > > > > hot news from the front: recent tests on the pld-linux.org vendor
> > > > > kernel show that the random task blocking in 2.6.35.7 during
> > > > > medium/heavy load (~4..25) is related to the grsecurity patch.
> > > > > we'll forward a problem report to the grsec maintainer asap...
> > > > 
> > > > So, to make things clear, this is not a mainline kernel issue, is it?
> > > 
> > > i thought that was a grsecurity issue but now i have a testcase that
> > > causes task blocking on my machine with the vanilla 2.6.35.7 kernel:
> > > 
> > > steps to reproduce:
> > > 
> > > on console 1:
> > > - run 'make -j32' in a kernel tree and wait several seconds for the make
> > >   forks and a medium system load (~15).
> > > 
> > > on console 2:
> > > - run 'sync' from root account to bump load more.
> > > 
> > > on console 3:
> > > - observe 'tail -f /var/log/kernel' for blocking issues.
> > > 
> > > [  360.517917] INFO: task sync:6712 blocked for more than 120 seconds.
> > > [  360.517920] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > > [  360.517922] sync          D 00000000ffff4c7c     0  6712   2770 0x00000000
> > > [  360.517926]  ffff88021ac71d18 0000000000000086 ffff880200000000 0000000000004000
> > > [  360.517930]  0000000000013700 0000000000013700 ffff88021ac71fd8 ffff88021ac71fd8
> > > [  360.517933]  ffff880205593020 0000000000013700 ffff88021ac71fd8 0000000000004000
> > > [  360.517936] Call Trace:
> > > [  360.517944]  [<ffffffff813b8dbd>] schedule_timeout+0x20d/0x2f0
> > > [  360.517949]  [<ffffffff8103baba>] ? enqueue_entity+0xea/0x170
> > > [  360.517951]  [<ffffffff8103bc09>] ? enqueue_task_fair+0x49/0x50
> > > [  360.517954]  [<ffffffff813b8945>] wait_for_common+0xc5/0x150
> > > [  360.517957]  [<ffffffff81040430>] ? default_wake_function+0x0/0x10
> > > [  360.517959]  [<ffffffff813b8a78>] wait_for_completion+0x18/0x20
> > > [  360.517963]  [<ffffffff8113b343>] sync_inodes_sb+0x83/0x170
> > > [  360.517967]  [<ffffffff8113f890>] ? sync_one_sb+0x0/0x20
> > > [  360.517969]  [<ffffffff8113f880>] __sync_filesystem+0x80/0x90
> > > [  360.517972]  [<ffffffff8113f8ab>] sync_one_sb+0x1b/0x20
> > > [  360.517975]  [<ffffffff8111d2c7>] iterate_supers+0x77/0xc0
> > > [  360.517978]  [<ffffffff8113f7bb>] sync_filesystems+0x1b/0x20
> > > [  360.517980]  [<ffffffff8113f92c>] sys_sync+0x1c/0x40
> > > [  360.517984]  [<ffffffff81002d6b>] system_call_fastpath+0x16/0x1b
> > > 
> > > the 'make' is running on an ext4 filesystem with underlying mdadm (raid10)
> > > created from 2 Caviar RAID Edition disks. the cpu is an intel quad core
> > > Q9300 with 8GB of ddr2 ram.
> > 
> > So, how is this related to the original report?
> 
> it's easier to reproduce on a single test machine (no need to play with
> parallel make and nfs).
> in fact this isn't a 2.6.35.5->.7 regression but a 2.6.34->2.6.35 one.
> git-bisect found the first commit that introduces the 'sync' task
> blocking problem here:
> 
> commit 7c8a3554c683f512dbcee26faedb42e4c05f12fa
> Author: Jens Axboe <jens.axboe@oracle.com>
> Date:   Tue May 18 14:29:29 2010 +0200
> 
>     writeback: ensure that WB_SYNC_NONE writeback with sb pinned is sync
>     
>     Even if the writeout itself isn't a data integrity operation, we need
>     to ensure that the caller doesn't drop the sb umount sem before we
>     have actually done the writeback.
>     
>     This is a fixup for commit e913fc82.
>     
>     Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
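
The bisected commit above changes WB_SYNC_NONE writeback so that, when the superblock is pinned, the caller waits until the queued work has actually been done. As an illustration of that wait pattern only, here is a small userspace analogue using pthreads; the names are invented for the sketch and this is not the kernel's writeback code.

  /* completion-wait.c: userspace analogue of "queue writeback work, then
   * wait for it to finish" -- the pattern the bisected commit describes.
   * Illustrative only; this is not the kernel's writeback code.          */
  #include <pthread.h>
  #include <stdio.h>
  #include <unistd.h>

  struct completion {
      pthread_mutex_t lock;
      pthread_cond_t  cond;
      int             done;
  };

  struct wb_work {
      int               nr_pages;   /* pretend payload                       */
      struct completion done;       /* the caller sleeps on this             */
  };

  static void complete(struct completion *c)
  {
      pthread_mutex_lock(&c->lock);
      c->done = 1;
      pthread_cond_signal(&c->cond);
      pthread_mutex_unlock(&c->lock);
  }

  static void wait_for_completion(struct completion *c)
  {
      pthread_mutex_lock(&c->lock);
      while (!c->done)              /* in the kernel this is an uninterruptible ("D" state) sleep */
          pthread_cond_wait(&c->cond, &c->lock);
      pthread_mutex_unlock(&c->lock);
  }

  /* Stand-in for the flusher thread: do the writeback, then signal the caller. */
  static void *flusher(void *arg)
  {
      struct wb_work *work = arg;

      printf("flusher: writing back %d pages...\n", work->nr_pages);
      sleep(3);                     /* stands in for slow I/O under heavy load */
      complete(&work->done);
      return NULL;
  }

  int main(void)
  {
      struct wb_work work = { .nr_pages = 1024 };
      pthread_t tid;

      pthread_mutex_init(&work.done.lock, NULL);
      pthread_cond_init(&work.done.cond, NULL);
      work.done.done = 0;

      pthread_create(&tid, NULL, flusher, &work);

      /* Per the commit message, the caller must not return (and drop the
       * sb umount sem) before the work is really done, so it blocks here.
       * If the flusher is slow, the caller sits blocked long enough for
       * the hung-task watchdog to report it, as in the traces above.     */
      wait_for_completion(&work.done);
      printf("caller: writeback finished, safe to drop s_umount\n");

      pthread_join(tid, NULL);
      return 0;
  }

Built with 'gcc -pthread', the example returns quickly; the point is only the shape of the wait, which matches the wait_for_completion frame in the sync backtrace quoted earlier.
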
Comment 5 Rafael J. Wysocki 2010-10-20 21:38:09 UTC
First-Bad-Commit : 7c8a3554c683f512dbcee26faedb42e4c05f12fa
Comment 6 Trond Myklebust 2010-10-20 22:34:27 UTC
Adding Jens to the Cc: list since he is the author of that commit.
Comment 7 Pawel Sikora 2010-10-23 19:02:37 UTC
(In reply to comment #6)
> Adding Jens to the Cc: list since he is the author of that commit.

i see a similar report, PR18632, filed over a month ago.
it has P1 priority, is assigned to Jens, and has had no response so far :(
Comment 8 Alan 2012-08-14 11:15:12 UTC
There have been major changes to the writeback code since this was reported.
