Bug 99461 - recvfrom SYSCALL infinite loop/deadlock chewing 100% CPU [was __libc_recv (fd=fd@entry=300, buf=buf@entry=0x7f6042880600, n=n@entry=5, flags=-1, flags@entry=258) at ../sysdeps/unix/sysv/linux/x86_64/recv.c:33]
Status: NEW
Alias: None
Product: Networking
Classification: Unclassified
Component: Other
Hardware: x86-64 Linux
Importance: P1 high
Assignee: Stephen Hemminger
Depends on:
Reported: 2015-06-05 12:39 UTC by Dan Searle
Modified: 2016-02-15 20:29 UTC
CC: 3 users

See Also:
Kernel Version: 3.13.0
Regression: No
Bisected commit-id:


Description Dan Searle 2015-06-05 12:39:38 UTC
This is a repost of a bug I reported initially to the GNU libc bug list (https://sourceware.org/bugzilla/show_bug.cgi?id=18493).

I was advised by Andreas Schwab that the __libc_recv function is just a thin wrapper around the recvfrom system call, and to report this to "the kernel people", which I assume is you people.

Here's a summary of the problem:

In a multi-threaded pthreads process running on Ubuntu 14.04 AMD64 (with over 1000 threads) which uses real-time FIFO scheduling, we occasionally see calls to recv() with flags (MSG_PEEK | MSG_WAITALL) get stuck in an infinite loop or deadlock. The affected threads lock up inside recv(), chewing as much CPU as they can (due to FIFO scheduling).

Here's an example gdb back trace:

[Switching to thread 4 (Thread 0x7f6040546700 (LWP 27251))]
#0  0x00007f6231d2f7eb in __libc_recv (fd=fd@entry=146, buf=buf@entry=0x7f6040543600, n=n@entry=5, flags=-1, flags@entry=258) at ../sysdeps/unix/sysv/linux/x86_64/recv.c:33
33      ../sysdeps/unix/sysv/linux/x86_64/recv.c: No such file or directory.
(gdb) bt
#0  0x00007f6231d2f7eb in __libc_recv (fd=fd@entry=146, buf=buf@entry=0x7f6040543600, n=n@entry=5, flags=-1, flags@entry=258) at ../sysdeps/unix/sysv/linux/x86_64/recv.c:33
#1  0x0000000000421945 in recv (__flags=258, __n=5, __buf=0x7f6040543600, __fd=146) at /usr/include/x86_64-linux-gnu/bits/socket2.h:44

The socket is a TCP socket in blocking mode. The recv() call is inside an outer loop with a counter; I've checked the counter with gdb and it's always at 1, so I'm sure the outer loop isn't the problem: the thread is indeed deadlocked inside the recv() internals.

Other notes:
* There always seem to be two or more threads deadlocked in the same place (the same recv() call, but with distinct FDs).
* The threads calling recv() have cancellation disabled, having previously executed: pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, NULL);

I've even tried adding a poll() call for POLLRDNORM on the socket before calling recv() with the MSG_PEEK | MSG_WAITALL flags, to make sure there's data available on the socket before recv() runs, but it makes no difference.
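The poll()-before-recv() attempt can be sketched as below. This is a hypothetical helper (poll_then_peek is not a name from the report), and it polls for POLLIN as the portable stand-in for the POLLRDNORM event the report mentions:

```c
#include <poll.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Wait until the socket reports readable data, then peek with the
 * same flag combination the report uses.  Returns the number of
 * bytes peeked, or -1 on poll timeout/error or recv error. */
static ssize_t poll_then_peek(int fd, void *buf, size_t n, int timeout_ms)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };

    if (poll(&pfd, 1, timeout_ms) <= 0 || !(pfd.revents & POLLIN))
        return -1;

    /* Even with data reported readable, the report says this recv()
     * could still wedge in the kernel on the affected 3.13 kernels. */
    return recv(fd, buf, n, MSG_PEEK | MSG_WAITALL);
}
```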

So, I don't know what is wrong here. I've read all the recv() documentation and believe that recv() is being used correctly; the only conclusion I can come to is that there is a bug in libc recv() when using the MSG_PEEK | MSG_WAITALL flags with thousands of pthreads running.
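For reference, the flag combination in question behaves as follows on an unaffected system. This is a minimal sketch using a local stream socket pair as a stand-in for the TCP socket (demo_peek_waitall is a hypothetical name, not code from the report); note that MSG_PEEK | MSG_WAITALL is 0x102 == 258, matching the flags@entry=258 value in the backtrace:

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Peek at five bytes with MSG_PEEK | MSG_WAITALL, then verify the
 * peek did not consume them.  Returns 0 on success, -1 on failure. */
static int demo_peek_waitall(void)
{
    int sv[2];
    char peeked[5], read_back[5];

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return -1;
    if (write(sv[1], "hello", 5) != 5)
        return -1;

    /* MSG_WAITALL blocks until all five bytes are available;
     * MSG_PEEK leaves them queued for a later read. */
    if (recv(sv[0], peeked, 5, MSG_PEEK | MSG_WAITALL) != 5)
        return -1;

    /* A plain recv() must still see the same five bytes. */
    if (recv(sv[0], read_back, 5, 0) != 5)
        return -1;

    close(sv[0]);
    close(sv[1]);
    return memcmp(peeked, read_back, 5) == 0 ? 0 : -1;
}
```

The bug report is that on 3.13 kernels, under heavy thread counts, the first recv() above can spin in the kernel instead of returning.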
Comment 1 hannes 2015-06-08 11:33:14 UTC
We track this here:

Comment 2 Dan Searle 2015-06-08 11:40:58 UTC
Is there any way to work around this issue? I guess we could try to rework the user-space code so that it does not call recv() with both MSG_WAITALL and MSG_PEEK. Not ideal, but waiting for a fix in the kernel might not be an option, as it's affecting our business.
Comment 3 Dan Searle 2015-06-11 08:18:23 UTC
I have worked around the issue in user space for now by not using MSG_WAITALL (while still using MSG_PEEK), with an outer loop around recv() plus a sleep() and a counter to retry the recv() call a set number of times before timing out.
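The workaround described above can be sketched roughly as follows, assuming the goal is to see n bytes without consuming them (peek_with_retry is a hypothetical name and the 1-second back-off is an assumption, not details from the report):

```c
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Drop MSG_WAITALL, keep MSG_PEEK, and retry a bounded number of
 * times instead of letting the kernel block for the full length.
 * Returns the number of bytes peeked (>= n), or -1 on error/timeout. */
static ssize_t peek_with_retry(int fd, void *buf, size_t n, int max_tries)
{
    for (int i = 0; i < max_tries; i++) {
        /* Without MSG_WAITALL, recv() may return fewer than n bytes;
         * MSG_PEEK leaves whatever arrived in the receive queue. */
        ssize_t got = recv(fd, buf, n, MSG_PEEK);
        if (got < 0)
            return -1;          /* real socket error */
        if ((size_t)got >= n)
            return got;         /* full length is now visible */
        sleep(1);               /* incomplete: back off and retry */
    }
    return -1;                  /* timed out after max_tries */
}
```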
Comment 4 Dan Searle 2015-07-24 11:22:17 UTC
Is there anyone working on a fix for this bug? Is there any way a fix can be expedited?
Comment 5 Sabrina Dubroca 2015-07-31 15:03:47 UTC
This bug is now fixed in the net tree:
Comment 6 Dan Searle 2015-07-31 15:04:31 UTC
Many thanks!
