Bug 205579 - rfcomm causes very high cpu usage during serial connectivity
Summary: rfcomm causes very high cpu usage during serial connectivity
Status: NEW
Alias: None
Product: Networking
Classification: Unclassified
Component: Other
Hardware: All
OS: Linux
Importance: P1 normal
Assignee: Stephen Hemminger
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2019-11-19 09:52 UTC by epigramx
Modified: 2019-11-19 09:52 UTC
CC List: 0 users

See Also:
Kernel Version: Everything
Subsystem:
Regression: No
Bisected commit-id:


Attachments

Description epigramx 2019-11-19 09:52:12 UTC
This appears to be a long-standing issue that is mentioned only in old Debian bug reports and current Raspbian discussions, but it remains important for embedded systems, which have limited CPU power.

Sources:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=565485
https://www.raspberrypi.org/forums/viewtopic.php?t=181796

Issue Description:

"I use 'rfcomm listen' in combination with 'gpspipe' to relay GPS data
from my eeePC to my Blackberry.  I noticed, however, that the CPU
overhead from 'rfcomm listen' is quite significant -- over 10% on my
eeePC 901.

Running 'strace' reveals that rfcomm is in a tight loop between
'waitpid' (with WNOHANG) and 'ppoll' (with a 200 nanosecond(!!)
timeout).  This is evidently the source of the extreme CPU usage.

If it's critical that rfcomm know the child has died ASAP, I would
expect it should be watching for the SIGCHLD signal rather than trying
to use waitpid().  If it's not all that critical, it probably doesn't
need to be watching for child death at a rate of five million checks
per second. :)"
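
For illustration, a minimal sketch of the loop that strace reveals (an assumed reconstruction, not the actual rfcomm source; busy_wait, child, and fd are hypothetical names):

#define _GNU_SOURCE
#include <poll.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <time.h>

/* waitpid(WNOHANG) never blocks, and a 200 ns ppoll timeout expires
 * almost immediately, so this loop spins on the order of five
 * million iterations per second while the connection is idle. */
static void busy_wait(pid_t child, int fd)
{
    struct pollfd p = { .fd = fd, .events = POLLIN };
    struct timespec ts = { .tv_sec = 0, .tv_nsec = 200 };

    for (;;) {
        if (waitpid(child, NULL, WNOHANG) > 0)
            break;                      /* child exited */
        ppoll(&p, 1, &ts, NULL);        /* 200 ns timeout */
        /* ... service the fd when POLLIN is set ... */
    }
}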

A similar report from a Raspberry Pi forum thread:

"I am experimenting with a serial connection to Pi Zero W using the on-board Bluetooth module. I can successfully establish a connection and send data to Pi. However as soon as the connection starts, the rfcomm process consumes almost 50% of the processor (as top shows). This noticeably slows down other processes."

The issue remains in current code.
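
One possible shape of the SIGCHLD-based approach the first reporter suggests, sketched with ppoll's atomic signal-mask swap so the process sleeps until data or a child exit arrives (an assumed illustration, not a tested patch against rfcomm; event_wait, child, and fd are hypothetical names):

#define _GNU_SOURCE
#include <errno.h>
#include <poll.h>
#include <signal.h>
#include <stddef.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>

static void on_sigchld(int sig)
{
    (void)sig;                          /* only needs to interrupt ppoll */
}

static void event_wait(pid_t child, int fd)
{
    struct pollfd p = { .fd = fd, .events = POLLIN };
    struct sigaction sa;
    sigset_t blocked, orig;

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_sigchld;
    sigaction(SIGCHLD, &sa, NULL);

    /* Keep SIGCHLD blocked outside ppoll so a child exit cannot be
     * missed between the waitpid check and going to sleep. */
    sigemptyset(&blocked);
    sigaddset(&blocked, SIGCHLD);
    sigprocmask(SIG_BLOCK, &blocked, &orig);

    /* Catch a child that exited before SIGCHLD was blocked. */
    if (waitpid(child, NULL, WNOHANG) > 0)
        goto done;

    for (;;) {
        /* NULL timeout: sleep until fd activity or SIGCHLD, with
         * ppoll atomically unblocking SIGCHLD while it waits. */
        int n = ppoll(&p, 1, NULL, &orig);

        if (n < 0 && errno == EINTR) {
            if (waitpid(child, NULL, WNOHANG) > 0)
                break;                  /* child exited */
            continue;
        }
        if (n > 0 && (p.revents & (POLLHUP | POLLERR)))
            break;                      /* peer hung up */
        /* ... service the fd when POLLIN is set ... */
    }
done:
    sigprocmask(SIG_SETMASK, &orig, NULL);
}

With no timeout and SIGCHLD delivered only inside ppoll, the process stays asleep between events instead of waking millions of times per second.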
