Bug 11448 - NFS client has inconsistent write flushing to non-Linux servers
Summary: NFS client has inconsistent write flushing to non-Linux servers
Status: CLOSED OBSOLETE
Alias: None
Product: File System
Classification: Unclassified
Component: NFS
Hardware: All
OS: Linux
Importance: P1 normal
Assignee: Trond Myklebust
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2008-08-28 11:41 UTC by Doug Hughes
Modified: 2012-05-22 13:39 UTC
CC: 2 users

See Also:
Kernel Version: 2.6.22.15
Subsystem:
Regression: No
Bisected commit-id:


Attachments

Description Doug Hughes 2008-08-28 11:41:07 UTC
Latest working kernel version: N/A (works on 2.6.18 with Linux NFS server, but we cannot continue to use that kernel for various reasons)
Earliest failing kernel version: N/A (2.6.18, 2.6.24, and 2.6.25 are also known to fail for another party experiencing the same bug against non-Linux NFS servers). Not currently known to be reproducible against NetApp, but this is not authoritative (not seeing a bug does not guarantee its absence).
Distribution: CentOS 4.6
Hardware Environment: Supermicro Twin, 2 quad-core Harpertown CPUs, 16 GB RAM.
Software Environment: CentOS 4.6
Problem Description: 

NFS client writes to a Sun Solaris 10 U4 server.
At some point in time, an empty portion appears in the output file where data is missing (it shows as NULL bytes from another NFS client issuing a tail -f on the file being written).
Confirmed that the file as it exists on the NFS server is sparse, with missing bytes (not necessarily a multiple of 512 or 1024; one sample is a gap of 3818 bytes, another 1895 bytes, another 423 bytes).

If you read the entire file from the NFS client doing the writing, the unflushed writes are instantly flushed to the server, followed by an NFSv3 COMMIT operation. The data can then be seen on all other NFS clients.

If you open the file alone, no flush.
If you open and then close it, no flush.
If you open and read at the beginning of the file (far before the outstanding data), *usually* no flush (there was one case where it did).
If you read at another position in the file, no flush (other than as indicated above).
If you read at the offset where the bytes are NULL, the NFS client issues a WRITE and an NFS COMMIT to the server (truss output available).

The missing blocks may flush on their own after an undefined period of time, which can be hours. Our runs last for days.
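
To make the probing above concrete, here is a minimal sketch of such a read-at-offset probe (illustrative, not part of the original report; the file name and offset come from the command line). Per the observed behavior, a read at the suspect offset from the writing client usually forces the flush and COMMIT:

/* probe_flush.c - illustrative sketch: read at a suspected-hole offset
 * from the writing client, which per the report usually forces the
 * client to flush and COMMIT that range. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <file> <offset>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char buf[4096];
    /* read at the offset where another client sees NULL bytes */
    ssize_t n = pread(fd, buf, sizeof(buf), atoll(argv[2]));
    if (n < 0) { perror("pread"); return 1; }

    /* count zero bytes returned, to spot whether the client's cache
     * (rather than the server) supplied real data */
    int zeros = 0;
    for (ssize_t i = 0; i < n; i++)
        if (buf[i] == '\0')
            zeros++;
    printf("read %zd bytes, %d NULL\n", n, zeros);

    close(fd);
    return 0;
}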

Steps to reproduce:

A chemist running NAMD sees frequent cases of this in his output trajectory index files. We don't have an exact sequence of steps to reproduce. After I file this ticket I will be giving the ticket number to another person I know at a different company who is experiencing the same problem as described above (to the best of my knowledge).
Comment 1 Anonymous Emailer 2008-08-28 13:28:24 UTC
Reply-To: akpm@linux-foundation.org


(switched to email.  Please respond via emailed reply-to-all, not via the
bugzilla web interface).

On Thu, 28 Aug 2008 11:41:08 -0700 (PDT)
bugme-daemon@bugzilla.kernel.org wrote:

> http://bugzilla.kernel.org/show_bug.cgi?id=11448
> 
> [snip: original bug report quoted in full; see the Description above]
> 

That seems rather ugly.

2.6.22 is getting a bit old though.  It's quite possible that this was
subsequently fixed, in which case upgrading your kernel or hassling the
vendor to backport the fix would be needed.
Comment 2 Doug Hughes 2008-08-28 13:33:35 UTC
Andrew Morton wrote:
> (switched to email.  Please respond via emailed reply-to-all, not via the
> bugzilla web interface).
>
> [snip: original bug report quoted in full]
>
> That seems rather ugly.
>
> 2.6.22 is getting a bit old though.  It's quite possible that this was
> subsequently fixed, in which case upgrading your kernel or hassling the
> vendor to backport the fix would be needed.
>   

I am in the process of trying to duplicate this on 2.6.26. I need the 
chemist to change his machine (it's a kernel.org kernel for reasons of 
IB support, so there is no vendor to hassle). There is another party to 
this bug who seems to have the same symptoms on 2.6.25 and is trying to 
capture packet data and reproduce.
Comment 3 Doug Hughes 2008-08-28 14:04:12 UTC
I have snoop data on the server and strace data on a client exhibiting the 
issue right now. When sync is run on the client, it flushes the data to the 
server.
The client couldn't be simpler: open the file at the beginning of the session, 
write periodic meta-records containing some trajectory information, and close at the end.
16:42:56.143512 write(8, "1948900 47.1225 0 0 0 47.7759 0 "..., 118) = 118
16:43:01.845742 write(8, "1949000 47.0474 0 0 0 47.8865 0 "..., 116) = 116
16:43:07.481889 write(8, "1949100 47.045 0 0 0 48.0742 0 0"..., 116) = 116
16:43:13.150555 write(8, "1949200 47.1848 0 0 0 47.8868 0 "..., 116) = 116
16:43:18.788863 write(8, "1949300 47.251 0 0 0 47.7743 0 0"..., 113) = 113
16:43:24.429424 write(8, "1949400 47.2722 0 0 0 47.6937 0 "..., 118) = 118
...

When I noticed NULLs appear on the server, I issued sync on the client, 
which caused the client to flush the missing data to the server (and 
lots of other things, by the looks of it). The NFS client buffers up the 
113-118 byte writes until it reaches the write size of 32K and then 
sends them over, except for the pieces that are missing.

New data: I have discovered at least one situation where, if I do a read 
on the client doing the writing, it returns the correct information from 
the file, but that information is NOT on the NFS server; it is only 
available in the client's cached data somewhere in the kernel. Most of 
the time the request to read the data causes the data to be flushed to 
the server, but not always.

(snoop data shows the entire session up through the missing data and 
including the sync that causes the flush at the end)
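
As a per-file alternative to the system-wide sync(1) used above, a sketch along these lines could force the flush for just this file; it assumes (not verified in this report) that fsync(2) triggers the same WRITE + NFSv3 COMMIT sequence that sync did:

/* fsync_probe.c - hedged sketch: flush one file's dirty pages to the
 * NFS server. Over NFSv3 the client should follow its WRITEs with a
 * COMMIT so the data is stable on the server. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_WRONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* fsync operates on the file, not just this descriptor, so dirty
     * pages written through the application's own fd are flushed too */
    if (fsync(fd) < 0) { perror("fsync"); return 1; }

    close(fd);
    return 0;
}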
Comment 4 Doug Hughes 2008-08-29 05:54:57 UTC
Confirmed that this bug is present in the same way on 2.6.26 using the 
default async NFS mount, but so far it does not show up when the sync mount option is used.
Comment 5 Doug Hughes 2008-08-29 07:53:27 UTC
Confirmed the bug exists on 2.6.26 with async NFS; it has not appeared with the sync mount option. I have a packet trace including a series of correct writes, some missing bytes, and then a sync issued to flush the rest.

The uncommitted data blocks may exist on the client for many hours or days before being flushed to the server.
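
For reference, a sketch of what the sync-mounted configuration looks like through mount(2); in practice this is simply mount -t nfs -o sync,... on the command line. The server name, export path, and address here are hypothetical:

/* mount_sync.c - hedged sketch: mount the export with "sync", the
 * configuration under which the bug has not reproduced. Needs root. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* generic options are mount(2) flags; MS_SYNCHRONOUS is "sync" */
    unsigned long flags = MS_SYNCHRONOUS | MS_NOATIME | MS_NOSUID |
                          MS_NODEV | MS_DIRSYNC;

    /* NFS-specific options go in the data string; the kernel does not
     * resolve hostnames, so mount(8) normally resolves the server and
     * passes it as addr= (a hypothetical address here) */
    const char *opts = "addr=192.0.2.1,tcp,timeo=100,"
                       "rsize=32768,wsize=32768,hard,intr";

    if (mount("server:/export", "/mnt/scratch", "nfs", flags, opts) < 0) {
        perror("mount");
        return 1;
    }
    return 0;
}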
Comment 6 bfields 2008-08-29 10:08:51 UTC
On Thu, Aug 28, 2008 at 01:27:53PM -0700, Andrew Morton wrote:
> 
> (switched to email.  Please respond via emailed reply-to-all, not via the
> bugzilla web interface).
> 
> On Thu, 28 Aug 2008 11:41:08 -0700 (PDT)
> bugme-daemon@bugzilla.kernel.org wrote:
> > NFS client writes to Sun Solaris 10 U4 server. 
> > at some point in time, there is an empty portion of the output file from the
> > writer containing missing data (shows as NULL bytes from another NFS client
> > issuing a tail -f on the file being written). 
> > confirmed that the file as exists on the NFS server is sparse, missing bytes
> > (not necessarily multiple of 512 or 1024, one sample is a gap of 3818 bytes,
> > another is 1895 bytes, another is 423 bytes)

Seems like something that could happen if for example two write rpc's
got reordered on the network.  That's not necessarily a bug--the nfs
client isn't required to wait for confirmation of every previous write
before sending the next one.

However if the client isn't flushing dirty data to the server before
returning from close, then that's a violation of NFS's close-to-open
semantics:...

> > 
> > if you do a read of the entire file from the NFS client doing the writing, it
> > causes the non-flushed writes to be instantly flushed to the server followed by
> > a NFS3 commit operation. The data then can be seen on all other NFS clients.
> > 
> > If you do an open of the file alone, no flush
> > if you do an open and a close, no flush

... so this "close, no flush" could be a bug (depending on who is doing
that close when--I don't completely understand the described situation).

--b.
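
To illustrate the close-to-open contract referenced here: by the time close() returns on the writing client, the dirty data must have been flushed (and committed) to the server, so that a subsequent open() on any other client observes it. A hedged sketch, with a hypothetical path and record:

/* cto_writer.c - sketch of the contract: close() must not return until
 * the written data is on the server; a "close, no flush" as described
 * above would violate close-to-open semantics. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/scratch/traj.idx",
                  O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return 1; }

    const char rec[] = "1948900 47.1225 0 0 0 47.7759 0 0 0 0 0 0\n";
    if (write(fd, rec, strlen(rec)) < 0) { perror("write"); return 1; }

    /* on NFS this close flushes outstanding writes and reports any
     * write errors; only after it returns may another client rely on
     * open() showing the data */
    if (close(fd) < 0) { perror("close"); return 1; }
    return 0;
}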
Comment 7 Anonymous Emailer 2008-08-29 10:14:19 UTC
Reply-To: staubach@redhat.com

J. Bruce Fields wrote:
> On Thu, Aug 28, 2008 at 01:27:53PM -0700, Andrew Morton wrote:
>   
>> [snip: earlier mail and original bug report quoted]
>
> Seems like something that could happen if for example two write rpc's
> got reordered on the network.  That's not necessarily a bug--the nfs
> client isn't required to wait for confirmation of every previous write
> before sending the next one.
>
> However if the client isn't flushing dirty data to the server before
> returning from close, then that's a violation of NFS's close-to-open
> semantics:...
>
>   
>>> [snip: quoted report text]
>
> ... so this "close, no flush" could be a bug (depending on who is doing
> that close when--I don't completely understand the described situation).

I suspect that this last might depend upon 1) what options were used
when the file system was mounted and 2) how the file was opened.  The
flush-on-close wouldn't be needed if the file was opened read-only.

It seems a little odd that the holes aren't page-aligned or
page-sized multiples.

What application is being used to generate the file which is showing
these holes?

    Thanx...

       ps
Comment 8 Doug Hughes 2008-08-29 10:24:06 UTC
Peter Staubach wrote:
> J. Bruce Fields wrote:
>> On Thu, Aug 28, 2008 at 01:27:53PM -0700, Andrew Morton wrote:
>> [snip: earlier mail and original bug report quoted]
>>
>> Seems like something that could happen if for example two write rpc's
>> got reordered on the network.  That's not necessarily a bug--the nfs
>> client isn't required to wait for confirmation of every previous write
>> before sending the next one.
>>
If two RPCs got reordered on the network, and they encompass all the 
data, then there shouldn't be any missing data. It seems to me that 
pieces of data are just being skipped, for whatever reason, but I 
haven't exhaustively examined the NFS network data.

>> However if the client isn't flushing dirty data to the server before
>> returning from close, then that's a violation of NFS's close-to-open
>> semantics:...
>>
This is not confirmed yet; we have no solid cases of data not being present 
after close.
>>  
>>>> [snip: quoted report text]
>>
>> ... so this "close, no flush" could be a bug (depending on who is doing
>> that close when--I don't completely understand the described situation).
>
> I suspect that this last might depend upon 1) what options were used
> when the file system was mounted and 2) how the file was opened.  The
> flush-on-close wouldn't be needed if the file was opened read-only.
>
No special options on open. Here are the mount options:
retry=1000,tcp,noatime,nosuid,nodev,dirsync,timeo=100,rsize=32768,wsize=32768,hard,intr


> It seems a little odd that the holes aren't page aligned or page
> sized multiples.
>
Indeed. And the time for them to actually get to the server is 
indeterminate (days is not uncommon; we have not yet confirmed whether 
some of the data never gets sent to the server until close).

> What application is being used to generate the file which is showing
> these holes?
>
NAMD and some custom code developed in-house for chemistry research (at 
the very least).
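
A rough diagnostic for the alignment question: scan a copy of the file for runs of NULL bytes and report each run's offset, length, and whether it is page-aligned. NULL runs can of course be legitimate data, so this is only a heuristic, and the helper is illustrative rather than something from the report:

/* hole_scan.c - heuristic sketch: find NULL-byte runs and report
 * whether each run is aligned to the 4096-byte page size. */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    long off = 0, start = -1;
    int c;
    while ((c = fgetc(f)) != EOF) {
        if (c == '\0') {
            if (start < 0)
                start = off;            /* a NULL run begins here */
        } else if (start >= 0) {
            long len = off - start;
            printf("hole? offset=%ld len=%ld %s\n", start, len,
                   (start % 4096 == 0 && len % 4096 == 0)
                       ? "(page-aligned)" : "(NOT page-aligned)");
            start = -1;
        }
        off++;
    }
    if (start >= 0)                     /* run extends to end of file */
        printf("hole? offset=%ld len=%ld (at EOF)\n", start, off - start);

    fclose(f);
    return 0;
}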
Comment 9 Anonymous Emailer 2008-08-29 10:54:02 UTC
Reply-To: staubach@redhat.com

Doug Hughes wrote:
> [snip: earlier discussion quoted in full]
>
>> What application is being used to generate the file which is showing
>> these holes?
>>
> namd and some custom code developed in-house for chemistry research 
> (at the very least) 

Do these applications use mmap(), or do they generate the file contents
serially or randomly?

    Thanx...

       ps
Comment 10 Doug Hughes 2008-08-29 11:27:53 UTC
Peter Staubach wrote:
> [snip: earlier discussion quoted in full]
>
> Do these applications use mmap() or generate the file contents
> serially or randomly?
>
>    Thanx...
>
>  
Open the file at the beginning; write, write, write, write, write (no seek, no 
offset, entirely serial); run for a very long time; end.

strace excerpt:
16:42:56.143512 write(8, "1948900 47.1225 0 0 0 47.7759 0 "..., 118) = 118
16:43:01.845742 write(8, "1949000 47.0474 0 0 0 47.8865 0 "..., 116) = 116
16:43:07.481889 write(8, "1949100 47.045 0 0 0 48.0742 0 0"..., 116) = 116
16:43:13.150555 write(8, "1949200 47.1848 0 0 0 47.8868 0 "..., 116) = 116
16:43:18.788863 write(8, "1949300 47.251 0 0 0 47.7743 0 0"..., 113) = 113
16:43:24.429424 write(8, "1949400 47.2722 0 0 0 47.6937 0 "..., 118) = 118
16:43:30.057179 write(8, "1949500 47.4865 0 0 0 47.6251 0 "..., 117) = 117
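
A minimal sketch of this workload as described: open once, append small records serially, never seek, run for a long time. The record contents, cadence, and path are illustrative, not the actual NAMD code:

/* writer_repro.c - sketch of the reported writer pattern: small serial
 * appends every few seconds, matching the strace excerpt above. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/scratch/traj.idx",
                  O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return 1; }

    for (long step = 1948900; ; step += 100) {
        char rec[128];
        int len = snprintf(rec, sizeof(rec),
                           "%ld 47.1225 0 0 0 47.7759 0 0 0 0 0 0\n", step);
        /* the trace shows ~113-118 byte writes, one every 5-6 seconds */
        if (write(fd, rec, len) != len) { perror("write"); break; }
        sleep(5);
    }

    close(fd);
    return 0;
}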
Comment 11 Alan 2012-05-22 13:39:01 UTC
Closing as obsolete. Please reopen this bug against a modern kernel if this is incorrect.
