When you open a file on an NFS-mounted partition with the O_DIRECT flag, reading more than 2^24 bytes at a time fails. The read does not simply return fewer bytes than requested (which would be reasonable); it fails with E2BIG, which makes programs think an unrecoverable error occurred.

The bug is trivially reproducible in a few lines of Python (easily converted to C if necessary):

$ python
>>> import os
>>> os.open('/path/to/nfs/file', os.O_RDONLY | os.O_DIRECT)
3
>>> s = os.read(3, 2**24)
>>> len(s)
16777216
>>> s = os.read(3, 2**24 + 1)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
OSError: [Errno 27] File too large

The expected result is, of course, a successful read. It is perfectly acceptable for the read to return fewer bytes than requested, but not to return an error. Note that if you remove O_DIRECT, or if you try this on a file that is not NFS-mounted, the read works perfectly.

Also note that this is not an academic issue: real-world programs can and do trip over it. For example, I got a cryptic error while burning a DVD from k3b and eventually tracked it down to this problem: growisofs opened the image file in O_DIRECT mode and attempted to read a large amount of data (to fill its 32M buffer) at once. Upon receiving E2BIG, it bailed out.
Known bug. Will be fixed in 2.6.18.