Bug 13602 - some HFS CD-ROMs can't be read
Summary: some HFS CD-ROMs can't be read
Status: CLOSED WILL_NOT_FIX
Alias: None
Product: IO/Storage
Classification: Unclassified
Component: SCSI
Hardware: All
OS: Linux
Importance: P1 normal
Assignee: linux-scsi@vger.kernel.org
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2009-06-22 15:20 UTC by Bill McGonigle
Modified: 2012-06-12 09:47 UTC
3 users

See Also:
Kernel Version: 2.6.34.7-56.fc13.x86_64
Subsystem:
Regression: No
Bisected commit-id:


Attachments

Description Bill McGonigle 2009-06-22 15:20:49 UTC
I recently attempted to copy a set of HFS CDs (c. 1997) on Linux by mounting them:
  -t hfs -o session=1

25 of the 28 CDs worked perfectly.  Three had problems mounting.  Errors look like:

  hfs: unable to set blocksize to 512
  hfs: can't find a HFS filesystem on dev sr0.

The CDs mount and copy fine under OS X 10.4.11.  I've put the first meg of each CD here:

  http://bfccomputing.com/downloads/linux/badhfs/18.dd  
  http://bfccomputing.com/downloads/linux/badhfs/20.dd
  http://bfccomputing.com/downloads/linux/badhfs/25.dd

in case that might help somebody see what's up.  I can provide more info if needed.
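Not part of the original report, but for anyone poking at those .dd dumps: classic HFS keeps its Master Directory Block at byte offset 1024, with the big-endian signature 0x4244 ("BD"); HFS+ uses 0x482B ("H+"). A minimal sketch of checking a dump for that signature (the function name and the synthetic buffer below are my own, not from the bug):

```python
import struct

def hfs_signature(image_bytes: bytes) -> str:
    """Classify a volume dump by the 16-bit signature at byte offset 1024."""
    sig = struct.unpack(">H", image_bytes[1024:1026])[0]
    return {0x4244: "HFS", 0x482B: "HFS+"}.get(sig, "unknown (%#06x)" % sig)

# Synthetic 2 KiB buffer standing in for the start of one of the dumps:
fake = bytearray(2048)
fake[1024:1026] = b"BD"
print(hfs_signature(bytes(fake)))  # HFS
```

Running this over the three posted dumps would at least confirm whether the kernel is failing to find a signature that is really there.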
Comment 1 Christoph Hellwig 2010-10-01 14:40:47 UTC
The message means hfs thinks the filesystem has 512-byte blocks, while the CD-ROM driver only supports larger sector sizes.  CD-ROMs with block sizes smaller than 2k are rather uncommon, and it seems like your driver doesn't support them.
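The mismatch Christoph describes can be sketched in a few lines: the kernel refuses a filesystem block size that is smaller than the device's logical block size (this mirrors the checks in the kernel's set_blocksize(); the function and constant names here are illustrative, not the actual kernel code):

```python
PAGE_SIZE = 4096  # assumed x86-64 page size

def can_set_blocksize(requested: int, device_logical: int) -> bool:
    """Mirror the sanity checks applied when a filesystem sets a block size."""
    # Must be a power of two between 512 bytes and one page...
    if requested < 512 or requested > PAGE_SIZE:
        return False
    if requested & (requested - 1):
        return False
    # ...and never smaller than the device's logical block size.
    return requested >= device_logical

# HFS wants 512-byte blocks, but this drive reports 2048-byte sectors:
print(can_set_blocksize(512, 2048))   # False -> "unable to set blocksize to 512"
print(can_set_blocksize(2048, 2048))  # True
```

The loop-device workaround in comment 3 works because the loop driver presents a 512-byte logical block size and does the larger-granularity I/O against /dev/sr0 itself.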
Comment 2 Vlad Codrea 2010-10-21 04:16:27 UTC
So is this a libata bug? It seems that a lot of other people have encountered it as well:

http://www.google.com/search?q="unable+to+set+blocksize+to"
Comment 3 Christoph Hellwig 2010-10-21 04:19:03 UTC
I think it's actually the SCSI cdrom driver.  Btw, using a loop device on top of the CD-ROM block device node should work around this issue.
Comment 4 Anonymous Emailer 2010-10-28 16:06:44 UTC
Reply-To: James.Bottomley@suse.de

On Thu, 2010-10-28 at 12:53 +0000, bugzilla-daemon@bugzilla.kernel.org
wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=13602
> 
> 
> Christoph Hellwig <hch@lst.de> changed:
> 
>            What    |Removed                     |Added
> ----------------------------------------------------------------------------
>           Component|HFS/HFSPLUS                 |SCSI
>          AssignedTo|hch@lst.de                  |linux-scsi@vger.kernel.org
>             Product|File System                 |IO/Storage

Why is this actually a bug?  Linux has never supported filesystem block
sizes smaller than device ones ... we don't have the necessary RMW code
for it.  The sr driver supports 512-byte sectors but *only* if the CD
was burned with them (which isn't always possible, depending on the CD
burner) and the reader supports them.  I'd need the dmesgs to be sure,
but I'd guess READ_CAPACITY on this CD returns 2048 as the block size.

James
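James's guess is easy to check by hand: a SCSI READ CAPACITY(10) response is eight bytes, the last LBA followed by the block length, both big-endian 32-bit. A hypothetical decoder (the sample response bytes below are made up for illustration):

```python
import struct

def parse_read_capacity10(data: bytes):
    """Decode a READ CAPACITY(10) response into (block count, block size)."""
    last_lba, block_len = struct.unpack(">II", data[:8])
    return last_lba + 1, block_len

# Made-up response for a ~650 MB disc reporting 2048-byte sectors:
resp = struct.pack(">II", 332799, 2048)
blocks, block_size = parse_read_capacity10(resp)
print(blocks, block_size)  # 332800 2048
```

A block size of 2048 here would confirm that the drive itself, not the hfs code, is what rules out 512-byte access.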
Comment 5 Bill McGonigle 2010-10-28 16:40:30 UTC
@Christoph: good workaround.  I can losetup my /dev/sr0 and mount the discs (tested 1 anyway).  I need to omit the session=1 option on the loopback but it does mount that way.  Just to be sure, I confirmed sr0 won't mount without the session option.  Not sure what would happen if the data were on session=2.

For the HFS vs. SCSI question, I'm not clear why 22 out of 25 discs in this set work OK but the remaining 3 would not.  They're factory-stamped discs, no idea how they were mastered.

The only dmesg I get is:

 hfs: unable to set blocksize to 512
 hfs: can't find a HFS filesystem on dev sr0.

This is now kernel 2.6.34.7-56.fc13.x86_64.

Not sure if it's helpful, but here are the /sys sizes:

[bfccomputing@zpm sr0]$ more `pwd`/queue/*size
::::::::::::::
/sys/block/sr0/queue/hw_sector_size
::::::::::::::
2048
::::::::::::::
/sys/block/sr0/queue/logical_block_size
::::::::::::::
2048
::::::::::::::
/sys/block/sr0/queue/max_segment_size
::::::::::::::
65536
::::::::::::::
/sys/block/sr0/queue/minimum_io_size
::::::::::::::
2048
::::::::::::::
/sys/block/sr0/queue/optimal_io_size
::::::::::::::
0
::::::::::::::
/sys/block/sr0/queue/physical_block_size
::::::::::::::
2048

I can't even say this is a feature implemented in Mac OS X for compatibility that Linux lacks - for all I know, it does some sort of loopback mount automagically when certain kinds of discs are detected.

I don't know if there are any Linux equivalents for having the kernel do something similar; one could imagine userland machinations that could do the right thing, though perhaps costly.
Comment 6 Christoph Hellwig 2010-10-31 21:17:53 UTC
Old SGI, Sun, and apparently Mac systems did support CD-ROMs with 512-byte sectors.  In MMC-5 there is only a vague reference to them being supported, via the deprecated block descriptors taken from older SPC specs, in some drives.

Searching on the internet implies you need very specific CD-ROM drives to read the SGI/Sun 512-byte-sector CD-ROMs.

I think we can safely say there is no way to properly support this in Linux, and we can close this bug.
