Bug 13602
Summary: | some HFS CD-ROMs can't be read | | |
---|---|---|---|
Product: | IO/Storage | Reporter: | Bill McGonigle (bill-osdl.org-bugzilla) |
Component: | SCSI | Assignee: | linux-scsi (linux-scsi) |
Status: | CLOSED WILL_NOT_FIX | | |
Severity: | normal | CC: | alan, hch, vladc6 |
Priority: | P1 | | |
Hardware: | All | | |
OS: | Linux | | |
Kernel Version: | 2.6.34.7-56.fc13.x86_64 | Subsystem: | |
Regression: | No | Bisected commit-id: | |
Description
Bill McGonigle 2009-06-22 15:20:49 UTC
The message means hfs thinks the filesystem has 512-byte blocks, while the CD-ROM driver only supports sector sizes larger than that. CD-ROMs with block sizes smaller than 2k are rather uncommon, and it seems your driver doesn't support them.

---

So is this a libata bug? It seems that a lot of other people have encountered it as well: http://www.google.com/search?q="unable+to+set+blocksize+to"

---

I think it's actually the SCSI cdrom driver. By the way, using a loop device on top of the CD-ROM block device node should work around this issue.

---

Reply-To: James.Bottomley@suse.de

On Thu, 2010-10-28 at 12:53 +0000, bugzilla-daemon@bugzilla.kernel.org wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=13602
>
> Christoph Hellwig <hch@lst.de> changed:
>
>            What  |Removed      |Added
> ----------------------------------------------------------
>       Component  |HFS/HFSPLUS  |SCSI
>      AssignedTo  |hch@lst.de   |linux-scsi@vger.kernel.org
>         Product  |File System  |IO/Storage

Why is this actually a bug? Linux has never supported filesystem block sizes smaller than device ones ... we don't have the necessary RMW code for it.

The sr driver supports 512-byte sectors, but *only* if the CD was burned with them (which isn't always possible, depending on the CD burner) and the reader supports them. I'd need the dmesgs to be sure, but I'd guess READ_CAPACITY on this CD returns 2048 as the block size.

James

---

@Christoph: good workaround. I can losetup my /dev/sr0 and mount the discs (tested one, anyway). I need to omit the session=1 option on the loopback, but it does mount that way. Just to be sure, I confirmed sr0 won't mount without the session option. Not sure what would happen if the data were on session=2.

For the HFS vs. SCSI question, I'm not clear why 22 out of 25 discs in this set work OK but the remaining 3 do not. They're factory-stamped discs; no idea how they were mastered. The only dmesg I get is:

```
hfs: unable to set blocksize to 512
hfs: can't find a HFS filesystem on dev sr0.
```
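James's point (the kernel rejects a filesystem block size smaller than the device's logical block size) and the reason the losetup workaround succeeds (loop devices expose 512-byte logical blocks by default) can be sketched as a small simulation. This is illustrative Python, not kernel code; the `BlockDevice` class and `set_blocksize` function are hypothetical stand-ins, and the 2048- and 512-byte sizes follow the values reported in this bug.

```python
# Illustrative simulation of the kernel's set_blocksize() constraint:
# a filesystem may not use a block size smaller than the device's
# logical block size. (Hypothetical sketch, not the real kernel code.)

class BlockDevice:
    def __init__(self, name, logical_block_size):
        self.name = name
        self.logical_block_size = logical_block_size

def set_blocksize(bdev, size):
    """Mimics the check that makes hfs print 'unable to set blocksize to 512'."""
    # Must be a power of two in a sane range...
    if size < 512 or size > 4096 or (size & (size - 1)) != 0:
        raise OSError(f"{bdev.name}: invalid blocksize {size}")
    # ...and must not be smaller than the device's logical block size.
    if size < bdev.logical_block_size:
        raise OSError(f"{bdev.name}: unable to set blocksize to {size}")
    return size

sr0 = BlockDevice("sr0", 2048)     # READ_CAPACITY reports 2048-byte sectors
loop0 = BlockDevice("loop0", 512)  # loop devices default to 512-byte sectors

try:
    set_blocksize(sr0, 512)        # what hfs attempts on sr0: fails
except OSError as e:
    print(e)

print(set_blocksize(loop0, 512))   # the losetup workaround: succeeds
```

This also explains why the same disc mounts through `/dev/loop0` but not `/dev/sr0`: the bytes are identical, only the advertised logical block size differs.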
This is now kernel 2.6.34.7-56.fc13.x86_64. Not sure if it's helpful, but here are the /sys sizes:

```
[bfccomputing@zpm sr0]$ more `pwd`/queue/*size
::::::::::::::
/sys/block/sr0/queue/hw_sector_size
::::::::::::::
2048
::::::::::::::
/sys/block/sr0/queue/logical_block_size
::::::::::::::
2048
::::::::::::::
/sys/block/sr0/queue/max_segment_size
::::::::::::::
65536
::::::::::::::
/sys/block/sr0/queue/minimum_io_size
::::::::::::::
2048
::::::::::::::
/sys/block/sr0/queue/optimal_io_size
::::::::::::::
0
::::::::::::::
/sys/block/sr0/queue/physical_block_size
::::::::::::::
2048
```

I can't even say this is a feature implemented in Mac OS X for compatibility that Linux lacks; for all I know, it does some sort of loopback mount automagically if certain kinds of discs are detected. I don't know if there are any Linux equivalents for doing something similar in the kernel; one could imagine userland machinations that could do the right thing, though perhaps costly.

---

Old SGI, Sun, and apparently Mac systems did support CD-ROMs with 512-byte sectors. In MMC-5 there is only a vague reference to them being supported by the deprecated block descriptors, taken from older SPC specs, in some drives. Searching on the internet implies you need very specific CD-ROM drives to use the SGI/Sun 512-byte-sector CD-ROMs. I think we can safely say there is no way to properly support this in Linux, and we can close this bug.
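For context on the "necessary RMW code" James mentioned earlier: supporting a 512-byte filesystem block on a device that only does 2048-byte-sector I/O would require a read-modify-write cycle for every sub-sector write. A hypothetical sketch in Python, using a `bytearray` as a stand-in for the device (real hardware would only accept whole-sector transfers):

```python
# Hypothetical read-modify-write (RMW) sketch: updating a 512-byte
# filesystem block on a device whose smallest I/O unit is 2048 bytes.

SECTOR = 2048    # device logical block size (as reported for sr0 here)
FS_BLOCK = 512   # block size the HFS filesystem wants

def write_fs_block(device, block_no, data):
    """Write one 512-byte filesystem block via a full-sector RMW cycle."""
    assert len(data) == FS_BLOCK
    byte_off = block_no * FS_BLOCK
    start = (byte_off // SECTOR) * SECTOR    # start of the enclosing sector
    within = byte_off % SECTOR               # offset inside that sector

    sector = bytearray(device[start:start + SECTOR])   # READ the whole sector
    sector[within:within + FS_BLOCK] = data            # MODIFY just 512 bytes
    device[start:start + SECTOR] = sector              # WRITE the sector back

device = bytearray(4 * SECTOR)               # an 8 KiB "disc"
write_fs_block(device, 5, b"A" * FS_BLOCK)   # fs block 5 lives in sector 1
assert device[2560:3072] == b"A" * 512       # the target block was updated
assert device[2048:2560] == bytes(512)       # rest of the sector untouched
```

The kernel never grew this path for the buffer cache, which is why `set_blocksize` simply refuses sub-sector sizes instead.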