Bug 20072 - tapeinfo reports MaxBlock: 16777215 but writes with blocksize >2M fail
Summary: tapeinfo reports MaxBlock: 16777215 but writes with blocksize >2M fail
Status: RESOLVED DOCUMENTED
Alias: None
Product: IO/Storage
Classification: Unclassified
Component: SCSI
Hardware: All
OS: Linux
Importance: P1 normal
Assignee: linux-scsi@vger.kernel.org
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2010-10-11 10:19 UTC by lkolbe
Modified: 2012-08-14 11:06 UTC
CC List: 2 users

See Also:
Kernel Version: 2.6.32.21
Subsystem:
Regression: No
Bisected commit-id:


Attachments
lsscsi of system (6.63 KB, text/plain)
2010-10-11 10:19 UTC, lkolbe
lspci -vvvn of system (46.51 KB, text/plain)
2010-10-11 10:20 UTC, lkolbe

Description lkolbe 2010-10-11 10:19:32 UTC
Created attachment 33222
lsscsi of system

As the subject says, tapeinfo reports that the tape drives (IBM ULTRIUM-HH4) support a 16M block size, but writes with block sizes larger than 2M fail.

root@shepherd:~# tapeinfo -f /dev/nst0
Product Type: Tape Drive
Vendor ID: 'IBM     '
Product ID: 'ULTRIUM-HH4     '
Revision: '85V3'
Attached Changer API: No
SerialNumber: '1K10014452'
MinBlock: 1
MaxBlock: 16777215
SCSI ID: 0
SCSI LUN: 0
Ready: yes
BufferedMode: yes
Medium Type: 0x48
Density Code: 0x46
BlockSize: 0
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
CompType: 0x1
DeCompType: 0x1
BOP: yes
Block Position: 0
Partition 0 Remaining Kbytes: -1
Partition 0 Size in Kbytes: -1
ActivePartition: 0
EarlyWarningSize: 0

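For context, the MaxBlock value above presumably comes from the drive's SCSI READ BLOCK LIMITS response, whose maximum-block-length field is three bytes wide; 16777215 (0xFFFFFF) is simply the largest value that field can hold, not a guarantee that the host can actually buffer a block that large:

```shell
# 16777215 is the ceiling of a 3-byte field, i.e. 0xFFFFFF
printf '%d\n' 0xFFFFFF
```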
dd tests:
root@shepherd:~# dd if=/dev/zero of=/dev/nst0 bs=2M count=4
4+0 records in
4+0 records out
8388608 bytes (8.4 MB) copied, 7.52488 s, 1.1 MB/s
root@shepherd:~# dd if=/dev/zero of=/dev/nst0 bs=4M count=4
dd: writing `/dev/nst0': Device or resource busy
1+0 records in
0+0 records out
0 bytes (0 B) copied, 1.84202 s, 0.0 kB/s
root@shepherd:~# dd if=/dev/zero of=/dev/nst0 bs=8M count=4
dd: writing `/dev/nst0': Device or resource busy
1+0 records in
0+0 records out
0 bytes (0 B) copied, 1.76087 s, 0.0 kB/s
root@shepherd:~# dd if=/dev/zero of=/dev/nst0 bs=16M count=4
dd: writing `/dev/nst0': Invalid argument
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00986913 s, 0.0 kB/s
root@shepherd:~# dd if=/dev/zero of=/dev/nst0 bs=15M count=4
dd: writing `/dev/nst0': Value too large for defined data type
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00954276 s, 0.0 kB/s
root@shepherd:~# dd if=/dev/zero of=/dev/nst0 bs=16777215 count=4
dd: writing `/dev/nst0': Value too large for defined data type
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0102587 s, 0.0 kB/s
root@shepherd:~# dd if=/dev/zero of=/dev/nst0 bs=16777214 count=4
dd: writing `/dev/nst0': Value too large for defined data type
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0100916 s, 0.0 kB/s
root@shepherd:~# dd if=/dev/zero of=/dev/nst0 bs=2M count=4
4+0 records in
4+0 records out
8388608 bytes (8.4 MB) copied, 1.76435 s, 4.8 MB/s

lspci -vvvn and lsscsi -v are attached; do you need more info?
Comment 1 lkolbe 2010-10-11 10:20:06 UTC
Created attachment 33232
lspci -vvvn of system
Comment 2 lkolbe 2010-10-11 11:25:15 UTC
This is what dmesg has to say about the earlier attempts to write to the device with block sizes larger than 2 MiB:

[92315.076956] st0: Block limits 1 - 16777215 bytes.
[92445.438048] st0: Can't allocate 15728640 byte tape buffer.
[92457.973733] st0: Can't allocate 16777215 byte tape buffer.
[92460.990589] st0: Can't allocate 16777214 byte tape buffer.
Comment 3 lkolbe 2010-10-13 09:50:22 UTC
When instructing bacula to use the advertised tape block size of 16M, we get the following error every time it tries to access the tape:

13-Oct 11:48 sd1.techfak JobId 2692: Error: block.c:1002 Read error on fd=5 at file:blk 0:0 on device "drv2" (/dev/nst0). ERR=Cannot allocate memory.

Is there some limit/sysctl we might have to adapt to the native tape drive blocksize of 16M?

regards,
Lukas
Comment 4 Kai Mäkisara 2010-10-13 18:36:45 UTC
On Wed, 13 Oct 2010, bugzilla-daemon@bugzilla.kernel.org wrote:

> https://bugzilla.kernel.org/show_bug.cgi?id=20072
> 
> Is there some limit/sysctl we might have to adapt to the native tape drive
> blocksize of 16M?
> 
There is a limit that comes from physical constraints and memory 
fragmentation. Because of fragmentation, it is often not possible to find 
chunks of free memory larger than one page (e.g., 4 kB). The SCSI HBA 
often supports only a certain number of scatter/gather segments. Let's 
assume that this limit is 256. This means that the largest possible SCSI 
data buffer is 256 x 4 kB = 1 MB, and this limits the possible block size. 
Depending on the state of fragmentation, the actual limit may be larger, 
but this is the block size you can always count on.

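The arithmetic above as a quick sketch (the segment count and page size are the example values from the explanation, not read from a real HBA):

```shell
sg_segments=256   # example HBA scatter/gather limit (assumption)
page_size=4096    # typical page size on x86
# largest always-attainable SCSI data buffer, in bytes
echo $(( sg_segments * page_size ))
```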
The st driver (with defaults) first tries to map the user buffer onto the 
number of available scatter/gather segments. If this fails, the driver 
tries to allocate a local buffer using the largest chunks of memory it can 
get. If this succeeds, the request may still fail because of SCSI midlevel 
limits (I don't know what the limits currently are, but, if there are 
limits, they are tunable).

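The st tunables alluded to here are module parameters rather than sysctls (drivers/scsi/st.c exposes buffer_kbs, max_sg_segs, and try_direct_io, among others). A sketch of raising them via modprobe configuration; the values below are illustrative, not a recommendation:

```shell
# /etc/modprobe.d/st.conf -- example values only
# buffer_kbs:  default size of the driver's internal buffer, in kB
# max_sg_segs: maximum number of scatter/gather segments st will use
options st buffer_kbs=16384 max_sg_segs=256
```

The current values can be inspected under /sys/module/st/parameters/ after the module is loaded.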
I have been able to read and write 16MB-1 blocks on my system. What you 
can actually reach depends on the factors above.

Kai
