Hi,

In a nutshell, I have a server with an Ultrium/LTO-1 tape drive that should give me 100GB per tape (or 200GB with the advertised 2:1 compression ratio) - and all I can ever get onto a tape is about 50GB!

So now, on with the details I've managed to piece together so far.

The server is an HP DL380 - in addition to the built-in RAID controller it has an Adaptec 29160, and this is what the HP Ultrium-230 drive is connected to.
The server runs Xubuntu 6.06 (basically, Ubuntu LTS 6.06 with an Xfce GUI) - the kernel is a stock 2.6.15-26.

I am reasonably sure that the problem is not hardware related:
- I can write 15GB uncompressed on a DLT-IIIXT tape (exactly what I'm supposed to be able to write)
- I don't get any error messages - just a "tape full" after writing only 50GB
- I can read whatever I have written on the tape - so I'm pretty sure there is no data corruption
- as the SCSI controller is both dedicated to the tape drives and not a RAID one, I don't think shoe-shining is the issue (the LTO drive was originally connected to the RAID controller, but I have taken it off that as part of my investigations)

There doesn't seem to be anything wrong reported during the boot process - this is an extract of dmesg:
Code:
[42949395.060000] scsi0 : Adaptec AIC7XXX EISA/VLB/PCI SCSI HBA DRIVER, Rev 7.0
[42949395.060000]         <Adaptec 29160B Ultra160 SCSI adapter>
[42949395.060000]         aic7892: Ultra160 Wide Channel A, SCSI Id=7, 32/253 SCBs
[42949395.060000] 
[42949395.600000]   Vendor: HP        Model: Ultrium 1-SCSI    Rev: E32D
[42949395.600000]   Type:   Sequential-Access                  ANSI SCSI revision: 03
[42949395.600000]  target0:0:2: Beginning Domain Validation
[42949395.670000]  target0:0:2: wide asynchronous
[42949395.720000]  target0:0:2: FAST-10 WIDE SCSI 20.0 MB/s ST (100 ns, offset 15)
[42949395.780000]  target0:0:2: Domain Validation skipping write tests
[42949395.780000]  target0:0:2: Ending Domain Validation
[42949396.580000]   Vendor: QUANTUM   Model: DLT7000           Rev: 2255
[42949396.580000]   Type:   Sequential-Access                  ANSI SCSI revision: 02
[42949396.580000]  target0:0:6: Beginning Domain Validation
[42949396.580000]  target0:0:6: wide asynchronous
[42949396.590000]  target0:0:6: FAST-10 WIDE SCSI 20.0 MB/s ST (100 ns, offset 15)
[42949396.590000]  target0:0:6: Domain Validation skipping write tests
[42949396.590000]  target0:0:6: Ending Domain Validation
(Note that the LTO drive is negotiated down to Fast/Wide SCSI at 20MB/s because of the DLT drive on the same bus - all tests were also carried out with the LTO drive on its own, and hence at full speed, with the same results.)

When I originally started, the tape density as reported by "mt -f /dev/nst0 status" was set to code 0x24 (for DDS-2 drives, from memory), which I think is because the drive was identified as a generic SCSI-2 drive rather than anything specific. I have changed it to 0x40 (after a bit of googling I found that this should be the density code for LTO-1 tapes), but I still have the same problem.
I have noticed, though, that when running "mt -f /dev/nst0 densities" there is no 0x40 listed, nor any other code for LTO/Ultrium tapes - as far as I can tell that list is more for information purposes than a definitive list of what's supported, but you never know.
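For reference, these are roughly the commands I've been using to check and change the density (assuming the mt-st version of mt that Ubuntu ships - "setdensity" is an mt-st command and may not exist in other mt implementations):
Code:
mt -f /dev/nst0 status            # reports the density code (was 0x24, i.e. DDS-2)
mt -f /dev/nst0 densities         # lists the density codes mt knows about - no 0x40 here
mt -f /dev/nst0 setdensity 0x40   # supposedly the LTO-1 density code, used for the next write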

I have tried backing up a regular /home partition (with all sorts of files, big and small), a VMWare image which is over 60GB in size, and running "dd if=/dev/urandom" tests - in all instances the tape is full after about 50GB. So I'm pretty sure that what happens is not some "side effect" of the backup software struggling with large filesystems, small files, etc. I have carried out tests using bacula (backup software), straightforward tar, and even more straightforward dd, and the same thing happens time and time again. The tapes themselves can, I think, be ruled out as I have used 3 brand new tapes with no difference.
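In case it helps, this is roughly the dd test I ran (the 64k block size is just what I happened to pick, nothing scientific) - since /dev/urandom is essentially incompressible, the amount written when the drive hits end-of-tape should reflect the native capacity:
Code:
mt -f /dev/nst0 rewind
dd if=/dev/urandom of=/dev/nst0 bs=64k
# when dd stops at end-of-tape, multiply the number of blocks written by 64KiB
# to get the usable capacity - for me that comes out at roughly 50GB every time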

I have a suspicion that the problem is linked to the SCSI layer and/or the st driver not being configured quite as it should be, so that it either stops short of the real end of tape or does not use the right density/blocksize/god-knows-what-else to maximize what fits on the tape - but I just don't know where to look next.
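One thing I'm planning to try next is the block size the st driver uses - this is just a sketch of what I have in mind, assuming the mt-st commands behave as their man page describes ("setblk 0" puts the drive in variable block mode, and tar's -b sets the blocking factor in 512-byte records):
Code:
mt -f /dev/nst0 status            # "Tape block size 0 bytes" would mean variable block mode
mt -f /dev/nst0 setblk 0          # force variable block size and let the drive decide
mt -f /dev/nst0 rewind
tar -b 128 -cvf /dev/nst0 /home   # write with 64KiB records (128 x 512 bytes)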

If anyone has either encountered a similar problem or has an idea, any help will be gratefully received!

Olivier