- Join Date
- Sep 2005
Puzzling cross-distro slowness in cdrecord writes
I've got a Plextor 708A (IDE interface) as /dev/hdc on a K7N2 Delta-L motherboard with an AMD 2800+ CPU, 1GB RAM, and a Seagate 300GB disk on /dev/hda. These are the only two IDE devices, and there are no other unusual devices (no USB, etc---just a couple of Xinerama screens, a keyboard, a mouse, and a 100baseT ethernet connection on an idle home network).
This drive does NOT write above about 20x on a CDR under -either- Ubuntu Hoary (2.6 kernel) -or- Debian Sarge (2.4 kernel) without getting underruns (if I turn on the drive's BurnFree, it averages about 16x), but it -does- write up to 40x if I boot under Windows 98! And yes, I trust that both the DiscCopier app and Nero are reporting speeds correctly---for one thing, Windows can write the same CD in substantially less time than either of the Linux distros I've tried, using the same CDs and the same hardware. (I don't know -exactly- what versions of cdrecord, or indeed even exactly what Debian version I was using, since I'm now using Ubuntu, but I still have snapshots of all the old installations and could look up version numbers if necessary. I'm certainly using a completely stock Ubuntu Hoary right now.)
Yes, DMA is on. (Sarge turned it on by default, Ubuntu for some reason didn't & I had to use "hdparm -d1 /dev/hdc" to do so.)
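For anyone who wants to double-check the same thing, here's a sketch of the commands I mean (the /dev/hdc path is of course specific to my box; the guard just makes it safe to paste anywhere):

```shell
#!/bin/sh
# Check, and if necessary enable, DMA on the burner.
# DEV is an assumption taken from my setup; substitute your own device.
DEV=/dev/hdc
if command -v hdparm >/dev/null 2>&1 && [ -b "$DEV" ]; then
    hdparm -d "$DEV"      # want to see: using_dma = 1 (on)
    hdparm -d1 "$DEV"     # turn it on if it isn't (what Ubuntu needed)
else
    echo "hdparm or $DEV not available on this machine"
fi
```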
Under Debian (2.4 kernel), I was using ide-scsi to talk to ATAPI:0,0,0; under Ubuntu, I'm -not- using ide-scsi (2.6 kernel) and am using /dev/cdrw instead.
Even better, if I copy from /dev/zero instead of from some file in my homedir (which is on /dev/hda, of course), then I -can- write up to 40x! And BurnFree is never used at all in this case. For example:
time sudo cdrecord -dummy -v -dao -gracetime=5 -overburn driveropts=burnfree dev=/dev/cdrw tsize=707m /dev/zero
works fine and reports all speeds as 40x (and runs in 3m17s wallclock time). But if I drop the tsize & pick some 707meg file in ~/, BurnFree gets used 164 times (repeatably) and the drive writes at about 16x (and runs in 4m19s wallclock time; it's not as big a difference as might be expected because of course no drive can write 40x until it gets pretty close to the outside of the CD). If I omit BurnFree, it underruns and blows out a hundred meg in or so. In both distros, the filesystem is ext3fs.
Okay, so now it looks like there's something wrong with the IDE controller. But for it to have such poor performance would be a noteworthy thing---no motherboard manufacturer or distro could get away with it. And I've done disk-to-disk copies on this very hardware, such as dd'ing one 200GB disk to another, in around an hour or so---so I'm getting transfer rates across both IDE controllers of 2-3GB/min. This is -far, far faster- than any CDR wants data. This is true in both distros, of course. And, of course, since I'm going hda -> hdc, the data source & sink are on opposite IDE channels and are both masters, so it ain't bus contention. (My disk-to-disks were also typically master-to-master, but it hasn't made much of a difference in speed either way.)
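To put actual numbers on "far, far faster": 1x for CD data is roughly 150KB/s, so even 40x only needs about 6MB/s, while 2-3GB/min disk-to-disk works out to roughly 35-50MB/s. Quick shell arithmetic (the 150KB/s figure and the 2.5GB/min midpoint are my assumptions):

```shell
#!/bin/sh
# Back-of-envelope: data rate a 40x burn needs vs. observed disk-to-disk rate.
cd_1x_kb=150                           # ~150 KB/s at 1x for CD data
burn_kb=$((40 * cd_1x_kb))             # 40x: 6000 KB/s, i.e. ~5.9 MB/s
disk_mb_min=2500                       # midpoint of the observed 2-3 GB/min
disk_kb=$((disk_mb_min * 1024 / 60))   # ~42666 KB/s, i.e. ~42 MB/s
echo "40x needs ${burn_kb} KB/s; the disk delivers ~${disk_kb} KB/s"
```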
Could cdrecord be reading data extremely inefficiently from the filesystem? Perhaps. (Surely ext3fs isn't the problem; again, people would scream.) I -did- try screwing around with fs= to change the FIFO size; going from the default 4m to 8m made the problem slightly worse. Going to 2m wasn't any better than 4m. (And, of course, it doesn't matter -which- big file I try in my filesystem.)
I'm now out of ideas. I haven't tried non-cdrecord-based CD-writing programs; suggestions on what to try would be appreciated, but it'd be even more appreciated if someone knows if this is a known cdrecord problem (also hard to believe, but...). I can get very specific version numbers, commands, command outputs, etc if anyone thinks they'd help (I did all of my experiments in an Emacs shell buffer, so it's all there), but given that the behavior hasn't improved across different distros (and presumably different cdrecord versions---these distros were installed something like a year apart!), it's hard to believe that it's some specific bad version.
If there's a more-appropriate place to ask this question (some cdrecord-specific forum, or any other place), please let me know; I'm basically guessing on asking here because this problem has manifested on two different Debian-ish distros and thus doesn't even seem specific to any one distro (unless it's anything derived from Debian...).
More info: the problem is CUEFILE=
More experimentation has revealed that using CUEFILE= is what provokes the underruns---if I write the -same- large file that the cuefile points at by simply specifying its name directly, I can write at 40x with no problems at all.
I'm at a loss as to what cdrecord could possibly be doing here such that it's reading files differently (and very slowly) only when coming from cuefiles.
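One hypothesis worth testing---purely a guess on my part, not anything cdrecord is documented to do---is that the cuefile code path reads the data file in much smaller blocks, and the per-read overhead starves the drive. How much block size matters for sequential reads is easy to see with dd on any big file:

```shell
#!/bin/sh
# Compare sequential read time with small vs. large block sizes.
# The scratch file is a stand-in; a real cold file shows the effect better.
F=$(mktemp)
dd if=/dev/zero of="$F" bs=1M count=32 2>/dev/null
sync
echo "small (2k) blocks:"
time dd if="$F" of=/dev/null bs=2k
echo "large (1M) blocks:"
time dd if="$F" of=/dev/null bs=1M
rm -f "$F"
```

If the 2k case is dramatically slower on the affected box, that would at least be consistent with the cuefile path doing tiny reads.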