hi all,

i have a problem extending a file system on an lvm-managed raid: it does not expand to the available size, instead it seems to try to SHRINK the file system.

The Setup:
we have a server running SUSE Linux Enterprise Server 10 with a raid that contained 6x 2TB disks in a raid 5 configuration under lvm. we added 6 more 2TB disks and now want to expand the ext3 filesystem residing on the first 6 disks so that it spans the whole array (12x 2TB disks).

the raid controller is an HP Smart Array P800. the old logical disk is /dev/cciss/c0d3 .

These were the steps involved:
- configured the raid controller to incorporate the 6 new disks into the current array.
- created a new logical disk on the added space, which showed up with the device node /dev/cciss/c0d2
- put a gpt label on the disk and created a primary partition with type lvm spanning the whole device:
Code:
(parted) print
Disk geometry for /dev/cciss/c0d2: 0kB - 12TB
Disk label type: gpt
Number  Start   End     Size    File system  Name                  Flags
1       17kB    12TB    12TB                                       lvm
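for the record, the parted steps behind this were roughly the following (a sketch; the exact start/end arguments i used may have differed):

```shell
# sketch, assuming /dev/cciss/c0d2 is the new logical disk
parted /dev/cciss/c0d2 mklabel gpt            # gpt label, needed for >2TB devices
parted /dev/cciss/c0d2 mkpart primary 0 12TB  # one partition spanning the device
parted /dev/cciss/c0d2 set 1 lvm on           # mark partition 1 for lvm use
```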
- created a new lvm physical volume on this partition (note: the output below is from after it was added to the volume group, hence the allocated extents)
Code:
 --- Physical volume ---
  PV Name               /dev/cciss/c0d2p1
  VG Name               raid2tb
  PV Size               10,92 TB / not usable 2,49 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              2861545
  Free PE               0
  Allocated PE          2861545
  PV UUID               XXXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXXXX
- added the physical volume to the lvm volume group
Code:
server:~ # pvs
  PV                VG      Fmt  Attr PSize  PFree
  /dev/cciss/c0d2p1 raid2tb lvm2 a-   10,92T    0
  /dev/cciss/c0d3p1 raid2tb lvm2 a-    7,28T    0
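i.e. something like (sketch):

```shell
# add the new physical volume to the existing volume group
vgextend raid2tb /dev/cciss/c0d2p1
vgdisplay raid2tb   # Free PE should now show the added ~10.92TB
```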
- extended the lvm logical volume to span the whole volume group
Code:
  --- Logical volume ---
  LV Name                /dev/raid2tb/raid2lv
  VG Name                raid2tb
  LV UUID                XXXXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXXXX
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                18,19 TB
  Current LE             4769241
  Segments               2
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:0
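the extend itself was along these lines (a sketch; i'm not sure the %FREE syntax is available in the lvm2 version shipped with SLES 10, an explicit extent count like -l +2861545 does the same):

```shell
# grow the logical volume over all free extents in the volume group
lvextend -l +100%FREE /dev/raid2tb/raid2lv
lvdisplay /dev/raid2tb/raid2lv   # LV Size should now read 18.19 TB
```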
the old raid is mounted on /raid2, so df -h gave me this:
Code:
/dev/mapper/raid2tb-raid2lv
                      7,2T  6,3T  935G  88% /raid2
now i tried
ext2online /raid2
but the output was
Code:
ext2online v1.1.18 - 2001/03/18 for EXT2FS 0.5b
ext2online: warning - device size 588735488, filesystem 1953480704
ext2online: /dev/mapper/raid2tb-raid2lv has 1953480704 blocks cannot shrink to 588735488
and it didn't change anything on the filesystem.

out of curiosity i ran
ext2online -d -v /raid2
the output was basically this:
Code:
ext2online: warning - device size 588735488, filesystem 1953480704
group 2 inode table has offset 2, not 1027
group 4 inode table has offset 2, not 1027
[...snipp...]
group 59614 inode table has offset 2, not 1027
group 59615 inode table has offset 2, not 1027
ext2online: /dev/mapper/raid2tb-raid2lv has 1953480704 blocks cannot shrink to 588735488
ext2online v1.1.18 - 2001/03/18 for EXT2FS 0.5b
ext2_open
ext2_bcache_init
new filesystem size 588735488
ext2_determine_itoffset
setting itoffset to +1027
ext2_get_reserved
Found 558 blocks in s_reserved_gdt_blocks
using 558 reserved group descriptor blocks
that's it; it terminates with exit code 2.
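one thing i noticed while staring at the numbers: the "device size" ext2online reports looks like the LV's 4k block count truncated to 32 bits. the LV has 4769241 extents of 4MiB, i.e. 4769241 * 1024 blocks of 4KiB = 4883702784 blocks, and that value modulo 2^32 is exactly the 588735488 it complains about. i don't know if that is the actual bug or a coincidence, but the arithmetic checks out (bash assumed):

```shell
# 4769241 LEs of 4MiB = 4769241*1024 blocks of 4KiB
blocks=$(( 4769241 * 1024 ))
echo "total 4k blocks:       $blocks"                    # 4883702784
echo "truncated to 32 bits:  $(( blocks % (1 << 32) ))"  # 588735488, the size ext2online sees
```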

can anyone identify the problem and tell me how to fix it? help is very much appreciated.

i should add that unmounting and doing the resize offline is not an option at the moment. but any hint on how long an offline resize of a filesystem from 7TB to 18TB would take is welcome anyway.

thanks in advance
jonas