  1. #1 - Just Joined! (Join Date: Jun 2004, Posts: 23)

    EVMS and Software RAID questions.


    I'm setting up a NAS server for the house and need a little advice. I have 5 SATA disks in a Hot-swap cage. 3 are 750GB and 2 are 500GB. My plan was to break them all into 250GB segments and then create 3 RAID 5 LUNs...and then stripe the LUNs together.

    I actually did this and it seems to work. The only issues are that 1) the drives are constantly grinding and the lights are always flashing...at first I thought this was the initialization, but it has lasted for hours; and 2) I'm not sure if this is optimal for performance, since I have nothing to really compare it to.

    Finally, I tried using JFS as the filesystem, but I got too many errors, so I ended up with ReiserFS. So the next part of the question is: which filesystem should I put on this volume? XFS, JFS, ReiserFS or EXT3?

    The main use of the system will be as a backup target for my clients and for storing music and video files.

    Also, I would like to set up a small 40GB iSCSI target on this system...but I can't seem to get that working. I can't load the files for ISCSITARGET or apt-get them.
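
    (In case anyone else hits the same wall, here's roughly what I'm attempting, without luck so far. This assumes the iSCSI Enterprise Target - the package name, config path, and the target/backing-file names here are guesses and placeholders, not something I have working:)

        sudo apt-get install iscsitarget

        # /etc/ietd.conf - minimal target definition (IQN and backing store are placeholders)
        Target iqn.2008-01.home.nas:backup40g
            Lun 0 Path=/srv/iscsi/target0.img,Type=fileio   # a 40GB file or LV

        sudo /etc/init.d/iscsitarget restart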

  2. #2 - Linux Guru (Join Date: Nov 2007, Posts: 1,752)
    Did I read this correctly? You took each 750GB drive and made (3) 250GB "chunks", and then made (2) 250GB segments from each of the 500's? For a total of (13) 250GB "chunks"? Then created 3 RAID5 slices - and then striped those 3 RAID5 volumes?

    If so, I can't say it any other way - Wow. That is creative, but definitely a bad idea. You have increased the complexity of each volume that the filesystem sits on while at the same time reducing the fault tolerance and performance of your disks.

    Since this is for data backup, I don't think max performance is your main concern. With that in mind, I would simply find the formatted size of your 500GB drives (say 465GB) and use md to create a single RAID5 set from a 465GB partition on each of the 5 drives. This will give you the best fault tolerance vs. max space tradeoff.

    The remaining ~200GB x 3 on the 750GB drives can go into another RAID5 set that is mounted as another volume. Again, this is the best performance vs. fault tolerance tradeoff.

    OR, both RAID sets can be combined into a single volume using LVM. (This would probably be the route I would take.) Using this setup, you would be able to lose 1 of your disks without losing data and still have all of the space in one volume at the filesystem level. (Note: This is still not optimal for performance, since 3 disks are handling data for 2 RAID5 sets at the same time. Whether or not this matters to you depends on the load. If the machine has a single 100Mbit network link, the network can only push ~10MB/sec and will likely be the limiting factor anyway.)
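
    Something along these lines, as an untested sketch (the device names and partition layout are assumptions - a ~465GB first partition on every drive, plus a ~200GB second partition on each of the three 750GB drives; adjust to your actual disks):

        # RAID5 across the ~465GB partitions on all 5 drives
        mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

        # RAID5 across the leftover ~200GB partitions on the 750GB drives
        mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2

        # glue both md devices into a single LVM volume
        pvcreate /dev/md0 /dev/md1
        vgcreate nasvg /dev/md0 /dev/md1
        lvcreate -l 100%FREE -n data nasvg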

    For the filesystem, my own preference is either reiserfs or xfs. If my data is smaller files (anything 20-50MB or less), then reiserfs will perform better. If the data is larger files, 100MB+, xfs is the way to go. File system comparison
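
    Either way, putting the filesystem on the volume is a one-liner (using the hypothetical LVM volume from the sketch above):

        mkfs.reiserfs /dev/nasvg/data    # better suited to lots of smaller files
        mkfs.xfs /dev/nasvg/data         # better suited to large media files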

  3. #3 - Just Joined! (Join Date: Jun 2004, Posts: 23)

    Thanks for the help

    I appreciate the help here. I'm used to more advanced RAID setups, so the configuration I built isn't all that out of bounds...at least compared to what I'm used to. Though I'm generally dealing with hardware RAID systems that support meta-striping...software RAID is a new gig.

    So, first, how is my current configuration less fault tolerant? I have 3 LUNs that are striped. If any single drive breaks, I would still have consistency within each group. All groups are RAID5, so I would be able to rebuild when the drive is replaced. The only issues I saw were 1) complexity and 2) performance. Complexity has to do with figuring out which of the drives has broken when one fails...not sure about this yet. Performance is what I don't know. I'm guessing that all the parity calculations against those volumes would give me a lot of overhead...and that's what I'm most concerned with.

    My plan is to split up the volumes into the ones you describe and test each: the 3-drive, the 5-drive, and the 5+0. Thanks for the FS suggestion...I'm going to try XFS, JFS and Reiser now...I've pretty much decided that ext3 is out...

    I'll keep digging and post my results.

  4. #4 - Linux Guru (Join Date: Nov 2007, Posts: 1,752)
    You never explained how the RAID5 sets were built. You said 3 RAID5 sets were created from 13 250GB chunks. That doesn't split evenly, so there has to be at least one uneven RAID set.

    But that doesn't matter. Even with a larger RAID setup (hardware or software) this would be a bad config because you have the same spindle (physical disk) servicing multiple RAID groups - and these RAID groups are then striped as well (this is going to hurt performance, not help it.)

    Unless you *specifically* know how the 250GB chunks are allocated to the RAID sets, it is possible that you have 2 chunks from the same RAID set on one physical disk - meaning that a loss of that disk would mean that RAID set is lost. And if you lose ONE RAID set, then the stripe is completely lost as well.

    As for performance, you are splitting data to a 3-way stripe that is then directing each 1/3 to a RAID set. Problem is, the RAID sets all reside on the SAME disks. You've already seen the proof - any little update causes all of your disks to go nuts. First the stripe, and then the RAID sets have to be updated - this means the disks are always busy. The only way this would be good is if each RAID set were on independent spindles (9 HDDs would be needed.) BUT, if this were a bigger array with 9 HDDs, I'd use RAID 1+0 on 8 disks and leave one as a hot spare. That would be the best speed + fault tolerance (FT). If I needed max space + FT, I'd RAID5 8 HDDs and keep one as a hot spare. Reads from a RAID5 with that many disks will be good, but I'd take a hit on writes, as any small update on 1 disk may require all 8 HDDs be updated to maintain the XOR consistency.
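
    For reference, those two hypothetical 9-disk layouts would look roughly like this with md (device names made up; md's raid10 personality stands in for a literal nested 1+0):

        # RAID 1+0 style across 8 disks with 1 hot spare - best speed + FT
        mdadm --create /dev/md0 --level=10 --raid-devices=8 --spare-devices=1 /dev/sd[b-j]1

        # RAID5 across 8 disks with 1 hot spare - max space + FT
        mdadm --create /dev/md0 --level=5 --raid-devices=8 --spare-devices=1 /dev/sd[b-j]1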

    HTH.

  5. #5 - Just Joined! (Join Date: Jun 2004, Posts: 23)

    Hmmm, now I'm with you

    I sort of get the point about multiple updates, but I'm not sure I understand the point about losing a set in the event of a drive failure. Here's how I have the pieces allocated.

    3 (750GB) drives: sda, sdb, sdc
    2 (500GB) drives: sdd, sde

    The 750s are each split into 3 parts (sda1, sda2, sda3, and so on), all of equal size.
    The 500s are each split into 2 parts (sdd1, sdd2, and so on), the same size as the 750 parts.

    These parts are assembled into RAID5 sets:
    /dev/md0 ==> /dev/sda1, /dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1
    /dev/md1 ==> /dev/sda2, /dev/sdb2, /dev/sdc2, /dev/sdd2, /dev/sde2
    /dev/md2 ==> /dev/sda3, /dev/sdb3, /dev/sdc3

    So, I know that each of the sets is protected with RAID 5.

    I then make a RAID 0 stripe of the 3 sets.

    /dev/md3 ==> /dev/md0, /dev/md1, /dev/md2
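
    (In mdadm terms, the build was roughly the following - partitioning steps omitted:)

        mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
        mdadm --create /dev/md1 --level=5 --raid-devices=5 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2
        mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3
        mdadm --create /dev/md3 --level=0 --raid-devices=3 /dev/md0 /dev/md1 /dev/md2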

    My thought was that a write operation would first be allocated by the RAID0 layer and that part would be written to, say, md1; that operation would then be placed on the correct disk partition and parity generated.

    I see now what you are saying about performance, since I'm running two operations for each write. I guess I figured that the overhead of the RAID0 operation wouldn't be all that great. My other choice would be to concatenate the volumes, since the end goal is to make one big volume out of them. The only thing about concatenation is that I wouldn't necessarily be using all of the disks all of the time...but it really depends on the overhead.

    Also, I figured out why the disks were all spinning all the time: mdadm had to initialize the RAID5 sets, which can take hours. It essentially treats one of the drives in the set as a spare and rebuilds onto it.
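
    (Watching the initialization is easy enough:)

        cat /proc/mdstat           # shows per-array resync progress, speed and ETA
        mdadm --detail /dev/md0    # shows array state, e.g. "clean, degraded, recovering"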

    When I get back from work today, I'll lay down a filesystem and test it out. If you are right and the performance dogs, then I'll go back to the drawing board. I've been running base-level tests with the separate volumes, so I can compare the performance of the aggregate to the pieces.

    I'll let you know.

    Thanks for the help.

  6. #6 - Linux Guru (Join Date: Nov 2007, Posts: 1,752)
    With the layout you've got, you can lose one disk and keep running. (I wasn't sure that you knew which chunks made up which RAID sets.)

    But, your performance trying to stripe will be bad. Call each RAID5 set RAID_1, RAID_2, and RAID_3.

    For *EVERY* write, 1/3 of the data is sent to RAID_1, RAID_2, and RAID_3 (ideally.) But since all of these RAID sets sit on the same spindles, you have almost every disk doing 3 writes. And to complete ONE write, all subdisks of the RAID may need to be updated *every time*. So, assuming the WORST case scenario, every write at the filesystem level translates to 13 writes at the disk level - and this is likely, since every write is spread to every RAID set.

    Using a single RAID5 set with the 5 disks would mean one write for each disk. If you stacked the 465GB RAID + 200GB RAID into one LVM volume, you would get 2 writes on the 750GB spindles and 1 write on the 500GB spindles *IF* both "areas" of the volume are written to (unlikely, but a worst case scenario.)

    Hope that clarifies.

  7. #7 - Just Joined! (Join Date: Jun 2004, Posts: 23)

    Thanks for the help and advice...and patience

    I've run a bunch of non-scientific tests on various configurations. Basically I'm just moving a directory from one system to the NAS server and timing the copy. This is all over a Gbit LAN with no jumbo frame support, from an Opteron 170 to an X2 1.9GHz system, Windows XP to Ubuntu. Let's see if this formats correctly...

    RAID tests: 1,944,156,884 bytes, 1453 files, avg size 1.27MB

    name       FS        Type   Drives   sec   MB/s
    500GB      ext3      0      1        144   13
    750GB      ext3      0      1        100   19
    RAID0      ReiserFS  0      3        74    25
    RAID4      ReiserFS  4      5        102   18
    RAID5      ReiserFS  5      5        104   18
    RAID0      JFS       0      3        83    22
    RAID4      JFS       4      5        100   19
    RAID5      JFS       5      5        99    19
    RAID 50    JFS       5+0    5        151   12
    RAID 50    xfs       5+0    5        164   11
    RAID 50    ReiserFS  5+0    5        124   15
    RAID5big   ReiserFS  5      5        103   18

    So I figure my striped RAID is about 40% slower than the standard RAID5. Based on this, I'm taking your advice (HROadmin26) and just making one RAID5 from the 5 drives' ~500GB chunks, and making a second volume out of the other 3-drive RAID5. I'll probably use that second volume for scratch space or maybe an iSCSI target...we'll see.

    Hope this helps any of the googlers out there looking for a comparison of options. Though it's not scientific, it did help me decide between xfs, jfs and Reiser...and of course RAID5.

    thanks

  8. #8 - Linux Guru (Join Date: Nov 2007, Posts: 1,752)
    Thanks for the update and numbers.

    From my own experience, testing, work, etc. I know the conclusions are right....but the NUMBERS!?

    This is a 5 drive enclosure? PATA, SATA, SAS, or SCSI? Are all drives the same interface?

    Was the machine sending data busy with other requests? Was the CPU of the Ubuntu machine pegged from a filesystem or md module?

    The transfer numbers just seem low to me. For example, I have 4 drives (part of a 16-drive array) set up in a RAID5 LUN. In testing, it will push ~75-80MB/sec...and these are 250GB SATA drives I'd consider "slow."

  9. #9 - Just Joined! (Join Date: Jun 2004, Posts: 23)

    Yeah, I know...

    I've never been much good at getting performance out of my systems. I think it's 'cause I'm so cheap. The system I have is an X2 1.9GHz processor with 2GB of RAM. I'm using the built-in Gbit controller for the LAN and I'm booting from an old 80GB ATA disk. I set up the RAID as described above, and I'm using Samba to move the files from my XP system (Opteron, Gbit, 4GB RAM) over the network. I kind of assume that I'm not going to get great performance, since I'm moving stuff via Samba...and since my LAN isn't jumbo frame capable.
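
    (The share itself is nothing fancy - roughly the following, with the share name and path being placeholders:)

        # /etc/samba/smb.conf
        [nas]
            path = /mnt/nas
            read only = no
            guest ok = no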

    For the RAID controllers, I'm using an SI3124 for 4 of the drives and the other one is on the motherboard SATA port. All should be SATA II, not that that matters a whole lot.

    I tried copying data from the ATA disk to the RAID and got about the same results as going over the LAN.
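
    (For anyone wanting to reproduce that local run: it was basically a timed copy, and hdparm/dd are a rough way to take the source disk out of the equation - the paths here are placeholders:)

        time cp -a /mnt/old80gb/testdir /mnt/nas/                              # same directory used in the network tests
        hdparm -t /dev/md0                                                     # raw sequential read off the array
        dd if=/dev/zero of=/mnt/nas/ddtest bs=1M count=2048 conv=fdatasync     # ~2GB sequential write, flushed to disk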

    I know that I could do better with a hardware card, but like I said, I'm cheap. My card is supposed to support RAID, but it's that FAKE RAID, so I decided to go with pure software RAID to make things easier.

    Any input you have on improving or tuning this would be greatly appreciated.

    thanks
