  1. #1

    Scalable Storage Clustering with redundancy? What to use?

    I'm looking at implementing a large, scalable storage solution to replace a lot of file servers. It needs to meet the following requirements:

    1. Redundancy
    2. Scalability/Expandability
    3. Performance
    4. Single Directory tree for all storage
    5. Preferably runs on Linux, my favourite.
    6. Preferably on commodity hardware that is easy to replace/add, although not necessary

    Ideally I'd like something with a global filesystem like AFS or Microsoft's DFS, where everything is organized into one directory tree, but also with the redundancy and data security of a cluster filesystem, so that if one or more nodes break, it still works and we just put in new nodes. I also need to be able to extend it by adding new nodes, so that the storage can grow indefinitely.

    At the moment my storage requirements are 15-30TB, but this will increase, so I need to be able to add more space by adding more nodes.

    What I think I'm really after is a cluster filesystem, something like the Google File System, except that isn't available because Google are evil and eat up lots of talented open source people and keep cool stuff like that to themselves.

    Ideally the solution should also be fast and reliable, so that I can crunch data on it from other servers.
    Of course it doesn't __have__ to be open source or unixy or anything, but it would be nice...

    Any ideas?

  2. #2
    Linux Guru bigtomrodney's Avatar
    Join Date
    Nov 2004
    The first thing that springs to mind is LVM. Logical Volume Management is something you would often see in SAN solutions, and it's supported out of the box; SUSE is particularly easy to configure this way. It means you can later add disks to the volume and the space will be good to go. I'm assuming you're already using RAID, my guess is at least RAID 5, so there's your redundancy (unless you're thinking of just running a cluster?).
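    To make that concrete, here is a minimal sketch of growing an LVM volume onto a new disk. The disk, volume group, and logical volume names are all hypothetical, and these commands need root and real block devices, so treat it as an outline rather than a recipe:

    ```shell
    # Hypothetical names: /dev/sdb is the new disk,
    # vg_storage/lv_data the existing volume group and logical volume.
    pvcreate /dev/sdb                               # initialize the disk as an LVM physical volume
    vgextend vg_storage /dev/sdb                    # add it to the existing volume group
    lvextend -l +100%FREE /dev/vg_storage/lv_data   # grow the logical volume into the new space
    resize2fs /dev/vg_storage/lv_data               # grow the ext filesystem to match
    ```

    The last step depends on the filesystem; `resize2fs` assumes ext2/ext3, and other filesystems have their own resize tools.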

    The single directory tree is the easiest part. Unix had mounting a long time before Microsoft had DFS. You can mount any storage at any point; this is the standard way of working in Linux. So all you do is create a directory and mount your new (or existing) filesystem there. In fact, you can mount the same directory in multiple locations if need be. Check out
    mount --bind
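    For instance, to make an already-mounted filesystem appear at a second location (paths here are hypothetical, and the commands require root):

    ```shell
    mount /dev/sdb1 /srv/storage                 # normal mount at the primary location
    mkdir -p /export/storage
    mount --bind /srv/storage /export/storage    # same directory tree, second mount point
    ```

    Changes made under either path are visible at both, since they are the same filesystem.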
    For commodity hardware... it would be possible, but ultimately a lot easier, to buy 1U or 2U servers that can hold several disks per unit. This isn't just a recommendation for Linux; I would say the same if you were using Windows or anything else.

  3. #3
    Thanks. I'm quite seasoned in Unix and I know all about mounting.

    I have not used LVM but I do use RAID.

    I was actually thinking of running a cluster. SANs are also on the agenda.

    It's a debate between the two really. I think a scalable cluster is probably better than a SAN, but I am still exploring both solutions.

    I'd love to hear what other people are using out there.

  4. #4
    Just Joined!
    Join Date
    Oct 2002
    southern ontario, canada
    Look into OCFS (Oracle Cluster File System); it should be the most likely one to fill your needs quickly. OCFS2 is designed to handle files more flexibly than OCFS1 and is currently in the Linux kernel (OCFS2, I mean). I don't know how it does for redundancy.

    Otherwise, look into Coda, AFS (the Andrew File System), and the Lustre cluster filesystem.

    Apparently HDFS (the Hadoop filesystem, a Google FS clone) and GlusterFS are also cluster filesystems that might be what you're looking for; I'm just finding out about them myself from Wikipedia's list of file systems.
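    If GlusterFS turns out to fit, a replicated volume across two nodes would look roughly like this. The server names, brick paths, and volume name are all hypothetical, and the exact CLI syntax varies by GlusterFS version, so check the documentation for your release:

    ```shell
    # Run on server1; server1/server2 and brick paths are hypothetical.
    gluster peer probe server2                    # join the second node to the trusted pool
    gluster volume create vol0 replica 2 \
        server1:/bricks/b0 server2:/bricks/b0     # mirror data across both nodes
    gluster volume start vol0

    # On a client: one directory tree backed by the whole cluster.
    mount -t glusterfs server1:/vol0 /mnt/storage
    ```

    The `replica 2` option covers the redundancy requirement (either node can fail), and you grow the volume later by adding more bricks.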

    Hope it helps.
