  1. #1
    Just Joined!
    Join Date: Mar 2009
    Posts: 3

    Share Windows partitions with Linux server


    I have a network of 100 PCs, and each computer has a 1TB hard disk. On each machine I have made two partitions: a 750GB NTFS partition for the computer's user, and a second 250GB partition with no file system. I want to share the second partition of each computer with a Linux server to make one big storage pool, i.e. the total size of this single pool would be 250GB * 100 computers = 25,000 GB. Is this possible, and how can I do it?

    I have already configured OpenLDAP on the Linux server for authentication and other resources.

  2. #2
    Trusted Penguin Irithori's Avatar
    Join Date: May 2009
    Location: Munich
    Posts: 3,439
    Well, one way would be to configure each Windows machine to export its 250GB partition as an iSCSI target.
    AFAIK, only the Windows Server variants can do this with native software.

    The Linux server would then need to log in to all of these iSCSI devices and build a cluster file system on top of them.
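
    A very rough sketch of what the initiator side on the Linux server could look like, assuming the Windows boxes already export their targets and the open-iscsi tools plus Python are installed; the addresses and the IQN parsing here are invented for illustration:

        import subprocess

        # Hypothetical addressing: the 100 Windows PCs are 10.0.0.1 .. 10.0.0.100
        portals = ["10.0.0.%d" % i for i in range(1, 101)]

        for portal in portals:
            # Ask each PC which targets it exports (open-iscsi's iscsiadm)
            out = subprocess.run(
                ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
                capture_output=True, text=True, check=True).stdout

            # Each discovery line ends with the target IQN, e.g.
            # "10.0.0.1:3260,1 iqn.2009-03.example:pc01"
            for line in out.splitlines():
                iqn = line.split()[-1]
                # Log in; the exported partition then shows up as a local /dev/sdX
                subprocess.run(["iscsiadm", "-m", "node", "-T", iqn,
                                "-p", portal, "--login"], check=True)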

    Be aware, though, that this scenario is not easy to pull off.
    The 100 PCs are not reliable, as any user can turn them on or off.
    Also, the network overhead is probably not negligible with 100 nodes.


    In short: My suggestion would be to not go this route and instead use a traditional fileserver.
    FreeNAS on capable and redundant hardware would be my solution.
    You must always face the curtain with a bow.

  3. #3
    Linux Newbie
    Join Date: Apr 2012
    Posts: 112
    It would be possible to create a software RAID array from all the targets to provide some resilience against people switching PCs off, but you'd still have the problem of network overhead.

    There is non-native iSCSI target software for Windows 7; not sure what Windows version is being run on those PCs, though.
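
    If you did go that way, the array could be scripted on the server once the iSCSI logins are done. A rough, untested sketch, assuming the LUNs show up under /dev/disk/by-path with the usual iSCSI naming:

        import glob, subprocess

        # Hypothetical: every logged-in iSCSI LUN appears as a by-path symlink
        disks = sorted(glob.glob("/dev/disk/by-path/*-iscsi-*-lun-0"))

        # RAID-6 tolerates any two members vanishing (two PCs switched off);
        # lose a third and the whole array goes offline until they return.
        subprocess.run(["mdadm", "--create", "/dev/md0", "--level=6",
                        "--raid-devices=%d" % len(disks)] + disks,
                       check=True)

    Even then, every read and write crosses the LAN, so the overhead problem doesn't go away.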

  4. #4
    Linux Guru Rubberman's Avatar
    Join Date: Apr 2009
    Location: I can be found either 40 miles west of Chicago, in Chicago, or in a galaxy far, far away.
    Posts: 11,746
    Great idea, but you will have availability issues when people shut down or reboot their PCs, something over which you have little control. So, although this is an appealing way to provide a global storage pool, in practice it may not work out. Remember that all local PC (the 'P' means Personal!) storage is local to the user's system and may or may not be available at any particular time.

    These days, disc storage is pretty cheap. Consider that 100 x 250GB = 25TB of storage. In today's terms, that is about thirteen 2TB discs at $150 (or less) USD per drive, which comes to roughly $2,000 USD. Add in the cost of some low-cost (but reliable) arrays (about an additional $1,000 USD for 4x4 eSATA arrays), and you have a total of approximately $3,000 for this amount of storage attached to your always-online servers that everyone can access. You can make them RAID devices for very little extra money by adding the redundant discs, and larger array enclosures will probably be available at a very small increment in cost.
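
    Spelling that arithmetic out (the enclosure figure is just the rough $1,000 from above):

        import math

        pool_tb    = 100 * 250 / 1000        # 250GB from each of 100 PCs = 25 TB
        drives     = math.ceil(pool_tb / 2)  # 2TB drives needed -> 13
        drive_cost = drives * 150            # at ~$150 USD apiece -> $1,950
        enclosures = 1000                    # rough figure for the eSATA arrays
        print(drives, drive_cost + enclosures)   # 13 drives, ~$2,950 USD all in
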
    Sometimes, real fast is almost as good as real time.
    Just remember, Semper Gumbi - always be flexible!
