  1. #1
    Linux Newbie SL6-A1000
    Join Date
    May 2011
    Location
    Australia
    Posts
    120

    SAN-NAS Hybrid Server Hard Drive Help!


    Hey,
    So, as the title explains, I am in the process of setting up a SAN (Storage Area Network) - NAS hybrid server. I.e. the server will be using both NAS protocols and iSCSI for increased performance and reliability.

    I need help with choosing the right hard drives or solid-state drives (as the case may be) for the job.
    The choices I was looking at were the following:
    - 15000RPM SAS HD
    - 7200RPM SAS HD
    - 7200RPM SATA SSHD
    - SATA SSD

    The storage server will be online for at least 5 days a week during the business's operating hours, and could have up to 60 people on it at any one time: clients (read-only access) and staff (read-write access).
    This means the HDDs or SSDs need to handle multiple people accessing them at once without falling over and without bottlenecking.
    The problem is that, from what I know and have read:
    1. Regular desktop SATA HDDs are not designed for that sort of intense workload.
    2. SATA has more limitations on simultaneous access from multiple users, which can cause HDD failure or bottlenecks compared with a SAS alternative.
    3. SSDs are a lot faster than mechanical HDDs or hybrids.

    So the big question from all this is: which way do I go, and why?

    My biggest concern is that if I go solid state I will get the performance increase, but may still hit a bottleneck once the SATA port reaches its limit for the number of users accessing it simultaneously, negating any benefit gained from choosing it over, say, a 15K SAS drive configuration.

  2. #2
    Just Joined!
    Join Date
    Jan 2009
    Posts
    12
    Each manufacturer provides models that may satisfy your needs. For example, WD makes a "red label" drive for NAS or higher-performance requirements in a SATA configuration. In my experience, SAS vs. SATA HDD failure rates--depending on model--are about the same nowadays. SSDs will have higher performance, but does that justify the cost when your network speed might be the bottleneck? A RAID of SATA or SAS HDDs will also saturate the NIC, assuming you have enough disks in the array.

    Now, you might have ~60 users (with simultaneous access? probably not really simultaneous). The network will slow things down long before serving disk requests does. The OS will be threading, buffering, and serving the requests. If you did saturate the SATA 32-command queue limit (additional requests are buffered by the OS), the network speed would be ~1/32 for each request: network 1 Gb/32 = ~32 Mb vs. disk 2.4 Gb x 6/32 = 450 Mb (RAID6 with 8 SATA2 disks; 2 don't contribute). OK, that's theory, but even with the disks serving half that speed, the NIC is still saturated. This scenario is not likely unless everyone is transferring GB-sized files simultaneously. Your best single-user transfer speed in this configuration is ~0.5 Gb; the disk and server cache could make it faster. SATA3 will be ~2x faster, and a 16-disk RAID might push it another ~2x. How fast can the end user store the data on the local system? Do SSDs still look attractive?
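    If you want to sanity-check that arithmetic, here is the same back-of-the-envelope calculation as a quick shell sketch (same theoretical figures as above, not measurements):
    Code:
    # 1 Gb NIC split across the 32-deep SATA command queue
    echo $(( 1024 / 32 ))        # ~32 Mb per request
    # 8-disk SATA2 RAID6: 6 data disks at ~2.4 Gb usable link speed each
    echo $(( 2400 * 6 / 32 ))    # ~450 Mb per request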

    NAS and iSCSI? I've built my own SAN devices from OTS parts, with HBAs using "Fibre" as well as NICs using iSCSI, with a SAS controller and 16 SATA HDs. I set up each SAN device as a backend, with a frontend "file server" sharing the SAN (one with "Fibre", the other with iSCSI). If you're going to blow away that frontend system with direct NAS connections, why do iSCSI at all? NAS will always be slower since it goes through a heavier protocol (NFS, CIFS, FTP, others) than iSCSI or Fibre. I would only connect one system with iSCSI due to multi-IO issues (YMMV). In fact, users connected to a "file server" that is attached via iSCSI to the "SAN" will use CIFS or NFS anyway. In any case, disk IO should not be a problem, even with SATA2 HDDs.
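    For what it's worth, the backend export on a current Linux box can be done with the in-kernel LIO target via targetcli. A rough, untested sketch (the IQNs and /dev/md0 are just placeholders):
    Code:
    targetcli /backstores/block create name=san0 dev=/dev/md0
    targetcli /iscsi create iqn.2013-09.local.mysan:san0
    targetcli /iscsi/iqn.2013-09.local.mysan:san0/tpg1/luns create /backstores/block/san0
    targetcli /iscsi/iqn.2013-09.local.mysan:san0/tpg1/acls create iqn.2013-09.local.mysan:fileserver
    targetcli saveconfig
    The ACL line restricts the LUN to the frontend file server's initiator IQN, which is the "only connect one system over iSCSI" rule in practice.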

    What about maintenance? With RAID6, replacing bad SATA disks is way cheaper than SAS, and way, way cheaper than SSD. For the cost of SSDs (or SAS, for that matter), you can stock up on a few extra SATAs for the "just in case" failures (and/or get more disk space for the cost). If you have a SAS controller, you can always upgrade later if you feel the need.
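    And replacing a failed member in a Linux md RAID6 is a quick swap; roughly something like this (device names are placeholders):
    Code:
    mdadm --manage /dev/md0 --fail /dev/sdc1      # mark the dying disk failed
    mdadm --manage /dev/md0 --remove /dev/sdc1    # pull it from the array
    # physically replace the disk, partition it the same way, then:
    mdadm --manage /dev/md0 --add /dev/sdc1       # rebuild starts automatically
    cat /proc/mdstat                              # watch the resync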

  3. #3
    Linux Newbie SL6-A1000
    Join Date
    May 2011
    Location
    Australia
    Posts
    120
    Yeah, OK. Network speed is something I have thought about but haven't really looked at.
    SSDs weren't attractive to begin with; it was just an option I was considering in terms of cost vs. performance. But now that you've pointed it out, an SSD looks less attractive and less reliable than two or more HDDs in RAID.
    I was seriously looking at the SAS HDDs, and they are what I have been leaning towards. But, as I mentioned, I wasn't sure how much of a difference there would be between SATA and SAS, and whether there is a real-world performance issue once you get a light-to-moderate user load. Your points about failure rates and the cost of buying several SATA HDDs like WD Red drives vs. SAS and SSD drives are very valid, though, so I will definitely re-think and consider that.

    With what you said about SAN and using iSCSI, my biggest problem has been the multi-IO, which was why I was going with a SAN-NAS hybrid. The design theory was using iSCSI as the underlying layer while running a NAS virtual machine on top, so the disk that others connect to over iSCSI would hold the NAS virtual machine. The theory is that the performance problems NAS brings to the table will be minimal, because the NAS server is running over the iSCSI protocol rather than NFS/CIFS. The only time there would be a noticeable difference is for those on a different network from those connected via iSCSI, where NFS/CIFS is going to be used anyway (like with a file server) and performance over the network is likely to be questionable regardless. The benefits, however, are much greater: not only does it get around the multi-IO dilemma, but you can snapshot the entire NAS server, move the virtual NAS server from one physical computer to another, and the list goes on. The server is no longer as restricted by physical requirements or hardware failure, so downtime and maintenance are minimal, and it can expand to meet your needs.

    But what you're suggesting is similar, except you have the SAN server and then a computer connected directly through iSCSI acting as a file server for other users to connect to over NFS/CIFS (as I understand it).
    It sounds like a good idea, and a little less complex. Please excuse my lack of knowledge about creating and running a file server; I understand NAS and SAN better. With a file server I need to be able to set different files, or even partitions, to different read-write permissions, so that what clients can read and access is not the same as what staff can read and access. I would also much rather be able to just set up a partition, or segregate a portion of the space, for the different access levels and know that any file put in there will only ever have those permissions, rather than having to set each individual file or folder, which would become tedious. Is that possible with a file server?
    Last edited by SL6-A1000; 09-29-2013 at 11:46 AM.

  4. #4
    Linux Guru
    Join Date
    Nov 2007
    Posts
    1,754
    I do not follow the logic in this question as iSCSI and "NAS" protocols (I would guess you mean file-level access like CIFS/NFS servers) do not really mix in the scenario you have described.

    Users would typically access "shares" that provide file-level access. With iSCSI being nothing more than SCSI commands/data wrapped in TCP packets, it provides *block-level* access. This means the iSCSI *client* has to format the volume, create a filesystem, and then manage that filesystem. Multiple clients would not access the same iSCSI volume (block-level) without some sort of disk arbitration - IE, 'clustering software.'
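    To make the block-level point concrete, a single iSCSI initiator would typically do something like this with open-iscsi (target name, portal address, and device letter are made up):
    Code:
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m node -T iqn.2013-09.local.mysan:san0 -p 192.168.1.10 --login
    # the LUN appears as an ordinary block device, e.g. /dev/sdb
    mkfs.ext4 /dev/sdb     # formatted ONCE, by this one initiator only
    mount /dev/sdb /srv/share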

    If your clients are strictly workstation users accessing file data, there would be little purpose for anything related to iSCSI. For users accessing data, you can provide CIFS, NFS, FTP, SCP, and HTTP easily from one server.
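    For instance, your clients-read-only / staff-read-write split can be handled entirely at the share level. A hypothetical /etc/exports for the NFS case (paths and subnet invented):
    Code:
    /srv/share/public   192.168.1.0/24(ro,sync,root_squash)
    /srv/share/staff    192.168.1.0/24(rw,sync,root_squash)
    Run "exportfs -ra" after editing to re-export.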

    Frankly, 60 users randomly accessing data on and off during the day is not a significant load. SATA HDDs would handle this fine. (For comparison, I regularly work with an appliance model that uses ~24 SATA HDDs to handle hundreds of continuous data streams at the same time - with a good RAID card.)

    What you should consider is 'what level of SATA controller is needed and how many disks do I need.' Since you KNOW disks WILL fail (only a matter of time), then you would likely want to use a RAID solution. The Linux software md driver may work fine for you (and actually should assuming fileserving is the machine's only job and has some CPU available.) Beyond that, you can look into a *real* HW RAID controller. Your RAID level will determine the max disk throughput - choosing a RAID level is always a trade-off between performance and redundancy.
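    A minimal md sketch, assuming eight whole disks in RAID6 (device names and the config file path vary by distro):
    Code:
    mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
    mkfs.ext4 /dev/md0
    mdadm --detail --scan >> /etc/mdadm.conf   # /etc/mdadm/mdadm.conf on Debian-based systems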

    Typically what necessitates the move from SATA > SAS is related to IOPS (and some other performance tricks that SAS still holds over SATA.) It's not the spindle speed - if you have $$ to "go overboard," 10K SAS drives will be overkill. Otherwise, SATA is fine.

    ** And RAID is never a substitute for correct backups.

  5. #5
    Linux Newbie SL6-A1000
    Join Date
    May 2011
    Location
    Australia
    Posts
    120
    Quote Originally Posted by HROAdmin26 View Post
    I do not follow the logic in this question as iSCSI and "NAS" protocols (I would guess you mean file-level access like CIFS/NFS servers) do not really mix in the scenario you have described.

    Yes, you're right; but I am talking about once the initiator is connected: if it uses the iSCSI drive as a file server, then other clients would be accessing files through CIFS/NFS.

    What I was talking about was setting up a VM (FreeNAS) on the iSCSI drive, so the .vhd file would be the VM and the iSCSI drive is what others connect to. Although I am not sure that it will work in practice.

  6. #6
    Linux Newbie SL6-A1000
    Join Date
    May 2011
    Location
    Australia
    Posts
    120
    HROAdmin: So you think iSCSI is overkill for what I need?

    Sorry for the double post.

  7. #7
    Just Joined!
    Join Date
    Jan 2009
    Posts
    12
    So I'm assuming you mean something like this:
    [SAN]--iSCSI connection--[VM]--net connect--[clients]

    The iSCSI part will make the VM see the "SAN" as a RAW disk.

    This is essentially what I was getting at. The "net connect" part is what I assume you meant by the "NAS" part. Strictly speaking, you don't need to use something like FreeNAS for the VM; a regular distribution would do, with Samba and/or NFS shares for the clients to connect to. You should be able to set up rights on the various directories and files. You could even set up multiple shares to specific directories--each dedicated to certain users/groups/IP addresses/other combinations, with read-only/read-write as desired.
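    For example, two Samba shares with different rights could look roughly like this in smb.conf (share names, paths, and groups are made up; you still need matching Unix permissions on the directories):
    Code:
    [public]
        path = /srv/share/public
        read only = yes
        valid users = @clients, @staff

    [staff]
        path = /srv/share/staff
        read only = no
        valid users = @staff
        create mask = 0660
        directory mask = 0770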

    However, FreeNAS or some other turnkey solution could do the trick and give you an "ease of management" interface. The downside is that I've found these turnkey solutions can be based on odd distros and be a bit tough to upgrade or add features to. FreeNAS is based on FreeBSD, so you're probably all right. It looks like it's fairly well documented.
