  1. #1
    Just Joined!
    Join Date
    Jan 2011
    Posts
    94

    Need help regarding Multiseat Linux (multi-user) hardware choice


    For this project I need to set up a Linux multi-user or multiseat computer. I have watched some videos and read some articles to try to get a better understanding of multi-user environments, because I am not really familiar with running a multi-user setup.

    There will be a maximum of 6 simultaneous users, but on most days 3 or 4. Users will mostly be running web applications from remote servers in a browser (CRM, order management system, etc.), so those will not drain much CPU locally.

    But there will also be tasks that run locally: office work such as spreadsheets, photo editing, HD video, IP telephony with video, watching video files and YouTube, and, for one or two users, some programming. There will also be a virtual PC running now and then, and it looks like that will be VMware.

    At first I was thinking Citrix and thin clients, because that is a well-tested route and should be possible to set up. Then some voices were raised about keeping this open source if at all possible. Well, Linux is a true multi-user system, so that should be doable.

    I got a tip about running 3 NVIDIA GPUs and configuring 2 screens from each card, for a total of 6 seats. All I could find by googling was older articles from 2006, and I am not sure those are still a good source. According to my friend, using 3 video cards rather than thin clients would ensure high screen resolutions and minimal CPU usage when playing video, watching YouTube, etc., because the graphics cards would handle video acceleration and relieve the CPU, so there would be less lag on the system.
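
    From what I have read so far, on newer distributions the seat assignment itself can apparently be done through systemd-logind rather than the hand-written Xorg configs those 2006 articles describe. A minimal, untested sketch - the sysfs device paths below are placeholders and would have to be looked up on the real machine (for example with "loginctl seat-status seat0"):

        # show everything currently assigned to the default seat
        loginctl seat-status seat0

        # attach the second graphics card plus a USB hub (keyboard/mouse/audio)
        # to a new seat called seat1 - the paths are examples only
        loginctl attach seat1 /sys/devices/pci0000:00/0000:00:02.0/drm/card1
        loginctl attach seat1 /sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1

        # list the seats; a display manager with multiseat support (e.g. GDM)
        # then puts a separate login screen on each seat
        loginctl list-seats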

    1: What is the best choice - thin clients or 3 x NVIDIA GPUs?

    2: What price range or models would you recommend for question 1?

    3: What kind of CPU power should I go for here? Single CPU or dual CPU?

    4: How much memory for this box?

    5: Will there be benefits to opting for ECC RAM? If so, what are the benefits over regular RAM?

    6: This box will be using SSDs - will there still be advantages to using some sort of RAM disk or cache in part of the RAM?

    7: Will there be any risks in going for a large cache or RAM drive?

    OK, so to sum up: this project does not need to fly like a brand-new single-user workstation when all 6 users are logged in at once, because that will not be the case most of the time - usually it will be 3-4 users. Yet it must be possible to work without freezes, lag and hang-ups even with all 6 users on at the same time, and the system should run smoothly without hiccups when only 3-4 users are logged in.
    Last edited by piergen; 02-09-2012 at 02:57 AM.

  2. #2
    Linux Guru Rubberman's Avatar
    Join Date
    Apr 2009
    Location
    I can be found either 40 miles west of Chicago, or in a galaxy far, far away.
    Posts
    11,158
    1. If you need each user to have minimal GUI capabilities (no full-motion/full-screen video display and such), then thin clients are good, connecting to a virtual machine on the server.
    2. No idea. It depends upon your budget - that will constrain your choices significantly.
    3. More is better. Get a dual processor x 4 core system (8 cores) w/ at least 2GB per core. RAM is cheap these days.
    4. See #3
    5. ECC RAM, ABSOLUTELY. I will NEVER purchase a server/workstation class system without fully-buffered ECC RAM. It has saved my bacon on many occasions. When one memory stick goes wonky, the system can map it out of use and continue operating, albeit with less total RAM, giving you a chance to replace it while your users keep working.
    6. DO NOT put your swap space on SSD! Also, DO NOT use SSD for write-mostly data, which would appear to include caching data; use it for read-mostly data. The flash memory used in SSDs has a very limited number of write cycles before a cell becomes unusable, i.e. over time the drive will become more and more constrained in operation. Myself, I would not yet invest in SSDs unless the speed differential between the SSD and a hard drive (mostly in access time - NOT in I/O speed, which is usually constrained by the controller) is really important to your operation. Remember, Linux will utilize all available, unused RAM for disc caching, so if a file is frequently accessed, most of the time it will be served from RAM anyway (see the quick check after this list).
    7. Possibly. Why do you want this?
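
    A quick way to watch that disc cache in action - a minimal sketch, with /some/large/file just a placeholder for any big file on the box:

        # the "cached" column is RAM currently used as disc cache; it is
        # handed back to applications automatically when they need it
        free -m

        # read a large file twice: the second pass comes from RAM, not disc
        time cat /some/large/file > /dev/null
        time cat /some/large/file > /dev/null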

    OK. 6 users - 8 cores. Each user effectively gets their own core, and 2GB of RAM (increasing this to 4GB per user is probably advisable). That leaves 2 cores for system purposes, such as handling disc and network tasks, etc.

    FWIW, in my work scenario, we manage the web browsing of 100,000 concurrent users with 3 data centers. Each server (8 cores, lots of RAM) handles 800+ concurrent users without noticeable lags. Each user has their own Mozilla browser running in the server, which also handles web applications, games, video display, etc on their cell phones. I think that a well-configured 8-core system w/ 16-32GB of RAM should handle normal user applications for 3-6 users...
    Sometimes, real fast is almost as good as real time.
    Just remember, Semper Gumbi - always be flexible!

  3. #3
    Just Joined!
    Join Date
    Jan 2011
    Posts
    94
    To Rubberman:
    First, as a note on your concerns about SSDs: I believe garbage collection and TRIM have improved amazingly fast, and the same goes for the extra spare area.
    Even on a home system running torrents you could nowadays safely keep an SSD for about 4-5 years before the cells wear out, and you will really be switching or upgrading long before the SSD is rendered useless, because performance is getting better every day. Also, any space left unused when formatting is added to the spare area, which will actually prolong the lifespan of the SSD.
    The I/O improvements are significant - read and write speeds have hit levels one could only dream about a couple of years ago - and if performance is too weak for anyone's liking there is always RAID. Although hardware RAID will break TRIM, that is not the case for software RAID, and I have only good memories of software RAID on Linux, even though I have not yet tested it with SSDs. So what I am saying is that I believe so strongly in the upside of SSDs that I am willing to test this out for the multiseat box. If things work well we can always be proactive and swap the SSDs for new ones in a year.

    5: You are right, fully buffered is the way to go, and with today's prices it won't hurt too badly either.

    6: What I was thinking of was the old RAM-drive feature that uses part of the system RAM to create a "super-fast, lightning-speed" drive. From Wikipedia:

    A RAM disk or RAM drive is a block of RAM (primary storage or volatile memory) that a computer's software is treating as if the memory were a disk drive (secondary storage). It is sometimes referred to as a "virtual RAM drive" or "software RAM drive" to distinguish it from a "hardware RAM drive" that uses separate hardware containing RAM, which is a type of solid-state drive.

    The performance of a RAM disk is in general orders of magnitude faster than other forms of storage media, such as an SSD, hard drive, tape drive, or optical drive.[1] This performance gain is due to multiple factors, including access time, maximum throughput and type of file system, as well as others.

    File access time is greatly decreased since a RAM disk is solid state (no mechanical parts). A physical hard drive or optical media, such as CD-ROM, DVD, and Blu-ray must move a head or optical eye into position and tape drives must wind or rewind to a particular position on the media before reading or writing can occur. RAM disks can access data with only the memory address of a given file, with no movement, alignment or positioning necessary.
    7: Well, the reason is simply to squeeze as much performance out of the system as possible, so that users experience a responsive system without lag even when someone is running resource-heavy tasks. Also, the more that is loaded and kept in a RAM drive or tmpfs, the faster users can access it, and in theory this should reduce wear on the SSD even further. Roughly what I have in mind is the sketch below.
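
    A minimal sketch of what such a tmpfs "RAM drive" would look like - the mount point and size are just examples, not recommendations:

        # create a tmpfs RAM drive of up to 8 GB; memory is only consumed
        # as files are actually written, and the contents vanish on reboot
        mkdir -p /mnt/ramdisk
        mount -t tmpfs -o size=8G,mode=1777 tmpfs /mnt/ramdisk

        # or permanently, with a line like this in /etc/fstab:
        #   tmpfs   /mnt/ramdisk   tmpfs   size=8G,mode=1777   0 0

    Anything worth keeping would of course have to be synced back to the SSD (for example with rsync) before shutdown.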

    I will try to find more material to read up on, to get a better understanding of the pros and cons of thin clients, and also of what can be gained by dropping thin clients in favour of 3 faster GPUs.

    To sum things up, I guess the short answer is: get as much power and oomph into the system as possible by using the best hardware available, and put in as much CPU and RAM as we can without breaking the budget.

  4. #4
    Just Joined!
    Join Date
    Jan 2011
    Posts
    94
    Really? No one here with experience of tmpfs?

  5. #5
    Just Joined!
    Join Date
    Jan 2011
    Posts
    94
    Will I see any performance gain from using tmpfs on a multi-seat workstation like this?
    3-6 simultaneous users, mostly office use, a handful of web services run in a browser from remote servers, a little photo editing and programming, and viewing video.

    1. Will there be any performance gain from dedicating a relatively large chunk of the system's RAM to tmpfs? If the system holds, say, 96 GB of RAM and we set aside about 50% for tmpfs, will that give us any boost?
    2. Will the use of tmpfs provide any advantage regarding the number of write cycles hitting the system's SSDs (i.e. reduce the number of writes)? A sketch of what I mean follows below.
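
    To make the question concrete - a minimal sketch, with 48G simply being half of the assumed 96 GB and the paths only examples:

        # /etc/fstab: cap a general-purpose tmpfs at ~50% of RAM and point
        # the most write-heavy scratch area (/tmp) at RAM instead of the SSD
        tmpfs   /mnt/ramdisk   tmpfs   size=48G,mode=1777   0 0
        tmpfs   /tmp           tmpfs   size=8G,mode=1777    0 0

        # note: tmpfs only consumes RAM as files are actually written,
        # so an idle 48G mount does not steal memory from the users

    Whether redirecting /tmp (and perhaps browser caches) this way actually saves a meaningful number of SSD writes is exactly what I am asking about.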

  6. #6
    Just Joined!
    Join Date
    Jan 2011
    Posts
    94
    No one who has tried and tested tmpfs and seen how it affects SSD lifespan?

  7. #7
    Linux Guru Rubberman's Avatar
    Join Date
    Apr 2009
    Location
    I can be found either 40 miles west of Chicago, or in a galaxy far, far away.
    Posts
    11,158
    I've experimented with tmpfs. It seems to work just fine. As for the effect on SSD lifespan: if you are using it for write-mostly data, then it will probably help considerably; if it is for read-mostly data, then not so much. The major limiting factor on SSDs, and flash memory in general (especially MLC devices), is the limit on the number of write cycles per cell. Obviously (to me anyway), the fuller the SSD is, the more you will be over-writing cells with new data and wearing it out faster, even if the drive controller has TRIM enabled and is trying to wear-level the drive. If you want to measure the effect rather than guess, see the sketch below.
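
    A rough way to get hard numbers - a minimal sketch, assuming the SSD shows up as /dev/sda and keeping in mind that SMART attribute names vary by vendor:

        # sectors written to sda since boot (field 10 of /proc/diskstats,
        # 1 sector = 512 bytes); compare before and after moving data to tmpfs
        awk '$3 == "sda" { printf "%.1f GB written since boot\n", $10*512/1024^3 }' /proc/diskstats

        # many SSDs also report lifetime writes / wear through SMART, e.g.:
        smartctl -A /dev/sda    # look for Total_LBAs_Written or a similar
                                # wear/lifetime attribute (vendor-specific)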
    Sometimes, real fast is almost as good as real time.
    Just remember, Semper Gumbi - always be flexible!
