server vs tower
I have 6 "servers" that run various tasks. NAT, WEB etc. It has been a learning process since the beginning 7 years ago as I knew nothing about networking at all.
They are all tower systems and I have had no real problems, but I am at the point where I want to give more thought to my hardware.
There are only about 170 systems on the network, so there is not a lot of load to deal with, but I want the servers to run at their peak and to be ready for expansion.
I just used the onboard video, since there is little need for video on the servers at all, and I use the onboard NIC too.
I am thinking I will need to replace a couple of systems soon; I can see capacitors doming, so I know they will become a problem before long.
What I am wondering is whether using the onboard stuff amounts to a performance hit. Onboard video already carves a chunk out of system RAM, but I don't know whether that actually matters in practice.
Would it be better to put a video card and a NIC in a slot as opposed to using the onboard ones?
I hate to spend all that money on "server" computers. Rack mount would be nice, but I can't justify the expense unless there is more to it than I am aware of.
Any thoughts from anyone??
Nah, save your money. In fact, most rack-mount servers come with onboard video anyway, often very underpowered chips.
Shoot, it's the one thing you can really skimp on in a server. I wouldn't change that unless you've got some kind of render farm or something...
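As for the NIC side, measure it before you spend anything. Here's a rough sketch of a throughput test in Python 3 (the port number and transfer size are just placeholders I picked); run it with "serve" on one box and "send <host>" on another:

#!/usr/bin/env python3
# Quick-and-dirty TCP throughput check between two boxes.
# Usage: "python3 nictest.py serve" on one machine,
#        "python3 nictest.py send <host>" on the other.
# Port and transfer size below are arbitrary placeholders.
import socket
import sys
import time

PORT = 5001                 # any free unprivileged port
CHUNK = 64 * 1024           # 64 KiB per send/recv call
TOTAL = 256 * 1024 * 1024   # push 256 MiB through the link

def serve():
    # Accept one connection and drain whatever the sender pushes.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):
                pass

def send(host):
    # Time how long TOTAL bytes take to move to the receiver.
    # This is a rough number (kernel buffering skews it slightly),
    # but plenty good enough to spot a bottleneck.
    buf = b"\x00" * CHUNK
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, PORT))
        start = time.monotonic()
        sent = 0
        while sent < TOTAL:
            cli.sendall(buf)
            sent += len(buf)
    elapsed = time.monotonic() - start
    print(f"{sent} bytes in {elapsed:.2f}s = "
          f"{sent * 8 / elapsed / 1e6:.0f} Mbit/s")

if __name__ == "__main__":
    if len(sys.argv) == 2 and sys.argv[1] == "serve":
        serve()
    elif len(sys.argv) == 3 and sys.argv[1] == "send":
        send(sys.argv[2])
    else:
        sys.exit("usage: nictest.py serve | nictest.py send <host>")

If the number comes out near wire speed (roughly 940 Mbit/s on gigabit once protocol overhead is accounted for), the onboard NIC isn't your bottleneck and a slot card won't buy you anything.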