[lug] server spec

Bear Giles bgiles at coyotesong.com
Wed Jun 13 22:33:08 MDT 2007


Nate Duehr wrote:
> Also, your reply assumes the original poster was putting his 1RU 
> machines in a data center environment.  Maybe he wasn't planning on 
> that, but I may have missed it.  When you have the luxury of paying 
> someone for rack space in a nicely controlled environment, great.
>
> Or maybe I should say -- they shouldn't NEED to put a 1RU PeeCee in a 
> datacenter with "properly designed airflow".  If they need to, that 
> quality level of machine should NOT be called a "server".
>
> REAL well-engineered servers shouldn't keel over dead at the first 
> sign of an 85 degree room, or a little "hot spot" at their air intakes.

Well-engineered systems take economics into account.  How much would it 
cost, in terms of materials, physical space and power consumption, to 
make these servers robust enough to handle higher temperatures?  Now 
multiply that by the thousands (or even tens of thousands) of servers in a data 
center.  Suddenly the cost of redundant cooling systems starts to look 
pretty cheap.
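
To make that arithmetic concrete, here's a quick back-of-envelope sketch in
Python.  Every number in it (server count, per-box hardening cost, cooling
cost) is a made-up assumption for illustration, not a real quote:

    # All figures are hypothetical, purely to illustrate the scaling argument.
    servers = 10_000                   # machines in the data center
    hardening_cost_per_server = 150    # extra dollars to make one box tolerate hotter air
    redundant_cooling_cost = 500_000   # one-time dollars for redundant (N+1) cooling

    hardened_total = servers * hardening_cost_per_server
    print(f"Hardening every server:   ${hardened_total:,}")          # $1,500,000
    print(f"Redundant cooling plant:  ${redundant_cooling_cost:,}")  # $500,000

A per-box cost that looks trivial on one machine dominates once you multiply
it across the whole fleet, which is the point.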

BTW I've read that some next-generation servers take DC power directly instead 
of doing the AC-to-DC conversion themselves.  The idea is to take that heat source 
out of the boxes and put it someplace else.  I don't know how well it 
works in practice, but it's an interesting thought.  It makes you wonder 
how much else can be removed from the box.
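
For a rough feel of how much heat that moves out of each box, here's another
small sketch.  The efficiency and load figures are assumptions, not
measurements from any real chassis:

    # Hypothetical: a 300 W load behind an 85%-efficient AC-to-DC power supply.
    load_watts = 300.0
    psu_efficiency = 0.85

    input_watts = load_watts / psu_efficiency
    heat_in_box = input_watts - load_watts   # conversion loss dissipated inside the chassis
    print(f"~{heat_in_box:.0f} W of heat per box from the AC conversion alone")

    servers = 10_000
    print(f"~{heat_in_box * servers / 1000:.0f} kW of conversion heat across the whole room")

The conversion loss still happens somewhere with DC distribution; it just
happens in a centralized rectifier instead of inside thousands of individual
chassis.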
