[lug] server spec
Nate Duehr
nate at natetech.com
Wed Jun 13 18:23:51 MDT 2007
Sean Reifschneider wrote:
> I'd guess it's a problem with your environment. With modern systems, you
> have to be extremely careful about the aerodynamics of the server room.
> You can't just stick one thermostat in there and if it reads 70 degrees
> assume everything is fine. You also have to be very careful about knowing
> where exhaust is going and where intake is coming from.
So really we're agreeing. By "modern system standards," if you're not
in a data center designed by a thermodynamicist, you "have a problem
with your environment"?
:-)
Older servers would sit in the corner of a room, ANY room, and survive.
Today's 1RU machines need to be babied by comparison. That's all
I was saying.
I used to work in the data center world, and I know one thing... the
cooling WILL fail. I've never been in a data center where it didn't,
eventually.
(I also work in telco, and I remember when COs were kept religiously at
68F or below. Nowadays you're lucky if they're below 80F... cost
cutting and corner cutting...)
I recommend that even if you're installing in a data center environment,
you buy something that will survive the failure of your fancy data
center provider's fancy cooling system -- because it WILL go down,
sooner or later.
The old AT&T machines, Sun boxes, older HP-UX servers, and other more
"robust" servers always survived the overheating/loss-of-cooling events
I saw, and rarely needed maintenance afterward. In many cases,
PeeCee-based 1RU boxes just shut themselves off or fried components
during those events... even when the damage didn't show up until a
couple of months afterward.
I remember clearly which customers came and went doing maintenance on
their hardware at the multiple data centers I worked at. The people
that owned racks full of PeeCee hardware were in and out all the time,
swapping things.
The customers that bought commercial Unix servers? You never saw them.
Maybe once or twice a year during a disk failure, and even then they
usually had service contracts for that stuff... the Sun/HP/IBM guy would
show up with a disk, call 'em on the phone, tell them he'd put the disk
in, and be gone.
Also, your reply assumes the original poster was putting his 1RU
machines in a data center environment. Maybe he wasn't planning on
that, but I may have missed it. When you have the luxury of paying
someone for rack space in a nicely controlled environment, great.
Or maybe I should say -- they shouldn't NEED to put a 1RU PeeCee in a
datacenter with "properly designed airflow" just to keep it alive. If it
needs that, a machine of that quality should NOT be called a "server".
REAL, well-engineered servers shouldn't keel over dead at the first sign
of an 85 degree room, or a little "hot spot" at their air intakes.
Most commercially built PeeCee 1RU servers list 120F or higher as the
top of their "normal operating range" in their engineering
documentation, yet they still keel over dead or shut down to save
themselves if room temps actually get that high.
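For what it's worth, on a Linux box you don't have to trust the BIOS to
save the machine -- you can watch the temps yourself and halt cleanly
before things cook. A minimal watchdog sketch in Python, assuming a
kernel that exposes ACPI thermal zones under /sys/class/thermal (the
85C threshold and 30-second poll are illustrative, not from any
particular box):

    #!/usr/bin/env python
    # Hypothetical thermal watchdog: poll the kernel's thermal zones and
    # do a clean halt before the hardware fries itself. The sysfs path,
    # threshold, and interval are illustrative assumptions.
    import glob
    import os
    import time

    SHUTDOWN_MILLIDEG_C = 85000  # 85 C; tune for your hardware
    POLL_SECONDS = 30

    def max_zone_temp():
        # Hottest reading across all zones, in millidegrees C.
        temps = [int(open(p).read().strip())
                 for p in glob.glob('/sys/class/thermal/thermal_zone*/temp')]
        return max(temps) if temps else 0

    while True:
        if max_zone_temp() >= SHUTDOWN_MILLIDEG_C:
            # A clean halt beats fried components or a hard power-off.
            os.system('/sbin/shutdown -h now')
            break
        time.sleep(POLL_SECONDS)

Run it as root (shutdown needs the privilege), and it quietly takes the
box down instead of letting it slow-roast.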
Why, as consumers, do we let the manufacturers get away with that? Or
are we all just too cheap? (GRIN) I know I probably am.
I really respect Sean's opinion on the list here, because he's one of
the few people I know that really "eats his own dogfood" when it comes
to building servers... he lives off what his servers do for him.
But I still contend that 1RU "servers" are generally a cheap trade-off,
a legacy of the dot-bomb era, when people were charging (and still are)
far too much for rack space in rooms that, when the AC or power fails,
are nothing but little death camps for your servers.
Big bandwidth in a closet in an office building is sometimes a lot safer
and cheaper than the big data center environments that lure folks in
with flashy marketing and hype -- IF you know how to multi-home and can
find a little office building that needs lessees and has fiber running
by or through it.
Sometimes data centers really are a bad place to put things... as they
say, "the bigger they are, the harder they fall".
I have at least one 1RU Dell machine (dual-P3) living on a mountaintop
in an only semi-climate-controlled room. It has to be there for various
reasons, but I trust it to survive, or shut itself down, if the
squirrel-cage fails. I'm not sure I'd trust ALL of the SuperMicro line
up there... some, yeah, not all. If I had one, a Sun Enterprise 480
would be perfect up there. It'd never go down.
Nate