[lug] server spec
Nate Duehr
nate at natetech.com
Thu Jun 14 14:43:14 MDT 2007
Sean Reifschneider wrote:
> On Wed, Jun 13, 2007 at 10:33:08PM -0600, Bear Giles wrote:
>> BTW I've read that some next-generation servers take DC power instead of
>
> Yeah, next generation systems from the '50s. :-) Most of these "new" DC
> systems use -48 volts DC, common in the telco industry for a very long
> time. You know, just like those 19" racks we love so much... Off the top
> of my head, I think they're saying it saves around 20% in power
> consumption, which means that it also saves 20% in cooling costs.
Yeah, most of my professional career has been in DC-powered central
offices (COs). It has some definite advantages. The big disadvantage is
that you should have special training to run -48 VDC power... even at
low amperages, DC across the heart carries the very real potential of
killing you. Safety lockouts and "doing it right" become pretty
important when you get to high-current DC distribution.
> Also, DC power plants require less equipment to run. They still have power
> supplies in the computer, to convert to the + and - 5 and 12 volts that are
> needed for various components.
Not sure about "less" equipment in the plant. You still have an AC feed
(or more than one) from the grid, auto-switches to transfer those in and
out, AC-to-DC rectifiers, a (usually very big) battery plant, and
usually you re-generate AC for the equipment that can't be DC powered,
so you have one large inverter (or several) to go back from the battery
plant to AC distribution into the site. Additionally, you then have one
or more AC generators that feed the auto-switches as an alternative to
the grid. Depending on size they can be diesel internal-combustion
engines or, in really big facilities, diesel and/or natural-gas turbines
(jets).
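The roughly 20% savings figure mentioned above comes from cutting out
conversion stages: an AC data center typically runs utility power
through a double-conversion UPS and then an AC server power supply,
while a DC plant rectifies once and feeds -48 VDC straight to a DC-DC
supply in the server. A minimal sketch of that arithmetic, using
assumed, illustrative efficiency numbers (none of these figures come
from the post, and real values vary widely by vendor and load):

```python
# AC path: utility AC -> double-conversion UPS -> AC server power supply
ups_eff = 0.88        # assumed double-conversion UPS efficiency
ac_psu_eff = 0.78     # assumed AC-input server PSU efficiency

# DC path: utility AC -> rectifier/battery plant -> -48 VDC server supply
rectifier_eff = 0.92  # assumed rectifier (AC-to-DC) efficiency
dc_psu_eff = 0.92     # assumed DC-DC server PSU efficiency

ac_chain = ups_eff * ac_psu_eff        # overall AC-path efficiency
dc_chain = rectifier_eff * dc_psu_eff  # overall DC-path efficiency

# Input power needed per watt of IT load is 1/efficiency, so the
# fractional savings at the utility meter is:
savings = 1 - ac_chain / dc_chain
print(f"AC chain: {ac_chain:.0%}, DC chain: {dc_chain:.0%}, "
      f"savings: {savings:.0%}")
# -> AC chain: 69%, DC chain: 85%, savings: 19%
```

Every watt not lost in conversion is also a watt the HVAC plant doesn't
have to remove, which is why the power savings carries over to cooling.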
In the rack space you have some really big, fat DC cables overhead
carrying lethal amounts of current in the main distribution, and then
you have some interesting grounding requirements when you start to mix
DC and AC equipment in the same rack area. The racks typically are NOT
allowed to touch (ground-separation kits) and may or may not be isolated
from the floor ground, depending on the standard of the telco you're
working with.
Two major traditional long-distance telcos "do it right," in my opinion
from having worked with most of them, and the local RBOCs are all over
the map -- even from CO to CO there are differences that shouldn't
exist. Since I'm still working in the industry I won't name names, but
be advised that some telco co-los have very dangerous setups should AC
equipment short out to the racks.
-48 VDC at high current makes a really good arc welder.
The best part about a well-built DC plant is that the entire site is
really running all the time off the battery plant, and the battery plant
is on a "float" or "maintenance" charge cycle continuously. Many
expensive UPS systems are like this, but the mass of the battery plant
is usually much, much bigger in a CO, and with the battery acting as a
giant capacitor, the -48 VDC in a CO is usually rock solid: no ripple,
no fluctuations... (And usually it's actually more like -51 VDC,
depending on the telco.)
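The reason the nominal "-48 V" actually measures a few volts higher on
float can be sketched with some back-of-the-envelope numbers: a telco
battery string is typically 24 lead-acid cells in series, and float
charging is commonly in the neighborhood of 2.15 to 2.25 volts per cell.
(The 24-cell count and per-cell float range are assumed typical values,
not figures from the post.)

```python
# A nominal "-48 V" string: 24 lead-acid cells at ~2 V nominal each.
cells = 24
float_v_low = 2.15   # assumed low end of per-cell float voltage
float_v_high = 2.25  # assumed high end of per-cell float voltage

# On continuous float charge, the bus sits at the charger's float
# setpoint, not at the 48 V nominal figure:
low = cells * float_v_low    # 51.6 V
high = cells * float_v_high  # 54.0 V
print(f"Float voltage across the string: {low:.1f} to {high:.1f} V")
# -> Float voltage across the string: 51.6 to 54.0 V
```

That lands right around the "-51 VDC or so" figure above; the exact
setpoint depends on the telco's battery chemistry and plant standard.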
A loss of grid AC in a CO set up with a traditional DC plant means
nothing... no bumps in power, nothing... everything just continues to run.
> I haven't used any of the DC equipment, except indirectly through clients
> that are heavily telco oriented.
It's nice, if the CO/datacenter was built properly. I got a kick out of
the magazine articles touting it as "new technology" a couple of years
ago for commercial data centers. They just copied what Bell Labs and
AT&T engineered in the late '40s and deployed in the '50s for telco use.
And engineered it truly was. I have the engineering drawings for an
official AT&T outhouse (yes, they were all built the same, and built to
spec)... somewhere around here, just for a laugh.
Nate