[Seattle-SAGE] Server room build questions

Paul English tallpaul at speakeasy.org
Thu Apr 28 11:29:28 PDT 2005


First of all, thanks to everyone for the helpful feedback - I will go 
look at the datacenter mailing list and the recommended books as well.  
I'm going to splice together my replies. 

I forgot to ask in my initial request for vendor recommendations, 
particularly local ones. I'm not sure how much say we will get wrt 
vendors, but it can't hurt to try. 

On Wed, 27 Apr 2005, Gabriel Cain wrote:

> Yeah, I would definitely make sure to spell out all the build-out details. 
> Generally, if you leave room for corners to be cut, they shall be.  How 
> savvy at building rack space is this landlord?  I'd be careful about making 
> sure that they know what they're doing.

This particular landlord is fairly savvy, but I have a feeling that if I 
leave anything unspecified it will leave room for corners to be cut. 
We are considering the possibility of moving, which is why I didn't 
specify the landlord. If we move in with an unknown landlord they will 
also be doing the build, so I will just need to be even more specific. 

> I would lean towards having additional capacity available / yes, do have 
> backup HVAC.  I would very much hate to find myself in a situation where the 
> AC wasn't available, and my customer facing machines were rebooting or going 
> down due to the heat.

I had forgotten about our Movincool unit. That is part of our current 
cooling solution, and we will almost certainly use it as the backup AC for 
the future server room. It will be more than enough to keep our one rack of 
critical machines up, and probably enough to keep three racks of critical 
plus "operational" systems up. 

> > How much power should I spec? On this one I'm fairly sure that I should 
> > probably spec the full 20 rack's worth (200A?) and just get the breaker 
> > panel put into the room with circuits sufficient for 6 racks. 
> 
> Only 200A for 20 racks?  I use 40+ amps in most of my racks (and all of them 
> exceed 20A).   I'd go for 500A, at least.

You (and others) are right - that is a much more realistic number. 
_Currently_ the 24/7 and "operational" racks are fairly low-density, with 
lots of empty space in the racks, UPSen taking up large amounts of 
rack space, etc. We are actually using less than 100A for everything 
right now. Future "research" racks will most likely be a full 42U of 
dual-Opteron-class systems and will be > 40A each, so that is how I need 
to budget.

All our current systems are operating at 110V, but I imagine most of them 
have auto-switching power supplies or can be manually switched to 220V. 
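
To sanity-check that budget, here is the back-of-the-envelope arithmetic 
I'm working from. This is just a sketch: the rack counts and per-rack amp 
figures are placeholders pulled from this thread, and the padding uses the 
usual 80% continuous-load rule of thumb for breakers, not anything our 
electrician has signed off on:

# Back-of-the-envelope power budget; counts and amps are illustrative.
racks = {
    "existing 24/7 + operational": {"count": 3,  "amps_each": 20},  # assumed
    "future research":             {"count": 17, "amps_each": 40},  # ">40A each"
}

total_amps_110v = sum(r["count"] * r["amps_each"] for r in racks.values())
print(f"Raw load: {total_amps_110v} A @ 110V")

# Breakers are normally sized so continuous load stays under ~80%
# of the breaker rating, so pad the service accordingly.
print(f"Service to spec (80% rule): {total_amps_110v / 0.8:.0f} A @ 110V")

# The same wattage at 208V needs roughly half the current.
watts = total_amps_110v * 110
print(f"Approx. equivalent load: {watts / 208:.0f} A @ 208V")

Even with those placeholder numbers the total lands well above 500A at 
110V, which is one more argument for running the dense research racks at 
208/220V.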

On Wed, 27 Apr 2005, Scott McDermott wrote:

> I say ditch the raised floors. Since cold air falls, (presuming the rest of
> the space allows the option), you can have vents in the ceiling in front of
> your racks and you'll do fine. And adding raised floors to a space that
> doesn't already have raised floors will certainly not be worth it!

Okay, a few people said this, although I'm not clear on exactly why raised 
floors are outmoded. :-)

Obviously I will use ladder racks to manage Ethernet and KVM cables. 

How do I manage power delivery to the racks? Ceiling-mounted twist-lock 
outlets? We currently have wall-mounted receptacles with UPS cables 
running across the floor to the wall, which I hate. 

> Will your non-critical systems shut themselves down if the AC goes out at
> 2am? 

Yes. Almost all of our machines run Linux, and while it is a PITA to set 
up lm_sensors, we buy our research machines in large quantities of the same 
model, so I can set up lm_sensors once and copy the config to all the 
nodes. 
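
For what it's worth, the per-node shutdown logic doesn't have to be fancy. 
Something along these lines is what I have in mind - just a sketch that 
assumes lm_sensors is already configured and `sensors -u` is available; 
the 70C threshold and how it gets run (cron, daemon) would be tuned per 
machine model:

#!/usr/bin/env python3
"""Thermal-shutdown sketch for the non-critical research nodes."""
import re
import subprocess

SHUTDOWN_AT_C = 70.0  # assumed threshold, tune per chassis

def max_temp_c():
    """Return the hottest reading reported by lm_sensors."""
    out = subprocess.run(["sensors", "-u"], capture_output=True,
                         text=True, check=True).stdout
    temps = [float(m.group(1))
             for m in re.finditer(r"temp\d+_input:\s+([\d.]+)", out)]
    if not temps:
        raise RuntimeError("no temperature readings found")
    return max(temps)

if __name__ == "__main__":
    # Runs as root (e.g. from cron every few minutes); non-critical
    # nodes just power off cleanly rather than cook.
    if max_temp_c() >= SHUTDOWN_AT_C:
        subprocess.run(["/sbin/shutdown", "-h", "now",
                        "thermal limit exceeded"])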

We also have power and thermal monitoring on the room itself with a paging 
system (Nagios). 
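
The room check itself is just a small Nagios plugin. The exit codes 
(0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN) and the single status line are 
the standard Nagios plugin contract; how the probe actually gets read 
depends on the environmental monitor hardware, so that part is stubbed 
out in this sketch, and the thresholds are made up:

#!/usr/bin/env python3
"""Sketch of a Nagios plugin for the room's thermal probe."""
import sys

WARN_C, CRIT_C = 27.0, 32.0  # assumed room thresholds

def read_probe_c():
    """Placeholder: query the environmental monitor (SNMP, serial, ...)."""
    raise NotImplementedError

def main():
    try:
        temp = read_probe_c()
    except Exception as exc:
        print(f"ROOM TEMP UNKNOWN - {exc}")
        return 3
    if temp >= CRIT_C:
        print(f"ROOM TEMP CRITICAL - {temp:.1f}C")
        return 2
    if temp >= WARN_C:
        print(f"ROOM TEMP WARNING - {temp:.1f}C")
        return 1
    print(f"ROOM TEMP OK - {temp:.1f}C")
    return 0

if __name__ == "__main__":
    sys.exit(main())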

Thanks again everyone!
Paul
