January/February 2012 | www.missioncriticalmagazine.com
See p. 32
INSIDE: ASHRAE and Airside Economizers, see page 42
40G and 100G Cabling, see page 54
UPS Keeps Data Center on Track, see page 60
Table of Contents January/February 2012 | Volume 5, Number 1
COLUMNS
4 Editorial: Welcome to 2012. A lot has happened since our last issue. By Kevin Heslin
6 Talent Matters: Making It as a Data Center Professional. Following one individual's career. By Andrew Lane
10 Legal Perspectives: An International Crisis Impacts Your Business. Do your contracts provide an excuse in the event you cannot perform? By Peter V. K. Funk, Jr.
14 Cronin's Workshop: The Time Has Come for 380 Vdc. Testing theory in a real facility. By Dennis Cronin
18 Mission Critical Care: What's in Your Program? Electrical safety, Part 3 of 3. By Douglas H. Sandberg
22 Zinc Whiskers: Beyond PUE and Onto HPC. Power and cooling efficiency is the order of the day. By Bruce Myatt
28 Hot Aisle Insight: Moving into 2012. Cloud computing comes with a carbon lining. By Julius Neudorfer

FEATURES
32 COVER STORY: Data Center Reliability Starts With Site Selection. 12 considerations in finalizing a site. By Fred Cannone
42 The Impact of Expanded ASHRAE Ranges on Airside Economization. New guidelines make a big difference. By Mark Monroe
52 Are You Ready for 40G and 100G? 12- vs. 24-fiber MTP cabling for higher-speed Ethernet. By Gary Bernstein
58 2N Package Delivery. United Parcel Service, Inc. updates a primary data center's conditioned power distribution systems. By C. Benjamin Swanson and Christopher M. Johnston

DEPARTMENTS
64 New Products
67 Industry News
67 Events
69 Ad Index / Kip and Gary
70 Heard on the Internet
About the cover: This month's cover story presents 12 best practices for companies considering new data center space.
Editorial By Kevin Heslin, Editor
SUBSCRIPTION INFORMATION For subscription information or service, please contact Customer Service at: (847) 763-9534 or fax: (847) 763-9538 or
[email protected]
Welcome to 2012
A lot has happened since our last issue
Due to the holiday season and other business imperatives, regular readers of Mission Critical go an extended period without seeing their favorite data center magazine at this time of year. Consider that our November/December 2011 issue mailed in early December, and it must be late February if you are reading this January/February issue in print. Of course, visitors to our website know that we haven't exactly been resting on our laurels. They already know about some important developments, including two webinars we held earlier this month and a third scheduled for February 29th. The first, on February 7, included Syska Hennessy's Vali Sorell and Intel's John Musilli discussing best practices in data center cooling. AdaptivCool, Upsite, and Stulz sponsored the event. Cummins sponsored an event we held on February 21. The speakers were RTKL's Rajan Battish and Dave Mueller of Enernex. Power Assure is hosting that final event. And by the time you read this, I hope to have announced speakers for a backup power discussion on April 10th. Mission Critical also launched a new, expanded blog section. Eight exciting new bloggers will be adding their thoughts to Schneider's Domenic Alcaro, NAAT's Julius Neudorfer, and JLL's Michael Siteman. Some of these new bloggers are familiar to regular readers of Mission Critical or attendees at industry conferences. Best of all, this area will be open for your comments, and Mission Critical will be open to new blogging contributors. Just email me at
[email protected]. Of course, web visitors don't get all the treats first. This issue includes our first presentation of Kip and Gary (see page 69), Diane Alber's hilarious cartoon look at data centers. Diane's Kip and Gary series is growing in popularity, perhaps because of the gentle and accurate way she sees the data center industry. I hope you enjoy Kip and Gary. We'll have another installment in the next issue and occasional web-only presentations, too. I hope to be able to tell you more about Diane in a future issue and also to bring some other exciting new projects to our pages and website. ■
Kevin Heslin Editor
2401 W. Big Beaver Rd., Suite 700 Troy, MI 48084-3333 (248) 362-3700 • Fax: (248) 362-0317 www.missioncriticalmagazine.com
GROUP PUBLISHER Peter E. Moran •
[email protected] (914) 882-7033 • Fax: (248) 502-1052
EDITORIAL Kevin Heslin, Editor
[email protected] • (518) 731-7311 Caroline Fritz, Managing Editor
[email protected] • (419) 754-7467
ADVERTISING SALES Russell Barone Jr., Midwest and West Coast Advertising Manager
[email protected] • (219) 464-4464 • Fax: (248) 502-1085 Vic Burriss • East Coast Advertising Manager •
[email protected] (610) 436 4220 ext 8523 • Fax: (248) 502 2078
ADVERTISING PRODUCTION & EDITORIAL DESIGN Kelly Southard, Production Manager Jake Needham, Sr. Art Director
MARKETING Kevin Hackney, Marketing Manager
[email protected] • (248) 786-1642 Chelsie Taylor, Trade Show Coordinator
[email protected] • (248) 244-6249 Jill L. DeVries, Editorial Reprint Sales
[email protected] • (248) 244-1726 Kevin Collopy, Senior Account Manager
[email protected] • 845-731-2684
AUDIENCE DEVELOPMENT Hayat Ali-Ghoneim, Audience Devel. Coordinator Devon Bono, Multimedia Coordinator Catherine M. Ronan, Corporate Audience Audit Manager
LIST RENTAL Postal contact: Kevin Collopy at 800-223-2194 x684
[email protected] Email contact: Shawn Miller at 845-731-3828
[email protected]
CORPORATE DIRECTORS See the complete list of BNP Media corporate directors at www.missioncriticalmagazine.com.
INDUSTRY ALLIES
INTERNATIONAL
The end-to-end reliability forum.
AFCOM Members
MISSION CRITICAL (ISSN 1947-1521) is published 6 times annually, bi-monthly, by BNP Media II, L.L.C., 2401 W. Big Beaver Rd., Suite 700, Troy, MI 48084-3333. Telephone: (248) 362-3700, Fax: (248) 362-0317. No charge for subscriptions to qualified individuals. Annual rate for subscriptions to nonqualified individuals in the U.S.A.: $115.00 USD. Annual rate for subscriptions to nonqualified individuals in Canada: $149.00 USD (includes GST & postage); all other countries: $165.00 (int’l mail) payable in U.S. funds. Printed in the U.S.A. Copyright 2012, by BNP Media II, L.L.C. All rights reserved. The contents of this publication may not be reproduced in whole or in part without the consent of the publisher. The publisher is not responsible for product claims and representations.
Canada Post: Publications Mail Agreement #40612608. GST account: 131263923. Send returns(Canada) to Pitney Bowes, P.O. Box 25542, London, ON, N6C 6B2. Change of address: Send old address label along with new address to MISSION CRITICAL, P.O. Box 2148, Skokie, IL 60076. For single copies or back issues: contact Ann Kalb at (248) 244-6499 or
[email protected].
MY IT PRO SAYS EATON HELPS HIM TRIM THE FAT. FRANKLY, I'M A LITTLE WORRIED.
Backup Power (UPS) The Eaton 9E UPS offers 98% efficiency.
Rack Power Distribution Manage power consumption.
Power Management Software Manage virtual environments.
Streamline power protection with the 98% efficient Eaton 9E UPS. Your desk toys may have concerns—but you won’t. Because now you don’t have to sacrifice uptime to reduce energy consumption. With the Eaton 9E UPS you can have it all—premium protection and 98% efficiency in a 35% smaller footprint.
You could save your company more than $85,000 per year over the life of the unit—and save yourself from having to work weekends with your desk toys. Read our white paper for hot tips on protecting your data center.
Is power protection costing you more than it should? Get our FREE white paper at switchon.eaton.com/mission Input 21 at www.missioncriticalmagazine.com/instantproductinfo
Eaton, Intelligent Power and PowerAdvantage are trademarks of Eaton Corporation. ©2012 Eaton Corporation. All rights reserved.
Talent Matters By Andrew Lane Andrew Lane is a partner with Critical Facility Search Partners, a boutique executive search and market research firm focused exclusively on the data center market since its inception in 2006.
EDITORIAL Kevin Heslin, Editor
[email protected] | (518) 731-7311
TECHNICAL ADVISORY BOARD
Making It as a Data Center Professional
Following one individual's career
An occupational privilege I never take for granted is access to a variety of organizations' strategic charters and the senior executives who define them. My last column looked at Tim Caulfield's journey to CEO of American Internet Service (AIS) in San Diego, CA. This time we're catching up with Chris Crosby, post-Digital Realty Trust (DLR) and pre-Compass Data Centers, his current venture. Chris Crosby's name is, for the time being, linked to DLR, which he co-founded, led to prominence, and used to define the industrialization of data center design, construction, financing, and ownership. Prior to DLR, Chris was founder and managing director of Proferian, the technology-related leasing platform within the GI Partners portfolio, which was rolled into the IPO for DLR. Prior to Proferian, Chris served as a consultant for CRG West, now Coresite. For the first ten years of his career, Chris was active in international and domestic sales, sales management, and product development at Nortel Networks, as an aside to multiple other early entrepreneurial ventures. When I first found myself in the data center industry in 2006, Chris Crosby's name was everywhere. An athletic figure, he was hardly an 800-pound gorilla, but he was the face of DLR and therefore becoming more prominent. This recognition made Chris popular and even harder to approach. He was a busy businessman with lots of demands on his time. Chris was an icon with all the answers and carried himself with a certain earned confidence that bordered on brash arrogance. Interesting how people can be defined by the image of the organization for which they work. Even more interesting is meeting someone outside work and realizing that your initial perceptions were very wrong and that you might have fallen for a stereotype. I pled guilty as charged as I opened up our conversation.
AJL: Chris, given your professional success to date, what's left on your bucket list?
CC: Professionally, I'm already enjoying the freedom of thinking clearly about building a brand again. I get to figure out my own personalized approach based upon all of my experiences and the input of incredibly bright friends and colleagues. It's freeing and fun. It's almost like a disease when you want to have ever-increasing responsibilities.
Robert Aldrich, Cisco
Bruce Myatt, PE, Critical Facilities Round Table, M+W Group
Christian Belady, Microsoft
Russ B. Mykytyn, Skae Power Solutions
Dennis Cronin, SteelOrca
Dean Nelson, EBay
Peter Curtis, Power Management Concepts
Glen Neville, Deutsche Bank
Kevin Dickens, Jacobs
Thomas E. Reed, PE, KlingStubbins
Peter Funk Jr., Duane Morris
Leonard Ruff, Callison Architecture
Scott Good, gkkworks
David Schirmacher, Digital Realty Trust
Peter Gross, Bloom Energy
Jim Smith, Digital Realty Trust
Cyrus Izzo, Syska Hennessy Group
Robert F. Sullivan, ComputerSite Engineering, Inc.
Jack Mc Gowan, Energy Control
Stephen Worn, Data Center Dynamics, OT Partners
John Musilli, Intel Corp
Henry Wong, Intel Corp
COLUMNISTS Peter Curtis, Power Management Concepts Digital Power |
[email protected] Dennis Cronin, SteelOrca Cronin’s Workshop |
[email protected] Peter Funk, Jr., Duane Morris Legal Perspectives|
[email protected] Bruce Myatt, M+W Group, Critical Facilities Round Table Zinc Whiskers |
[email protected] Doug Sandberg, DHS Associates Mission Critical Care |
[email protected] Julius Neudorfer, NAAT Hot Aisle Insights |
[email protected]
Andrew Lane, Critical Facility Search Partners Talent Matters |
[email protected]
Treats headache, anxiety, upset stomach and insomnia
Input 20 at www.missioncriticalmagazine.com/instantproductinfo
Being in charge of a mission critical data center can cause headaches and anxious sleepless nights. To battle these ailments shared by data center managers around the world, Geist developed a completely customizable set of tools to POWER, COOL, MONITOR, and MANAGE your data center. With Geist’s suite of products you can predict, pinpoint, and prevent potential hazards so you’ll feel great and rest easy.
www.geistglobal.com
Talent Matters Continued from page 6
Personally, things have been great with this last summer off. I've had time to reflect on the fact that I've had one blessing after the next in this life. I've gotten to see a lot more of my wife and two kids. I've been coaching my kids' sports teams. I've had date nights with my wife. Generally, it has been a much better balance for me, which is what I focused on accomplishing near term. Still left on the bucket list is the AT&T Pebble Beach Pro-Am….
AJL: Our very own "Data Center Genie" (picture Bill Mazzetti?) arrives in a puff out of a generator and grants you three wishes for the data center industry. What do you wish?
CC: One, humility. We need to realize that our industry behaves like a child in its early teens. You know the times when you think you know everything but you really don't? Don't get me wrong; this is obviously a great industry to be in, but think about how many new ideas that have already been done in other industries are being re-created here. We tout all these "new technologies" like we created them and own them. Modularity. Airside economizers are free cold air. Hot- and cold-aisle separation has been done in fab space for years. Two, recognition. This is an industry that is going to require a different breed of athlete with different skill sets, such as process engineering. We need to promote its growth and success early to professionals as a career in order to keep growing at the rate we can. Three, transparency. We need to start opening up to customers and facilitating allegiances and alliances. We need to help educate each other.
AJL: What do you see going on that you like?
CC: Everything about the space. Lots of capital in love with the fact that it is a high-cash flow, asset-based business. It is becoming mainstream. There is a tremendous energy and vibrancy to it, and it's great to be a part of it and know why it's going on. There aren't too many careers where you get to be a part of something that is completely new. Here I am getting a chance to go around again. I had a whole summer off where I fielded a lot of phone calls and gained a lot of perspective. It was healthy for me. I got disassociated from the personalities of the business and now have a clear, refreshed perspective on the business potential. The result of this will be Compass Data Centers, essentially bringing rapidly deployable, highly customizable wholesale solutions to emerging markets. One thing I have learned is that I'm much more valuable and much happier at the growth stage. I'm not so good at, nor do I want to be, managing the $1B to $3B in revenues stage.
AJL: Consider a 40 under 40 list in the data center space. You would have made the list. Who would make it now?
CC: I'm not going to touch that one. Too many I might leave off the list and have that interpreted the wrong way.
AJL: Company movers? Who's hot?
CC: There are a lot of impressive companies, small and large, with some neat concepts out there. I love what Softlayer has done with their toolsets and provisioning—investing in an area that has lacked it. Innovative, not an also-ran play. Impressive company. Equinix is, too. They are very good at scaling the business, which requires doing major things well at an organizational level. They have also achieved a ubiquity of brand. Digital is, of course, impressive with their industrialization, maturity, and continued growth. i/o isn't your run-of-the-mill company. Tony and George have something new and exciting going on. Rackspace continues to be a cool leader in transforming to a new customer cloud-based model. There are many others, considering that IBM and Digital each own less than 3 percent of the market, based on an IDC calculation of total data center sq. ft. in the billions.
MORE
We chatted on about issues inside and outside the industry. The conversation turned back to golf and the then-looming 7x24 conference in Phoenix in November. During a particularly pleasant "good walk spoiled" with friends, I learned Chris not only has a good heart but a great soul. Before the round, we chatted about where in the world he had played. "All over" is the short answer, but I figured it was all business entertainment and coercion. Not so, just a successful son taking his dad on a couple of long trips to get away. As I approached the first green, it sounded like someone's ringtone was going off for an inordinately long time. I learned it was Chris, but not his phone. He had brought a little boombox and iPod, which we enjoyed over the next four hours, and, to a man, we asked each other, "Why the hell didn't we think of this before?" Leave it to Chris to innovate, to lead at a quick pace, to be a father, husband, and son, and to look ahead and defy convention. You learn a lot about a man when you hear his music. I'm still waiting for that playlist, and we're all looking forward to seeing what's next for Chris Crosby. ■
◗ REPRINTS OF THIS ARTICLE are available by contacting Jill DeVries at
[email protected] or at 248-244-1726.
Flexible. Scalable. Efficient.
Ambient Air to Liquid Cooling Options
The Power to Protect
Find Out How Cool the ES Series Really Is!
Input 94 at www.missioncriticalmagazine.com/instantproductinfo
Legal Perspectives By Peter V.K. Funk, Jr. Peter Funk is a partner at Duane Morris LLP, where he practices in the area of energy law with a focus on energy generation and energy management/conservation projects.
An International Crisis Impacts Your Business
Do your contracts provide an excuse in the event you cannot perform?
In 1956, the Suez Canal was taken over by Egypt. In response, Britain, France, and Israel attacked to protect their interests in the canal. Egypt retaliated by blocking the Suez Canal, which caused an extreme disruption in international trade. In 2012, we are faced with threats by Iran to close the Strait of Hormuz, which provides passage for approximately 20 percent of the world's oil, in retaliation for sanctions relating to Iran's nuclear program.
The two primary legal doctrines arising from a long history of court decisions are impossibility of performance and commercial impracticability. In addition, if the transaction involves the sale of goods, Uniform Commercial Code (UCC) Section 2-615 provides specific statutory excuses.
Do your company's agreements with its customers provide an "excuse" for not performing if history repeats itself at the Strait of Hormuz? Or a volcano erupts? Following such events, the bad news may arrive in the form of notices from your energy suppliers variously stating:
• That it cannot deliver; or
• That it is allocating limited supplies among its customers; or
• That its prices will rise substantially, effective immediately.
First, you check the agreements immediately, only to find that, yes, the seller appears to have the right to take the action it took. You then look at your company's contracts with its customers and, when you find a "force majeure" provision in your contract that excuses your performance
in the event of war, natural disasters, or events beyond your control, breathe a sigh of relief, right? Will your seller’s or your company’s performance be excused in this situation? The answer is that it depends upon the wording of the contract and which country or state’s law will be applied. The two primary legal doctrines arising from a long history of court decisions are impossibility of performance and commercial impracticability. In addition, if the transaction involves the sale of goods, Uniform Commercial Code (UCC) Section 2-615 provides specific statutory excuses. Relying on case law or the UCC may not be sufficient, so force majeure clauses are often included to provide specific contractual excuses and limit liability in the event of specified occurrences and circumstances beyond the control of the parties. Such clauses may also include other situations, such as when a newly enacted law prevents performance. You should be aware that weather-related events are typically included in force majeure provisions, but, even so, not all courts will excuse the affected party from performing if there is only a general reference to weather—it is better to be specific and, for example, refer to “hurricanes.” Otherwise, the result may be litigation. There are also cases in which a party seeks to be excused from its contractual obligations based upon economic reasons. In the early 1970s, when the first round of OPEC oil price increases struck, a utility had been purchasing oil as power plant fuel from a major oil company at $2/barrel. The OPEC prices that the oil company had to pay for oil soon rose above $10 a barrel and continued to rise. The utility demanded that the oil company continue to supply oil at $2/barrel. The force majeure provision in the sales agreement between the oil company and the utility did not mention an economic event such as an OPEC price increase. Faced with huge financial losses if it continued to supply oil far below its cost, the oil company sought a court order requiring the utility to pay the increased prices. The court granted the oil company relief based upon commercial impracticability.
STULZ CyberRow Intelligent Rack Cooling
Another innovative, economical, energy efficient data center cooling solution by STULZ.
Designed for scalability, reliability, and seamless integration into new or existing data centers; STULZ CyberRow rack cooling systems are suitable for hot-aisle containment, cold-aisle containment, open aisle, and close coupled configurations in small to enterprise size data centers.
Predictability - Put the cooling where the heat is.
Versatility - Designed for easiest installation.
Availability - Stay on top of your operation.
ROI - Variable and Scalable Capacity.
Scalability - Add STULZ CyberRow cooling units as your data center grows.
• Designed for Hot Aisle and Cold Aisle, with or without containment
• Closed-Loop Configuration for Direct one to one Rack Cooling
• Designed for installation on raised floor or non-raised floor applications
• Suitable for new and existing data centers
• Can be installed in the middle or at the end of a row - 12" and 24" cabinet widths
• Chilled Water & Direct Expansion (Air, Water, or Glycol) Cooling Methods
• Wide range of cooling capacities for small, medium, and the largest applications
• Top and bottom pipe and power connections
• 100% front and rear service access
• Highest cooling capacities in the industry - up to 75 kW per unit
• Adapts to all major rack manufacturers' racks and rack containment systems
• Castors included to easily locate in place
• STULZ E2 Microprocessor Controls
• pLAN communicates with up to 12 units without a BMS
• Seamless integration with all BMS platforms
• Fully adjustable fan speed control for energy savings
• Built in redundancy
• Capacity assist functionality saves energy and operating expenses
Perimeter Cooling | Free Cooling | HHD Cooling | Replacements & Retrofits | Modular Cooling | Ultrasonic Humidification
Scan to learn more
STULZ Air Technology Systems, Inc. 1572 Tilco Drive, Frederick, Maryland 21704
[email protected]
www.stulz-ats.com
Input 41 at www.missioncriticalmagazine.com/instantproductinfo
Legal Perspectives Continued from page 10
The requirement to pay higher oil prices affected the utility and its customers, since the utility's tariff included a fuel adjustment clause (FAC), which increased its customers' bills to cover the higher oil costs. Certain commercial customers then brought a legal action asking the court to issue an order blocking the utility from implementing the FAC and limiting its charges to those approved in its most recent rate case. The utility prevailed since the FAC was lawfully approved for inclusion in the tariff. The court also recognized public-policy considerations. Requiring the utility to provide electricity below the cost of production would imperil the utility's ability to fulfill its statutory role of providing reliable power to its customers. This was an interesting case since oil was, in fact, available. Nevertheless, it was commercially impractical for the seller to deliver it at the contractual price since the price had been driven up by an international event beyond the control of the seller. It was also significant to the court that the OPEC oil price increase was not foreseen by the parties when they entered into their agreement. Yet the outcome of the oil company's court case was not a foregone conclusion since economic events do not typically provide an excuse from performance unless specifically mentioned in the force majeure clause. In addition, it is not unknown for commodity prices to rise and fall sharply, and both sellers and buyers have the ability to hedge against such risks. The court might have found against the oil company by ruling that to allow a fixed-price contract to be voided when prices fluctuate would defeat the whole purpose of having a fixed price and could provide an unbargained-for advantage to the excused party. In order to address the potential uncertainties of court treatment of supply-related excuses, a buyer may be able to enhance its opportunity for an effective excuse by including specific references to cessation or allocation of supply in the force majeure clause. It is important to realize that courts may treat buyers differently than sellers. In another case, a buyer unsuccessfully tried to "get out of" its contract to purchase fuel oil by seeking relief in court when oil prices dropped as certain producers in the Mideast sought to regain market share, in part by undermining the economic basis for alternative energy and for energy conservation. A sharp decline in energy prices was not among the events permitting the buyer an excuse, and the general language in the clause referring to events beyond the parties' control was insufficient to permit a party to avoid its obligations simply because of a change in prices. The court did not permit the buyer an excuse since, while this particular change in pricing may not have been foreseeable, it was foreseeable that there would be price fluctuation. Historically, the grounds under which an excuse for non-performance was permitted have been largely limited to "impossibility of performance." There are courts that now recognize that the defense of impossibility can be unreasonable in requiring that performance be absolutely impossible and instead set the standard at commercial impracticability. Although it is desirable to have enforceable contracts, most courts will find that performance is impracticable when an event or circumstance is beyond the control of the party seeking to be excused and the cost of performance is excessive and unreasonable. To have a valid excuse under UCC Section 2-615, a seller must show that a contingency occurred that made performance impracticable, and that the parties assumed that the contingency would not occur when they entered into the contract. If a seller whose supply has been limited can meet those criteria, the seller must allocate production and deliveries among its customers, and then must notify the buyer of any delay or non-delivery and, if an allocation is to be made, of when and how much. To sum up, whether relief is available under a force majeure clause depends upon the wording of the clause. Usually, it will list the events that constitute force majeure followed by a "catch-all" phrase such as "other events beyond the reasonable control of the parties." Even events that are beyond the control of the parties, for example, a truck accident, may not provide a valid excuse if based upon a "catch-all" phrase, since truck accidents are commonplace and not unforeseeable. For that reason, it is important to list specific events in the force majeure clause since reliance on a catch-all phrase is problematic and can be unsuccessful. The discussion in this column is not intended to be legal advice and you should consult your attorney for advice on the points discussed above. ■
Now with Wireless Sensors
Sensaphone Remote Monitoring Products use redundant communication paths, built-in battery backup, and supervised sensors to make sure that when something goes wrong in your computer room ...YOU GET THE MESSAGE. Notification via: Voice Phone Call, E-mail, Text Message, SNMP Trap, Pager, Fax.
Get your FREE application guide now
SENSAPHONE
®
REMOTE MONITORING SOLUTIONS
877-373-2700
MADE IN THE
www.sensaphone.com
Input 63 at www.missioncriticalmagazine.com/instantproductinfo
◗ REPRINTS OF THIS ARTICLE are available by contacting Jill DeVries at
[email protected] or at 248-244-1726.
Fresh designs. Crenlo® products are designed to be functional and stylish. As an industry leader in the design, manufacture, and integration of high quality enclosures, we offer: standard Emcor® enclosure solutions, which can be modified to meet the needs of most applications, and custom Crenlo enclosure solutions, which can be built for any application, ranging from inverter enclosures to package drop boxes. The only thing stronger than our enclosures is our commitment to customer satisfaction. At Crenlo, we see enclosures differently. Crenlo leads the market in premium enclosures by providing:
• Custom engineered designs in any volume
• High quality craftsmanship for durability and strength
• Products that can be modified for your application
• Stylish design options including colors and accessories
www.crenlo.com/enclosures/fresh
We see enclosures differently. Input 90 at www.missioncriticalmagazine.com/instantproductinfo © 2012 Crenlo Cab Products, Inc. All Rights Reserved. Crenlo® and the Crenlo® Logo are registered trademarks in the United States (and various other countries) of Crenlo Cab Products, Inc. © 2012 Emcor Enclosures, Inc. All Rights Reserved. Emcor® is a registered trademark in the United States (and various other countries) of Emcor Enclosures, Inc.
Cronin’s Workshop By Dennis Cronin Dennis Cronin is COO of Steel Orca. He has been in the mission-critical industry for more than 30 years. He is a founder of 7x24Exchange, a member of AFCOM and the Real Estate Board of New York, and an advisory board member of Mission Critical.
The Time Has Come for 380 Vdc
Testing theory in a real facility
When the Steel ORCA team started planning, it was our intention to be the "Greenest Data Center on Earth," and, as a data center/services company, we were in the enviable position of being unrestrained by established corporate policies, legacy designs, or outdated facilities. Beginning with a completely blank slate, we embarked on a two-year effort of looking at virtually every advanced design, concept, and product that showed promise of delivering real energy efficiency while maintaining or improving reliability and offering reasonable economic return.
Along the way we found numerous solutions and were constantly challenged by new ideas/products in the development pipeline that outperformed the latest advanced products in the marketplace. It quickly became apparent that if we were to lay claim to being "Greenest," we would need to look towards advanced technologies designed to support the next generations of technology as well as capable of sustaining the legacy technology today. High-voltage direct current (HVDC) is one of the technologies we examined, and it became our preferred power distribution medium. At the start, we were wary of all the vendor hype around the application of HVDC in a data center and shared all the concerns of the numerous naysayers. We participated in online discussion groups where many knowledgeable participants wrote off HVDC as a fad, as a dangerous design, and as a solution without products to support it; they offered numerous other reasons just to say it was a bad idea. Yet, as we did more research and connected to people who had implemented HVDC and identified vendors producing the products, from high-quality rectifiers to the last 10 feet of connectivity (cords, plugs, outlets, and server power supplies), we slowly became believers that the market is ready for large-scale 380-Vdc implementation.
HISTORY
Losses in UPS and HVDC systems in North America and elsewhere.
Interest in HVDC started to develop from 2004 through 2006 with work at EPRI and proofs of concept (POC) executed at Sun Microsystems; however, manufacturers needed to develop standards for connector types and specifications for the dc power supplies and rack-mounted power strips. Concurrent with the technology connectivity and product development, the electrical design community needed to see several POC installations to fully evaluate the impact on the data center support infrastructure. One of the first facility demonstration projects to gain recognition was at Syracuse University in 2009, where Validus (now part of ABB, www.validusdc.com) installed an HVDC system to power an IBM Z10 mainframe. Then in 2010, IBM announced two servers running on
A new angle on cool. In-floor Cooling Solutions that deliver the right amount of airflow to the right place.
• DirectAire® grates directly match airflow to the heat load, nearly eliminating by-pass air.
• PowerAire® fan assist boosts capacity to 28.5kW/rack.
• SmartAire® VAV dampers match airflow to variable heat loads.
• Cool up to 20kW/rack without taking up valuable equipment space.
• Easily improve the efficiency of rack-level cooling in retrofit applications.
To see the cost savings of your data center, download our In-floor Cooling Solutions tool at tateaccessfloors.com/infloor or call our technical services department.
800 231 7788 | tateaccessfloors.com/infloor | Patents Pending
Input 76 at www.missioncriticalmagazine.com/instantproductinfo
Cronin's Workshop Continued from page 14
380 Vdc, one POC at the University of California and the other POC at Duke Energy. You can watch the online video demonstration of a live direct current environment at http://hightech.lbl.gov/dc-powering/videos.html. Then came 2011, and connectivity products started making it to market, with companies such as Universal Electric (www.uecorp.com) launching Vdc busway products and HP modifying its power supplies to be hot swappable. These changes enable ease of conversion to an HVDC environment. One of the surprises we found in our research was an NTT R&D analysis that identified numerous facilities in Japan, Europe, and the U.S. that currently are using HVDC. With a reasonable installed base, 380 Vdc is rapidly moving from bleeding edge to leading edge, and with so many suppliers, manufacturers, and technology providers supporting 380 Vdc, our interest was reinforced. Yet as operators, we knew that if we became a 380 Vdc shop we would need to provide our clients with a migration path to convert over time.
CATALYSTS BEHIND HVDC
An overall energy-efficiency improvement between 8 and 15 percent is an obvious driver, but there are also other significant attributes that should not be overshadowed by the energy calculations. These include space, reliability, CapEx savings, reduced carbon footprint, and safety. Experts can debate the size of the efficiency improvement, but suffice it to say that removing multiple power conversions in the power train will produce savings. At a minimum the HVDC design removes the inverter from the UPS and eliminates the first ac/dc conversion in the server power supply. Transformer PDUs sitting on the data center floor can also be eliminated. So the savings are real and not vapor watts. Space and reliability are directly related to the elimination of equipment and components. Less equipment equals less space, and fewer components statistically equals improved reliability. CapEx and OpEx savings follow the use of less space, less power, and therefore less cooling. All of the above factors contribute to lowering the carbon footprint to build and operate. Then there is the safety issue. This is probably one of the least understood functions in using HVDC. One member in a discussion group even went so far as to say that he would rather be shocked by ac than dc while acknowledging that either would be fatal. Such statements were typical of comments from people with limited to no direct experience in the HVDC topology. Because I do not consider myself an expert in short-circuit calculations, I sought out industry experts to find that a properly designed HVDC application can deliver significantly lower short-circuit currents than its equivalent Vac counterpart.
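To see why stripping out conversion stages adds up, here is a minimal sketch in Python. The per-stage efficiencies are hypothetical placeholders chosen only to illustrate the arithmetic; they are not measurements from Steel Orca or from any particular product.

```python
# Hypothetical per-stage efficiencies (illustrative only; real values vary
# with product, topology, and load level).

def chain_efficiency(stages):
    """End-to-end efficiency is the product of the per-stage efficiencies."""
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

# Conventional ac path: UPS rectifier, UPS inverter (double conversion),
# a transformer PDU on the floor, then the server PSU's ac/dc front end.
ac_path = [0.97, 0.96, 0.98, 0.94]

# 380 Vdc path: one rectification stage feeding the dc bus, a small
# allowance for dc distribution, and the server's simpler dc input stage;
# the UPS inverter, the transformer PDU, and the server's first ac/dc
# conversion all drop out.
dc_path = [0.97, 0.995, 0.97]

eta_ac = chain_efficiency(ac_path)   # ~0.86
eta_dc = chain_efficiency(dc_path)   # ~0.94
print(f"ac chain: {eta_ac:.1%}, 380 Vdc chain: {eta_dc:.1%}, "
      f"relative gain: {eta_dc / eta_ac - 1:.1%}")
```

With these assumed numbers the end-to-end gain comes out near 9 percent, inside the 8 to 15 percent range cited above; the point is the structure of the calculation, not the particular stage values.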
WHY 380 VDC?
There have been numerous discussions on this matter, as the dc power supplies in servers operate across a wide range of dc voltages. The power design inside the servers goes something like this: The server receives ac power from an outlet, corrects it for power factor, and converts it to +380 Vdc, and then back to ac (double conversion) before making a final conversion back to dc and a breakdown into the multiple low-voltage dc levels that the electronics actually operate at. By eliminating the initial power factor correction and ac/dc conversion we eliminate conversion losses in the server. Although this will increase a data center's power usage effectiveness (PUE), it should at least be partially offset by the facility energy savings through the elimination of the UPS inverter and a reduced cooling requirement. The focus is not PUE but achieving the lowest overall energy consumption. The server power supplies are generally set at +380 Vdc, so this becomes the logical HVDC voltage. There are parts of the globe, however, where the focus is on other HVDC ratings (240 Vdc). So back to the title, "The Time Has Come for 380 Vdc." We have done the research on 380 Vdc along with a wealth of other promising technologies that are hitting the market. Now it is time to develop the implementation strategies, including providing a migration path for our clients. Both the engineering details and the transition planning are proceeding. In the coming year you will see our plans develop along with many others pursuing the benefits of 380 Vdc. This is a solution to watch, as it will continue to pick up market share. This is just one of over a half dozen new solutions we have selected to implement in order to gain maximum efficiency in our operations for years to come. The process is our kaizen (from the Japanese kai, "change," and zen, "good": improvement). This method was made famous by Masaaki Imai's book Kaizen: The Key to Japan's Competitive Success. The core principle is the (self) reflection of processes (feedback). The purpose is the identification, reduction, and elimination of suboptimal processes (efficiency). We encourage all data center operators to establish their own Kaizen or Continuous Improvement Process (CIP). It is a passion. We owe it to our clients, to our society, and ourselves. ■ ◗ REPRINTS OF THIS ARTICLE are available by contacting Jill DeVries at
[email protected] or at 248-244-1726.
ARE DIESEL COSTS SQUEEZING YOUR PROFITS? If your diesel engines are in continuous or peak shaving operation, GTI Bi-Fuel® delivers the power to reduce your fuel costs—substantially. This patented system draws low-pressure, clean burning, natural gas into the engine, substituting it for a large portion of the diesel fuel, lowering costs, and reducing the need to haul diesel fuel to the site. Proven in thousands of installations, GTI Bi-Fuel® is an easy retrofit requiring no engine modification. Find out more about the benefits of GTI Bi-Fuel® at www.gti-altronic.com, or call 330-545-9768. Input 17 at www.missioncriticalmagazine.com/instantproductinfo
GTI Bi-Fuel® — A Product of the HOERBIGER Group
Mission Critical Care By Douglas H. Sandberg Doug Sandberg is principal, DHS Associates
What's in Your Program?
Electrical safety, Part 3 of 3
The problem with mission-critical emergency systems is that failures only occur when the systems are called upon to operate. Comprehensive electrical maintenance does not preclude a failure; however, it dramatically increases the odds that a problem can be detected and corrected in advance. The problem with electrical safety is that folks rarely realize the potential consequences until after an incident occurs. Don't allow your employees, contractors, or business to become another statistic. Electrical shock and arc flash are the two primary types of electrical safety hazard in your workplace.
• Hazard class: the level of hazard exposure.
• Incident energy: the amount of energy generated during an electrical arc impressed on a surface, 18 in. (the length of the average forearm) away from the source of the arc, expressed in calories per centimeter squared (cal/cm²). This is worst case, as if you were standing directly in front of the energized conductor. The farther you are from the source, the lower the cal/cm².
• Personal protective equipment (PPE) required: the specific PPE required for the class hazard faced.
• Voltage hazard: the voltage level one would be exposed to at the point of access.
• Equipment identification: the equipment the information refers to.
• Arc flash protection boundary: the distance from the access point at which the incident energy from an arcing fault would equal 1.2 cal/cm² (equivalent to a mild sunburn).
• Limited approach boundary: the line that may not be crossed by unqualified persons, unless accompanied by qualified persons, both wearing appropriate PPE.
• Restricted approach boundary: the boundary that only qualified persons are permitted to approach exposed, ener-
Arc-flash hazard warning label
Electrical shock occurs when the human body becomes part of an energized electrical circuit. The degree of injury is directly related to the path the current takes through the body. As little as one milliamp is enough to cause death. Arc flash is literally a fireball that occurs when an energized conductor is unintentionally connected to another energized conductor or ground. The air within the sphere of the established arc becomes conductive, and the arc grows exponentially until such time as current is interrupted.
Question: What is an arc-flash hazard warning label?
Answer: A label containing all necessary information about the arc-flash hazard faced at a specific location that is affixed to each piece of electrical equipment with a removable cover or door providing access to current-carrying conductors when energized (see the figure).
Question: What information is contained on the arc-flash hazard warning label?
Answer: All pertinent information necessary so personnel understand the degree of hazard faced and protective measures required.
Breathe easy. System Sensor offers aspirating smoke detection. To keep your mission-critical facility up and running, you need to manage issues rather than react to emergencies. The FAAST™ Fire Alarm Aspiration Sensing Technology offers the most early and accurate fire detection available, so you can mitigate risks before disaster strikes. Input 73 at www.missioncriticalmagazine.com/instantproductinfo
Learn more at systemsensor.com/faast.
Mission Critical Care Continued from page 18 gized conductors, wearing appropriate PPE and with a written and approved work plan. • Prohibited approach boundary: the line that is considered to be the same as actually contacting the exposed part. A risk assessment must be completed prior to crossing this line. Question: How do manufacturers deal with this hazard when designing electrical gear? Answer: Manufacturers are promoting a variety of design features generally divided into active and passive solutions. Active protection seeks to prevent the arc from happening and mitigating the event to a high degree. Examples include: • Arc-flash detection. Since an arc flash will continue until current is interrupted, early detection is a huge advantage. One such detector incorporates an unclad fiberoptic loop routed around the inside of the gear to detect a sudden change in the intensity of the ambient light over a
very brief duration of time coupled with current transformers to detect the current spike associated with an arc-flash incident. The output of this detector is designed to trip the upstream over current device very quickly thereby minimizing the duration of the event. • Some manufacturers have taken a more direct physical approach by coupling rapid detection of an arc-flash event with the direct physical intervention of a secondary fault, which is designed to safely deplete the energy from the original fault and trip the upstream over current protective device. One such device is GE’s Arc Vault. This device is connected directly to the bus and after detecting an arc flash, strikes a plasma arc inside a robust container thereby sapping the energy from the uncontrolled arc flash and effectively extinguishing the destructive arc flash. Passive protection is mainly provided in what is being termed “fault tolerant” design.
This means that gear is designed to minimize damage and physically withstand an arc flash. Common mechanical design features include louvers in the top to relieve the tremendous pressure created by an arc-flash event, ducts or chutes to direct the arc up and out, and reinforced cover and doors, etc. While this approach is desirable, it is reactive. Regardless of these features, an arc flash creates real direct and collateral damage that must be repaired. It is a bit like insurance, the building burned, we lost everything, but we got a check. Arc-flash incidents result in damage and interruption of business operations. Examples of passive solutions are: • Remote-controlled circuit breaker draw out machines. This device is designed primarily to protect the operator in case of a malfunction during service work. • Service setting on controls and over current protective devises temporarily set relays and trip devices to minimum levels during service and repair activities again in order to rapidly trip the relay or device and interrupt fault current extinguishing the arc flash. Electrical safety is not an option. This topic is broad and complex and requires the allocation of significant resources to establish a comprehensive program. Four to five injuries or deaths occur each day in the U.S. as a result of electrical shock or arc flash. You can debate the difference between standards and statutes; however, standards are the basis for statutes and codes. One industry study concludes the minimum cost of an arc-flash event is $750,000. I would submit that it is likely to be a lot higher when you consider the direct damage to the equipment and facility, the liability as a result of injury or death, and the business disruption. As a facility manager, you could be held personally liable in the event of an incident if you fail to enforce safe work practices for your employees and contractors. In a court of law or the court of public opinion, you’ll fare much better having done the right thing. It’s time to get serious about electrical safety in every facility. Protect your employees, your contractors, and your company. ■ ◗ REPRINTS OF THIS ARTICLE are available by contacting Jill DeVries at
[email protected] or at 248-244-1726.
What do you look for in standby power for your data center? ✓ Complete, integrated power systems ✓ Experienced application engineers to customize your system ✓ Paralleling and generator controls that share the same technology ✓ Local service and support virtually anywhere in the world
With Cummins Power Generation, you can check every box. Only Cummins Power Generation can provide a completely integrated standby power system. Along with PowerCommand® generators, transfer switches and paralleling systems, you get a team of engineers and project managers with extensive data center experience. Plus a network of distributors delivering local service on a global scale. This unique combination of advantages is The Power of One.™ We’re putting this power to work for some of the biggest names in the business. Let’s put it to work for you. Learn more at www.cumminspower.com
©2011 Cummins Power Generation Inc. All rights reserved. Cummins Power Generation and Cummins are registered trademarks of Cummins Inc. Cummins Power Generation holds the registered trademark of PowerCommand® and the trademarks of The Power of One™ and “Our energy working for you.™”
Input 38 at www.missioncriticalmagazine.com/instantproductinfo
Zinc Whiskers By Bruce Myatt Bruce Myatt PE is director of Mission Critical Facilities at M+W Group and founder of the Critical Facilities Round Table. M+W Group is a global EPC contractor that specializes in the design and construction of data centers and similar critical facilities worldwide. See www.MWgroup.net or call 415-748-0515 for more information.
Beyond PUE and onto HPC Power and cooling efficiency is the order of the day or the last five years, data center owners and operators across the globe have been very focused on improving the energy efficiency of our data center power and cooling systems and reducing PUE. As a result of that focus, just about every realistic power and cooling efficiency solution has become commonplace in the design of our newer facilities. Aisle containment, elevated supply air temperatures, outside air economizers, VFDs, and energyefficient transformers and UPS systems are nearly always specified in our designs. PUEs of 1.3 and below are now consistently achieved, even for highly redundant facilities in the most hot and humid regions of the country, and a 1.15 is often our target in cooler and dryer climates.
F
backup power and provides low-latency computing that results in extremely high power densities, often measured in the thousands of watts per square foot. Differences in the way the two types of centers are operated are very apparent. However, the newest data centers, built for the cloud, are strikingly different than legacy data centers. In fact, they are becoming more and more like the HPC centers that have operated for decades in university and federal government research and development facilities. Let’s look at how cloud data centers and HPCs are becoming more similar and why their PUEs are so low. I’d also like to look at how HPCs are changing and how cloud facilities can benefit from HPC operating experience and R&D. And, while we are at it, let’s explore how we might look beyond low PUEs and find opportunities to become more energy efficient in our data center designs for the future.
SIMILARITIES BETWEEN SUPERCOMPUTING AND THE CLOUD
The 7-MW XT Jaguar supercomputer at Oak Ridge National Labs.
But the most efficient of all IT facilities are the cloud-computing data centers (the cloud) and the high-performance computing centers (HPCs) that are now achieving PUEs as low as 1.05. Those levels of efficiency can result in tremendous savings in a large facility and really deserve a close look. The low PUEs also demonstrate how well we can control our power and cooling energy costs these days and how little opportunity remains to further reduce our PUEs and become more efficient. In the last "Zinc Whiskers" (Sept./Oct. 2011), I identified four different types of IT facilities and took a close look at the differences between traditional data centers, with their highly redundant backup electrical power systems and low power densities, and HPC centers, with their IT failover strategies. The HPC failover strategy requires very little
As recently as five years ago, aisle enclosures and 80ºF server supply air temperatures were just a figment of the imaginations of a few engineers looking for ways to reduce costs. So, for the sake of planning for more efficient operations tomorrow, it isn’t too early to explore the possibilities of things that HPC centers are beginning to do today. The similarities between cloud computing and supercomputing environments are becoming more evident as we build out our newest and most efficient data center spaces. After all, both cloud and HPC maximize computing performance and efficiencies by continuously operating processors at close to maximum speeds, creating very high compute densities and power densities alike. This means that a lot more power is directed into smaller spaces, creating heat loads so high that it becomes impossible to cool the space with air alone. The most efficient HPC centers use an optimized combination of air and water cooling at the rack to remove all that heat. The newest of our cloud environments deploy 10 and 15-kilowatt (kW) racks that equate to power densities of as much as 600 watts per square foot. Cloud data center operators are preparing designs to go to even higher densities by using air and water cooling systems together and using outside air systems operating in conjunction with in-row coolers or rear-door heat exchangers to cool the same space.
And, for different reasons, both cloud and HPC computing environments depend less on redundancy and reliability systems than traditional data centers do. The cloud provides inherent reliability within the processing environment that allows for failover from one computer to another and now, with the right network architecture, from one facility to another. Supercomputers, on the other hand, are usually used for high-volume computational and problem-solving purposes that can stop and start without jeopardizing operations. So with the right failover strategies, supercomputers can come down “softly” by saving data to backup storage devices and restarting their computing when power becomes available again. Backup electrical power, in the form of UPS systems, is less important in both environments. These are two good reasons why their PUEs are so low.
FACEBOOK
Last July, Facebook hosted a Critical Facilities Round Table meeting at its Mountain View, CA, headquarters to present the design of its Prineville cloud data center (see www.cfroundtable.org/membership-meetings.html). Facebook presented 400-V electrical distribution centers, 100-percent free-cooling HVAC, and variable-speed server fan controls that, taken together, represent a real breakthrough in data center design. As much as we all applaud these advances, it is interesting to note that many of these features have been deployed in supercomputing spaces for years. The high-voltage electrical systems and the combined air-and-water cooling solutions that are only now finding their way into our most aggressive data center designs have been deployed in HPC environments for decades.
RECENT ADVANCES IN SUPERCOMPUTING EFFICIENCIES
PUEs are now so low that it is becoming evident the next advances in data center energy efficiency will have to come from somewhere other than simple improvements in current methods of power and cooling, probably from the processing technologies themselves. And, in fact, the newest supercomputers are achieving higher and higher compute densities that actually require less power, space, and cooling to accomplish the same work performed by older computers. The most efficient HPCs today utilize a computing strategy that integrates multiple processor types into the same computer. The combination of central processing units (CPUs) and several graphics processing units (GPUs) has proven to be much more efficient than adding more CPUs to the same computer. So instead of developing CPUs with ever more cores in the same processor, we are now combining the capabilities of processors with different characteristics to perform the same amount of work more efficiently.
Using this strategy over the last few years, supercomputers have increased their computing capacity by a factor of ten while using the same amount of energy required by older computers (see http://en.wikipedia.org/wiki/supercomputer). Similar advances in server efficiencies are already making their way into our cloud environments. SeaMicro, for example, offers a server that utilizes a strategy founded on the HPC model of combining processor types to achieve superior performance. SeaMicro accomplishes this by combining CPUs with a multitude of smaller processors much like those found in your cell phone. Its SM10000 server is said to require only 25 percent of the power, space, and cooling of the servers it replaces, while achieving compute densities four times higher than previous servers without increasing the power density at all. SeaMicro received an award and grant from the Department of Energy in 2009 for developing energy-efficient computing technologies like these, which were discussed at length in a previous “Zinc Whiskers” column (see Mission Critical, July/August 2010, p. 20).
HPC ADVANCES WILL CHANGE POWER AND COOLING STRATEGIES
Advances in technology will continue to drive changes in our facilities infrastructure. Recent research involving superior materials, cooling strategies, and testing methods is leading us to develop computers that are far more efficient than those we operate today, and they are fundamentally different in several ways. These changes will require a new approach to the way we operate the data centers that house them, and very different methods of providing the power, space, and cooling that support them. That will be the subject of the next “Zinc Whiskers,” where we will go into detail about these changes and what they mean for our facilities.
CFRT hosted a panel at the Technology Convergence Conference at the Santa Clara Convention Center on February 2nd, during which panelists addressed the issues presented in this article (see www.teladatatcc.com). CFRT is also planning to visit a nearby high-performance computing facility to see newly installed 100-kW racks in operation. CFRT is a non-profit organization based in Silicon Valley dedicated to the open sharing of information and solutions among its members, made up of critical facilities owners and operators. Please visit our website at www.cfroundtable.org or contact us at 415-748-0515 for more information. ■
◗ REPRINTS OF THIS ARTICLE are available by contacting Jill DeVries at
[email protected] or at 248-244-1726.
Hot Aisle Insight By Julius Neudorfer Julius Neudorfer is the CTO and founder of North American Access Technologies, Inc. (NAAT). Based in Westchester, NY, NAAT’s clients include Fortune 500 firms and government agencies. NAAT has been designing and implementing data center infrastructure and related technology for over 20 years.
Moving into 2012
Cloud computing comes with a carbon lining
I would like to introduce myself and my first “Hot Aisle Insight” column here at Mission Critical. I have read Mission Critical for many years and occasionally contributed articles. I am honored to have been asked to join as a regular columnist, in addition to writing my blog on the website. I hope to cover the trends and technology of infrastructure designs, in addition to developments in the IT equipment (which can obviously impact the design of the data center power and cooling infrastructure). Lest we forget, supporting the computing hardware is the ultimate “mission” of the mission critical data center. I hope to keep it topical and technically interesting, yet with a bit of skepticism and my personal commentary and opinion. I already posted my 2012 predictions on my blog, but I thought that I would expand on what I foresee for 2012.
SEA CHANGES AND PLATE SHIFTING
I hear the sound of tectonic plates shifting in the computing world. In the last months of 2011, we saw some interesting alliances formed. Imagine vendor-based groups forming alliances with customer-based organizations. Can you picture mavericks from social media collaborating with the belt-and-suspenders, big-business crowd? What’s next, cloud computing everywhere, yet the actual computing hardware never to be seen again by mere mortals? All hardware to be hidden away in faraway hyperscale data centers whose operators will also become carbon-credit arbitrage traders?
As we move from “traditional” computing (a shifting term in itself) to virtualized and cloud computing, the old rules and norms seem to be dissolving rapidly. What am I ranting about? In early November, The Green Grid (TGG)—IT and data center vendors’ leading voice for advancing resource efficiency in data centers and business computing ecosystems—and the Open Data Center Alliance (ODCA)—the leading end-user-driven cloud requirements consortium—announced a strategic collaboration at Cloud Computing Expo 2011 West. On what are they going to focus their first collaboration efforts? The carbon produced by cloud computing. What, you did not realize that cloud computing uses real energy and produces carbon just like “real” computing? The ODCA and TGG effort brings together the leading customer voice on cloud computing and the global authority on resource-efficient data centers and business computing ecosystems. The ODCA, a group of more than 300 companies that represent over $100 billion in annual IT spending, recently published the first customer-driven requirements for the cloud with the release of its initial usage models. TGG, which was launched in 2007 mainly by the major manufacturers of data center infrastructure equipment and computing hardware, is now a global consortium focused on driving resource efficiency in business computing by developing meaningful, user-centric metrics to help IT and facilities better manage their resources. Its first efforts resulted in the introduction of the power usage effectiveness (PUE) metric for data center physical infrastructure, which has since become PUE version 2, a globally accepted metric. In December 2010, TGG introduced the carbon usage effectiveness (CUE) metric, which is again based on the physical data center. Correlating how cloud computing corresponds to actual data center power usage is the key question at hand, and the initial focus of the collaboration. In an email interview, Mark Monroe, executive director of TGG, commented, “The alliance between ODCA and The Green Grid will result in user-centric work focused on the efficiency of cloud computing in real world application scenarios.
The strengths of the two organizations, when combined, cover the full spectrum of efficiency and operational excellence in the emerging field of cloud computing.” ODCA was founded in 2010 by major global business customers, but it is highly focused on cloud computing. The ODCA claims to represent $100 billion in IT purchasing power, which could bring new meaning to “collective bargaining.” As we all know, money talks, especially in today’s economy. So what about the tectonic plates? Well, surprisingly, in late October the big-business, financially conservative ODCA also announced that it was collaborating with the Open Compute Project (OCP). OCP was formed as an offshoot of Facebook’s innovative, but maverick, Prineville, OR, data center design, in which Facebook entirely re-invented the center’s power and cooling infrastructure, even building its own unique, non-standard (e.g., 1.5U) servers and racks, using 277 Vac as primary power and 48 Vdc for rack-level battery backup. Moreover, Digital Realty Trust, a major colocation provider, joined OCP and is offering to build OCP-compliant suites or even data centers for its clients. And not to be overlooked, the 2011 update of the ASHRAE TC 9.9 thermal guidelines is a potential game changer, with the stated goal of eliminating the use of mechanical cooling whenever and wherever possible, primarily through wide-ranging use of airside economizers, with allowable equipment air inlet temperatures of up to 113°F (not a typo—class A4). It is nothing less than an open challenge to end the legacy thinking of the sanctity of the data center as a bastion of tightly controlled environmental conditions, potentially rendering “precision cooling” an archaic term. Clearly not everyone will suddenly rush to run 95°F or more in the cold aisle (will that term become an oxymoron?) and virtually abandon humidity control (think 8 to 95 percent RH). However, it may cause many to re-evaluate the need to tightly control the environmental conditions in the data center, while others will still keep the temperature at a “traditional” 68° to 70°F and 50 percent RH (complete with “battling” CRACs trying to control the humidity within ±5 percent), wasting huge amounts of energy to support the perception that the reliability of IT equipment will be impacted if the temperature even goes near 77°F (the 2004 recommended limit) or the humidity fluctuates. In fact, the new ASHRAE guideline has gone so far as to put forth what once would have been considered pure heresy: the “X” factor, which introduces a scenario of assuming and accepting a certain amount of IT equipment failure as an expected part of allowing far broader environmental conditions in the data center.
THE ‘METRICS’ SYSTEMS
And 2011 also brought forth many new metrics. Even PUE is now PUE version 2, and while the acronym still stands for power usage effectiveness, it now relates to annualized energy. TGG has added more metrics besides CUE, such as pPUE, WUE, ERE, and DCcE, with still more to come, as well as the Maturity Model. And to help measure and track all those metrics, look to data center infrastructure management (DCIM) systems. DCIM, as a term and category, only came into being in 2010, began to emerge in 2011, and will explode in 2012. DCIM sales will skyrocket as data center facilities and IT managers look for ways to share information and manage toward a common goal: energy efficiency and optimization, achieved by collectively coordinating the use of limited resources (constrained CapEx, OpEx, and energy resources). Picture facilities and IT all together singing “Kumbaya,” assuming your imagination can stretch that far. Moreover, while one part of the industry moves toward ever-larger, super-sized mega centers, others are thinking in smaller, modular terms in the form of containerized data centers, which offer “near perfect” PUE numbers approaching 1.0 (if you believe the marketing department hype) as well as rapid deployment and flexible growth. In addition to the modular data centers and containers from the likes of IBM, HP, and Dell, power and cooling modules are being offered by the infrastructure manufacturers as well.
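For readers who want the definitions behind the acronyms, the two headline TGG metrics reduce to simple ratios; the forms below are the commonly published ones, with PUE version 2 specifying that the energy terms be measured on an annualized basis.

```latex
\[
\mathrm{PUE} = \frac{\text{total annual facility energy (kWh)}}{\text{annual IT equipment energy (kWh)}}
\qquad
\mathrm{CUE} = \frac{\text{total annual CO}_2\text{e emissions from data center energy (kg)}}{\text{annual IT equipment energy (kWh)}}
\]
```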
TO SUM UP
So the days of a typical data center full of “standard” CRACs and racks may evolve into the next generation of hyperscale computing, driven by social media and search, housed in mega-sized data centers or in rows of modular containers in a parking lot, or both—many utilizing free air cooling (imagine servers that can tolerate the same outside air as humans). These new designs may look radically different from today’s hot-aisle/cold-aisle data centers, which could make our current data centers seem as out of date as the old “legacy” mainframe glass house looks to us today. The formerly conservative ASHRAE is now openly advocating free cooling, and ODCA members are using their purchasing clout to influence equipment manufacturers, making serious long-term commitments, and bringing sustainability thinking to cloud-based services. They are all well aware that there will still have to be real computing hardware running reliably in data centers somewhere (using real energy with a related carbon footprint). Nonetheless, while some changes in the name of efficiency can be good, it is important to remember that everything has a price, and that ultimately there is no such thing as a carbon-free lunch. ■
◗ REPRINTS OF THIS ARTICLE are available by contacting Jill DeVries at [email protected] or at 248-244-1726.
Data Center Reliability Starts with Site Selection
12 considerations in finalizing a site
BY FRED CANNONE
Businesses today face a complex array of conditions, including ever-changing economic climates, technology trends, and other obstacles that thwart many less-than-solid companies. A company’s data center facilities often include an array of features, such as prime location, uptime capabilities, and peering opportunities, that strengthen the business. These facilities also provide services such as disaster recovery, multiple security measures, and redundancy components important to the enterprise, so it is important to make knowledgeable decisions when siting a new data center. In many cases, businesses find that colocation, managed IT, or cloud hosting facilities help them adapt, adjust, and scale their IT operations to meet the demands of the enterprise.
Fred Cannone is director of sales and marketing for Telehouse America. He is an information and telecommunications industry veteran with more than 25 years of experience in executive management, technical sales, and marketing for data center and hosting providers, international voice, IP, and data carriers, as well as global trade intelligence.
Unlike running a business, finding an ideal data center facility doesn’t have to be a complex process. With knowledgeable support and guidance, a business can confidently choose among many options based on its specific requirements. The following 12 best practices are the top areas businesses should consider when shopping for data center space.
Assess data center requirements. Companies must first assess whether to design, manage, and operate their own data centers or outsource to another provider. Outfitting an in-house data center to be scalable, secure, and redundant at multiple levels is challenging and expensive. Colocation and managed IT providers promise a number of benefits designed to help enterprises sustain and grow their businesses, including service reliability, 24/7 support, disaster preparation/recovery, flexibility, security, and access to skilled IT personnel and engineers, all of which can lead to significant cost savings as the provider is able to leverage significant economies of scale across all of these areas. In addition, a carrier-neutral data center typically provides a wide range of connectivity options to meet the need for secure, multiple connectivity options with low-latency, high-capacity bandwidth at lower costs.
Exterior of Telehouse's facility in the Chelsea area of New York City.
Mantrap regulates traffic to the data center area.
When selecting a facility, knowing the electrical configuration can be crucial to uptime considerations.
Managed IT services offer many of the same benefits as colocation providers, while also providing trained experts who can handle regular maintenance of a business’s IT infrastructure. Managed IT services ensure the availability of qualified, knowledgeable professionals who can assume responsibility for monitoring, maintaining, and troubleshooting, along with the overall management of IT systems and functions, enabling IT operations to be handled as needed or on a 24/7 basis. Cloud computing services promise to add yet more flexibility to IT. The cloud provides companies with the ability to deploy virtual IT resources on demand while reducing or eliminating direct data center power, infrastructure, and equipment costs. Economic factors that push companies to drastically reduce costs, streamline processes, and conserve energy have driven the surge of cloud applications. Jonathan Koomey, a researcher who has studied data center energy use at Stanford and Lawrence Berkeley National Labs, told DatacenterKnowledge, “There are powerful economic factors pushing us towards cloud computing. One of the major reasons is the more efficient use of power by cloud computing providers.” Disaster recovery and business continuity planning issues constitute the final aspect of data center selection. The right plan and data center services should provide complete protection and security for all of a company’s business processes and client data. While basic backup and recovery services are standard IT or data center processes, more robust data centers provide complex, strategic disaster recovery options, including fully built, power-protected, secure infrastructures to ensure network operations are consistently up and running. Patty Catania, CBCP and COO of TAMP Systems, a worldwide provider of business continuity and disaster recovery planning software and consulting services, said, “As a respected member of the continuity and disaster recovery planning community, Telehouse’s data center facilities in New York and California are some of the most secure and finest in the United States. Disaster recovery has become a major issue of concern in recent years. Clearly it has become vital to explore and include your DR-BC planning when purchasing co-location space or other IT services.”
Consider the liabilities and risks. When choosing a disaster recovery or business continuity planning process, companies need to assess the potential risks to the organization that could result from disasters or emergency situations, along with the day-to-day perils that may cause an interruption to daily lives and business processes. In doing so, a company must explore the kind of impact each risk and resulting liability may have on its business’s ability to continue normal operations. The Disaster Recovery Guide (http://www.disaster-recovery-guide.com/risk.htm) offers a comprehensive list of the types of threats that can wreak havoc on a business, a sampling of which is listed below:
Battery backups must be monitored and maintained.
Whether a colo or hosted facility, cable management plays a factor in IT operations.
• Environmental Disasters
• Organized and/or Deliberate Disruption: act of terrorism, act of sabotage, theft, arson
• Loss of Utilities and Services
• Equipment or System Failure: internal power failure, air conditioning failure, production line failure, hardware failure
• Serious Information Security Incidents: cyber crime, loss of records or data
• Other Emergency Situations: workplace violence, public transportation disruption, health and safety regulations, mergers and acquisitions, negative publicity, legal problems
Telehouse America’s strategic partner, TAMP Systems, stresses the importance of businesses recovering with speed and efficiency following any type of crisis. Its Disaster Recovery DRS-I Module provides customers with a simplified interface, offering fast implementation, complete with key features that include 24/7 secure access, reporting options, business impact analysis, and workflow tools. Companies of all sizes and from all industries can benefit by utilizing their own processes or a disaster recovery-business continuity package.
Check data center background and obtain references. Who better to trust than industry peers? Speaking with customers who are colocated in the data center facilities that companies are considering is a great way for them to learn more and understand real-world experiences. The success of a data center is ultimately measured by its level of customer satisfaction and overall retention. Smith comments, “Voxel’s long-standing relationship with Telehouse America has been marked by exceptional customer service and technical support.”
Consider equipment maintenance contracts. Data centers offer a variety of hardware solutions related to the facility’s infrastructure capabilities, reliability, and uptime, along with maintenance plans and the ability to track maintenance records. How can a company decide what type of contract would be best? In April 2011, Faulkner Information Services released a report, “Data Center Equipment Maintenance Contracts,” which lists the principal responsibilities of the executive in charge of finalizing the maintenance contract with a data center: “Ensure that the maintenance is: (1) performed on schedule and according to the terms of the agreement; (2) conducted with minimal disruption to IT operations, particularly ‘customer-facing’ operations; and (3) verified via a program of post-maintenance testing.” Maintenance plans are usually set up as monthly, quarterly, or yearly agreements based on the manufacturer’s recommendations for the electrical and mechanical equipment. For example, diesel generators are typically exercised monthly but inspected and serviced quarterly, while UPS systems, monitored 24/7 by the building management system, are physically inspected twice a year. All client-facing maintenance cycles are announced at least three weeks in advance and usually performed at off-peak times.
Evaluate the certification of a data center. SAS 70-certified data centers: important or not?
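As an illustration only, a tenant could track those cadences in something as simple as the sketch below; the intervals mirror the ones mentioned above, not any specific vendor’s requirements.

```python
# Illustrative maintenance-cadence tracker; intervals restate those cited in the text.
from datetime import date, timedelta

MAINTENANCE_PLAN = {
    "diesel generator": {"run_test_days": 30, "inspect_service_days": 90},
    "UPS system":       {"physical_inspection_days": 182},  # monitored 24/7 by the BMS
}

def next_due(last_done: date, interval_days: int) -> date:
    """Return the next due date for a task last performed on last_done."""
    return last_done + timedelta(days=interval_days)

# Example: a generator load-tested on March 1 is due again at the end of the month.
print(next_due(date(2012, 3, 1), MAINTENANCE_PLAN["diesel generator"]["run_test_days"]))
```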
Telephone interconnects and meet-me rooms must be adequate to manage I/O communications.
This photo shows empty racks waiting for a tenant move.
Telehouse's rooftop units also have a view of New Jersey.
Offering his ‘Industry Perspective’ to DatacenterKnowledge, Certified Public Accountant Ali Gheewala describes the “age of regulation” and how government compliance is quickly becoming a crucial component for data centers. In turn, the SAS 70 audit has emerged as a widely recognized auditing standard developed by the American Institute of Certified Public Accountants. In response, data center providers are busy marketing their SAS 70 Type I and Type II auditing capabilities—especially to financial companies, a sector that relies heavily on such a standard. (A Type I audit reports on the design of a provider’s controls at a point in time, while a Type II audit covers their operating effectiveness over a period of months.)
Confirm power reliability. There is no doubt that data centers have evolved to become power-centric. Increasingly, organizations need more power to maintain and grow their businesses using more powerful hardware and complex processes. Data centers must find ways to provide more power and associated cooling through the implementation of efficient and cost-effective hardware and management strategies. Online electrical backup systems include multiple diverse power feeds into the data center facility, redundant uninterruptible power supply (UPS) systems, batteries, and generator systems. Additionally, there is the mechanical infrastructure with its supporting cooling systems, which include computer room air conditioning (CRAC) units, chillers, cooling towers, pumping stations, air-handling units, and more. Telehouse’s Chelsea center in New York currently has four static 750-kilovolt-ampere (kVA) UPS modules in a three-plus-one configuration and two 2.5-megavolt-ampere diesel generators in an N+1 configuration. Redundant power ensures that there is no downtime, which is critical when it comes to keeping businesses up and running. Any delays or interruptions could potentially mean the loss of a customer.
Compare service level agreements (SLAs). A solid and reputable data center provider will offer a contract to its customers complete with a highly detailed SLA that guarantees specific uptime, service response, bandwidth, physical access protections, and other key elements. “It is important to ensure the SLA clearly states what the data center’s responsibilities are should it fail to meet or carry out the agreement as stated, such as failing to provide critical power, maintaining uptime standards, scheduled maintenance, or poor response to service requests, cooling temperature settings, etc.,” said David Kinney, deputy director, facilities and operations, Telehouse.
Check data center cross-connect fees. When seeking out a data center facility, companies should carefully assess all fees, including monthly recurring charges, installation costs, and other one-time charges. For instance, the costs of networking and the cabling or cross-connects needed to deploy circuits for voice, data, and internet services can quickly add up as a company’s IT network expands.
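As a quick sketch of what that UPS configuration buys (assuming the modules share load equally and the fourth 750-kVA module is held purely as the redundant unit):

```latex
\[
\text{Usable UPS capacity} = 3 \times 750\ \text{kVA} = 2{,}250\ \text{kVA},
\qquad
\text{with one } 750\ \text{kVA module in reserve } (N{+}1).
\]
```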
Cross-connects are physical connections between networks that are an important function of carrier-neutral data centers and multi-tenant carrier hotels (e.g., Internet gateways), and they commonly take place in a central “meet-me room.” Because these types of interconnections can be frequent, one-time or low monthly cross-connect fees are essential for a business looking to maximize cost savings, as well as prepare for growth and future network needs. Telehouse is currently one of the data center providers that offers low or no cross-connect fees.
Consider the on-site service.
The ideal data center provider will offer its tenants 24/7 access for authorized personnel, onsite security, multi-level technical support, and facility engineering experience. These attributes are absolutely required, while other features, such as basic day-to-day IT functions, are either included in part or performed as an extra option under “remote hands” services, including equipment resets, rebooting, etc.
Peering is key. Organizations with medium-to-heavy bandwidth traffic should actively seek facilities with private and/or public IP peering exchanges in order to improve connections, average down bandwidth costs, and increase traffic routing options.
Know deployment time frames. How long is too long when it comes to a data center installing and deploying tenant equipment? “With the exception of very large or non-standard installations and certain telecommunication provisioning, anything that exceeds two to three weeks’ time,” according to Cannone, “is taking too long.”
Ask for additional services. Everybody likes perks—especially when it comes to a business gaining more for the money it spends. Data center providers can offer quite a few unique benefits and services in addition to their primary services. These include low or no cross-connect fees, diverse internal and external fiber routes, a varied client base, carrier-neutral facilities, global connectivity options, scalable managed IT services, and modular data center construction, with data center facility management offered either as part of the package or separately.
CONCLUSION
Whether a small business or an enterprise, all types of organizations share the same goal of achieving steady cost savings and efficiency. Flexible, stable, and reliable IT operations and support functions are the foundation of a successful business, which is why it is critical for businesses to choose a data center provider that is reputable and cost-efficient and that offers a multitude of beneficial services in a variety of locations. The twelve best practices featured in this article serve as an essential guide to securing the right data center space for businesses of all sizes and types. ■
◗ REPRINTS OF THIS ARTICLE are available by contacting Jill DeVries at [email protected] or at 248-244-1726.
The Impact of Expanded ASHRAE Ranges on Airside Economization
New guidelines make a big difference
BY MARK MONROE
High temperatures make data center managers break out in cold sweats. Even though the thermostat may read a comfortable 76ºF, customers at colocation facilities say, “There must be a problem here, this room is so hot.” So many people suppose that data centers should feel cold, but few question why.
Mark Monroe is an expert in corporate sustainability, data center efficiency, and many aspects of information technology (IT). In his current role as chief technology advisor at Integrated Design Group, Mark leads the effort in staying at the forefront of the latest innovative and green design technologies. Mark is also the executive director of The Green Grid, an IT industry and end-user consortium focused on resourceefficient data centers and business computing environments. He is also on the board of directors for the Center for ReSource Conservation in Boulder, CO. He works on sustainability advisory boards with the University of Colorado and local Colorado governments.
The attitude in data center operations has been to follow the conventional wisdom; after all, no one ever got fired for having a cold data center. Well, maybe someone should be fired. Data center operators are wasting billions of dollars in capital and operating expenses and creating millions of tons of carbon dioxide by routinely cooling their computers too much. Since the early days of electronic data processing, computer rooms and data centers have controlled the environmental conditions that surround the computing equipment in order to improve system reliability and application availability. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) first published specifications addressing acceptable temperature and humidity ranges in 2004 and updated the specification in 2008. Most data center operators use the ASHRAE specs to define the environmental operating ranges for their facilities. ASHRAE updated the specifications again in May 2011 to reflect industry movement toward energy efficiency.
ASHRAE worked with information technology (IT) equipment manufacturers to develop updated temperature and humidity ranges. The 2011 version classifies computer facilities into six broad categories of environmental control and provides guidance about the balance between potential energy savings and computer equipment reliability. Specifically, ASHRAE extended the range of allowable temperatures and humidity in data centers to match operators’ increased desire to take advantage of free-cooling opportunities. Operators around the U.S. might be able to use free cooling an average of 18 percent more hours, with some locations able to use 1,600 more hours per year, if data center operators are simply willing to run equipment anywhere in the class A1 “allowable” ranges of the specifications. If data center managers are willing to occasionally run their data centers in the class A3 allowable range, virtually every location in the U.S. can achieve 100 percent free cooling. The financial implication of this adjustment in operations is an average annual savings of $67,000 per year per 1,000 kilowatts (kW) of IT load, with absolutely no capital expenditure and an implementation time measured in days.
The new ASHRAE paper also includes information about the potential increase in temperature-related failures when IT systems operate for a time at higher inlet temperatures, so that operators can quantify the potential impact of energy savings on their overall reliability. The calculations show that the impact of higher temperatures is hard to detect in a large population of machines, perhaps as little as one additional failure per year in a population of 1,000 servers. In other words, there should be no measurable impact on availability while enabling millions of dollars in savings.
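The order of magnitude of that savings figure can be sanity-checked with a back-of-the-envelope estimate; the electricity rate and the amount of mechanical-cooling power displaced per kilowatt of IT load used below are illustrative assumptions, not the inputs behind the article’s $67,000 number.

```python
# Rough sanity check of free-cooling savings for 1,000 kW of IT load.
# Assumed (not from the article): $0.10/kWh electricity and 0.4 kW of
# mechanical-cooling power avoided per kW of IT load while on free cooling.
IT_LOAD_KW = 1_000
EXTRA_FREE_COOLING_HOURS = 1_600   # upper end cited for some U.S. locations
COOLING_KW_PER_IT_KW = 0.4         # assumed chiller/CRAC power displaced
ELECTRICITY_RATE_USD_PER_KWH = 0.10

annual_savings = (IT_LOAD_KW * COOLING_KW_PER_IT_KW
                  * EXTRA_FREE_COOLING_HOURS * ELECTRICITY_RATE_USD_PER_KWH)
print(f"Estimated annual savings: ${annual_savings:,.0f}")  # roughly $64,000
```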
WHY CONDITION?
Figure 1. As early as 1962, researchers at Bell Labs established a relationship between temperature and electronic component reliability (Dodson, 1961).
Table 1. A comparison of temperature/RH values in the 2004 and 2008 ASHRAE thermal guidelines

                                   2004 Version     2008 Version
Low-end temperature (˚F/˚C)        68˚F (20˚C)      68˚F (20˚C)
High-end temperature (˚F/˚C)       77˚F (25˚C)      80.6˚F (27˚C)
Low-end moisture                   40% RH           41.9˚F DP (5.5˚C)
High-end moisture                  55% RH           60% RH and 59˚F DP (15˚C DP)
When considering the impact of changes to the recommended and allowable temperature and humidity ranges inside modern data centers, one may ask the basic question, “Why do we condition data centers at all?” The answer lies back at the beginning of the computer era, when big mainframes used large amounts of power and the electronics in the boxes were more fragile. Early computer experts noticed that rooms needed to be cooled in order to keep their big, power-hungry mainframes from overheating. In the 1980s and 1990s, minicomputers and volume servers became more standard in computer rooms, offering better reliability and wider environmental operating ranges, but computer rooms remained cold, with highly controlled humidity, for fear of upsetting the delicate IT apple cart.
There is a basis for the sensitivity of IT operators to temperature. As early as 1962, researchers at Bell Labs established a relationship between temperature and electronic component reliability (Dodson, 1961). Based on work done by chemist Svante Arrhenius, manufacturers began using the Arrhenius equation to predict the impact of temperature on the mean time between failures (MTBF) of the electronics. The higher the equipment’s operating temperature, the shorter the time it takes to break down microscopic electronic circuit elements and, ultimately, cause a computer equipment failure. IT manufacturers use Monte Carlo and other modeling techniques to develop the predicted MTBF for each model of computer vs. the server inlet temperature. Monte Carlo models provide a range of probabilities for the MTBF value and typically have an uncertainty range of ±10 percent. Over the temperature range shown on the chart (see figure 1), the Arrhenius model shows that MTBF at a 25ºC inlet temperature could be anywhere between 120,000 and 146,000 hours, or about 15 years of continuous operation. Running the server at 40ºC could change the MTBF to somewhere between 101,000 and 123,000 hours, or about 13 years of operation. The fact that the ranges of predicted MTBF values at 25ºC and 40ºC overlap means there may be no impact at all on server reliability over this temperature range for this model of server.
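The Arrhenius-based reliability model referenced here is usually written as an acceleration factor between two absolute (kelvin) temperatures; the activation energy Ea of the dominant failure mechanism is a manufacturer-specific value not given in the article, and k is Boltzmann’s constant.

```latex
\[
AF \;=\; \exp\!\left[\frac{E_a}{k}\left(\frac{1}{T_{\mathrm{use}}} - \frac{1}{T_{\mathrm{stress}}}\right)\right],
\qquad
\mathrm{MTBF}_{\mathrm{stress}} \;\approx\; \frac{\mathrm{MTBF}_{\mathrm{use}}}{AF}
\]
```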
There have been studies with observational data that support this notion. E. Pinheiro, W.D. Weber, and L.A. Barroso’s (2007) paper, “Failure Trends in a Large Disk Drive Population,” a study of more than 100,000 disk drives, found no detectable relationship between operating temperature and disk drive reliability. Computer manufacturers have also been making systems more robust, and operating temperature ranges have widened in conjunction with these efforts. Fifteen years ago, systems like the Sun Microsystems Enterprise 10000 required operating temperatures restricted to 70ºF to 74ºF. Today’s equipment is typically specified to operate in the temperature range from 41ºF to 95ºF (5ºC to 35ºC). Many manufacturers offer equipment that can operate at the even higher temperatures specified by the Network Equipment-Building System (NEBS) standards of 41ºF to 104ºF (5ºC to 40ºC). Dell recently changed its operating specification to allow non-NEBS equipment to operate at 104ºF (40ºC) for 900 hours per year, and as high as 113ºF (45ºC) for up to 90 hours per year, without impacting the system warranty.
ASHRAE’S DATA CENTER TEMPERATURES
ASHRAE’s Technical Committee 9.9 (TC 9.9) developed the first edition of the book Thermal Guidelines for Data Processing Environments in 2004. This was a big step forward: with the help of engineers from computer and facility system manufacturers, ASHRAE developed a general guideline that all the manufacturers agreed on, so that data center operators could point to a single reference for their temperature and humidity set points. The 2004 specification resulted in many data centers being designed and operated at 68ºF to 72ºF air temperatures and 40 to 55 percent relative humidity (RH). ASHRAE updated the specification in 2008 in response to a global movement to operate data centers more efficiently and save money on energy costs. Building consensus among its members, ASHRAE widened the recommended envelope to encourage more hours of free cooling, relaxed the strict humidity requirements, and allowed designers and operators to reduce the energy consumption of their facilities’ infrastructure. A comparison of the 2004 and 2008 versions shows the difference in the recommended operating ranges between the two specifications (see table 1). Also in 2008, ASHRAE defined four classes of computer facilities to help designers and engineers talk about rooms in a common, shorthand fashion. Each class of computer room has a “recommended” and an “allowable” range of temperatures and humidity in order to reduce the chance of environment-related equipment failures. The recommended range for the 2008 specification is the same for all classes: 64ºF to 81ºF (18ºC to 27ºC) dry bulb, 59ºF (15ºC) dew point, and 60 percent RH. Allowable ranges extend the range of conditions a little more, creating opportunities for energy savings by permitting higher inlet temperatures and less strict humidity control for at least part of the operating year.
2011 UPDATE TO DATA CENTER TEMPERATURE RANGES
Between 2008 and 2011, attention to data center efficiency increased dramatically, and ASHRAE responded with another updated version of the data center guidelines in May of 2011. The biggest changes were in the class A definitions and the allowable temperatures and humidity for this class of data center spaces.
Widespread use of airside economization, also known as free cooling, was one of the primary drivers for this update to the data center specifications, the logic being that the wider the range of allowable temperatures inside the data center, the more hours that unconditioned outside air can be used to cool it, and the less energy is required to make cool air for the computers to consume. The 2011 update changes the old numeric designations into alphanumeric ones. Class 1 becomes class A1, class 2 becomes class A2, class 3 becomes class B, and class 4 becomes class C. The new spec splits class A into four subclasses, A1 through A4, which represent various levels of environmental control, and thus different levels of capital investment and operating cost. The A1 and A2 classifications are the same as the old classes 1 and 2, but classes A3 and A4 are new, representing conditioned spaces with wider environmental control limits. According to ASHRAE, the new A3 and A4 classes are meant to represent “information technology space or office or lab environments with some control of environmental parameters (dew point, temperature, and RH); types of products typically designed for this environment are volume servers, storage products, personal computers, and workstations.” The new classes have the same recommended ranges of temperatures and humidity, but much wider ranges of allowable conditions. Wider ranges mean that data center operators can choose to run their data centers at higher temperatures, enabling higher efficiency in the cooling systems and more hours of free cooling for data centers with economizers as part of the design. The differences are evident when the ranges are plotted on a psychrometric chart (see figure 2). On the chart, the recommended range is shown, as are the four class A allowable ranges. Class A3 allows inlet air temperatures as high as 40ºC (104ºF), and class A4 allows up to 45ºC (113ºF) for some period of operation.
In this version of the spec, ASHRAE actually encourages operators to venture into the allowable ranges when doing so enables energy and cost savings. The white paper released with the updated guidelines states, “it is acceptable to operate outside the recommended envelope for short periods of time (e.g., 10 percent of the year) without affecting the overall reliability and operation of the IT equipment.”

TEMPERATURE AND RELIABILITY: THE X FACTOR
TEMPERATURE AND RELIABILITY: THE X FACTOR But data center operators still hesitate to increase operating temperatures because they don’t know the impact on reliability, and the risk of unknown impact on reliability vs. the benefit of savings on energy costs is too great for most operators. So
So ASHRAE introduced the concept of the X-factor. The X-factor is meant to be a way to calculate the potential reliability impact of operating IT systems at different temperatures.

There are four aspects of X-factors that are critical to understand: relative failure rates, absolute failure rates, time-at-temperature impact, and hardware failures vs. all failures. These four elements are vital to applying the information in the ASHRAE guidelines to a specific data center operation, and to unlocking the potential advantages offered by the updated guidelines.

First, the X-factor is a relative failure rate normalized to operation at a constant inlet temperature of 68ºF (20ºC). That is, if a population of servers ran 7x24 with a constant inlet air temperature of 68ºF, one would expect the number of temperature-related failures to be some number, X. Since failure analysis is a statistical problem, the table in the paper predicts it would be normal to expect the number of failures in this population to be between 0.88*X and 1.14*X (see figure 3). If the whole population operated 7x24 for a year at a constant 81ºF (27ºC), the table predicts that annual failures would increase to between 1.12*X and 1.54*X. Plotted as X-factor vs. inlet temperature, the small overlap between these two ranges means there is a chance there may be no difference at all in failure rates at the two inlet temperatures.

Figure 2. The psychrometric chart compares the allowable and recommended operating ranges from the 2004 and 2011 thermal guidelines.

The second important consideration is what exactly "X," the rate of temperature-related failures in IT equipment, actually is. Intel's Don Atwood and John Miner (2008) showed failures between 2.45 and 4.46 percent in blade server populations, but did not break out temperature-related failures (see figure 4). "X" would be a maximum of 45 per 1,000 servers per year if all failures were thermally induced, a highly unlikely situation. Los Alamos National Labs (LANL) did a study of failures in 4,750 supercomputer nodes over nine years and categorized over 23,000 failure records.
Overall, the study averaged 0.538 failures per machine per year from all causes. Hardware failures ranged from 10 to 60 percent of all failures, or between 0.027 and 0.32 failures per machine per year. In a population of 1,000 machines, this would mean an "X" between 27 and 320 hardware failures per year. Dell's Shelby Santosh (2002) stated that a Dell PowerEdge 6540 system has an estimated MTBF of 45,753 hours. For a population of 1,000 servers, this would mean 191 hardware failures per year, right in the middle of the range determined by the LANL study.

The point is that the failure data are highly variable and difficult to collect. The data are so variable that it is virtually impossible to measure the impact of raising the average inlet temperature on server reliability. There is no field data available that support a decrease in reliability with increasing inlet temperature. On the other hand, it is easy to demonstrate the savings that result from raising inlet air and chilled water temperatures and from using free cooling.

Figure 3. Since failure analysis is a statistical problem, it would be normal to expect the number of failures in the population to be between 0.88*X and 1.14*X.

The third key consideration is time-at-temperature. ASHRAE points out that in order to accurately calculate the impact of higher temperatures, the amount of time spent at each temperature must be calculated and summed for the overall impact on server reliability. The example above included the warning, "If the server ran 7x24 at 68ºF…" When using outside air economizers to cool the data center, it is likely that inlet air temperature will vary with outdoor air temperature. The net X-factor impact for various temperatures can be estimated by adding the proportional amounts of each factor. For example, if a server spent 100 hours at 68ºF inlet temperature (average X-factor 1.0) and 200 hours at 81ºF (average X-factor 1.34), the combined impact on average X-factor would be calculated as follows:

Combined X-factor = (100 hrs * 1.0 + 200 hrs * 1.34) ÷ (100 + 200) = 1.23

ASHRAE also notes that using outside air might cause servers to spend time with inlet temperatures lower than 68ºF, thus increasing reliability. If the server spent 100 hours at 68ºF and 200 hours at 59ºF (average X-factor 0.88), then the calculation would be:

Combined X-factor = (100 hrs * 1.0 + 200 hrs * 0.88) ÷ (100 + 200) = 0.92

In other words, the server population's relative failure rate would be expected to be lower than running the servers at a constant 68ºF. ASHRAE plots the impact on X-factor for a number of cities, estimating that running data centers on outside air in eight of the 11 cities examined should have no impact on overall reliability.
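The time-at-temperature arithmetic above, and the MTBF conversion used in the Dell example, are simple enough to script. The Python sketch below only illustrates that arithmetic; the hour bins in the example are the ones from the text, and the constant-failure-rate assumption behind the MTBF conversion is an assumption of this sketch, not something stated by ASHRAE.

```python
# Illustrative sketch: combine time-at-temperature data into a net X-factor,
# and convert an MTBF rating into expected annual hardware failures.
# The hour counts and per-bin X-factors below are the example inputs from the
# article text; the constant-failure-rate assumption is this sketch's own.

def combined_x_factor(bins):
    """bins: list of (hours, average X-factor) pairs covering the period of interest."""
    total_hours = sum(hours for hours, _ in bins)
    weighted = sum(hours * x for hours, x in bins)
    return weighted / total_hours

def annual_failures(mtbf_hours, population, hours_per_year=8760):
    """Expected failures per year for a fleet, assuming a constant failure rate."""
    return population * hours_per_year / mtbf_hours

if __name__ == "__main__":
    # 100 hr at 68F (X = 1.0) plus 200 hr at 81F (X = 1.34)
    print(round(combined_x_factor([(100, 1.0), (200, 1.34)]), 2))   # -> 1.23
    # The cooler case: 200 hr at 59F (X = 0.88)
    print(round(combined_x_factor([(100, 1.0), (200, 0.88)]), 2))   # -> 0.92
    # Dell PowerEdge 6540 MTBF of 45,753 hr across 1,000 servers -> ~191 failures/yr
    print(round(annual_failures(45_753, 1_000)))                    # -> 191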
Table 2. Impact of using the ASHRAE allowable range for class A1 spaces when compared with always keeping the space within the recommended range for class A1.

City          | Airside economizer hours, A1 recommended | Airside economizer hours, A1 allowable | Difference (hr) | Percent
Boston        | 7,099 | 7,834 | 735   | 10
New York      | 6,734 | 7,448 | 714   | 11
Wash, DC      | 6,124 | 6,941 | 817   | 13
Atlanta       | 5,331 | 6,356 | 1,025 | 19
Miami         | 1,541 | 2,516 | 975   | 63
Chicago       | 6,846 | 7,523 | 677   | 10
St Louis      | 5,979 | 6,796 | 817   | 14
Dallas        | 4,561 | 5,515 | 954   | 21
Houston       | 3,172 | 3,922 | 750   | 24
Austin        | 3,907 | 4,863 | 956   | 24
Denver        | 8,145 | 8,643 | 498   | 6
Las Vegas     | 5,880 | 7,170 | 1,290 | 22
Phoenix       | 5,065 | 6,699 | 1,634 | 32
Seattle       | 8,606 | 8,755 | 149   | 2
San Francisco | 8,657 | 8,758 | 101   | 1
Los Angeles   | 6,816 | 8,370 | 1,554 | 23
Average       |       |       | 853   | 18
A final consideration when weighing the risk-benefit of economization and wider temperature ranges is the number of IT hardware failures in a data center vs. failures from all causes. Emerson Network Power published a Ponemon Institute study in 2011 that categorized outages into IT equipment failures, human error, cooling system failures, generator failures, etc. Emerson generated figure 5, which shows that IT equipment failures accounted for only 5 percent of all unplanned outages in this study. The Los Alamos National Labs study cited the wide variation in failure modes in its own data and in the 19 studies referenced in the paper. Across those studies and LANL's own, hardware problems accounted for 10 to 60 percent of failures, a variation so large that the underlying failure process is largely unknown. In this situation it would be extremely difficult to detect a small change in the number of temperature-related failures in a data center.
FREE COOLING HOURS

On the savings side of the equation, the biggest impact of the updated 2011 guidelines on data center operation is an increase in the number of hours available for economizer use. Economizers provide cooling through the use of outside air or evaporative water cooling in order to reduce or eliminate the need for mechanical chiller equipment. The number of hours available to use economizers varies with local weather conditions and the operating environment allowed inside the computer spaces. The wider ASHRAE ranges mean that more hours are available for airside economization in most locations.

In 2009, The Green Grid released free cooling tools for North America, Europe, and Japan that allow data center operators and designers to estimate the number of airside and waterside economizer hours per year that are possible for a given location. Using zip codes in North America and city names in Europe and Japan, the tool lets users input the operating conditions inside their data centers, then estimates the number of hours per year when 10-year averages for outside air temperature and humidity would allow economizer use.

Table 2 summarizes the impact of using the ASHRAE allowable range for class A1 spaces when compared with always keeping the space within the recommended range for class A1. This minor shift in operating policy, allowing system inlet temperatures to occasionally run as high as 32ºC (89.6ºF), enables between 100 and 1,600 additional hours of airside economization per year, an average increase of 18 percent. The most important result from this table is the monetary savings it demonstrates: the increase in airside economization saves $999 to $40,000 per megawatt per year vs. using the economizer for only the recommended range.

If data center operators are willing to let systems run in the class A3 allowable range, virtually everywhere in the United States can run on 100 percent free cooling year round. Using The Green Grid free cooling tool, 10 out of 16 cities showed 8,750 hours of free cooling available (99.9 percent), and all cities were above 8,415 hours per year (96 percent).
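As a rough cross-check on those dollar figures, the sketch below converts additional economizer hours into annual savings. It is a back-of-the-envelope illustration, not The Green Grid tool: the 250 kW-per-MW mechanical-cooling draw and the $0.10/kWh rate are assumptions of this sketch, and real savings depend on the actual chiller plant, part-load behavior, and local tariff.

```python
# Rough estimate of annual savings from additional airside economizer hours.
# All inputs are illustrative assumptions, not figures from the ASHRAE paper.

def economizer_savings(extra_hours, it_load_mw=1.0,
                       chiller_kw_per_mw=250.0, rate_per_kwh=0.10):
    """Savings ($/yr) from avoided mechanical cooling during extra_hours.

    chiller_kw_per_mw: assumed mechanical-cooling power avoided per MW of IT load.
    rate_per_kwh: assumed blended electricity price in $/kWh.
    """
    avoided_kwh = extra_hours * chiller_kw_per_mw * it_load_mw
    return avoided_kwh * rate_per_kwh

if __name__ == "__main__":
    # Table 2 reports roughly 100 to 1,600 extra hours per year, averaging 853.
    for hours in (100, 853, 1600):
        print(f"{hours:>5} extra hr -> ${economizer_savings(hours):,.0f} per MW-yr")
```

With those particular assumptions, the 1,600-hour case lands near the $40,000-per-megawatt upper bound quoted above, while a lower electricity rate or a more efficient plant pushes the result toward the bottom of the range.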
Figure 4. Don Atwood and John Miner (2008) showed failures between 2.45 and 4.46 percent in blade server populations, but did not break out temperature-related failures. "X" would be a maximum of 45 per 1,000 servers per year if all failures were thermally induced, a highly unlikely situation.
Figure 5. IT equipment failures in this Ponemon Institute study accounted for only 5 percent of all unplanned outages.
SUMMARY

The ASHRAE expanded thermal guidelines white paper has a wealth of information, only some of which is covered here. The guidelines open the door to increased use of air and water economization, which can enable average savings of $20,000 to $67,000 per megawatt of IT load per year in the cooling and conditioning of data center spaces, all with virtually no capital investment if economizers are already in place.

With this information in hand, data center operators who simply follow the conventional wisdom of keeping servers chilled like processed meat might not be able to hide behind the reliability argument any more. The expanded guidelines are clear about the potential impacts, and now the risks and benefits can be better understood. ■
Are You Ready for 40G and 100G?
12- vs. 24-fiber MTP cabling for higher-speed Ethernet

BY GARY BERNSTEIN

Gary Bernstein is director of product management-fiber and data center, Leviton Network Solutions. Gary has more than 15 years of experience in the telecommunications industry, with extensive knowledge of copper and fiber structured cabling systems. He has held positions in engineering, sales, product management, marketing, and corporate management. He is a member of the TIA TR42.7 Copper Cabling Standards Committee and the TIA TR42.8/11 Optical Fiber Cabling Systems Committee.

East Africa is host to the extraordinary Great Migration. Every year, millions of creatures—zebras, wildebeest, gazelles, and many others—travel 1,800 miles and must overcome numerous threats to survive. Data centers regularly undertake their own great migration, to ever-higher-speed networks. Applications from development software and ERP systems to consumer content, medical and academic records, and a host of others are continuously driving demand for greater bandwidth, and the network must keep pace.

Unimaginable a decade ago, 10G is now common in larger enterprises. Several 40G core, edge, and top-of-rack (ToR) switches are on the market today, including equipment from Force10, Cisco, Arista, Extreme Networks, Hitachi, and Blade Networks. Cisco, Alcatel-Lucent, Brocade, and Juniper Networks have introduced 100G equipment as well. By 2015, higher-speed Ethernet will have about a 25 percent share of network equipment ports, according to Infonetics Research (see figure 1). The need is clear: a higher-speed Ethernet migration plan is rapidly becoming a matter of survival.

Not every network is optimized for this inevitable growth. Yet organizations that anticipate migrating can create a simple, cost-effective migration path by installing a structured cabling system that can support future 40/100G networking needs. An ideal system will include the following:
• One simple, modular connectivity solution for legacy 1G and 10G applications that is also compliant with 40G and 100G
• One standardized connector theme able to support future high-bandwidth applications
• Preconnectorized components compliant with all current and anticipated industry standards

A foundational understanding of laser-optimized multimode (LOMM) 40/100G structured cabling, and awareness of the pros and cons of 12- vs. 24-fiber MPO/MTP cabling, is necessary to prepare for higher-speed Ethernet. (MTP is a high-performance MPO connector manufactured and trademarked by US Conec, Ltd. This article uses the term MTP to refer to all MPO/MTP interfaces and connectors.)
UNDERSTANDING 40/100G

Planning for migration to higher-speed Ethernet can be daunting. The standards for 40G and 100G are significantly different from previous generations; active equipment and transmission methods are unique. Even polarity takes on a new importance.
IEEE AND TIA STANDARDS

Structured cabling systems design is always guided first by standards. IEEE creates the standards that define performance parameters, while TIA writes those that define how to apply the parameters to structured cabling systems. Familiarity with these standards will help designers create data center infrastructure that better supports network upgrades.

IEEE 802.3ba 40Gb/s and 100Gb/s Ethernet is the only current standard that addresses the physical-layer cabling and connector media maximums for 40G/100G fiber channel requirements (the standard does not address copper UTP/SCTP categories). IEEE 802.3ae 10Gb/s Ethernet covers the fiber protocols for 10G transmission. Figure 2 highlights some differences between the two, including the tighter link-loss parameters with 40/100G. To achieve proper performance throughout the channel, each system component must meet lower loss limits as well.

TIA-942 Telecommunications Infrastructure Standard for Data Centers establishes design criteria including site space and layout, cabling infrastructure, tiered reliability, and environmental considerations. The standard recommends using the highest-capacity media available to maximize infrastructure lifespan. 10G equipment is the most frequently installed today, but as noted in the Infonetics Research forecast, 40G and 100G Ethernet will soon grow to become common networking speeds.
ACTIVE EQUIPMENT INTERFACES

Fiber connectivity in higher-speed active equipment is being condensed and simplified with plug-and-play, hot-swap transceiver miniaturization. 1G and 10G networks commonly utilize the GBIC (gigabit interface converter). For 8G Fibre Channel SAN and OTU2, as well as some 10G, the transceiver is the SFP+ (small form-factor pluggable plus). Interfaces for 40G and 100G active equipment include the QSFP (quad small form-factor pluggable), CFP, and CXP (100G form-factor pluggable). MPO/MTP is the designated interface for multimode 40/100G, and it is backward compatible with legacy 1G/10G applications as well. Its small, high-density form factor is ideal with higher-speed Ethernet equipment.

PARALLEL OPTICS

LOMM 40G and 100G Ethernet employ parallel optics. Data are transmitted and received simultaneously on MTP interfaces through 10G simplex transmission over each individual strand of the array cable. Current IEEE channel/lane assignments for active equipment interfaces determine the transmission methodology (see figures 3-5).
Figure 1. Projected share of network ports – 2015. 1G/10G/40G/100G Networking Ports Biannual Market Size and Forecasts © Infonetics Research, April 2011
Figure 2. IEEE 850 nm OM3 and OM4 Ethernet Performance Specifications.

Fiber type  | Max distance (m) | Max channel insertion loss (dB) | Max channel connector insertion loss (dB)
10G OM3     | 300 | 2.6 | 1.5
10G OM4     | 550 | 2.6 | 1.5
40/100G OM3 | 100 | 1.9 | 1.5
40/100G OM4 | 150 | 1.5 | 1.0
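For designers who want to sanity-check a link against these numbers, the short Python sketch below applies the figure 2 budgets. It is an illustrative helper, not part of any standard, and the 3.5 dB/km fiber attenuation it assumes is a typical 850 nm multimode value rather than a figure taken from this article.

```python
# Hedged sketch: check a proposed multimode channel against the figure 2 budgets.
# The 3.5 dB/km attenuation is an assumed typical OM3/OM4 value at 850 nm.

BUDGETS = {            # application: (max distance m, max channel loss dB)
    "10G OM3":     (300, 2.6),
    "10G OM4":     (550, 2.6),
    "40/100G OM3": (100, 1.9),
    "40/100G OM4": (150, 1.5),
}

FIBER_DB_PER_KM = 3.5  # assumed multimode attenuation at 850 nm

def channel_ok(application, length_m, connector_losses_db):
    """Return (passes, total loss dB) for a channel of given length and connections."""
    max_len, max_loss = BUDGETS[application]
    total = length_m / 1000.0 * FIBER_DB_PER_KM + sum(connector_losses_db)
    return (length_m <= max_len and total <= max_loss), round(total, 2)

if __name__ == "__main__":
    # 100 m of OM3 at 40G with two 0.5 dB MTP connections: within the 1.9 dB budget
    print(channel_ok("40/100G OM3", 100, [0.5, 0.5]))
    # 150 m of OM4 with two 0.75 dB connections: total loss exceeds the 1.5 dB budget
    print(channel_ok("40/100G OM4", 150, [0.75, 0.75]))
```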
Figure 3. 40G 12-fiber MTP connector. Pins 1-2-3-4 are for Transmit (Tx), and pins 9-10-11-12 are for Receive (Rx). Pins 5-6-7-8 are not used.

Figure 4. 100G 2x12-fiber MTP connector. Pins 2-11 on the first cable are for Transmit (Tx), and pins 2-11 on the second cable are for Receive (Rx). Pins 1 and 12 are not used.

Figure 5. 100G 24-fiber MTP connector – IEEE recommended option. Pins 14-23 are for Transmit (Tx), and pins 2-11 are for Receive (Rx). Pins 1, 12, 13, and 24 are not used.
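The pin assignments in the captions above are easy to capture in a small lookup table. The Python sketch below is an illustrative data structure, not vendor documentation: it simply transcribes the transmit and receive positions from figures 3 and 5 so a script can report which MTP positions should be dark.

```python
# Illustrative lookup of MTP fiber positions, transcribed from figures 3 and 5.
# Positions are 1-based; anything not listed is unused (dark) for that interface.

MTP_LANES = {
    "40G, 12-fiber MTP":  {"tx": [1, 2, 3, 4],        "rx": [9, 10, 11, 12]},
    "100G, 24-fiber MTP": {"tx": list(range(14, 24)), "rx": list(range(2, 12))},
}

def unused_positions(interface, fiber_count):
    """Return the connector positions that carry no traffic for this interface."""
    lanes = MTP_LANES[interface]
    live = set(lanes["tx"]) | set(lanes["rx"])
    return [p for p in range(1, fiber_count + 1) if p not in live]

if __name__ == "__main__":
    print(unused_positions("40G, 12-fiber MTP", 12))    # -> [5, 6, 7, 8]
    print(unused_positions("100G, 24-fiber MTP", 24))   # -> [1, 12, 13, 24]
```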
POLARITY

TIA-568-C.0 Generic Telecommunications Cabling for Customer Premises includes three MTP array cable polarity methods—A, B, and C. In addition, TIA will soon be releasing two new addenda—TIA-568-C.0-2 and TIA-568-C.3-1—to specifically address the polarity and cabling requirements needed to support 40G and 100G applications. As the market moves toward 40G and 100G networking speeds, polarity becomes more and more important. With multiple channels within a single connector, all components must be manufactured with the same polarity; differences cannot be reconciled by flipping or switching connector position in the field. Many end users prefer Method B, as it has the same "straight-through" MTP array cord on both ends of the channel, which greatly simplifies upgrades.
12- VS. 24-FIBER CABLING INFRASTRUCTURE

All higher-speed Ethernet networks use 12- or 24-fiber MTP trunks. The differences between the two schemes determine how to best optimize a cabling plant when upgrading. These differences include migration, density, congestion, and cost.

MIGRATION

Figures 6-11 show 12- and 24-fiber system configurations for 1G-100G networks. With the 40G 12-fiber legacy configurations, a second trunk and another set of array harnesses will be needed to achieve 100 percent fiber utilization. For 100G, these additional components will be required for any 12-fiber legacy upgrade. On the other hand, with 24-fiber trunks, a single cable can support a 1G-100G channel and will simplify network upgrades immensely. 1G and 10G networks will link the trunks to active equipment with MTP-LC modules and LC duplex patch cords. When equipment is upgraded, modules and patch cords are exchanged for the appropriate new MTP components, with no need to install new trunks. In addition, limiting changes reduces the inherent risks to network security and integrity whenever MAC work is completed.
DENSITY

Higher density connectivity in the enclosure leaves more rack space for active equipment, reducing the total amount of floor space required. 24-fiber cabling has the obvious advantage. If the active equipment is configured for 24-fiber channel/lane assignments, enclosures can have twice as many connections with the same number of ports compared to 12-fiber (or the same number of connections using only half the ports, see figure 12). For 40G networks, a 24-fiber MTP wiring scheme can deliver true 100 percent fiber utilization, with no dark fibers or empty pins. With this configuration, density is doubled at the adapter plate/enclosure side as compared to 12-fiber 40G wiring schemes.

The flip side of density is congestion. The more connectivity that runs in the same footprint, the more crowded it can become at the rack or cabinet. Here again, 24-fiber MTP trunks offer a huge benefit. Anywhere there is fiber, from within the enclosures to the cable runs that connect different areas of the network, there will be just half the number of cables vs. 12-fiber. Runs carry a lighter load, fibers are easier to manage, and improved airflow reduces cooling costs.
Figure 6. 1/10G Channel 12-fiber configuration.

Figure 7. 40G Channel 12-fiber configuration (Options A and B).

Figure 8. 100G Channel 12-fiber configuration.

COST

12-fiber configurations may enable the continued use of existing trunks when equipment is upgraded, if 12-fiber MTP-MTP trunks are available, but will likely require additional trunks, more connectivity components, and other network modifications. In the long run, it's many times more expensive to retain these trunks than to upgrade to 24-fiber up front.

Figures 13 through 15 present 12- vs. 24-fiber deployment cost comparisons for a 24-channel/48-fiber 10G network, 40G upgrade, and 100G upgrade (components only). The tables show that the migration cost savings with 24-fiber trunks increase at higher networking speeds. For the 10G network, cost is almost equal, but 24-fiber trunks reduce end-user costs about 10 percent for a 40G upgrade, and almost 25 percent for a 100G upgrade. Factor in the labor costs of installing additional trunks and other components with 12-fiber, and the difference is even greater.
Figure 9. 1/10G Channel 24-fiber configuration.

Figure 10. 40G Channel 24-fiber configuration options.

Figure 11. 100G Channel 24-fiber configuration.
Figure 12. Maximum channel counts by enclosure size.*

Rack units | Max # of Opt-X plates/modules | Max 10G LC channels | Max 40G MTP channels | Max 100G MTP channels**
1RU | 3  | 36  | 18 | 18
2RU | 6  | 72  | 36 | 36
3RU | 12 | 144 | 72 | 72
4RU | 15 | 180 | 90 | 90

* Maximum possible density. Achievable density may be less depending on enclosure model.
** Requires minimum 48-fiber trunk cables.
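Read literally, figure 12 reduces to a channels-per-plate calculation. The sketch below reproduces the table's channel counts from the plate counts; the per-plate capacities are inferred from the 1RU row and are an assumption of this sketch, not a vendor specification.

```python
# Derive figure 12's channel counts from the number of plates/modules per enclosure.
# Per-plate capacities are inferred from the 1RU row (3 plates -> 36 LC channels and
# 18 MTP channels) and are assumed to hold for the other enclosure sizes.

LC_CHANNELS_PER_PLATE = 12   # duplex LC channels per plate (10G)
MTP_CHANNELS_PER_PLATE = 6   # MTP channels per plate (40G or 100G)

PLATES_PER_ENCLOSURE = {"1RU": 3, "2RU": 6, "3RU": 12, "4RU": 15}

for size, plates in PLATES_PER_ENCLOSURE.items():
    print(f"{size}: {plates * LC_CHANNELS_PER_PLATE:>4} x 10G LC channels, "
          f"{plates * MTP_CHANNELS_PER_PLATE:>3} x 40/100G MTP channels")
```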
Figure 13. Cost for 24-channel 10G cabling infrastructure.

12-fiber MTP cabling: required components | Cost ($)
4 x 12F OM3 MTP-MTP 100-ft trunks | 2,000
8 x 12F MTP-LC modules | 2,700
48 x LC-LC patch cords | 2,200
Total cost for 24 channels | 6,900
Cost per channel | 279

24-fiber MTP cabling: required components | Cost ($)
2 x 24F OM3 MTP-MTP 100-ft trunks | 2,000
4 x 24F MTP-LC modules | 2,600
48 x LC-LC patch cords | 2,200
Total cost for 24 channels | 6,800
Cost per channel | 283
Figure 14. Upgrade cost to 24-channel 40G cabling infrastructure. Array cord configuration—Option A in figures 7 and 10.

12-fiber MTP cabling: required components | Cost ($)
4 x 2x12F MTP-MTP modules | 1,800
12 x 12F MTP array cords | 2,250
Total cost for 24 channels | 4,050
Cost per channel | 675

24-fiber MTP cabling: required components | Cost ($)
4 x 24F MTP-MTP modules | 1,650
12 x 8F MTP array cords | 2,000
Total cost for 24 channels | 3,650
Cost per channel | 608
Figure 15. Upgrade cost to 24-channel 100G cabling infrastructure.

12-fiber MTP cabling: required components | Cost ($)
2 x MTP adapter plates | 100
4 x 2x12F-24F MTP array harnesses | 1,600
Total cost for 24 channels | 1,700
Cost per channel | 850

24-fiber MTP cabling: required components | Cost ($)
2 x MTP adapter plates | 100
4 x 24F MTP array cords | 1,200
Total cost for 24 channels | 1,300
Cost per channel | 650
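The arithmetic behind figures 13 through 15 can be rolled into one small comparison script. In the sketch below the component totals are copied from the tables, while the channel counts per stage (24 at 10G, six at 40G, two at 100G) are inferred from the published per-channel costs and are therefore an assumption; recomputed per-channel figures may differ by a few dollars from the printed ones.

```python
# Totals from figures 13-15 (component costs in dollars, copied from the tables).
# Channel counts per stage are inferred from total / per-channel cost in each
# figure and are an assumption about the intended channel count at each speed.

STAGES = {
    #               (12-fiber total, 24-fiber total, channels)
    "10G build":    (2000 + 2700 + 2200, 2000 + 2600 + 2200, 24),
    "40G upgrade":  (1800 + 2250,        1650 + 2000,         6),
    "100G upgrade": (100 + 1600,         100 + 1200,          2),
}

for stage, (cost12, cost24, channels) in STAGES.items():
    saving = (cost12 - cost24) / cost12 * 100
    print(f"{stage:12s}: 12F ${cost12:>5,} vs 24F ${cost24:>5,} "
          f"(${cost12 // channels:,}/ch vs ${cost24 // channels:,}/ch, "
          f"24F saves {saving:.0f}%)")
```

Run as written, it reports roughly a 10 percent 24-fiber saving for the 40G upgrade and about 24 percent for 100G, in line with the figures quoted in the cost discussion above.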
CONCLUSION

Being prepared for 40/100G is essential; within a few short years higher-speed Ethernet will be common in data centers across all types of organizations. Installing a high-performance 24-fiber 40/100G MTP system will provide several benefits when the network is upgraded:

• Fewer connectivity components to be replaced or added simplifies migration and reduces costs for both components and installation
• Higher density connectivity leaves more rack space for active equipment
• Fewer trunks reduce cable congestion throughout the data center

In short, a 24-fiber higher-speed Ethernet MTP system will future-proof network cabling, lower the cost of ownership, and maximize return on investment. ■
2N Package Delivery
United Parcel Service, Inc. updates a primary data center's conditioned power distribution systems

BY C. BENJAMIN SWANSON AND CHRISTOPHER M. JOHNSTON

C. Benjamin Swanson has been the managing director of Mission Critical Facilities for United Parcel Service, Inc. since 2006. His team is responsible for all aspects of data center management, hardware planning, and security for the company's two domestic U.S. Tier IV data centers. Ben has been working for UPS' engineering department for 24 years. Christopher M. Johnston is a senior vice president and the chief engineer for Syska Hennessy Group's Critical Facilities Team. Chris specializes in the planning, design, construction, testing, and commissioning of critical 7x24 facilities and leads team research and development efforts to address current and impending technical issues in critical and hypercritical facilities. With over 40 years of engineering experience, Chris has served as quality assurance officer and supervising engineer on many projects.

United Parcel Service, Inc. (UPS) built its name on more than a century of efficient distribution. Supported by two primary data centers, the iconic brand meets the IT and logistical demands of handling more than 15 million packages a day. The design, construction, and migration to new uninterruptible power supply systems (UPSS) for the company's Alpharetta, GA, data center sought to do the same.

In operation since 1995, the company's Tier IV mission critical facility needed to upgrade its aging equipment in order to maintain optimal reliability and flexibility for its 2,000-kilowatt critical load. As an educated owner-operator, UPS performed 10 months of evaluations to determine just the right solution for its conditioned power distribution needs. In an effort to lower the risk of interruption and raise reliability while reducing cost and maintaining Tier IV infrastructure, UPS chose to replace its single-network legacy conditioned power distribution system with one that splits the IT and mechanical loads onto separate conditioned power networks. Syska Hennessy Group designed this new 2N solution and worked with the project team to refresh the equipment while minimally impacting the 172,540-square-foot live data center. The team mitigated this and other challenges while building a flexible and reliable conditioned power network, delivering the new UPSS distribution from construction through a two-phase systems migration.
The New EPO System

Once proper design of a data center's mechanical and electrical systems has been established, potential outages related to human error pose the greatest threat to reliability. And the emergency power off (EPO) system is one of the usual suspects. Typically controlled by a large red push button located adjacent to the computer room's egress doors, the EPO system, when manually initiated, facilitates the automatic shutdown of power and ventilation to the data center as defined by firewall separation. Smoke and fire dampers are required to close and maintain the integrity of the fire-rated envelope. Like most data center stakeholders, UPS was interested in mitigating this potential human-error risk.

Under the 2008 edition of the NEC being enforced by the Authorities Having Jurisdiction (AHJs), Article 645 provides an allowance for an "orderly shutdown" of the integrated electrical system or alternate installation practices. Syska Hennessy Group conducted analysis and met with local fire marshals and other AHJs. The result was the creation of an EPO-like system with the required shutdown, isolation, and monitoring. The traditional push button at the computer room door was replaced with a centralized EPO system in a controlled location. The system consists of an A-side panel and a B-side panel, each containing keyed switches specific to the zone they serve. This new system was approved upon the stipulation that the facility be manned 24x7x365, with qualified personnel present to work with emergency responders and perform an orderly shutdown of power and ventilation.
After more than a decade in operation, United Parcel Service, Inc.'s Alpharetta, GA, data center updated its network infrastructure in 2011. The 172,540-sq-ft facility is one of the company's two primary data centers.
MAINTAINING FLEXIBILITY AND RELIABILITY

When it opened its doors 16 years ago, the IT equipment at UPS' Georgia data center required a significant amount of power. In order to meet this demand, two sets of rooms were dedicated to the load's associated conditioned power systems. An additional set of rooms intended to serve future load growth was built out on day one, with underground conduit connected to the data center space. Thanks to Moore's Law, however, today's IT equipment requires less electrical power while delivering more computing power, so the facility has yet to come close to using 50 percent of its total capacity.

The third set of rooms was utilized during the current project as swing space. This allowed the current conditioned power distribution systems to continue supporting the live load while new equipment was simultaneously installed in the extra spaces. Deploying existing space rather than brick-and-mortar construction minimized project costs and enhanced project phasing and expansion of the 2N system.
IT LOAD MIGRATION

One of the greatest challenges and successes of the data center refresh project was its IT load migration. Migrating the data center's existing critical load to the new uninterruptible power supply systems (UPSS) was completed over a period of weeks in two phases. The following schematics represent the original configuration all the way to systems isolation and finally ultimate restoration.
Figure 1. STS-Aux-01 and STS-103 in normal configuration.

Figure 2. Isolation step 5. Transfer IT load from UPS 1B to UPS 3A.

Figure 3. Restoration step 3. Parallel STS-Aux-01 and STS-103 by closing the PDU primary breakers.

Figure 4. Restoration step 5. Place STS-103 and Aux-STS-01 on line.

STS-AUX-01 provides a means to isolate an STS with an alternate input path to the associated PDUs. This auxiliary STS configuration allows both A and B sources feeding dual-corded loads to be preserved during concurrent maintenance. After connecting STS-AUX-01 to the new UPS source 3A, the PDU load temporarily migrates to the new UPSS source on STS-AUX-01. The transfer from STS-103 to STS-AUX-01 includes a closed-transition manual operation using source 1B. Then an STS operation is initiated to move the critical load to the new UPSS source 3A. After replacing the feeders to STS-103 and connecting them to the new UPSS sources 3A and 3B, the load transfer process is reversed to return the critical load back to STS-103. The PDU alternate inputs are opened, and STS-103 returns to online mode. STS-AUX-01's output is a common cable bus connected to multiple PDU auxiliary inputs. Migration of the other STS units occurs using the same process.

Figure 5a and 5b. Electrical power distribution, before and after: (a) existing critical load before the migration; (b) revised critical load after the 2N UPSS migration.
Before the extra rooms could be utilized for installation and migration, the design team had to develop a new plan for power distribution to connect the new UPSS equipment to its corresponding computer rooms. While the existing underground conduit had, so far, managed the current load, its re-use would have put the existing IT load at greater risk. Therefore, UPS and Syska Hennessy Group created a new overhead distribution system.

In addition to furthering data center flexibility and future load growth, overhead distribution provides the facility with a secondary path for running feeders between the conditioned power distribution systems and the computer rooms they serve. An additional layer of distribution, added to the system with an alternative main, allows all feeders to endure future conditioned power refreshes as well. Power centers are fed directly from the new layered distribution and can easily be connected to another source, while provisions were made for future tie-ins at new data hall switchboards.

Built as one of the world's first Tier IV data centers in 1995, the facility exhibits UPS's strong commitment to reliability and service. Continuing this high Tier IV standard was imperative to UPS. The new UPSS' 2N design was created to provide ultimate reliability by separating the mechanical and IT loads. The 2N design includes two five-module parallel UPSS for the IT total load capacity of 3.38 megawatts (MW) and another two two-module non-parallel UPSS for mechanical and BCP total load capacity of 1.35 MW. Redundant 125-volt direct current (Vdc) station batteries supply the UPSS switchgear controls with supplementary backup power. Additionally, 480 V of conditioned power with high-resistance grounding was maintained in the refresh for enhanced fault tolerance.

Building information modeling (BIM) added a level of reliability to the project during the design phase by identifying potential conflicts and eliminating unsuccessful installation sequences and plans early in the project. Holder Construction also used BIM during the construction phase to detail conduit routes, piping, power distribution system requirements, and more. As conflicts were identified, the design in question evolved to meet the appropriate parameters.
TEAM COLLABORATION

Planning the integration and migration of the old and new systems with minimal impact to a live data center with a 2,000 kW critical load was not a simple task for the project team. Working together as a team from day one, UPS, Syska Hennessy Group, and Holder Construction, Atlanta, met at regularly scheduled meetings and project milestones. Organized team collaboration provided a "reality check" that determined which designs would ultimately play out in the field and how they would be executed.

Running new overhead cables was one of the most significant challenges the team faced. Using a scaffolding system with netting around it, along with tethers on every tool to prevent even a screwdriver from dropping onto the computer equipment below, new conduits were placed on metal racks and suspended from the concrete slab above. Holes were drilled through concrete beams and used to attach the conduits to the steel framework for support.
Vacuum cleaners with high-efficiency HEPA filters were used to capture the concrete dust during the drilling. Planning and sequencing the migration, developing the methods of procedure to track when, where, and how it would be done, and then ultimately moving data center loads from the old to the new UPSS took tremendous management and logistical skill that involved all disciplines. Integrating the new equipment into the data center's existing comprehensive building automation system required similar planning as well.

Additionally, other procedures were developed to move wiring into the tight spots surrounding the static transfer switches (STS). In order to extend the life of the legacy STS equipment through the retrofit, high-flex cables with very fine strands were employed to turn tight corners. Installers used Shoo-Pin adaptors to help terminate the cables in tight spaces. Legacy power distribution units were also retained through the UPSS retrofit and employed downstream of the STS to power computer equipment.

In June 2011, the UPSS supporting the data center's IT load were installed and fully commissioned, moving the IT loads from the old to the new UPSS and completing Phase I of the project. Phase II, scheduled for completion by the end of 2011, will move the mechanical systems from the old UPSS onto the new.

Efficient distribution means more than just shipping packages across the globe for United Parcel Service, Inc. It also means promoting flexibility and reliability while optimizing the conditioned power distribution systems back in its data centers at home.
EPILOGUE

Prior to the completion of this project, the 2011 NEC was released. Changes within Article 645 in the new NEC align with the direction taken at United Parcel Service's Alpharetta, GA, data center, utilizing the EPO relocation exception and working with the AHJ and currently adopted codes. ■
Products
Software from Geist
Geist announced a double software upgrade for both Environet and Racknet. Version 3.4 features a streamlined alarm and history configuration. Racknet now includes the ability to export and import device templates and export values via SNMP or BACnet. The more efficient alarm and history configuration saves set-up time and cost associated with the configuration. In addition, the new interface is even more user friendly with only two configuration tabs compared to the original six tabs. The ability to import and export device templates into Racknet increases customization and makes device management more accurate than ever before. In addition, it gives users the flexibility to add and delete devices easily as the data center changes. Users also have the ability to define the values that are exported via SNMP or BACnet. This allows the data collected in Racknet to be fed outward to additional third-party systems and gives users 100 percent control over which values are exported.

UPS from Eaton
Eaton Corporation has released its 9E uninterruptible power system (UPS), a highly efficient backup power device designed specifically with the information technology (IT) manager in mind. Integrated with Eaton’s award-winning Intelligent Power Manager software, the 9E delivers affordable data center power management in a small footprint. The 9E’s sleek, compact tower configuration delivers superior power protection for expanding loads in space-constrained data centers while operating at up to 98 percent efficiency, making it the most efficient UPS in its class. The 9E is the first UPS in its class to offer internal batteries up to 60 kVA, which saves significant floor space when compared to external battery cabinets. The 9E comes complete with Eaton’s complimentary Intelligent Power Manager supervisory software, which allows quick and easy management and monitoring of multiple power devices across a network from any personal computer with an internet browser. The 9E also boasts a range of seamlessly integrated accessories to maximize run-time options, meet unique location requirements, and allow for planning expansion.
Gen Sets from Caterpillar
Caterpillar released its Cat 3516C-HD diesel gen set, certified to meet U.S. EPA Tier 4 Interim and California Air Resource Board Tier 4 Interim standards. This 60-Hz package is rated at 2,500 ekW for standby power, 2,250 ekW at prime power, and 2,050 ekW at continuous power, offering highly efficient fuel consumption rates, a compact footprint, and lower emissions for prime power, peak shaving, standby, and mission-critical applications. The 3516C-HD is the latest addition to Caterpillar's line of Tier 4 Interim certified generator sets. Ranging from 455 kW to 2,500 kW, Caterpillar offers the widest range of Tier 4 Interim certified generator sets in the industry. This diesel generator set also features integrated electronics for monitoring, protection, and closed-loop NOx control, an ADEM A4 controller, an air-to-air aftercooler cooling system, a MEUI fuel system, and the state-of-the-art Cat EMCP 4 control panel. This simple-to-use control panel delivers more intuitive, user-friendly interface and navigation, and it is scalable to meet a wide range of customer power requirements.

Cooling from Schneider Electric
Schneider Electric’s EcoBreeze is a modular indirect evaporative and air-to-air heat exchanger cooling solution. The EcoBreeze has the unique ability to switch automatically between air-to-air and indirect evaporative heat exchange to consistently provide cooling to data centers in the most efficient way. The design of the EcoBreeze reduces energy consumption by leveraging temperature differences between outside ambient air compared to IT return air to provide economized cooling to the data center. The EcoBreeze meets ASHRAE 90.1/TC 9.9 requirements for efficiency and economization with multiple frame sizes with varying voltages and phases to address any data center’s cooling needs. The EcoBreeze addresses the needs of today’s data centers by implementing multiple forms of economization into each module. The unit, located outside the perimeter of the data center, takes advantage of localized climates and can automatically switch between two forms of economized cooling: either air-to-air heat or indirect evaporative heat exchange.
Battery Monitoring from Eagle Eye Power Solutions
Eagle Eye's IBwatch-Series is designed to monitor and analyze the aging status of critical battery backup systems in real time by measuring and recording string voltage and current as well as jar/cell voltage, internal resistance, connection resistance, and temperature. IBwatch solutions are equipped with battery management software that allows all battery systems to be monitored 24 hours a day, 365 days a year via remote computers. The included Eagle Eye Centroid software offers comprehensive battery diagnosis and reporting capabilities to ensure the integrity of critical battery backup. With flexible expansion options, the IBwatch series can monitor all systems in real time. All IBwatch battery systems utilize patented technology to identify early signs of battery deterioration and alarm the user to ensure the integrity of critical backup batteries. In the event of voltage sag, power failure, or an alarm, the event is transmitted to the administrator immediately. The IBwatch-Series meets all IEEE standard recommendations for battery monitoring.

Data Center Cooling from Emerson Network Power
Emerson Network Power has released the Liebert DSE, designed specifically for medium to large data centers. The Liebert DSE is a higher efficiency, downflow-only version of the Liebert DS precision cooling system that provides a nominal 125 kW of net sensible cooling. With an energy-efficient SCOP (seasonal coefficient of performance) rating of 2.9, the Liebert DSE is 52 percent more efficient than the ASHRAE 90.1 (2010) minimum requirement of 1.9 for data center cooling units. It also incorporates an optional new-to-the-world “free cooling” technology feature, EconoPhase. At full capacity, it is 114 percent more efficient than an air economizer and mitigates the operational limitations or concerns commonly associated with air economizer technologies. The Liebert DSE’s integrated free cooling uses the same refrigerant circuit, coils, and condenser in both economizer and non-economizer modes. This air-cooled system employs a two-phase refrigerant vs. a traditional single-phase economizer solution, resulting in a simple and efficient cooling system that maximizes the hours available to use reliable free cooling. The Liebert DSE also uses the new Liebert condenser platform, the Liebert MC, that features an innovative micro-channel cooling coil design, EC (electronically commutated) axial fans, and an ability to communicate with the indoor unit to optimize total system efficiency.
Tank Cleaning from AXI
AXI (Algae-X International) has released the latest addition to its line of standard Mobile Tank Cleaning Systems, the HC-80, a high capacity, multi-stage, automated fuel tank cleaning system equipped with a smart filtration controller. This new system stabilizes and decontaminates diesel fuel, bio-diesel, light oils, and hydraulic fluids while restoring fuel to a "clear and bright" condition. The HC-80 is ideal for service providers in the tank cleaning industry. With changes in fuel production and the growing implementation of bio-fuels, the need for fuel maintenance, including periodic tank cleaning, has increased dramatically. The MTC HC-80 system efficiently cleans tanks, removes water and sludge, and restores optimal fuel quality. The system is equipped with a fully automated controller that provides an instant visual status report of system power, pump operation, and alarms for high pressure, high vacuum, and high water levels. AXI's Mobile Tank Cleaning (MTC) systems excel in combining high capacity fuel optimization, filtration, and water separation with low operating costs and a compact design to provide optimal fuel quality for reliable, peak engine performance.

Switch Gear from ASCO
The ASCO 7000 Series generator paralleling control switch gear is designed to be the world’s most intelligent and advanced multipleengine power control system. It can be custom engineered to meet the specific requirements of any project in a wide range of emergency power applications, from industrial/commercial to large health-care and business-critical facilities. The 7000 Series system is based on state-of-the-art digital control technology to synchronize, manage, and parallel engine-generator sets. It provides sophisticated engine control, load management, and communications capabilities, including unsurpassed power system monitoring and protection. Features include 115 to 600 V, three-phase, four-wire, 100 percent neutral, 25 percent copper ground bus, 50/60 Hz, AC system; UL 1558 construction, listing and labeling; and master control via a 12-in. color touch screen operator interface panel.
Cables from Tripp Lite
Tripp Lite's Angled Cat6 cables are available in a variety of lengths and angle types. They are ideal for use in confined spaces like networking racks, behind desks, or at wall connections. Key features and benefits include an angled design that enables connection in areas with limited space; up, down, right, and left angle configurations; 3-, 5-, and 10-ft lengths; and a 550-MHz Gigabit speed rating.

Engines from Cummins Inc.
Cummins Inc. released the new QSK95 engine with over 4,000-hp (2,983 kW) output, designed as the world’s most powerful highspeed diesel. The 95-liter 16-cylinder QSK95 is the first engine to be introduced in a new high-horsepower diesel and gas platform from Cummins. The product line will extend up to the 120-liter 20-cylinder QSK120, capable of over 5,000-hp (3,728 kW) output. Designed with exceptional strength and high power density, the 16-cylinder QSK95 exceeds the power output of other large 1,800-rpm high-speed engines with 20-cylinders. Compared with much larger medium-speed engines operating below 1,200 rpm, the QSK95 offers a far more compact and cost-effective solution to achieve the same power output. The QSK95 is ideally suited for highhour, high-load applications in passenger and freight locomotives, many types of marine vessels and ultra-class mine haul trucks. Operators can expect higher levels of equipment uptime and a longer life-to-overhaul with the QSK95.
UPS from HP
HP has added two new slim-lined uninterruptible power systems (UPS) to its power family. The HP UPS R5000 is 3U and delivers 4,500 W, and the HP UPS R7000 is only 4U and delivers a full 7,200 W. Combining unity rating with more true wattage in smaller form factors, the HP rack-mountable UPSs are highly dense. The unique HP online, on-demand hybrid technology optimizes efficiency and heat output during general operation and provides a transformer-less design for an optimum size/functionality ratio. A distinctive processing system on these rack-mounted UPS models incorporates a three-stage power filter with a double-conversion, line-interactive system to provide all of the benefits of an on-line UPS without the side effects of low efficiency, extra heat, and short battery life. All HP rack-mountable UPSs ship with HP's Network Module and HP's Power Protector Software that incorporates enhanced Battery Management, an exclusive patented technology that doubles battery service life to lower investment costs. Hot-swappable batteries and an intelligent automatic bypass (for continuous power) improve overall serviceability.
Gateways from FieldServer Technology and Point Six Wireless
FieldServer Technologies has released the Point Six Wireless Gateway. Point Six Wireless manufactures a variety of unique, high-value wireless solutions for commercial and industrial OEMs. They have shipped over 400,000 products incorporating a broad spectrum of radio technologies spanning 418/433/900-MHz proprietary protocols as well as WiFi. The FieldServer Technologies Gateway solution enables these devices to easily interface with current open building automation protocols including BACnet, LonWorks, Metasys N2 by JCI, Modbus, and SNMP. Users can now take advantage of the wide variety of Point Six environmental sensors and be assured that the data are compatible with standard building automation protocols, making it easy to interface with building and campus management systems. The FieldServer gateway supports the complete line of Point Six WiFi, 900-MHz, and 418/433-MHz sensors, spanning from environment sensors to voltage, current, and digital sensors. Typical applications include temperature and humidity monitoring in hospitals, pharmacies, and laboratories; ambient condition monitoring for building automation systems; freezer, cooler, and refrigeration temperature monitoring in restaurants; and temperature, leak detection, and power monitoring in data centers.
News

Pike Research Releases Report on UPS
Traditionally, most uninterruptible power supply (UPS) systems were point solutions designed to protect an individual PC, server, medical device, airport, or factory. Today, new technologies and architectures are emerging that can more effectively integrate UPS systems into the larger power infrastructure and take advantage of the large amount of energy storage already installed worldwide. In particular, as green IT becomes an important goal for many IT vendors and users, UPS systems that can fit into and augment existing IT infrastructures to support the vendors' overall green IT objectives will be in increasing demand.

According to a recent report from Pike Research, these trends, along with significant growth in emerging economies, will lead to strong growth for the UPS sector in the next few years. The global market for UPS systems will expand from $8.2 billion in 2011 to $9.4 billion in 2012, a year-on-year growth rate of 14 percent, the cleantech market intelligence firm forecasts. Going forward, the market will grow to $13.2 billion by 2015.

"UPS systems are already an important energy storage feature in cost-efficient and smart buildings," says Bob Gohn, vice president of research. "The emergence of hybrid topologies that automatically switch between different power modes can reduce energy costs over time without compromising power quality."

Next generation UPS systems will combine several key features, including a built-in energy storage source, such as batteries, flywheels, or compressed air, and circuitry to supply clean and sufficient power over periods lasting from a few seconds to several hours. Most leading UPS systems also have some form of surge protection or power filtering circuitry. These advanced features enable these systems to play a larger role in the overall smart energy infrastructure, making them indispensable to a holistic energy management strategy.

Pike Research's report, "Next Generation Uninterruptible Power Supplies," provides a comprehensive examination of the global market for uninterruptible power supplies, including a focus on small, medium, and large UPS industry dynamics. Emerging technologies and key industry players are analyzed in depth, and market forecasts for each segment and world region extend through 2015. An executive summary of the report is available for free download on the firm's website at www.pikeresearch.com/. ■
Google Data Centers Receive Certification
Google has announced that all of its U.S. owned and operated data centers have received ISO 14001 and OHSAS 18001 certification. For the last year, Google's data center team has been working on a project to bring its facilities to even higher standards for environmental management and workforce safety. The company is the first major Internet services company to gain external certification for those high standards at all of its U.S. data centers. On the company's blog, Google has posted a video highlighting its efforts and the improvements it has made.

"Like most data centers, ours have emergency backup generators on hand to keep things up and running in case of a power outage. To reduce the environmental impact of these generators, we've done two things: first, we minimized the amount of run time and need for maintenance of those generators. Second, we worked with the oil and generator manufacturers to extend the lifetime between oil changes. So far we've managed to reduce our oil consumption in those generators by 67 percent," writes Joe Kava, senior director, data center construction and operations.

"A second example: each of our servers in the data center has a battery on board to eliminate any interruptions to our power supply. To ensure the safety of the environment and our workers, we devised a system to make sure we handle, package, ship and recycle every single battery properly," Kava added.

Google's data centers in the following U.S. locations have received this dual certification. The company plans to pursue certification at its European data centers as well.
• The Dalles, OR
• Council Bluffs, IA
• Mayes County, OK
• Lenoir, NC
• Monck's Corner, SC
• Douglas County, GA ■

CALENDAR: Industry Events

MARCH
Green Grid Tech Forum, March 6-7, 2012, Doubletree Hotel, San Jose, CA. www.thegreengrid.org/events.aspx
Data Center World, March 18-22, 2012, Mirage Hotel, Las Vegas. www.datacenterworld.com/
Green Data Center Conference, March 27-29, 2012, Dallas/Fort Worth Marriott, Dallas. http://greendatacenterconference.com/

APRIL
Northern California Data Center Summit, April 12-14, 2012, Santa Clara Convention Center, Santa Clara, CA. http://apartmentsummit.com/norcaldata/
DatacenterDynamics Phoenix, April 24, 2012, Hyatt Regency, Phoenix. http://www.datacenterdynamics.com/conferences/2012/phoenix-2012

MAY
2012 GreenInfoTech Summit, May 3, 2012, Embassy Suites Hotel, Fort Lauderdale. http://greeninfotechsummit.com/
Uptime Institute Symposium 2012, Hyatt Regency Santa Clara, Santa Clara, CA. http://symposium.uptimeinstitute.com/
REQUEST INFORMATION FROM ADVERTISER
Write in the product numbers from the advertisements from which you want FREE information from Mission Critical.
Request FREE product information online at: www.MissionCriticalMagazine.com
or Mail this form to: C/O Creative Data 440 Quadrangle Dr., Suite E Bolingbrook, IL 60076 or Fax to 1-888-533-5653
FREE SUBSCRIPTION FORM
1. Would you like to receive a FREE subscription to Mission Critical? □ YES! □ No
Please check your preferred format: □ Digital □ Print
Would you like to receive the Mission Critical eNewsletter for FREE? □ YES! □ No
Signature ______________________________ Date ______________
Name ______________________________ Title ______________________________
Company ____________________________________________________
Address ______________________________ City/State/Zip ______________________________
Work Phone ______________________________ Work Fax ______________________________
E-mail ____________________________________________________
• By providing your fax number, you're giving us permission to fax subscription offers to you.
• You will receive subscription and renewal notices from BNP Media via e-mail.
• If you provide your email address, it may be used by our advertisers to provide you with the information you've requested.
2. Which of the following best describes your title? (select ONE only)
01 □ Management: Corporate Officer, Owner, Consultant, Information Technology Officer, VP Data Management Center, VP Facility Engineering, VP Building Services, Data Management Officer, Internet Developer/Operator, CIO, MIS Director, IS VP, Real Property
02 □ Operations Management: Property Manager, Facility Manager, Data Center Manager, Security Manager, IT Manager, Network and Communications Manager, Disaster Recovery Manager
03 □ Engineering & Engineering Management: Mechanical Engineer, Electrical Engineer, Consulting Engineer, Specifying Engineer, Engineering Manager, Engineering Staff
98 □ Other (please specify) ______________________________
3. Which of the following best describes your business/industry? (select ONE only)
Industrial/Commercial/Institutional:
01 □ Telecom/Computer/Data Facility
02 □ Banking/Financial Services
03 □ Industrial/Manufacturing Facility
04 □ Commercial/Office/Retail/Condominium
05 □ Government/Military Facility
06 □ Hospital/Health Care Facility
07 □ Educational Facility
08 □ Pharmaceutical Facility
09 □ Hospitality/Convention Facility
10 □ Semi-Conductor Manufacturer
11 □ Chain Store/National Account
12 □ Airport/Mass Transit Facility
13 □ Municipal Services Facility
Professional Services:
20 □ Consulting Engineering
21 □ Data Center Designer
22 □ Builder
23 □ Co-location Facility
24 □ Real Estate Broker
25 □ Maintenance and Service Provider
26 □ Electrical Contractor
27 □ Software Developer/Reseller
28 □ Security
29 □ Internet Developer/Operator
98 □ Other (please specify) ______________________________
Advertisers’ Index
To receive free information about products and services mentioned in Mission Critical, visit www.missioncriticalmagazine.com/instantproductinfo and simply enter the info number from this ad index on the convenient form. Or use the Free Information Card on the opposite page.

AFCOM, www.datacenterworld.com, Page 62
Alber, http://bit.ly/alber_mc, (800) 851-4632, Page 26, Info #10
Altronic GTI, http://bit.ly/altronic_mc, (330) 545-9768, Page 17, Info #17
Aquatherm, http://bit.ly/aquatherm_mc, (801) 805-6657, Page 45, Info #107
ASCO Power Technologies, http://bit.ly/asco_mc, (800) 800-ASCO, Page BC, Info #12
Baldor Generators, http://bit.ly/baldor_mc, (479) 646-4711, Page 41, Info #13
BICSI, http://bit.ly/bicsi_org_mc, Page 37, Info #79
CDM Electronics, Inc., http://bit.ly/cdm_mc, Page 20, Info #103
Chil-Pak, http://bit.ly/chil-pak_mc, (480) 503-8040, Page 61, Info #102
Crenlo LLC – Emcor Enclosures, http://bit.ly/crenlo_mc, (507) 287-3535, Page 13, Info #90
Cummins Power Generation, http://bit.ly/cummins_mc, Page 21, Info #38
Data Aire, Inc., http://bit.ly/dataaire_mc, (714) 921-6000, Page 35, Info #15
Data Center Dynamics, www.datacenterdynamics.com, (800) 922-7249, Page 63
Eaton, http://bit.ly/eaton_whitepapers_mc, Page 5, Info #21
Ehvert, http://bit.ly/ehvert_mc, (416) 868-1933, Page 27, Info #110
Fike, http://bit.ly/fike_mc, (866) 758-6004, Page 39, Info #16
Geist, http://bit.ly/geistglobal_mc, (800) 432-3219, Page 7, Info #20
Great Lakes, http://bit.ly/greatlakes_mc, (866) TRY-GLCC, Page 9, Info #94
Leviton, http://bit.ly/leviton_mc, Page 31, Info #74
Miratech, http://bit.ly/miratech_mc, (918) 933-6271, Page 24, Info #109
Mitsubishi Electric Power Products, Inc., http://bit.ly/mitsubishi_mc, (724) 778-3134, Page IFC, Info #31
Motivair Corp., http://bit.ly/motivair_mc, (716) 689-0222, Page 40, Info #18
MTU Onsite Energy, http://bit.ly/mtu_mc, (800) 325-5450, Page 47, Info #19
PowerSecure, http://bit.ly/powersecure_mc, (866) 347-5455, Page 29, Info #108
Rittal Corporation, http://bit.ly/rittal_mc, Pages 48-49, Info #65
Schneider Electric, http://bit.ly/apc_mc, Page 23, Info #75
Sensaphone, http://bit.ly/sensaphone_mc, (877) 373-2700, Page 12, Info #63
Stulz Air Technology Systems, Inc., http://bit.ly/stulz_mc, Page 11, Info #41
System Sensor, http://bit.ly/systemsensor_mc, Page 19, Info #73
Tate Access Floors, http://bit.ly/tate_mc, (800) 231-7788, Page 15, Info #76
The Siemon Company, http://bit.ly/siemon_mc, Page 46, Info #95
Tripp-Lite, http://bit.ly/tripp-lite_mc, (888) 447-6227, Page 33, Info #78
United Metal Products, http://bit.ly/unitedmetal_mc, Page 59, Info #106
Universal Electric Corporation, http://bit.ly/universal_mc, +01 724 597 7800, Page 25, Info #33
Upsite Technologies, Inc., http://bit.ly/upsitetechnologies_mc, (888) 982-7800, Page 43, Info #42
Heard on the Internet

From Our Website

January 9, 2012 1:52:45 PM EST
datacenterpulse.org
Is Public Cloud Computing Green – Or at Least Greener than Traditional IT?
Unfortunately, there isn't a simple answer to the "Is Public Cloud Greener" question, as the only real answer is "it depends." At the core of the question is the assumption that because you're theoretically using fewer physical machines more effectively, you are thereby greener or more efficient.

January 5, 2012 9:24:18 AM EST
Jones Lang LaSalle Green Blog, posted by Carey Guerin
Global Effort for a Global Cause: Sustainability University
Today, our firm announces that we have reached our goal to employ 1,000 people with LEED and other energy and sustainability accreditations. Actually, as of Jan. 1, we employed 1,075 such individuals.

January 3, 2012
@wjbumpus Winston Bumpus
New head of NIST's IT lab says cloud, mobile to drive vision federalnewsradio.com/364/2682043/Cl… < Best wishes to Cita

December 29, 2011 11:21:00 AM EST
Data Center Design
UPS Configuration Availability Rankings
Peter Sacco, president and founder, PTS Data Center Solutions, recently wrote a new white paper on UPS Configuration Availability Rankings.

December 29, 2011
@JimmiBono Jim Wilson
Why 2012 will be year of the artist-entrepreneur gigaom.com/2011/12/29/why…

December 14, 2011 7:00:00 AM EST
News@Cisco
Beg, Borrow or Steal? Young Professionals, College Students Admit They'll Go To Extreme Measures for Internet Access Despite IT Policies, Identity Theft Risks
Tendencies of World Workforce's Next Generation to Ignore Online Threats Poses Challenge To Personal, Corporate Security; Magnify Findings in Cisco 2011 Annual Security Report

December 12, 2011 5:40:02 PM EST
ServerTech Blog, posted by Robert
There is a neat little collection of interesting data centers over at Colo & Cloud. My favorite has to be the James Bond style of Bahnhof's Pionen "White Mountain" data center. But I do respect the idea of re-purposing other buildings …

Most Popular (as of January 5)

TOP SEARCHES:
1. data center
2. UPS
3. EFFICIENCY
4. DATA CENTER
5. facility efficiency
6. cost
7. application
8. building
9. data center companies
10. data center design

MOST EMAILED ARTICLES:
1. HP Enables Organizations Worldwide to Expand Data Center Resources with HP POD http://bit.ly/yNzdCR
2. Jacobs Announces Acquisition of KlingStubbins http://bit.ly/sFCqLG
3. Alpha White Paper Closed Loop Cooling Validation Testing R.A.S.E.R. H.D. http://bit.ly/w2lMFw
4. Internap Opens High-Performance Data Center in Dallas/Fort Worth http://bit.ly/tybv7W
5. 451 Research Publishes Market-Sizing Report for the DCIM Software Market http://bit.ly/s7d78j
6. Clean Agents Protect Data Centers From Fire http://bit.ly/ucgP4m
7. Hot Aisle Insight: 2012 Predictions http://bit.ly/ucgP4m
8. Free Trial of Schneider Electric's DCIM Software Available http://bit.ly/u0yUYA
9. Eaton Takes Own Advice for New LEED Certified Data Centers http://bit.ly/s9iWFs
10. Uptime Institute Accepting Applications for 2012 Green Enterprise IT Awards http://bit.ly/ulbtQ9

MOST VIEWED ARTICLES:
1. Alpha White Paper Closed Loop Cooling Validation Testing R.A.S.E.R. H.D. 11/03/11 http://bit.ly/w2lMFw
2. Data Centers and the U.S.A. 11/21/11 http://bit.ly/vyNWny
3. Eaton Takes Own Advice for New LEED Certified Data Centers 11/20/11 http://bit.ly/s9iWFs
4. Jacobs Announces Acquisition of KlingStubbins 11/03/11 http://bit.ly/sFCqLG
5. Alpha Tests on the R.A.S.E.R. 11/14/11 http://bit.ly/xeow6y
6. Skanska To Partner With nlyte Software to Optimize Data Center Management 12/07/11 http://bit.ly/tLsxui
7. The Information Economy Demands Continuous Uptime 11/30/11 http://bit.ly/xjKdHq
8. The Incredible Shrinking Data Center 11/03/11 http://bit.ly/rNxckf
9. Classification of Data Center Management Software Tools 11/03/11 http://bit.ly/rtfsaY
10. Clean Agents Protect Data Centers From Fire 11/30/11 http://bit.ly/ucgP4m
WEBINAR
BEST PRACTICES IN DATA CENTER COOLING
Industry experts Vali Sorell of Syska Hennessy Group and John Musilli of Intel discuss the pros and cons of common data center cooling strategies, with an eye to helping you understand which cooling technologies can help you achieve the highest levels of energy efficiency and reliability appropriate to your facility.
ON-DEMAND UNTIL: February 7, 2012
Mr. Sorell and Mr. Musilli have years of industry experience, which helps them understand the different requirements of Internet, enterprise, and SMB facilities.
Sponsored by:
Free registration at: http://webinars.missioncriticalmagazine.com
Expert Speakers:
Vali Sorell, P.E., Vice President, National Critical Facilities Group, Chief HVAC Engineer, Syska Hennessy Group
John Musilli, Sr. Data Center Architect, Intel Corporation
Moderator: Kevin Heslin, Editor, Mission Critical
Register at: http://webinars.missioncriticalmagazine.com