Setting efficiency goals for data centers
Monday, April 12, 2010
For the past decade, we have been working to make our data centers as efficient as possible; they now use less than half the energy of the industry average. In the open letter below, I am very happy to welcome a group of industry leaders who collectively represent a majority of the world's most advanced data center operators. -Urs Hoelzle
Recently, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) added data centers to its building efficiency standard, ASHRAE Standard 90.1. This standard defines energy efficiency requirements for most types of buildings in America and is often incorporated into building codes across the country.
Data centers are among the fastest-growing users of energy, according to an EPA report, and most data centers have historically been designed and operated without regard to energy efficiency (for details, see this 2009 EPA Energy Star survey). Thus, setting efficiency standards for data centers is important, and we welcome this step.
We believe that for data centers, where the energy used to perform a function (e.g., cooling) is easily measured, efficiency standards should be performance-based, not prescriptive. In other words, the standard should set the required efficiency without prescribing the specific technologies to accomplish that goal. That’s how many efficiency standards work; for example, fuel efficiency standards for cars specify how much gas a car can consume per mile of driving but not what engine to use. A performance-based standard for data centers can achieve the desired energy saving results while still enabling our industry to innovate and find new ways to improve our products.
Unfortunately, the proposed ASHRAE standard is far too prescriptive. Instead of setting a required level of efficiency for the cooling system as a whole, the standard dictates which types of cooling methods must be used. For example, the standard requires data centers to use economizers — systems that use ambient air for cooling. In many cases, economizers are a great way to cool a data center (in fact, many of our companies' data centers use them extensively), but simply requiring their use doesn’t guarantee an efficient system, and they may not be the best choice. Future cooling methods may achieve the same or better results without the use of economizers altogether. An efficiency standard should not prohibit such innovation.
Thus, we believe that an overall, data center-level cooling efficiency standard should replace the proposed prescriptive approach, so that data center innovation can continue. The standard should set an aggressive target for the maximum amount of energy a data center may use for overhead functions like cooling. In fact, a similar approach is already being adopted in the industry: in a recent statement, data center industry leaders agreed that Power Usage Effectiveness (PUE) is the preferred metric for measuring data center efficiency, and the EPA Energy Star program already uses this method for data centers. As leaders in the data center industry, we are committed to aggressive energy efficiency improvements, but we need standards that let us continue to innovate while meeting (and, hopefully, exceeding) a baseline efficiency requirement set by the ASHRAE standard.
Chris Crosby, Senior Vice President, Digital Realty Trust
Hossein Fateh, President and Chief Executive Officer, Dupont Fabros Technology
James Hamilton, Vice President and Distinguished Engineer, Amazon
Urs Hoelzle, Senior Vice President, Operations and Google Fellow, Google
Mike Manos, Vice President, Service Operations, Nokia
Kevin Timmons, General Manager, Datacenter Services, Microsoft
Joseph F. Dzaluk, VP, Global Infrastructure and Resource Management, ITDelivery, IBM
Update (4/20/10): Added Joseph F. Dzaluk (IBM) as a signer.
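For readers unfamiliar with the metric named in the letter above: PUE is simply total facility power divided by the power delivered to the IT equipment. A minimal sketch in Python (the function and the numbers are illustrative, not from the letter):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal (every watt reaches the IT equipment);
    overhead such as cooling, UPS losses and lighting pushes it above 1.0.
    """
    return total_facility_kw / it_equipment_kw

# Illustrative: a facility drawing 1,500 kW in total, of which 1,000 kW
# reaches the servers, carries 500 kW of overhead.
print(pue(1500.0, 1000.0))  # 1.5
```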
I have written my own blog post (http://www.greenm3.com/2010/04/google-microsoft-amazon-nokia-digital-realty-trust-dupont-fabros-vs-ashrae-standard-901-requirement-for-economizers-li.html) referring to the above. I am encouraging readers to comment on this blog to allow Google to collect public opinion. I agree with the industry position Google has presented. -Dave Ohara
I strongly support Google et al. and their opposition to the proposed changes to ASHRAE 90.1. Every data center faces unique cooling challenges that vary based on region, rack density, HVAC architecture, and more. A focus on metrics and benchmarking, as Google notes, is the most holistic means of ensuring efficiency in the data center, not a "one size fits all" methodology. - Todd Boucher, Leading Edge Design Group
Please see the following response from ASHRAE:
The proposal to address data centers was made because a significant amount of energy is required to cool and ventilate computer rooms. Through this addendum, ASHRAE is proposing cost-effective measures in the prescriptive path of Standard 90.1 to save energy in data centers. The addendum includes eight exceptions to the requirements for the use of economizers in data centers. The addendum does not change the portion of the standard that already allows, through the Energy Cost Budget method (an alternate method of compliance), for data centers to be designed without economizers if other energy-saving methodologies, including power usage effectiveness (PUE), are employed. ASHRAE is committed to excellence in the consensus standard development process and encourages anyone with comments on the proposed data center addendum (addendum bu) to participate in the public review process. The draft addendum and instructions for submitting comments may be found at http://www.ashrae.org/technology/page/331. Comments are due to ASHRAE by April 19, 2010.
Jodi Scott
ASHRAE communications manager
As a matter of principle, I support an industry-wide, consensus-based position such as Google has presented here. Yet it should be noted that ASHRAE does recognize and allow for various exceptions to the measuring methods prescribed in the proposed amendment to 90.1, insofar as stipulated in the proposed language. The question would be which ought to be accepted as the "norm" and which is the "exception". - Wilhelm Wang, TransReg LLC, Westwood, NJ
PUE and DCiE are dimensionless ratios (kW/kW). They show data that may be valuable to the owner/operator, in terms of IT equipment power and total facility power, but they do not measure efficiency. For example, take two identical data centers side by side in one city, with the same square footage (say, 100,000 sf), the same number of servers, and the same HVAC equipment. If both have a PUE of 2.5, but one draws 1,000 kW in total against 400 kW of IT load while the other draws 2,000 kW against 800 kW, how can they be considered to have the same "efficiency"? Or, if one data center has a PUE of 2.5 (1,000/400 kW) and another a PUE of 2.0 (1,000/500 kW), which one is more "efficient"? The only difference is that one facility uses less electricity for non-IT functions; they have the same electric bill. More useful metrics would be kWh per square foot per year (or month, or day), peak watts per square foot, overall watts or kWh per teraflop of server capacity, Btu per square foot per year, watts or kWh per square foot of raised floor area per year, and so on.
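A short sketch of the arithmetic in that example, using the commenter's figures (the helper function is mine; nothing else is assumed):

```python
def pue(total_kw: float, it_kw: float) -> float:
    return total_kw / it_kw

# Identical PUE, very different absolute consumption:
print(pue(1000, 400))  # 2.5 -- 1,000 kW total for 400 kW of IT load
print(pue(2000, 800))  # 2.5 -- twice the electricity, same "efficiency" score

# Identical electric bill, different PUE:
print(pue(1000, 400))  # 2.5 -- 600 kW of non-IT overhead
print(pue(1000, 500))  # 2.0 -- 500 kW of overhead; more power reaches the IT gear
```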
Also, if you look at the Energy Star survey results, the EPA is supporting an "EUE" metric, which incorporates an incorrect "source energy factor" that distorts what is happening. The EPA does not support PUE.
ASHRAE 90.1 does have a performance path to meeting the standard. The prescriptive path that Google is commenting on is used by building owners/design teams that do not have the resources to (or choose not to) go through an energy analysis to optimize their design. The prescriptive path attempts to determine what will work best for most projects to achieve a baseline efficiency, without requiring much analysis.
- Building Energy Engineer, Enermodal Engineering
I also agree that a very prescriptive approach is not constructive; it mainly helps engineers demonstrate conformity by judging a process against a scope of work. (I'm an engineer, and I certainly know the risk-averse instincts that drive engineers.)
Energy efficiency of a data centre is easy to understand:
1. Reduce the number of servers and the amount of IT equipment needed to do a task (an immediate drop in power and cooling), e.g. virtual servers, non-rotating storage, and low-loss communications.
2. Whatever power you put in, you have to cool, and the less energy required for cooling the better. The best approach is to place your data centre in a cooler climate (below 35°C in summer) and use outside-air cooling.
3. Use energy-efficient support systems: efficient UPS and mechanical systems, correctly (or slightly over-) sized cabling and switchgear to reduce resistance, renewable energy purchases, and quality insulation.
Dave Mc
There is really no point in directly imposing a restriction on data center efficiency. If using too much electricity has an impact on the environment, closing coal mines and raising import taxes on coal should do the trick. This is just another attempt to redistribute power among various regulators.
'Unfortunately, the proposed ASHRAE standard is far too prescriptive. Instead of setting a required level of efficiency for the cooling system as a whole, the standard dictates which types of cooling methods must be used.'
You don't have to know anything whatsoever about data centers or cooling systems to know that dictating particular methods is the wrong way to go. If you want increased efficiency, require increased efficiency. Counting on technology NOT improving has NEVER succeeded, and it never will. Google et al. are right on in opposing these standards.
Great posts, all. Fascinating thread.
We've been designing/refitting an 80,000 sq ft, 12-15 MW co-lo facility atop a large aquifer: a 100% groundwater-cooled data center, where chilled (48.5°F) groundwater is pumped and circulated through CRAHs. N+X. It is an open-loop system, with the added mechanical efficiency of a siphon effect created by the recapture wells. For every 1 MW of critical load, our total cooling load is under 65 kW!
Very efficient. Zero water loss, to boot.
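Taking those figures at face value, a rough sketch (the 65 kW per 1 MW ratio is the poster's claim; the "cooling-only PUE term" framing is my assumption):

```python
critical_load_kw = 1000.0  # 1 MW of critical (IT) load, per the figures above
cooling_load_kw = 65.0     # claimed total cooling load for that 1 MW

overhead = cooling_load_kw / critical_load_kw
print(f"cooling overhead: {overhead:.1%}")       # 6.5%
print(f"cooling-only PUE term: {1 + overhead}")  # 1.065; the full PUE would be
# higher once UPS, power distribution and other overheads are counted
```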
We look forward to proving our theoretically low PUE through operating history over time.
Capital and operating costs are significantly offset.
If any of you might be interested in participating in a test bed / pilot of this project in what we call Dataville, please feel free to get in touch with me directly. I'm happy to share our engineering diagrams for 100% groundwater cooling freely, open-source style. Perhaps you might help improve and refine them.
This year-round solution requires no chiller plant and even eliminates the need for airside economizers.
I would be delighted and honored if any of the subject matter experts and thought leaders who posted to this blog would critique our groundwater cooling schematic or contribute engineering rigor to it.
Incidentally, we are also recipients of a $2 million CANARIE research grant to create a 100% carbon-neutral next-gen Internet / utility computing platform across Canada, and are the East Coast IX / node in this project - the GreenStar Network. The project just started.
We are opening GSN to peering and benchmarking arrangements, and welcome your interest.
Anton E. Self, Bastionhost Ltd.
direct: 902-482-6466
www.bastionhost.com
I would strongly advise Google, Amazon, MS, and similar premier companies to use the performance-based compliance model offered in the same standard. The prescriptive path is what I would expect to see applied to a small data center, under 1,000 sf, tucked into the basement of a 100,000 sf office building, never to a purpose-built, engineered facility. Is Google concerned that local officials will adopt only the prescriptive path and strike the performance path when implementing local code? That is something to fear - never underestimate the ability of a local government to screw up badly...
ASHRAE's forte is the use of air to heat and cool, and quite naturally it will prescribe an air-based solution to cool data centers.
The sad thing about the prescription is that it is far from optimal. Air cooling in any form is horribly inefficient. For example, Facebook estimates that with free cooling its energy cost for cooling is about 15% of IT load. But that IT load includes server fan energy, which accounts for about 10% of the total. Moving fan energy to the cooling side of the equation shows that the real cost of cooling is about 38%.
The other problem with air cooling is that it is difficult to manage. Baffles, screens, raised floors, actuators, humidifiers, dehumidifiers, etc. are all required. Tight control loops are impossible to realize because air moves in complex ways and time constants are very long.
Our tests of liquid-based, direct-touch server cooling show that cooling overhead in a climate similar to Facebook's location (Pacific Northwest) would be about 5% of the real IT load. Control is also much easier, as time constants are much shorter and fluids move in predictable ways through set channels.
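A sketch of the accounting shift being described, using the commenter's 15% and 10% figures (exactly what the 10% is a fraction of is ambiguous, so the result is illustrative only):

```python
it_load = 1.0                # IT load as metered, server fans included (normalized)
cooling = 0.15 * it_load     # free-cooling energy, ~15% of metered IT load
fan_energy = 0.10 * it_load  # server fans, assumed here to be 10% of the IT load

# Reclassify fan energy as cooling rather than IT:
true_it = it_load - fan_energy
true_cooling = cooling + fan_energy
print(f"{true_cooling / true_it:.1%}")  # 27.8% under these assumptions; the ~38%
# figure above presumably reflects a larger fan share or a different baseline
```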
If the 90.1 changes are ratified, the only way innovative data center cooling solutions can be approved by the individual states' planning organizations is through the mechanism outlined in Chapter 11. The tools available to perform the calculations are both arcane and out of date. To all intents and purposes, this kills any chance of innovation.
ASHRAE is opening itself up to questions about its competence to be the arbiter of data center cooling design. Its actions to date also have the potential to hurt America's competitiveness worldwide.
In the interest of full disclosure, I should point out that my company, Clustered Systems Company, Inc. designs and manufactures advanced direct touch liquid cooling systems.