NREL Readies Launch Of World’s Most Energy-Efficient High-Performance Data Center

Looking to walk the clean-energy and energy-efficiency talk, the US Department of Energy’s National Renewable Energy Laboratory (NREL) is about to flip the switch on the world’s most energy-efficient high-performance computing (HPC) data center.

Requiring as much as 1 megawatt (MW) of electrical power to run, NREL’s ultra-efficient Energy Systems Integration Facility (ESIF) is the first HPC data center “dedicated solely to advancing energy systems integration, renewable energy research, and energy efficiency technologies.” It will also be “the first petascale HPC to use warm-water liquid cooling and reach an annualized average power usage effectiveness (PUE) of 1.06 or better.”
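For readers unfamiliar with the metric, power usage effectiveness is simply total facility energy divided by the energy delivered to the IT equipment, so a PUE of 1.06 means only about 6% overhead for cooling and power distribution. The minimal sketch below puts that in perspective against a conventional data center; the roughly 1 MW load and 1.06 PUE come from NREL, while the 1.8 comparison PUE is an illustrative assumption, not a figure from the press release.

```c
/* Minimal sketch of what an annualized PUE of 1.06 implies.
 * PUE = total facility energy / IT equipment energy.
 * The ~1 MW IT load and 1.06 PUE come from the article; the 1.8 PUE for a
 * conventional data center is an illustrative assumption. */
#include <stdio.h>

int main(void)
{
    const double it_load_kw     = 1000.0;   /* ~1 MW HPC load (from the article) */
    const double hours_per_year = 8760.0;
    const double pue_esif       = 1.06;     /* NREL's target annualized PUE      */
    const double pue_typical    = 1.80;     /* assumed conventional data center  */

    double it_energy_mwh    = it_load_kw * hours_per_year / 1000.0;
    double esif_total_mwh   = it_energy_mwh * pue_esif;
    double typical_total_mwh = it_energy_mwh * pue_typical;

    printf("IT energy per year:        %.0f MWh\n", it_energy_mwh);
    printf("ESIF facility energy:      %.0f MWh (PUE %.2f)\n", esif_total_mwh, pue_esif);
    printf("Conventional facility:     %.0f MWh (PUE %.2f)\n", typical_total_mwh, pue_typical);
    printf("Overhead avoided per year: %.0f MWh\n", typical_total_mwh - esif_total_mwh);
    return 0;
}
```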

With a price tag of $10 million, NREL’s HPC data center is not only cleaner, greener, and more energy efficient than a conventional data center of the same kind; it actually cost less to build and will be much cheaper to operate, according to an NREL press release.

Credit: NREL

The NY Times’ “Power, Pollution and the Internet”

There are tens of thousands of data centers sucking up electrical power across the globe today, according to a year-long study undertaken by the New York Times, as Internet industry stalwarts and up-and-coming data center providers look to satisfy and capitalize on people’s seemingly insatiable demand for more news, entertainment, data and information. An estimated one-third of the world’s data centers are in the US, accounting for some 1.5% of national electricity usage as of 2006, according to the US Environmental Protection Agency (EPA).

Despite high-profile green data center initiatives by the likes of Apple and Google, the first in a planned “Power, Pollution and the Internet” series based on a year-long examination by The New York Times “has revealed that this foundation of the information industry is sharply at odds with its image of sleek efficiency and environmental friendliness.”

“Most data centers, by design, consume vast amounts of energy in an incongruously wasteful manner, interviews and documents show,” according to a report from the NY Times’ James Glanz.

“Online companies typically run their facilities at maximum capacity around the clock, whatever the demand. As a result, data centers can waste 90 percent or more of the electricity they pull off the grid.”

Data centers’ energy, carbon, and environmental footprints can be reduced significantly, however, yielding economic, social, and environmental benefits. NREL’s work with leading private-sector industry participants in designing and building the world’s most energy-efficient HPC data center demonstrates how all this can be taken from the drawing board to physical reality.

NREL’s “Chips-to-Bricks” Approach to Building an Ultra-Efficient HPC Data Center

Taking a holistic, integrated approach to data center sustainability, NREL worked with HPC data center and architectural design industry leaders, including HP, Intel, SmithGroupJJR, and Integral Group, in designing and building the new facility.

Energy efficiency is built in right from the start and extends throughout NREL’s ESIF. Rather than using the typical 208-volt (V) electricity, high-voltage 480 VAC electricity is supplied directly to the HPC data center’s server racks. That reduces the power electronics equipment required and cuts power conversions and losses, NREL explains in the press release.
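To see why skipping the extra 480-to-208 V transformation step matters, consider a rough sketch like the one below. The per-stage efficiencies are assumptions chosen purely for illustration; NREL’s press release does not spell out the actual conversion chain or its losses.

```c
/* Rough sketch of why skipping a 480 V -> 208 V transformation step saves energy.
 * The per-stage efficiencies below are assumptions for illustration only;
 * the article does not give NREL's actual conversion chain or losses. */
#include <stdio.h>

int main(void)
{
    const double it_load_kw = 1000.0;   /* ~1 MW IT load, per the article        */
    const double xfmr_eff   = 0.98;     /* assumed 480->208 V transformer        */
    const double psu_eff    = 0.94;     /* assumed rack power supply efficiency  */

    /* Conventional chain: 480 V -> transformer -> 208 V -> rack power supplies */
    double conventional_input_kw = it_load_kw / (xfmr_eff * psu_eff);

    /* ESIF-style chain: 480 V delivered directly to the rack power supplies */
    double direct_input_kw = it_load_kw / psu_eff;

    printf("Power drawn, 208 V distribution: %.0f kW\n", conventional_input_kw);
    printf("Power drawn, 480 V direct:       %.0f kW\n", direct_input_kw);
    printf("Distribution loss avoided:       %.0f kW\n",
           conventional_input_kw - direct_input_kw);
    return 0;
}
```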

Using a liquid-cooled rather than air-cooled system also translates into a cleaner, greener, more efficient, and less costly HPC data center. Energy-efficient pumps, for example, replace “noisy, less-efficient fans,” NREL points out.

Explained Steve Hammond, director of NREL’s Computational Science Center:

“First, we wanted an energy-efficient HPC system appropriate for our workload. This is being supplied by HP and Intel. A new component-level liquid cooling system, developed by HP, will be used to keep computer components within safe operating range, reducing the number of fans in the backs of the racks.”

Slated to reach petascale capacity this summer, NREL’s HPC data center system is to include scalable HP ProLiant SL230 and SL250 Generation 8 (Gen8) servers based on eight-core Intel Xeon E5-2670 processors, as well as next-generation servers using future 22 nanometer (nm) Ivy Bridge architecture-based Intel Xeon processors and Intel Many Integrated Core architecture-based Xeon Phi coprocessors, NREL explains in the press release.

“The methods of code optimization for Xeon Phi are identical to what one does to make the most of Xeon processors,” Intel general manager of high-performance computing Stephen Wheat elaborated.

“Finely tuned optimizations for Xeon Phi almost always result in a better-performing source code for Xeon processors. As the optimized and tuned application is run in production, the achieved performance per watt on both Xeon Phi and Xeon processors allows achieving the results with the lowest energy use.”
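Wheat’s point, in code terms, is that tuning that helps Xeon Phi’s wide vector units, such as writing clean, unit-stride loops the compiler can vectorize, also speeds up ordinary Xeon processors. The fragment below is a generic illustration of that kind of optimization, not code drawn from NREL’s actual workload.

```c
/* Illustrative sketch of the kind of optimization Wheat describes: a loop tuned
 * to vectorize well for Xeon Phi's wide vector units also runs faster on
 * regular Xeon processors.  Generic example, not NREL's code. */
#include <stddef.h>

/* Unit-stride, dependence-free loop with restrict-qualified pointers so the
 * compiler (or the OpenMP SIMD directive) can vectorize it on either target. */
void scaled_add(double *restrict y, const double *restrict x,
                double a, size_t n)
{
    #pragma omp simd
    for (size_t i = 0; i < n; ++i)
        y[i] += a * x[i];
}
```

Built with OpenMP SIMD support enabled, the same source vectorizes on either architecture; without it, the pragma is simply ignored and the code still compiles.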

The First Warm-Water Data Center Cooling System

Aiming to minimize the energy required for cooling NREL’s supercomputer, the ESIF team took an unconventional approach, deciding to push the envelope a bit by installing a warm-water cooled high-performance system. “In traditional computer systems, you have a mechanical chiller outside that delivers cold water into the data center, where air-conditioning units blow cold air under a raised floor to try to keep computer components from overheating,” Hammond explained.

“From a data center perspective, that’s not very efficient; it’s like putting your beverage on your kitchen table and then going outside to turn up the air conditioner to get your drink cold.”

“NREL’s ultimate HPC system is currently under development and will be a new, warm-water cooled high-performance system,” added Ed Turkel, group manager of HPC marketing at HP. “It will be a next-generation HPC solution that’s specifically designed for high power efficiency and extreme density, as well as high performance — things that NREL requires.”

Capturing and Using Data Center Heat

Capturing what normally would be vented waste heat and using it to heat ESIF office space and labs was a core aspect of the design and construction process, Hammond continued.

“Last but not least, we wanted to capture and use the heat generated by the HPC system. Most data centers simply throw away the heat generated by the computers. An important part of the ESIF is that we will capture as much of the heat as possible that is generated by the HPC system in the data center and reuse that as the primary heat source for the ESIF office space and laboratories. These three things manifest themselves in an integrated ‘chips-to-bricks’ approach.”

The design point for incoming water temperature at the HPC data center is high enough to “eliminate compressor-based cooling systems and instead use cooling towers,” according to NREL.

“In a data center, this is comparable to a homeowner using an energy-efficient swamp cooler rather than an air conditioner. In addition, the pump energy needed to move liquid in the cooling system is much less than the fan energy needed to move the air in a traditional data center. Water is about 1,000 times more effective than air in terms of the thermodynamics, or the heat exchange.”

Entering the data center server cooling system at around 75ºF, the water exiting will be in excess of 100ºF. The water temperature and volume will be sufficient to serve as the primary source of heating for the ESIF’s office and lab spaces. The resulting heat by-product will even be enough to warm the building’s front plaza and walkway and help melt snow and ice. In that way, it will also improve employee health and safety, NREL notes.
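As a back-of-envelope check on those numbers, roughly 1 MW of server heat and a 75ºF-to-100ºF temperature rise imply a fairly modest water flow, on the order of a few hundred gallons per minute. The sketch below works that out; the load and temperatures come from the article, the water properties are standard values, and the result is illustrative rather than NREL’s actual design flow.

```c
/* Back-of-envelope check: what water flow carries ~1 MW of server heat with a
 * 75 F supply and ~100 F return, as described in the article?  Water properties
 * are standard values; the result is illustrative, not NREL's design number. */
#include <stdio.h>

int main(void)
{
    const double heat_w   = 1.0e6;                  /* ~1 MW of IT heat (article)     */
    const double t_in_f   = 75.0, t_out_f = 100.0;  /* supply/return temps (article)  */
    const double cp_water = 4186.0;                 /* specific heat, J/(kg*K)        */

    double delta_k  = (t_out_f - t_in_f) * 5.0 / 9.0;  /* Fahrenheit difference -> K  */
    double kg_per_s = heat_w / (cp_water * delta_k);    /* required mass flow          */
    double gpm      = kg_per_s * 15.85;                 /* ~1 kg/s of water ~ 1 L/s    */

    printf("Required flow: %.1f kg/s (~%.0f gal/min)\n", kg_per_s, gpm);
    return 0;
}
```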

Greater Data Center Energy Efficiency at Lower Cost

Seeing the green design and building plan through construction actually cost less than building a conventional HPC data center. Adding significantly to those savings are the lower operating costs, power consumption, greenhouse gas emissions, and overall environmental impact that will accrue over the building’s useful life. According to Hammond,

“Compared to a typical data center, we may save $800,000 of operating expenses per year. Because we are capturing and using waste heat, we may save another $200,000 that would otherwise be used to heat the building. So, we are looking at saving almost $1 million per year in operation costs for a data center that cost less to build than a typical data center.”
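Hammond’s $800,000 figure is plausible on the back of an envelope: at an assumed electricity price, the overhead avoided by running at a 1.06 PUE rather than a conventional facility’s comes out in the same ballpark. In the rough cross-check below, the 1 MW load and 1.06 PUE are NREL’s numbers; the 1.8 comparison PUE and $0.12/kWh price are assumptions for illustration only.

```c
/* Rough cross-check of the "$800,000 per year" figure.  The 1 MW load and
 * 1.06 PUE come from the article; the comparison PUE of 1.8 and the
 * $0.12/kWh electricity price are assumptions for illustration only. */
#include <stdio.h>

int main(void)
{
    const double it_load_kw     = 1000.0;   /* ~1 MW IT load (article)        */
    const double hours_per_year = 8760.0;
    const double pue_esif       = 1.06;     /* article                        */
    const double pue_typical    = 1.80;     /* assumed conventional facility  */
    const double usd_per_kwh    = 0.12;     /* assumed electricity price      */

    double overhead_saved_kwh =
        it_load_kw * hours_per_year * (pue_typical - pue_esif);

    printf("Cooling/distribution energy avoided: %.0f kWh/yr\n", overhead_saved_kwh);
    printf("Approximate value:                   $%.0f/yr\n",
           overhead_saved_kwh * usd_per_kwh);
    return 0;
}
```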

NREL’s design and construction partners see an “insatiable” demand when it comes to improving the energy efficiency and reducing resource use and the environmental impact of data centers. “People will have a need for ever-greater performance,” Wheat said.

“One of the things we are mindful of is that while our systems are becoming denser in terms of footprint, they are becoming more power efficient. NREL is the premier place to demonstrate a means to continue the growth of HPC capability in an environmentally friendly way.”

Added HP’s Turkel,

“To get to the levels of scale that our customers are demanding of us, we have to fundamentally change the dynamic around power, density, and performance. We have to be able to do it in a much smaller package using less energy. This project is a step in that direction — and it’s apropos that NREL is a partner in the effort.”

“eBay, Facebook, and others have data centers that are water capable, but there aren’t any products on the market now that are providing liquid cooling,” according to Hammond. “NREL is getting the first product that is direct-component liquid cooled. We’re going to show it’s possible, efficient, safe, and reliable.”

