
17/09/2019

The evolution of data center cooling




There are around 8.5 million data centers in the world, and roughly 3 million of them are located in the US alone, if you count the smaller ones, too. That’s nearly one data center for every 100 people. Together, they account for nearly 2% of all the electricity consumed in America.

What happens to all that power? Essentially 100% of the electricity consumed by IT hardware is converted into heat. But servers don’t like heat, so you need powerful (but efficient) data center cooling. Conventional cooling methods, however, require power too. So much power, in fact, that cooling can account for over 50% of a data center’s total power requirement.

That basically makes our data centers enormous electric heaters. And we keep building more of them. It’s a trend that threatens to make sustainability more difficult for businesses and even countries. Luckily, there are more efficient alternatives to conventional data center cooling methods. The measuring stick used to determine how efficiently a data center’s cooling (and other supporting infrastructure) operates is known as Power Usage Effectiveness, or PUE: the ratio of the total power used by the data center to the power delivered to the IT hardware. The lower your PUE, the better. A rating of 2.0 is considered standard, 1.4 is good, and 1.0 is the theoretical ideal.
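
To make the ratio concrete, here is a minimal sketch in Python; the facility and IT power figures are purely hypothetical placeholders, not measurements from any real site.

    # Power Usage Effectiveness: total facility power divided by the power that
    # actually reaches the IT hardware. The figures below are hypothetical.
    def pue(total_facility_power_kw: float, it_power_kw: float) -> float:
        return total_facility_power_kw / it_power_kw

    # A facility drawing 1,500 kW in total, of which 1,000 kW reaches the servers:
    print(pue(1500.0, 1000.0))  # 1.5 -> every watt of IT load carries 0.5 W of overhead

With those numbers the facility has a PUE of 1.5, meaning cooling, power distribution and other overheads add 50% on top of the IT load itself.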


Doing it the old-fashioned way: raised floor cooling

For the last half century or so, most data centers have been cooled with pressurized air from a raised floor. A CRAC (Computer Room Air Conditioner) or a CRAH (Computer Room Air Handler) cools air that is moved by constant-speed fans. This type of system worked fine when compute densities were low and efficiency was not at the top of anyone’s priority list. But the disadvantages have become more significant over time: PUEs of over 2.0 are common with this approach, for instance.

A big problem with this type of cooling is the temperature gradient from the bottom of the rack to the top, known as stratification. The servers near the floor get the coolest air, while the servers at the top of the rack often have a considerably warmer inlet temperature. If you increase the airflow to mitigate this problem, efficiency decreases because air now streams past the face of the server rack and mixes with the hot return air, an effect known as bypass. Consequently, containment strategies such as hot aisle/cold aisle configurations were developed. These layouts also reduce recirculation, which occurs when hot air from the back of the server gets drawn back in at the front, potentially resulting in dangerous equipment temperatures.


Modern precision cooling solutions

Cooling technology has come a long way since the days of constant-speed fans blowing air into a raised floor. The increase in density has made these advances necessary. With cold air and hot air containment designs, the limitations of stratification and bypass have been largely overcome. In-row cooling units allow for even greater efficiency: variable-speed fans in cooling units integrated into the rows themselves let data center operators control temperatures and energy consumption with much greater precision. The energy consumed by these variable-speed fans is significantly lower than that of systems using constant-speed fans.
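
Much of that saving comes from the fan affinity laws, under which fan power scales roughly with the cube of fan speed. A small illustration in Python, using a purely hypothetical rated fan power:

    # Fan affinity law: power scales roughly with the cube of speed.
    # The 10 kW rated power is a hypothetical figure for illustration only.
    def fan_power_kw(rated_power_kw: float, speed_fraction: float) -> float:
        return rated_power_kw * speed_fraction ** 3

    print(fan_power_kw(10.0, 0.7))  # ~3.4 kW: 70% speed needs only about a third of full power

So a fan that can slow to 70% of full speed during periods of lower load is not saving 30% of its energy but closer to two thirds.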


The use of economizers is on the rise

It’s hard to imagine that, after decades of building data centers, the future of cooling them could be so open and undecided. The underlying physics has not changed, yet approaches still vary greatly. Part of this can be explained by climatic differences between geographic locations around the globe. One of the most promising strategies for saving energy when cooling data centers is known as free cooling, or economizing. In this approach, outside air is used in a cooling tower or to reject heat from a chiller. The method is of course dependent on the air outside the data center being cool enough to provide the necessary cooling, so cooler climates have a decided advantage here.
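
How many hours per year an economizer can actually carry the load depends on the local climate. Below is a minimal sketch of that assessment, assuming a hypothetical usable-temperature threshold and toy temperature data; a real study would use hourly climate files and, for evaporative systems, wet-bulb rather than dry-bulb temperatures.

    # Count the hours in which outdoor air is cool enough for free cooling.
    # The 18 degC threshold and the temperature list are hypothetical placeholders.
    def free_cooling_hours(outdoor_temps_c, max_usable_temp_c=18.0):
        return sum(1 for t in outdoor_temps_c if t <= max_usable_temp_c)

    hourly_temps = [4.0, 7.5, 12.0, 16.0, 21.0, 26.0, 19.0, 11.0]  # toy data, not a climate file
    print(free_cooling_hours(hourly_temps), "of", len(hourly_temps), "hours allow free cooling")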


Electricity is not the only resource a data center consumes

Some alternative methods of cooling data centers that use cooling towers consume significant quantities of water. Yet many data centers are in drought-stricken parts of the world. Even when they’re not, water is a valuable natural resource that we must always try to conserve. This would, on the surface at least, appear to be an advantage for dry cooling systems.

Taking a more holistic view, however, we must also consider that the steam cycle of a fossil fuel power plant consumes water as well. So much water, in fact, that the extra power required by dry cooling systems results in higher total water consumption for dry cooling than for onsite cooling towers.
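
The accounting behind that comparison can be sketched as follows: onsite water use plus the water consumed upstream at the power plant for the electricity the cooling system draws. All parameter names here are illustrative, and which option comes out ahead depends entirely on the site-specific figures you plug in.

    # Holistic water accounting, per the reasoning above: onsite water consumption plus
    # the water consumed at the power plant to generate the cooling system's electricity.
    # All parameter names are illustrative; plug in site-specific figures.
    def total_water_l(onsite_water_l: float,
                      cooling_energy_kwh: float,
                      plant_water_l_per_kwh: float) -> float:
        return onsite_water_l + cooling_energy_kwh * plant_water_l_per_kwh

    # A wet cooling tower has high onsite water use but lower cooling_energy_kwh;
    # a dry cooler has zero onsite water use but higher cooling_energy_kwh.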


Fresh air cooling tempts with a low PUE

The only thing that is certain is that new approaches will be necessary. And there is no shortage of them! One of the most promising, but also most problematic, approaches uses fresh outside air to cool the data center. With fresh air, PUEs very close to 1.0 are possible, as Yahoo has demonstrated with its “chicken coop” design. But bringing fresh air into the data center involves several challenges. First, outdoor temperatures need to be cool enough to provide safe inlet temperatures for the IT equipment; otherwise, backup cooling systems are necessary. Humidity and dust are difficult problems to solve as well.

If humidity in the data center is too low, the risk of electrostatic discharge (ESD) increases. Recent research, however, downplays the risk that ESD poses to IT equipment. When servers are mounted in the rack especially, the risk of damage is low. And when technicians are working on data center equipment, it is best practice to wear a grounded wrist strap anyway.

Excess humidity, on the other hand, may pose a greater risk. As air becomes more humid, it carries more dust particles, which then settle on IT equipment; as they accumulate, they insulate components and make cooling more difficult. Humid air can also deposit zinc particles on circuits, eventually leading to shorts. Thus, air that is too humid requires dehumidification, which in turn consumes power and makes that low PUE less attractive.

Another considerable issue with fresh air cooling is dust and smoke. Depending on the particulate matter in the air, fresh air cooling can require expensive filtration. And even with good filtration in place, there is a risk that the extremely sensitive fire detection and suppression systems in a modern data center will be triggered by air pollutants brought in with the fresh air.


Immersion cooling

Power transformers require more cooling than air alone can provide. This is achieved by submerging them in a dielectric fluid, generally mineral oil. Since the oil conducts heat but not electricity, it does not damage the components. Immersion cooling can also be used for servers: heat is removed from the electronics much more efficiently than with air cooling, and vendors are now offering immersion cooling solutions for data center applications.


How cool does your equipment truly need to be?

IT equipment is rated according to classes defined by a technical committee of ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers). These classes (A1 through A4) are part of a set of guidelines ASHRAE has developed to define suitable temperature and humidity envelopes for data center hardware. A1 has the narrowest allowable temperature range, A4 the widest.
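
For orientation, a small lookup of the approximate allowable dry-bulb ranges can be sketched as below. The values are quoted from memory as an illustration only; always check the current edition of the ASHRAE thermal guidelines before basing design decisions on them.

    # Approximate allowable inlet dry-bulb ranges (degC) per ASHRAE class.
    # Quoted from memory for illustration; verify against the current ASHRAE guidelines.
    ALLOWABLE_RANGE_C = {"A1": (15, 32), "A2": (10, 35), "A3": (5, 40), "A4": (5, 45)}

    def inlet_ok(ashrae_class: str, inlet_temp_c: float) -> bool:
        low, high = ALLOWABLE_RANGE_C[ashrae_class]
        return low <= inlet_temp_c <= high

    print(inlet_ok("A2", 33.0))  # True: 33 degC is still inside the A2 allowable envelope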

But many IT hardware makers are now producing A2-class or even A3-class equipment, which can give data centers greater flexibility in their cooling choices. It is therefore wise to examine the actual cooling requirements of your equipment instead of merely following the generic recommendations for classes A1-A4. Yes, the hotter your equipment runs, the less reliable it will be. But the cost of a hardware failure is calculable. By forecasting the increased risk of failure at higher temperatures, you can weigh the savings gained from cooling less against the cost of an increased failure rate. And many companies replace their IT equipment after as little as three years anyway, in which case the impact of higher temperatures may not be significant.
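
One way to frame that calculation is sketched below; every input (the savings figure, the failure rates, the fleet size and the cost per failure) is a hypothetical placeholder, not data from any real deployment.

    # Trade-off sketch: annual cooling savings from running warmer versus the expected
    # cost of additional hardware failures. All inputs are hypothetical placeholders.
    def net_benefit(cooling_savings_per_year: float,
                    baseline_failure_rate: float,
                    warmer_failure_rate: float,
                    servers: int,
                    cost_per_failure: float) -> float:
        extra_failures = (warmer_failure_rate - baseline_failure_rate) * servers
        return cooling_savings_per_year - extra_failures * cost_per_failure

    # Toy example: 40,000 saved on cooling per year, annual failure rate rising from 2% to 3%
    # across 1,000 servers, 5,000 per failure (replacement, labour, downtime). The result is
    # -10,000, so in this particular toy case the extra failures outweigh the energy savings.
    print(net_benefit(40_000, 0.02, 0.03, 1_000, 5_000))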


Experimentation will lead to greater efficiency

The most efficient cooling designs use free cooling or fresh air cooling in some form. Often, CRAC units are still required as a backup for when outdoor conditions (hot, humid summer days) cannot provide enough cooling to meet the data center’s requirements. One thing we are seeing in all of these pioneering designs, however, is that the building structure always plays an integral role. The days of planning a building, stuffing as much IT equipment into the white space as possible and thinking about cooling later may be over. New data center concepts will need to be architected with cooling in mind from the outset.

A Dutch company, for instance, has implemented a system that removes heat from the building using a giant rotating aluminum disc known as the Kyoto disc. Half of the disc sits inside the building and half outside. As the disc rotates, it warms up inside and releases the collected heat to the outdoor air before returning. The disc itself is rather large, however, and requires considerable space for the interior and exterior areas it rotates through.

Another option, instead of simply dumping the waste heat into the environment, is to make good use of it. Projects already exist in which data centers have been integrated with district heating and cooling systems, so that waste heat from the data center is used to heat homes. This may be the most sustainable example of all, and district cooling systems offer a very efficient way to cool the data center, too. These are exciting times: there are great competitive advantages to be gained by achieving high cooling efficiency, and the question of how best to achieve it is still completely open.
 

News Source: UPS & Data Center Infrastructure