Keeping pace with the times - cooling for HPC, hyperscale, and Edge computing

Cooling: There’s no longer one answer



Translator's Note

From air cooling to liquid cooling, and from precision air conditioners to fan-wall units, data center cooling systems keep changing. Behind this flourishing of cooling technologies are the drive to cut energy use, the demands of ultra-dense racks, and the rise of new types of facilities such as Edge data centers.



Data centers used to be uniform. Today there are many different kinds of facilities - and an array of techniques to keep them cool.


In December 2020, when Japanese giant NTT opened a data center in London, one big item of equipment was missing. Data center managers from a few years ago would have been surprised to see that the 32MW building in Dagenham has no air conditioning units.




In the last few years, the old consensus on how to cool a data center has gone. And there are further changes on the way.


"The latest technology removes the need for compressors and refrigerants," said Steve Campbell-Ferguson, SVP design and engineering EMEA for NTT Global Data Centres, at the virtual launch event of the Dagenham facility.


This was not the first data center to be built this way, by a long chalk. In 2015, Digital Realty claimed that a 6MW London facility it built for Rackspace was the first in the UK to have no mechanical cooling.



And there are simple reasons why operators should want to move in that direction. Data center designers want to reduce the amount of energy spent removing heat from the IT load in the building. Before energy conservation was a big concern, data centers were built with air conditioning units which could consume as much energy as the IT racks themselves.


In the 21st century, this “wasted” energy became a key concern, and builders aim to reduce it as close to zero as possible, driving towards a PUE figure of 1.0. Replacing air conditioning units with more passive cooling techniques is one way of doing that, and can reduce the energy used in cooling by around 80 percent: NTT promised a PUE of 1.2 this year, while Rackspace claimed 1.15 five years ago.
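To put the PUE arithmetic in concrete terms, here is a minimal sketch in Python. The 1MW IT load is an illustrative assumption; the "cooling consuming as much as the IT itself" baseline and the roughly 80 percent reduction come from the article, and the resulting figures happen to land near the quoted PUEs.

```python
# Hypothetical worked example of the PUE arithmetic described above.
# The 1,000kW IT load is an assumption for illustration, not an NTT or Rackspace figure.

def pue(it_kw: float, cooling_kw: float, other_overhead_kw: float = 0.0) -> float:
    """PUE = total facility power / IT power."""
    return (it_kw + cooling_kw + other_overhead_kw) / it_kw

it_load = 1000.0                        # assumed 1MW of IT load
legacy_cooling = 1000.0                 # CRAC units consuming as much as the IT itself
passive_cooling = legacy_cooling * 0.2  # roughly 80 percent less energy with passive cooling

print(f"Legacy design PUE:  {pue(it_load, legacy_cooling):.2f}")   # 2.00
print(f"Chiller-free PUE:   {pue(it_load, passive_cooling):.2f}")  # 1.20
```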


The change does not just reduce energy consumption: it also reduces the amount of embodied energy and materials in the building, and cuts the use of refrigerants, which are themselves potent greenhouse gases.


This option doesn’t work everywhere in the world: in warm or humid climates, there will be a large number of days in the year when chillers are needed.
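How often chillers are needed comes down to how many hours the outside air is too warm to do the job on its own. The sketch below counts those hours under a simplified dry-bulb rule; the setpoint, approach margin, and temperature traces are assumptions for illustration (real designs use wet-bulb and psychrometric analysis).

```python
# Minimal sketch: count the hours when outdoor air alone cannot meet the supply temperature.
# The setpoint, approach, and temperature traces below are illustrative assumptions.

def chiller_hours(hourly_temps_c, supply_setpoint_c=24.0, approach_c=4.0):
    """Hours in which mechanical cooling would be needed under a simple dry-bulb rule."""
    threshold = supply_setpoint_c - approach_c
    return sum(1 for t in hourly_temps_c if t > threshold)

mild_climate = [12, 14, 16, 18, 19, 21, 17, 15]   # made-up hourly temperatures, degC
hot_climate  = [26, 28, 31, 33, 34, 30, 27, 25]

print(chiller_hours(mild_climate), "of 8 hours need a chiller")   # 1
print(chiller_hours(hot_climate),  "of 8 hours need a chiller")   # 8
```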


But there’s a principle here. At the start of the century, it was assumed that there was one way to keep a data center cool: mechanical chillers driving cold air through contained racks of equipment. Now, that assumption has broken down.


Along with the drive to make data centers more efficient, there’s another reason: data centers are no longer uniform. There are several different kinds, and each one has different demands.



Colocation spaces, as we described, have a well-established path to reducing or removing the use of mechanical cooling, but there are other steps they may need to take.


There are also newer classes of data center space, with different needs. Let’s look at a few of these.


High Performance Computing (HPC)



Supercomputers used to be rare beasts, but now there’s a broader need for high performance computing, and this kind of capacity is appearing in existing data centers. It’s also pushing up the density of IT, and the amount of heat it generates, sometimes to more than 100kW per rack.
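A back-of-the-envelope heat-removal calculation shows why racks at 100kW strain air cooling. The sketch below uses the standard relation Q = mass flow × cp × delta-T with assumed air properties and an assumed 12K supply/return delta-T; none of the numbers are vendor figures.

```python
# Rough sketch: volumetric airflow needed to carry away a rack's heat load,
# from Q = mass_flow * cp * delta_T. Air properties and delta-T are assumptions.

AIR_DENSITY = 1.2   # kg/m^3, near sea level
AIR_CP = 1005.0     # J/(kg*K)

def required_airflow_m3s(heat_kw: float, delta_t_k: float = 12.0) -> float:
    """Airflow in m^3/s needed to remove heat_kw at the given supply/return delta-T."""
    mass_flow = (heat_kw * 1000.0) / (AIR_CP * delta_t_k)   # kg/s
    return mass_flow / AIR_DENSITY

for load_kw in (10, 50, 100):
    flow = required_airflow_m3s(load_kw)
    print(f"{load_kw:>3}kW rack: {flow:4.1f} m^3/s (~{flow * 2119:,.0f} CFM)")
# A 100kW rack needs roughly ten times the airflow of a conventional 10kW rack.
```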


With efficiency in mind, data center operators don’t want to over-cool their facilities, so there simply may not be enough cooling capacity to add several racks of this sort of density.


Adding HPC capacity can mean putting in extra cooling for specific racks, perhaps with distributed cooling systems that place a cooling unit, such as a rear-door heat exchanger, on the specific racks that need more cooling.


Alternatively, an HPC system can be built with a separate cooling system, perhaps using circulating fluid or an immersion tank, such as those provided by Submer, Asperitas, or GRC.




Hyperscale


Giant facilities run by the likes of Facebook, Amazon, and Google have several benefits over the rest of the world. They are large and uniform, often running a single application on standard hardware across a floorplan as big as a football field.


The hyperscalers push some boundaries, including the temperatures in their data centers. With the ability to control every aspect of the application and the hardware that runs it, they can increase the operating temperature - and that means reducing the need for cooling.



Hyperscalers Microsoft and Google were among the first to go chiller-free. In 2009, Google opened its first facility with no mechanical cooling, in Saint-Ghislain, Belgium. In the same year, Microsoft did the same thing in Dublin.


Giant data centers are cooled with slow-moving air, sometimes given an extra chill using evaporation. It has turned out that the least energy-hungry way to produce that kind of flow is with a wall of large, slow-turning fans.


The “fan-wall” has become a standard feature of giant facilities, and one of its benefits is that it can be expanded alongside the IT. Each new aisle of racks needs another couple of fan units in the wall, so the space in a building can be filled incrementally.


Aligned Energy builds wholesale data centers, and makes its own Delta3 cooling system, a fan-wall which CEO Andrew Schaap describes as a “cooling array” to avoid trademark issues. It supports up to 50kW per rack without wasting any cooling capacity, and scales up.


“No one starts out with 800W per square foot,” Schaap told DCD in 2020. “I can start a customer at a lower density, say 100W per square foot, and in two years, they can densify in the same footprint without any disruptions.”
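For a sense of what those floor densities mean at the rack level, the sketch below converts watts per square foot into kilowatts per rack. The 25 sq ft of gross floor space per rack (including aisle area) is an assumption for illustration, not an Aligned figure.

```python
# Rough conversion from floor density to rack density.
# The gross footprint per rack is an assumed figure, not from Aligned.

SQFT_PER_RACK = 25.0   # assumed gross floor space per rack, including aisles

def kw_per_rack(watts_per_sqft: float) -> float:
    return watts_per_sqft * SQFT_PER_RACK / 1000.0

for density in (100, 800):
    print(f"{density}W/sq ft  ~=  {kw_per_rack(density):.1f}kW per rack")
# 100W/sq ft works out to about 2.5kW per rack; 800W/sq ft to about 20kW per rack.
```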


Cooling specialist Stulz has produced a fan-wall system called CyberWall, while Facebook developed one in association with specialist Nortek.



Edge

Distributed applications like the Internet of Things can demand fast responses from services, and that’s led to the proposal of Edge data centers - micro-facilities placed close to the source of data to provide low-latency (quick) responses.


Edge is still emerging, and there will be a wide variety of Edge facilities, including shipping-container sized installations, perhaps located at cell towers, closets or server rooms in existing buildings, or small enclosures at the level of street furniture.


There’s a common thread here - putting IT into spaces it wasn’t designed for. And maintaining the temperature in all these spaces will be a big ask.


Some of this will be cooled traditionally. Vendors like Vertiv and Schneider have micro data centers in containers which include their own built-in air conditioning.


Other Edge capacity will be in rooms within buildings which already have their own cooling systems. These server rooms and closets may simply have an AC duct connected to the building’s existing cooling system - and this may not be enough.


“Imagine a traditional office closet,” said Vertiv’s Glenn Wishnew in a recent webcast. “That’s never been designed for an IT heat load.” Office space air conditioning is typically designed to deal with 5W per sq ft, while data center equipment needs around 200W per sq ft.


Adding cooling infrastructure to this Edge capacity may be difficult. If the equipment is in an open office environment, noisy fans and aircon may be out of the question.


That’s led some to predict that liquid cooling may be a good fit for Edge capacity. It’s quiet, and it’s independent from the surrounding environment, so it won’t make demands on the building or annoy the occupants.


Immersion systems cocoon equipment safely away from the outside, so there’s no need to regulate outside air and humidity. That’s led to vendors launching pre-built systems such as Submer’s MicroPod, which puts 6kW of IT into a box one meter high.



The problem to get over, of course, is the lack of experience in using such systems. Edge capacity will be distributed and located in places where it’s hard to get tech support quickly.


Edge operators won’t install any system which isn’t thoroughly proven and tested in the field - because every site visit will cost hundreds of dollars.


However, liquid cooling should ultimately be a good fit for Edge, and even provide higher reliability than air cooling. As David Craig of another immersion vendor, Iceotope, points out, these systems have no moving parts: “Immersive cooling technology removes the need for intrusive maintenance and its related downtime.”




DeepKnowledge (深知社)


Translator:

Wren

Infrastructure Engineering Department, Bilibili

Elite member of the DKV (DeepKnowledge Volunteer) program


Official account statement:

This article is not an officially endorsed Chinese edition and is provided for readers' study and reference only; it may not be used for any commercial purpose. The English original text shall prevail, and this article does not represent the views of DeepKnowledge. Please do not reproduce the Chinese edition without written authorization from the DeepKnowledge official account.
