Key Considerations When Adopting Liquid Cooling

In our last article on The Modern Data Center Journal, we spoke with Ben Graham, Mechanical Project Manager and Estimator at Compu Dynamics, about liquid cooling and why today’s advanced AI applications are making it the standard for data centers.

As rack power densities and energy consumption increase, so does the heat generated within the data center. Traditional air cooling is no longer effective for AI data centers, making liquid cooling the standard for the future.

But what do data center owners and operators need to know to embrace liquid cooling? What considerations do they need to keep in mind when designing and building their future data centers? And is it possible to retrofit their existing data centers to accommodate liquid cooling systems?

We set out to get the answers to these and other questions in our second conversation with Ben Graham.

Modern Data Center Journal (MDCJ): What needs to be done to ensure that the liquid introduced into the data center doesn’t damage the equipment or result in downtime?

Ben Graham: Some of the biggest problems with liquid cooling have already been solved. Early on, we didn’t have the fittings and piping needed to carry liquid effectively into the data center envelope, nor the products and equipment necessary to reduce leak rates and maintain a leak-free solution.

Thankfully, the equipment and components have evolved tremendously over the past few years. Today, we see a wide ecosystem of piping, fittings, and other liquid cooling system solutions. These solutions have been meticulously engineered with direct liquid cooling (DLC) applications in mind.

The most important thing that a data center owner or operator can do is work with a trusted partner who knows these solutions, understands what is available, and will bring the best systems and products to bear in their data center. By working with a trusted White Space Integration partner, data center owners can rest assured that high-level, precision-manufactured fittings and components are utilized in their data centers. This will ultimately reduce downtime and failures.

“These cooling systems are quite intricate and require a high level of understanding to install and get operational.” – Ben Graham

However, even with the proper, high-quality products in place, a sound design is still essential to prevent the leaks and downtime associated with DLC. This is another area where a White Space Integration partner can help: not only will a partner help create the design, but they can also incorporate design elements that mitigate the risk of potential leaks.

MDCJ: Are there common mistakes data center owners and operators – or even construction teams – make when building a data center that will leverage liquid cooling?

Ben Graham: One of the critical mistakes data center owners and operators make is forgetting that these liquid cooling systems and their supporting infrastructure need to be maintained. Without advance planning for maintenance, problems and equipment failures can take far longer to resolve and be far more detrimental to the data center.

For this reason, it’s essential to have proper isolation valves, drain valves, vents, and points of disconnect strategically placed throughout the piping infrastructure, as well as at the feeds to the equipment.
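To make that planning tangible, here is a minimal sketch, assuming a hypothetical segment layout and hardware checklist (none of it drawn from the interview), that models a secondary loop as serviceable segments and flags any segment that could not be isolated and drained without taking the whole loop down:

```python
from dataclasses import dataclass

@dataclass
class PipeSegment:
    """One serviceable run of a secondary cooling loop (illustrative model)."""
    name: str
    upstream_isolation: bool    # isolation valve ahead of the segment
    downstream_isolation: bool  # isolation valve after the segment
    drain: bool                 # low-point drain valve
    vent: bool                  # high-point air vent

def maintenance_gaps(segments):
    """List gaps that would force a loop-wide shutdown to service one segment."""
    gaps = []
    for seg in segments:
        if not (seg.upstream_isolation and seg.downstream_isolation):
            gaps.append(f"{seg.name}: cannot be isolated from the rest of the loop")
        if not seg.drain:
            gaps.append(f"{seg.name}: no drain valve for service work")
        if not seg.vent:
            gaps.append(f"{seg.name}: no vent to purge air after refilling")
    return gaps

# Hypothetical layout: supply header, one rack manifold, return header.
loop = [
    PipeSegment("supply header", True, True, True, True),
    PipeSegment("rack manifold A", True, False, False, True),  # missing hardware
    PipeSegment("return header", True, True, True, False),
]
for gap in maintenance_gaps(loop):
    print(gap)
```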

Another mistake I’ve noticed occurs when building redundancy into the design. I frequently see data center engineers implement two coolant distribution units (CDUs), which is excellent for redundancy.

“One of the critical mistakes data center owners and operators make is forgetting that these liquid cooling systems and infrastructure need to be maintained.” – Ben Graham

These two units will have to work together under normal circumstances. However, if the CDUs are not piped correctly, it can have a massive impact on their efficiency and their ability to work together. It can also compromise the ability of one CDU to handle the entire cooling load should the other stop functioning. This makes it even more essential to have deep knowledge of and experience with designing, building, and operating these systems.
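The sizing logic behind that failover requirement can be sketched with simple arithmetic. The numbers below are illustrative assumptions, not figures from the interview: in a two-CDU arrangement, each unit idles at roughly half the load but must be rated, and piped, to carry all of it alone.

```python
def cdu_redundancy_check(total_load_kw, cdu_capacity_kw, num_cdus=2):
    """Check an N+1 CDU arrangement: any single unit must carry the full load.

    All figures are illustrative assumptions, not vendor or site data.
    """
    share = total_load_kw / num_cdus
    print(f"Normal operation: each CDU carries {share:.0f} kW "
          f"({share / cdu_capacity_kw:.0%} of its rating)")
    if cdu_capacity_kw >= total_load_kw:
        print("Failover OK: one CDU can absorb the entire cooling load.\n")
    else:
        shortfall = total_load_kw - cdu_capacity_kw
        print(f"Failover GAP: a single CDU falls {shortfall:.0f} kW short.\n")

# Hypothetical 1 MW liquid-cooled load served by two full-size CDUs.
cdu_redundancy_check(total_load_kw=1000, cdu_capacity_kw=1000)

# Undersized pair: fine day-to-day at 50 percent each, but neither unit
# can carry the whole 1,000 kW load alone if its partner fails.
cdu_redundancy_check(total_load_kw=1000, cdu_capacity_kw=600)
```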

Lastly, data center owners and operators sometimes fail to specify the fluid required for the secondary loop. This fluid needs to be compatible with all devices and components it touches. I have seen my fair share of sludge, biological growth, and suspended material pulled out of these loops.

In the absence of a purpose-made coolant, such as a propylene glycol solution, the risk of bio-growth or metal degradation increases significantly. Over time, this bio-growth can clog filters, strainers, heat sinks, or anything with narrow passages. The result is a compromised system, as the loss of flow and heat exchange can lead to system failure.
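One common way that fouling reveals itself is a rising pressure drop across filters and strainers. The sketch below is a minimal, hypothetical monitoring check; the baseline and alert threshold are assumed values for illustration, not field data from Compu Dynamics:

```python
def strainer_fouling_alert(dp_readings_kpa, clean_baseline_kpa=15.0, alert_ratio=1.5):
    """Flag a fouling filter or strainer from its differential-pressure trend.

    Bio-growth and suspended solids restrict flow, which shows up as a
    rising pressure drop across narrow passages well before the loop
    fails outright. Baseline and threshold here are assumed values.
    """
    latest = dp_readings_kpa[-1]
    if latest >= clean_baseline_kpa * alert_ratio:
        print(f"ALERT: strainer dP at {latest:.1f} kPa vs {clean_baseline_kpa:.1f} kPa "
              f"clean baseline; inspect for sludge or bio-growth.")
        return True
    print(f"OK: strainer dP at {latest:.1f} kPa.")
    return False

# Hypothetical weekly readings drifting upward as the strainer fouls.
strainer_fouling_alert([15.2, 16.8, 19.5, 24.1])
```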

MDCJ: In the past, who was responsible for implementing cooling systems? Does that need to change in the era of liquid cooling? Who should be doing this work today in the data center?

Ben Graham: In the past, I’ve seen General Contractors with little self-perform expertise being asked to install the complex systems that cool data centers. These cooling systems are quite intricate and require a high level of understanding to install and get operational. Most of the time, a factory-trained technician or factory representative will be mobilized to the site to ensure the equipment has been installed and started up correctly.

“The most important thing that a data center owner or operator can do is work with a trusted partner who knows these solutions…and will bring the best systems and products to bear in their data center.” – Ben Graham

Data centers are held to incredibly high standards and elevated service level agreements (SLAs). To meet those SLAs, all systems and equipment need to be maintained and optimized, which means having technicians and resources capable of operating, maintaining, and optimizing these systems.

Thankfully, this is another area where a White Space Integrator, like Compu Dynamics, can assist – providing educated and experienced staff who understand these systems and can provide much-needed support.

This is especially true as data center utilization grows. Often, a data center operates at 25 percent of its compute load on Day One and will not see full compute utilization until months after initial startup. This makes it essential to have resources available after handoff who can ensure all systems keep working as the compute load increases.

To read the first part of our conversation with Ben Graham, click HERE. To learn more about the impact of AI on data center cooling, click HERE.
