Air cooling works by moving large volumes of air across heat sinks. At GPU power densities below ~300W per chip, it is sufficient. Modern AI GPUs range from roughly 700W (H100) to 1,000W+ (B200), and power per rack continues to increase. At these densities, air cooling requires impractically large airflow volumes and fails to maintain safe operating temperatures.
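The airflow problem can be sketched with the basic heat-removal equation Q = ṁ·cp·ΔT. The rack sizes and the 11 K air temperature rise below are illustrative assumptions, not figures from this article:

```python
# Rough airflow arithmetic for air-cooling a dense GPU rack.
# Assumed values (illustrative): standard air properties at ~20 C.

AIR_DENSITY = 1.2       # kg/m^3
AIR_CP = 1005.0         # J/(kg*K), specific heat of air
M3S_TO_CFM = 2118.88    # 1 m^3/s expressed in cubic feet per minute

def airflow_cfm(heat_w: float, delta_t_k: float) -> float:
    """Volumetric airflow needed to carry away heat_w watts of heat
    with a delta_t_k temperature rise, from Q = m_dot * cp * dT."""
    mass_flow = heat_w / (AIR_CP * delta_t_k)   # kg/s of air
    vol_flow = mass_flow / AIR_DENSITY          # m^3/s of air
    return vol_flow * M3S_TO_CFM

# A 10 kW legacy rack vs a hypothetical 120 kW AI rack, both at an 11 K rise:
print(round(airflow_cfm(10_000, 11)))   # ~1,600 CFM: manageable
print(round(airflow_cfm(120_000, 11)))  # ~19,000 CFM: impractical per rack
```

The nonlinearity is in the logistics, not the equation: moving ~19,000 CFM through a single rack demands fan power, duct cross-section, and acoustic tolerances that standard data halls were never designed for.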
Liquid cooling — either direct-to-chip (cold plates on GPU modules) or immersion (submerging entire servers in dielectric fluid) — transfers heat 1,000x more efficiently than air. It's not optional for AI data centers; it's physics.
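The efficiency gap comes largely from how much heat a unit of coolant can carry. A minimal sketch using textbook fluid properties (assumed values, not from this article):

```python
# Heat carried per unit volume of coolant: water vs air.
# Textbook property values (illustrative assumptions).

AIR_RHO, AIR_CP = 1.2, 1005.0        # kg/m^3, J/(kg*K)
WATER_RHO, WATER_CP = 998.0, 4186.0  # kg/m^3, J/(kg*K)

def volumetric_heat_capacity(rho: float, cp: float) -> float:
    """Joules absorbed per m^3 of coolant per kelvin of temperature rise."""
    return rho * cp

air = volumetric_heat_capacity(AIR_RHO, AIR_CP)        # ~1.2e3 J/(m^3*K)
water = volumetric_heat_capacity(WATER_RHO, WATER_CP)  # ~4.2e6 J/(m^3*K)
print(round(water / air))  # water carries ~3,500x more heat per unit volume
```

By this volumetric measure the gap is even larger than 1,000x; the commonly cited ~1,000x figure refers to practical convective heat-transfer performance rather than raw heat capacity.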
Vertiv is the largest public pure-play on data center infrastructure, providing both cooling and power delivery systems. Their liquid cooling order backlog has grown 300%+ year-over-year.
Modine Manufacturing provides thermal management solutions including data center cooling. They've pivoted aggressively toward AI data center applications and reported accelerating revenue growth.
CoolIT Systems (private) is a direct-to-chip liquid cooling specialist partnered with major server OEMs.
Cooling is a constraint sector: every new GPU rack needs cooling, deployment timelines are tight, and there are limited suppliers with proven solutions. This creates pricing power that persists through the CapEx cycle. Unlike GPU demand (which could plateau), cooling demand is cumulative — every GPU deployed needs cooling for its entire operational lifetime.
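The cumulative-demand point can be made with trivial arithmetic. The deployment figures below are hypothetical, chosen only to show the shape of the two curves:

```python
# Cumulative demand: even if annual GPU deployments plateau,
# the installed base that needs cooling keeps growing.
# Hypothetical deployments (millions of GPUs per year).

annual_deployments = [1, 2, 4, 4, 4, 4]  # GPU demand plateaus in year 3

installed_base = []
total = 0
for shipped in annual_deployments:
    total += shipped                # every deployed GPU stays in service
    installed_base.append(total)    # ...and needs cooling for its lifetime

print(annual_deployments)  # flat after year 3
print(installed_base)      # [1, 3, 7, 11, 15, 19]: still growing
```

A fuller model would retire GPUs at end of life, which flattens the base eventually, but the lag between shipment and retirement is what makes cooling demand more durable than shipment-driven GPU demand.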
Cooling is tracked through the Energy layer of the Functional Index and is one of Closelook's five key constraint sectors. The thesis: cooling demand is more durable than GPU demand because it's cumulative.