
The Hidden Bottleneck of AI Data Centers: Water

Electricity enables AI growth, but water constraints are emerging as a decisive factor in data center siting and approval.

Electricity gets the headlines, but water is the quiet second input that will shape where, and how fast, the United States builds data centers for artificial intelligence (AI) and other compute-intensive workloads. Unlike office parks or warehouses, large data centers move heat at an industrial scale, both directly through on-site cooling and indirectly through the water embedded in the electricity they consume.

Measuring the Hidden Water Footprint of Computing 

To provide a framework for understanding data center water use, The Green Grid has established a practical metric, Water Usage Effectiveness (WUE), measured in liters per kilowatt-hour of IT load, along with an expanded version, WUE Source, which adds the water intensity of the electricity the site purchases. For example, at the National Renewable Energy Laboratory’s (NREL) high-performance computing center, the measured on-site WUE is 0.7 liters per kilowatt-hour (L/kWh) using warm-water liquid cooling with a thermosyphon, a passive heat-transfer loop; relying solely on cooling towers would push WUE toward 1.4 L/kWh. When the local grid’s water intensity is included, WUE Source rises to roughly 5 L/kWh for Colorado’s mix, showing that “off-site” water use can dwarf on-site consumption.
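To make the arithmetic concrete, the minimal sketch below shows how the two metrics combine for a hypothetical 10 MW campus at PUE 1.2; the grid water-intensity value of 3.6 L/kWh is an illustrative assumption chosen so the result lands near the roughly 5 L/kWh figure above, not a measured number for Colorado’s grid.

```python
# Illustrative computation of The Green Grid's WUE metrics (L/kWh of IT load).
# All inputs are hypothetical; the grid water intensity is an assumed value,
# not a measured figure for any real grid.

def wue_site(site_water_liters: float, it_energy_kwh: float) -> float:
    """On-site WUE: cooling and humidification water per kWh of IT load."""
    return site_water_liters / it_energy_kwh

def wue_source(site_water_liters: float, it_energy_kwh: float,
               facility_energy_kwh: float, grid_water_l_per_kwh: float) -> float:
    """WUE Source: on-site water plus water embedded in purchased electricity."""
    embedded_liters = facility_energy_kwh * grid_water_l_per_kwh
    return (site_water_liters + embedded_liters) / it_energy_kwh

# Hypothetical campus: 10 MW of IT load running year-round at PUE 1.2.
it_kwh = 10_000 * 8_760        # 87.6 GWh of IT energy per year
facility_kwh = it_kwh * 1.2    # total facility energy at PUE 1.2
site_liters = 0.7 * it_kwh     # matches the ~0.7 L/kWh on-site figure above

print(f"WUE (site):   {wue_site(site_liters, it_kwh):.2f} L/kWh")
print(f"WUE (source): {wue_source(site_liters, it_kwh, facility_kwh, 3.6):.2f} L/kWh")
```

At these assumptions, the purchased-electricity term contributes roughly six times as much water as the site itself, which is the point of reporting WUE Source alongside WUE.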

The jump in WUE Source when Colorado’s electricity mix is factored in occurs because thermoelectric generation still withdraws (and, more importantly for scarcity, consumes) large volumes of freshwater. The US Geological Survey’s reanalysis of 2008–2020 thermoelectric water use provides plant-level withdrawal and consumption estimates, and the Energy Information Administration’s cooling-water series shows that withdrawals remain in the tens of trillions of gallons per year, even as intensity declines with the retirement of once-through cooled units and the changing generation mix.

Design Tradeoffs in Data Center Cooling 

Data centers that rely on evaporative cooling sit in the same conceptual family as thermoelectric power plants: both exchange heat with the atmosphere by evaporating water. Just as power plants can move from once-through to recirculating or dry systems to lower withdrawals or consumption, data centers can trade energy for water through air-side economizers, dry coolers, or thermosyphons, or shift to reclaimed water to reduce stress on potable supplies. Federal efficiency guidance for data centers and ASHRAE’s data-center standards formalize these trade-offs and provide operators with targets for both energy and water. 
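A back-of-envelope calculation shows why this trade exists: evaporating water absorbs about 2.26 MJ/kg, so rejecting one kilowatt-hour of heat purely by evaporation consumes roughly 1.6 liters. The sketch below uses that physical constant; the 5 percent dry-cooler fan-energy fraction is an assumption for illustration, and cooling-tower blowdown, drift, and sensible heat transfer are ignored.

```python
# Back-of-envelope scale of the energy-for-water trade in heat rejection.
# The latent heat of vaporization is a physical constant; the dry-cooler
# fan-energy fraction is an assumed illustrative value.

LATENT_HEAT_MJ_PER_KG = 2.26    # energy to evaporate 1 kg (~1 L) of water
MJ_PER_KWH = 3.6

def liters_evaporated_per_kwh_heat() -> float:
    """Water evaporated to reject one kWh of heat if all heat goes latent."""
    return MJ_PER_KWH / LATENT_HEAT_MJ_PER_KG   # ~1.6 L per kWh

heat_rejected_mwh = 1_000                 # one GWh of heat to reject
# L/kWh multiplied by MWh yields cubic meters (the factors of 1,000 cancel).
evaporative_water_m3 = liters_evaporated_per_kwh_heat() * heat_rejected_mwh

DRY_COOLER_FAN_FRACTION = 0.05            # assumed extra fan-energy share
extra_fan_mwh = DRY_COOLER_FAN_FRACTION * heat_rejected_mwh

print(f"Evaporative: ~{liters_evaporated_per_kwh_heat():.1f} L per kWh of heat")
print(f"1 GWh rejected: ~{evaporative_water_m3:,.0f} m^3 of water evaporated,")
print(f"versus ~{extra_fan_mwh:,.0f} MWh of extra fan energy with dry rejection")
```

The asymmetry is what makes siting so sensitive: the same gigawatt-hour of heat costs either real water in an arid basin or a modest increment of electricity that can be sourced from low-water generation.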

The economic picture is nuanced. Electricity dominates operating expenses, but water costs are not trivial where utilities impose seasonal scarcity pricing, groundwater extraction fees, or on-site treatment requirements. The more material cost often appears in capital and siting decisions: selecting a design that avoids evaporative cooling (for example, liquid cooling with dry rejection) may increase capital expenditures but can de-risk permits, speed approvals, and broaden the feasible map—especially where public agencies have tightened large-user review processes. Tucson’s 2025 ordinance, for example, subjects users above roughly 7.4 million gallons per month to council review and an enforceable conservation plan. That kind of process risk interacts directly with schedules for AI capacity additions. 

Security and resilience considerations cut both ways. Large data center campuses can strain local systems during heat waves, when cooling demand and river temperatures peak. Yet the same engineering features that harden facilities against grid disruptions (on-site thermal storage, non-potable water connections, and redundant heat rejection) can also reduce their community footprint. Over time, as grids add wind and solar and retire once-through cooled units, the indirect water intensity of computing will fall. But those system-wide gains can be overwhelmed locally if facilities rely on potable evaporative cooling in arid basins. In the end, location still dominates. There are encouraging signs: some major projects, such as Crusoe’s facility in Abilene, Texas, are using closed-loop, non-evaporative cooling systems.

The Water Regulatory Gap for Data Centers

Regulation is just starting to catch up. Federally, the Clean Water Act’s Section 316(b) governs cooling-water intake structures for facilities that withdraw more than two million gallons per day directly from “waters of the United States,” a threshold many hyperscale sites would meet if they drew from a river or lake. But most data centers buy municipal water; in that case, 316(b) applies to the utility, not the campus. Where data centers discharge blowdown or other process water directly, National Pollutant Discharge Elimination System (NPDES) permits set effluent limits. And as projects concentrate in arid metros, several cities are adding local guardrails, ranging from conservation-plan requirements for very large users to rules discouraging or prohibiting potable water for cooling. It is an early sign that, for AI infrastructure, water rather than megawatts may become the decisive factor in siting and approval.

What to Do Before Water Becomes the Bottleneck 

More needs to be done before water becomes the bottleneck in data center growth. First, regulators and operators should accelerate the adoption of advanced cooling technologies. Emerging systems (direct-to-chip liquid cooling, thermosyphons, hybrid dry coolers, and closed-loop two-phase or immersion designs) can cut on-site water use by an order of magnitude and, in some climates, nearly eliminate evaporative cooling. Second, disclosure should be mandatory, with the right boundaries: reporting should include both WUE and WUE Source, plus a clear accounting of reclaimed versus potable volumes (a sketch of such a record appears below). Third, where feasible, cooling should be tied to reclaimed-water districts or industrial reuse systems, reducing competition with residential demand. Fourth, water should be priced to signal scarcity: water is inexpensive relative to power, but scarcity pricing and seasonal allocation can steer designs toward dry or hybrid systems without mandating a single technology. Finally, pairing AI campuses with low-water electricity (wind, solar, hydro imports, or transmission access) and liquid-cooling designs that enable dry rejection can deliver both reliability and measurable water savings.
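As one sketch of what mandatory disclosure with the right boundaries could look like in practice, the record below carries both metrics and separates reclaimed from potable volumes; the field names and structure are hypothetical, not a proposed reporting standard.

```python
# Sketch of a water-disclosure record; field names are hypothetical.
from dataclasses import dataclass

@dataclass
class WaterDisclosure:
    it_energy_kwh: float          # annual IT load energy
    facility_energy_kwh: float    # annual total facility energy (IT x PUE)
    potable_liters: float         # potable water consumed on site
    reclaimed_liters: float       # reclaimed or other non-potable water
    grid_water_l_per_kwh: float   # water intensity of purchased electricity

    @property
    def wue(self) -> float:
        """On-site WUE in L/kWh of IT load."""
        return (self.potable_liters + self.reclaimed_liters) / self.it_energy_kwh

    @property
    def wue_source(self) -> float:
        """WUE Source: adds water embedded in purchased electricity."""
        embedded = self.facility_energy_kwh * self.grid_water_l_per_kwh
        return self.wue + embedded / self.it_energy_kwh

    @property
    def reclaimed_share(self) -> float:
        """Fraction of on-site water that is non-potable."""
        total = self.potable_liters + self.reclaimed_liters
        return self.reclaimed_liters / total if total else 0.0

# Hypothetical filing: a campus drawing mostly reclaimed water.
d = WaterDisclosure(8.76e7, 1.05e8, 5.0e6, 5.6e7, 3.6)
print(f"WUE {d.wue:.2f} L/kWh, WUE Source {d.wue_source:.2f} L/kWh, "
      f"reclaimed {d.reclaimed_share:.0%}")
```

Because the grid term is carried explicitly, WUE Source can be audited from the same filing rather than reconstructed later from utility data.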

If the United States wants to sustain the pace of AI investment without triggering local backlash or hard permitting stops, water must be treated as a first-order infrastructure variable, not an afterthought. The path forward is not to slow AI or freeze data center growth, but to align engineering, siting, and policy with physical reality. Data centers can be designed to trade energy for water, to use non-potable supplies, and to pair with power sources that sharply reduce indirect consumption—but only if those choices are made early and rewarded consistently. The lesson from the power sector is clear: constraints ignored at scale become crises. If water remains invisible in AI planning, it will quietly decide where the digital economy can—and cannot—grow.

About the Authors: Morgan Bazilian and Brandon Owens

Morgan D. Bazilian is the director of the Payne Institute and professor at the Colorado School of Mines, with over 30 years of experience in global energy security and investment. A former World Bank lead energy specialist and senior diplomat at the UN, he has held roles in the Irish government. 

Brandon N. Owens is a clean energy innovation executive and global thought leader at the nexus of energy, artificial intelligence, and institutional governance. His career spans work at the National Renewable Energy Laboratory, General Electric, and S&P. He is the author of The Wind Power Story (2019) and the forthcoming book Cognitive Infrastructure (2025). He can be reached at [email protected]

Image: dTosh/shutterstock
