Articles tagged with "liquid-cooling"
India’s TCS gets TPG to fund half of $2B AI data center project
Tata Consultancy Services (TCS) has partnered with private equity firm TPG to secure $1 billion in funding, covering half of a $2 billion multi-year project called “HyperVault,” aimed at building a network of gigawatt-scale, liquid-cooled, high-density AI data centers across India. The initiative addresses the country’s gap between the data it generates (nearly 20% of the global total) and its data center capacity, which accounts for only about 3% of the global total. The new data centers will support advanced AI workloads and are designed to meet rapidly growing demand for AI compute as adoption of AI technologies accelerates in India. However, the project faces resource constraints, including water scarcity, power supply, and land availability, especially in urban hubs such as Mumbai, Bengaluru, and Chennai where data centers are already heavily concentrated. Liquid cooling, while necessary for managing the heat from power-intensive AI GPUs, raises concerns about water usage.
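As a rough illustration of why water becomes a planning constraint at this scale, the sketch below estimates annual cooling-water demand for a gigawatt-class campus. The load figure and the water usage effectiveness (WUE) value are illustrative assumptions, not numbers from the article.

```python
# Back-of-the-envelope sketch of annual cooling-water demand for a
# gigawatt-scale AI campus. Both inputs are illustrative assumptions.

IT_LOAD_MW = 1000          # assumed gigawatt-scale IT load
HOURS_PER_YEAR = 8760
WUE_L_PER_KWH = 1.8        # assumed water usage effectiveness (litres per kWh of IT energy)

it_energy_kwh = IT_LOAD_MW * 1000 * HOURS_PER_YEAR
water_litres = it_energy_kwh * WUE_L_PER_KWH

print(f"IT energy:   {it_energy_kwh / 1e9:.2f} TWh/year")
print(f"Water usage: {water_litres / 1e9:.1f} billion litres/year")
```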
Tags: energy, data-centers, AI-infrastructure, liquid-cooling, power-consumption, water-scarcity, cloud-computing

Liquid Loops & Urban Warmth: The Next Frontier in Data Center Efficiency - CleanTechnica
The article from CleanTechnica highlights the significant opportunity to improve data center efficiency by capturing and repurposing the vast amounts of heat they generate. Traditionally, data centers have treated heat as a waste product, using energy-intensive air cooling systems that consume 20–40% of their power just to maintain safe operating temperatures. However, with the rise of hyperscale data centers and AI workloads, there is growing interest in transforming this heat from a liability into a valuable resource. Liquid cooling technologies, such as direct-to-chip and immersion cooling, enable servers to operate at higher outlet temperatures (50–60 °C), making the waste heat suitable for integration with modern district heating networks. This approach is already being implemented in northern Europe, where dense district heating infrastructure allows data centers to supply thermal energy for residential heating demand. Examples include Meta’s data center in Odense, Denmark, which provides about 100,000 MWh annually to the local grid, and Microsoft’s Azure facilities in Finland.
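To put the Odense figure in perspective, here is a minimal sketch of how many homes that volume of recovered heat could cover. The per-household heating demand is an assumed Nordic ballpark, not a figure from the article.

```python
# How far 100,000 MWh/year of recovered data center heat could go in a
# district heating network, under an assumed per-home demand.

ANNUAL_HEAT_EXPORT_MWH = 100_000   # Odense example quoted in the article
HOUSEHOLD_HEAT_DEMAND_MWH = 18     # assumed annual heating demand per Nordic home

homes_served = ANNUAL_HEAT_EXPORT_MWH / HOUSEHOLD_HEAT_DEMAND_MWH
print(f"Roughly {homes_served:,.0f} homes' worth of annual heating demand")
```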
Tags: energy, data-centers, liquid-cooling, heat-recovery, district-heating, thermal-energy, energy-efficiency

New Microsoft datacenter mimics 'one massive AI supercomputer'
Microsoft has unveiled Fairwater, a new datacenter in Mt. Pleasant, Wisconsin, designed to function as “one massive AI supercomputer” with 10 times the performance of today’s fastest supercomputers. Spanning 315 acres and comprising three buildings totaling 1.2 million square feet, Fairwater is built specifically to power AI workloads using hundreds of thousands of NVIDIA GPUs interconnected in high-density clusters. The facility employs NVIDIA GB200 servers with 72 GPUs per rack linked via NVLink for high-bandwidth communication and pooled memory, enabling processing speeds of up to 865,000 tokens per second. This architecture allows the datacenter to operate as a single massive supercomputer rather than as isolated machines, minimizing network latency through a two-story layout that reduces physical distances between racks. In addition to Fairwater, Microsoft is constructing similar hyperscale AI datacenters in Narvik, Norway, and the U.K., with plans to use NVIDIA’s upcoming GB300 chips. The Wisconsin facility features a closed-loop liquid cooling system.
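A quick arithmetic check on the quoted throughput, assuming the 865,000 tokens-per-second figure refers to a single GB200 rack; the summary's phrasing suggests this but does not state it explicitly.

```python
# Per-GPU throughput implied by the quoted per-rack figure, assuming the
# 865,000 tokens/second applies to one 72-GPU GB200 NVL72 rack.

TOKENS_PER_SECOND_PER_RACK = 865_000
GPUS_PER_RACK = 72

per_gpu = TOKENS_PER_SECOND_PER_RACK / GPUS_PER_RACK
print(f"~{per_gpu:,.0f} tokens/second per GPU")
```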
Tags: energy, datacenter, AI-supercomputer, NVIDIA-GPUs, high-performance-computing, liquid-cooling, Microsoft-Fairwater

Oracle to back massive 1.4-gigawatt gas-powered data center in US
Oracle is investing heavily in AI-focused cloud computing with the development of a massive 1.4-gigawatt data center campus in Shackelford County, Texas. The site, called Frontier and developed by Vantage Data Centers, will span 1,200 acres and include 10 data centers totaling 3.7 million square feet. Designed to support ultra-high-density racks and liquid cooling for next-generation GPU workloads, the campus aims to meet the growing demand for AI computing power. Construction is underway, with the first building expected to be operational in the second half of 2026. Oracle plans to operate the facility primarily using gas-powered generators rather than waiting for utility grid connections, reflecting the urgency to bring these data centers online despite the environmental concerns associated with gas turbine emissions. Oracle has transformed from a traditional database software company into a major cloud services provider focused on AI computing, securing significant deals such as hosting TikTok’s U.S. traffic and powering Elon Musk’s xAI.
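For a sense of the density involved, a back-of-the-envelope sketch that spreads the quoted campus power evenly across the quoted floor area and building count; in practice the split will not be uniform.

```python
# Rough power-density arithmetic from the figures quoted above, assuming an
# even split of the 1.4 GW across the 10 buildings and 3.7 million sq ft.

TOTAL_POWER_MW = 1400
BUILDINGS = 10
TOTAL_AREA_SQFT = 3_700_000

print(f"~{TOTAL_POWER_MW / BUILDINGS:.0f} MW per building on average")
print(f"~{TOTAL_POWER_MW * 1e6 / TOTAL_AREA_SQFT:.0f} W per square foot of building area")
```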
Tags: energy, data-center, cloud-computing, AI, gas-power, liquid-cooling, high-density-racks

China's data centers are pushing cooling to the limit
China’s rapid expansion in AI computing power has led to a significant increase in data center energy consumption and heat generation, pushing traditional air cooling methods to their limits. High-power AI chips, such as Huawei’s Ascend 910B and 910C, consume substantial energy, resulting in power densities exceeding 15 kW per rack and sometimes approaching 30 kW. This intense heat output has made air cooling inefficient due to increased noise, energy use, and maintenance challenges. Consequently, China is increasingly adopting liquid cooling technologies, especially cold plate liquid cooling, which offers efficient heat dissipation and easier retrofitting compared to immersion cooling. The liquid-cooled server market in China reached $2.37 billion in 2024, growing 67% year-over-year, with projections to hit $16.2 billion by 2029. This growth is driven by national strategies like “East Data West Computing” and policies promoting green data centers with strict power usage effectiveness (PUE) targets.
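The market projection implies a compound growth rate that is easy to check, and the PUE metric behind China's green data center targets has a simple definition; the sketch below shows both (the example PUE inputs are illustrative, not from the article).

```python
# Implied compound annual growth rate for the quoted market figures, plus
# the standard PUE definition referenced by green data center targets.

MARKET_2024_BUSD = 2.37   # USD billions, 2024 (from the article)
MARKET_2029_BUSD = 16.2   # USD billions, projected 2029
YEARS = 5

cagr = (MARKET_2029_BUSD / MARKET_2024_BUSD) ** (1 / YEARS) - 1
print(f"Implied compound annual growth rate, 2024-2029: {cagr:.1%}")

def pue(total_facility_energy_kwh: float, it_equipment_energy_kwh: float) -> float:
    """Power usage effectiveness: total facility energy over IT energy (1.0 is the ideal floor)."""
    return total_facility_energy_kwh / it_equipment_energy_kwh

# Illustrative example: 15% cooling and overhead energy on top of the IT load gives PUE 1.15.
print(f"Example PUE: {pue(1.15e6, 1.00e6):.2f}")
```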
Tags: energy, data-centers, cooling-technology, liquid-cooling, AI-computing, power-usage-effectiveness, China-technology

OpenAI to launch AI data center in Norway, its first in Europe
OpenAI announced plans to launch Stargate Norway, its first AI data center in Europe, in partnership with British AI cloud infrastructure provider Nscale and Norwegian energy firm Aker. The data center will be a 50/50 joint venture between Nscale and Aker, with OpenAI as an off-taker purchasing capacity from the facility. Located near Narvik, Norway, the site will leverage the region’s abundant hydropower, cool climate, and mature industrial base to run entirely on renewable energy. The initial phase will deliver 230 megawatts (MW) of capacity, expandable to 290 MW, and is expected to operate 100,000 Nvidia GPUs by the end of 2026. The facility will incorporate advanced cooling technology and reuse excess heat to support low-carbon enterprises locally. This initiative aligns with Europe’s broader push for AI sovereignty, data sovereignty, and sustainable infrastructure, as the EU recently announced multi-billion euro investments to build AI factories and enhance compute power within the bloc.
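Dividing the initial capacity by the GPU target gives a rough per-GPU power budget, assuming the full 230 MW phase serves all 100,000 GPUs; the summary does not tie the two figures together explicitly.

```python
# Rough facility power budget per GPU, assuming the initial 230 MW phase
# hosts the full 100,000-GPU deployment.

PHASE1_CAPACITY_MW = 230
GPU_COUNT = 100_000

kw_per_gpu = PHASE1_CAPACITY_MW * 1000 / GPU_COUNT
print(f"~{kw_per_gpu:.1f} kW of facility capacity per GPU, including cooling and overhead")
```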
Tags: energy, data-center, AI-infrastructure, renewable-power, hydropower, liquid-cooling, Nvidia-GPUs

Dell unveils AI supercomputing system with Nvidia's advanced chips
Dell has unveiled a powerful AI supercomputing system built on Nvidia’s latest GB300 platform, marking the industry’s first deployment of such systems. Delivered to CoreWeave, an AI cloud service provider, the systems feature Dell Integrated Racks equipped with 72 Blackwell Ultra GPUs, 36 Arm-based 72-core Grace CPUs, and 36 BlueField DPUs per rack. Designed for maximum AI training and inference performance, these high-power systems require liquid cooling. CoreWeave, which counts top AI firms like OpenAI among its clients, benefits from the enhanced capabilities of the GB300 chips to accelerate training and deployment of larger, more complex AI models. This deployment underscores the growing competitive gap in AI infrastructure, where access to cutting-edge chips like Nvidia’s GB300 series offers significant advantages amid rapidly increasing AI training demands and tightening U.S. export controls on high-end AI chips. The rapid upgrade from the previous GB200 platform to GB300 within seven months highlights the fast pace of innovation in AI infrastructure hardware.
Tags: energy, supercomputing, AI-chips, Nvidia-GB300, data-centers, liquid-cooling, high-performance-computing

Panasonic Develops a Cooling Water Circulation Pump for Data Centers — Promoting the Strategic Enhancement of the Pump Business - CleanTechnica
Panasonic’s Living Appliances and Solutions Company celebrated the 70th anniversary of its pump business in 2025, marking a significant milestone since its inception in 1955 with home well pumps. Over the decades, Panasonic has expanded its pump applications to include built-in pumps for water heaters, heating appliances, and bathroom equipment, contributing to energy efficiency and environmental friendliness. With cumulative shipments surpassing 53 million units, Panasonic pumps are widely used not only in its own products but also by various manufacturers globally. In response to the growing demand for efficient cooling solutions in data centers, driven especially by the rise of AI technologies and the increasing heat generated by CPUs and GPUs, Panasonic has developed a next-generation cooling water circulation pump tailored for data center cooling systems. The pump uses advanced simulation technologies to raise flow performance by 75% (from 40 to 70 L/min) while remaining compact enough to install within Coolant Distribution Units (CDUs). Key features include high efficiency and a compact housing.
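A hedged sketch of how much heat a coolant loop at the new 70 L/min rating could carry away: the coolant temperature rise is an illustrative assumption, not a figure from Panasonic.

```python
# Heat carried by a water loop at the pump's new flow rating, Q = m_dot * c * dT.
# The 10 K temperature rise across the cold plates is an assumed value.

FLOW_L_PER_MIN = 70              # pump rating from the article (up from 40 L/min)
DELTA_T_K = 10                   # assumed coolant temperature rise
SPECIFIC_HEAT_J_PER_KG_K = 4186  # specific heat of water
DENSITY_KG_PER_L = 1.0

mass_flow_kg_s = FLOW_L_PER_MIN * DENSITY_KG_PER_L / 60
heat_kw = mass_flow_kg_s * SPECIFIC_HEAT_J_PER_KG_K * DELTA_T_K / 1000
print(f"~{heat_kw:.0f} kW of heat carried away per loop under these assumptions")
```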
Tags: energy, data-centers, cooling-systems, liquid-cooling, Panasonic, pump-technology, energy-efficiency