RIEM News

Articles tagged with "AI-infrastructure"

  • Nvidia unveils AI infrastructure spanning chips to space computing

    NVIDIA has introduced a new AI infrastructure centered on its Vera CPU and the Vera Rubin platform, targeting the evolving needs of agentic AI systems and large-scale AI data centers. The Vera CPU, featuring 88 custom Olympus cores and high-bandwidth memory, is designed to efficiently manage thousands of simultaneous AI environments, delivering twice the efficiency and 50% faster performance than traditional rack-scale CPUs. This marks a shift where CPUs play a central role in orchestrating AI workloads alongside GPUs, rather than merely supporting them. Major cloud and infrastructure providers such as Alibaba, Meta, Oracle Cloud Infrastructure, Dell Technologies, and Lenovo plan to deploy systems based on this processor. The Vera Rubin platform integrates seven specialized chips across compute, networking, and storage to create what NVIDIA calls “AI factories,” large-scale facilities capable of generating massive AI token volumes required for modern models. A key configuration, Vera Rubin NVL72, combines 72 GPUs and 36 Vera CPUs in a single rack, delivering up to four times better…

    energy, AI-infrastructure, CPUs, GPUs, data-centers, high-performance-computing, Nvidia
  • Meta debuts new AI silicon to power platform recommendations

    Meta Platforms has unveiled a roadmap for four new custom AI chips under its MTIA (Meta Training and Inference Accelerator) program, aimed at enhancing AI capabilities across its platforms like Facebook and Instagram. The first chip, MTIA 300, is already deployed to power content ranking and recommendation algorithms. The subsequent chips—MTIA 400, 450, and 500—are designed to handle AI inference workloads, with improvements in memory capacity and low-precision AI processing. Meta plans to release these chips roughly every six months over the next two years, reflecting its rapid expansion of AI infrastructure and data center footprint. Built on the open-source RISC-V architecture with design support from Broadcom and fabricated by TSMC, these chips enable Meta to optimize AI workloads more efficiently than general-purpose processors. Starting with the MTIA 400, Meta is also designing entire computing systems around the chips, including advanced cooling solutions, to support large-scale AI operations. This iterative development approach allows Meta to quickly adapt

    energy, materials, AI-chips, semiconductor-manufacturing, data-centers, custom-silicon, AI-infrastructure
  • Samsung debuts first pouch solid-state battery prototype for humanoid robots

    Samsung SDI has unveiled its first pouch-type all-solid-state battery prototype designed specifically for emerging physical AI systems such as humanoid robots. This battery will be showcased publicly for the first time at the InterBattery 2026 exhibition in Seoul. Unlike traditional lithium-ion batteries that use liquid electrolytes, Samsung’s solid-state battery replaces the liquid with solid materials, enhancing safety, energy density, and compactness. The pouch design caters to compact devices requiring lightweight components and flexible integration, addressing the unique power demands of robots that experience sudden spikes in energy consumption while maintaining reliability. Beyond robotics, Samsung SDI is also presenting advanced battery technologies aimed at AI infrastructure, including a high-power prismatic battery for uninterruptible power supply (UPS) systems in data centers. This U8A1 battery uses lithium manganese oxide chemistry to deliver improved safety, higher power output, and better space efficiency—about 33% more efficient than previous models—helping stabilize power during sudden AI computing demand spikes.

    robot, energy, solid-state-battery, AI-infrastructure, battery-technology, humanoid-robots, energy-storage-systems
  • Sandberg, Clegg join Nscale board as this ‘Stargate Norway’ startup hits $14.6B valuation

    Nscale, a Norway-based AI infrastructure startup, has reached a $14.6 billion valuation following a major funding round described as the largest in European history. This round, backed by prominent investors including Blue Owl, Dell, Nvidia, Nokia, Goldman Sachs, and JPMorgan, supports Nscale’s vertically integrated approach spanning energy, data centers, compute, and orchestration software. The company plans to leverage this capital to accelerate AI infrastructure development across Europe, North America, and Asia, expand its engineering and operations teams, and enhance its platform. Nscale also raised debt financing secured by GPUs to support its cluster deployments in Europe. The startup’s board has been strengthened with high-profile additions such as former Meta COO Sheryl Sandberg, former Yahoo president Susan Decker, and former UK deputy prime minister Nick Clegg. Nscale is closely associated with the “Stargate Norway” project, an ambitious AI infrastructure initiative involving OpenAI and led by Aker’s joint venture, which is now fully…

    energy, data-centers, AI-infrastructure, renewable-energy, waste-heat-reuse, cloud-computing, vertical-integration
  • Owner of ICE detention facility sees big opportunity in AI man camps

    The article discusses the growing use of temporary worker housing known as "man camps" to accommodate the large influx of laborers needed for constructing AI data centers in the U.S. A notable example is in Dickens County, Texas, where a former Bitcoin mining facility is being converted into a massive 1.6 gigawatt data center. Workers there live in gray housing units equipped with amenities such as a gym, laundromat, game rooms, and an on-demand steak cafeteria. Target Hospitality, a company specializing in such accommodations, has secured contracts worth $132 million to build and manage this camp, which could eventually house over 1,000 workers. The company views the data center construction boom as its most significant growth opportunity to date. Additionally, the article briefly references Target Hospitality’s involvement in operating a detention center in Texas for families held by Immigration and Customs Enforcement (ICE). This facility has faced legal scrutiny over poor conditions, including allegations of worm-infested and moldy food and inadequate accommodations…

    energy, data-centers, AI-infrastructure, temporary-housing, Bitcoin-mining, workforce-accommodation, construction
  • OpenAI, Oracle abandon 2 GW AI data center expansion in Texas

    OpenAI and Oracle have decided to abandon their planned expansion of the Abilene, Texas AI data center from 1.2 gigawatts to 2 gigawatts, according to Bloomberg. This move follows prolonged financing negotiations and changing infrastructure demands for AI development. The Abilene site is part of the Stargate initiative, a large-scale joint venture involving OpenAI, Oracle, and SoftBank aimed at building advanced AI computing infrastructure. Despite halting the expansion, construction continues at the existing facility, and both companies remain committed to operating within the Stargate project, which still represents one of the most ambitious AI infrastructure efforts in the U.S. The decision to pause the expansion was influenced by financing challenges and shifting demand forecasts, prompting OpenAI to reassess its near-term infrastructure needs. Meanwhile, Meta Platforms is reportedly exploring leasing the additional capacity originally planned for the Abilene site. NVIDIA, a key supplier of AI processors, played a role in directing Meta toward this opportunity…

    energy, data-center, AI-infrastructure, OpenAI, Oracle, NVIDIA, Meta-Platforms
  • The billion-dollar infrastructure deals powering the AI boom

    The article highlights the massive financial investments and infrastructure developments fueling the current AI boom, emphasizing the enormous computing power required to run advanced AI models. Nvidia CEO Jensen Huang estimates that $3 to $4 trillion will be spent on AI infrastructure by 2030, with major tech companies like Meta, Oracle, Microsoft, Google, and OpenAI leading the charge. The article traces the origins of this surge to Microsoft’s landmark $1 billion investment in OpenAI in 2019, which established Microsoft as OpenAI’s exclusive cloud provider and laid the foundation for a partnership that has since grown to nearly $14 billion. Although OpenAI has recently diversified its cloud partnerships, the model of AI companies aligning closely with specific cloud providers has become standard, with Amazon investing $8 billion in Anthropic and Google Cloud acting as a primary computing partner for others. Oracle’s emergence as a major AI infrastructure player is underscored by two blockbuster deals with OpenAI, including a $30 billion cloud services contract…

    energy, AI-infrastructure, cloud-computing, data-centers, Nvidia, Microsoft-Azure, OpenAI
  • US lab's new facility to tackle rising grid strain from data centers

    The U.S. Department of Energy’s Oak Ridge National Laboratory (ORNL) has launched the Next-Generation Data Centers Institute (NGDCI) to address the rapidly increasing energy demands of AI data centers. Currently, data centers consume over 4% of U.S. electricity, but this figure is projected to rise to as much as 17% by 2030, largely driven by the intensive power requirements of AI workloads. NGDCI aims to develop innovative science and technology solutions—including advanced cooling, power management, and integrated grid operation—to ensure that future AI infrastructure remains secure, efficient, and reliable without compromising grid stability. Building on ORNL’s existing MEGA-DC project, which models the economic and technical impacts of data center growth on infrastructure, NGDCI envisions data centers as adaptable national assets that can enhance grid resilience. The institute will collaborate with major industry partners such as AMD, NVIDIA, and Carrier Energy to co-design power-aware architectures and grid-supportive systems.

    energy, data-centers, AI-infrastructure, grid-resilience, power-management, cooling-technologies, Oak-Ridge-National-Laboratory
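    The 4% and 17% shares quoted above can be put on an absolute scale with back-of-envelope arithmetic. A minimal sketch, where the total U.S. annual electricity consumption of roughly 4,000 TWh is my assumption, not the article's; the percentage shares are from the article:

    ```python
    # Back-of-envelope scale of the data-center electricity shares quoted above.
    # ASSUMPTION: ~4,000 TWh total annual U.S. electricity consumption.
    US_TOTAL_TWH = 4_000

    share_today_pct = 4    # "over 4% of U.S. electricity"
    share_2030_pct = 17    # "as much as 17% by 2030"

    today_twh = US_TOTAL_TWH * share_today_pct / 100
    by_2030_twh = US_TOTAL_TWH * share_2030_pct / 100

    print(f"Today: ~{today_twh:.0f} TWh/yr")   # ~160 TWh/yr
    print(f"2030:  ~{by_2030_twh:.0f} TWh/yr") # ~680 TWh/yr
    print(f"Implied growth: {share_2030_pct / share_today_pct:.2f}x")  # 4.25x
    ```

    Whatever the assumed total, the share figures alone imply data-center demand growing more than fourfold in about five years.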
  • The true cost and future of AI

    The article "The true cost and future of AI" highlights that while artificial intelligence marks a transformative technological shift comparable to the internet, its broader economic, social, and environmental costs are substantial and multifaceted. Drawing on data from leading institutions, the report reveals that despite projected global AI spending surpassing $2 trillion by 2026, AI’s direct contribution to U.S. GDP growth is negligible once imports are considered. Major tech companies plan massive investments in AI infrastructure, yet many executives report minimal operational impact. Labor markets face significant disruption, with an estimated 92 million jobs at risk by 2030 and notable employment declines among young workers in AI-exposed roles. Additionally, AI’s expansion drives up electricity prices, inflates e-commerce costs, and intensifies competition for critical resources like phosphates, affecting food affordability. The semiconductor supply chain is under severe strain, experiencing the largest supply-demand imbalance in decades, leading to soaring DRAM prices and a shrinking smartphone market, particularly in lower-cost segments

    energy, artificial-intelligence, semiconductor-supply-chain, data-centers, environmental-impact, technology-policy, AI-infrastructure
  • The public opposition to AI infrastructure is heating up

    The article discusses growing public opposition across the United States to the rapid expansion of data centers driven by the AI boom. This backlash has prompted several states and localities to consider or enact temporary bans on new data center construction to assess their environmental and economic impacts. Notably, New York proposed a three-year moratorium on new data center permits statewide, while cities like New Orleans and Madison, Wisconsin, have already paused new developments following public protests. This resistance spans the political spectrum, with figures such as Florida Governor Ron DeSantis, Vermont Senator Bernie Sanders, and Arizona Governor Katie Hobbs supporting measures to limit data center growth, reflecting widespread populist concerns about the tech industry's footprint. Despite the pushback, major tech companies including Amazon, Google, Meta, and Microsoft plan to significantly increase capital expenditures on data center infrastructure to meet growing AI compute demands. Polling indicates that nearly half of respondents oppose new data centers in their communities, though many remain undecided, suggesting public opinion could still shift.

    energy, data-centers, AI-infrastructure, cloud-computing, environmental-impact, tech-industry, legislative-policy
  • Meta strikes up to $100B AMD chip deal as it chases ‘personal superintelligence’

    Meta has entered a multiyear agreement with AMD to potentially purchase up to $100 billion worth of AMD chips, including the MI540 GPU series and the latest generation of CPUs. This deal is significant enough to drive around six gigawatts of data center power demand. As part of the agreement, AMD issued Meta a performance-based warrant for up to 160 million shares—about 10% of AMD—at $0.01 each, with vesting tied to milestones and AMD’s stock price reaching $600 (currently around $196.60). The partnership reflects Meta’s strategic move to diversify its AI compute infrastructure beyond Nvidia, which has long dominated the AI chip market. Meta CEO Mark Zuckerberg described the collaboration as a key step toward achieving “personal superintelligence,” an AI vision aimed at creating systems that deeply understand and empower individuals in daily life. Meta plans to invest heavily in AI infrastructure over the coming years, including a $10 billion data center project in Indiana designed to support 1 gigawatt of capacity…

    energy, data-centers, AMD-chips, AI-infrastructure, GPUs, CPUs, Meta
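    The warrant terms above imply a large payoff if AMD reaches the milestone price. A rough arithmetic sketch using only the article's figures; it ignores vesting schedules, dilution, and time value:

    ```python
    # Intrinsic value of the performance-based warrant described above if all
    # 160 million shares vest. Figures from the article; illustrative only.
    shares = 160_000_000
    strike = 0.01             # $0.01 exercise price per share
    milestone_price = 600.0   # vesting milestone for AMD's stock

    value_at_milestone = shares * (milestone_price - strike)
    print(f"At the $600 milestone: ~${value_at_milestone / 1e9:.0f}B")  # ~$96B
    ```

    So a fully vested warrant at the milestone price would be worth roughly as much as the chip purchases themselves, which is why the two sides of the deal are comparable in scale.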
  • All the important news from the ongoing India AI Impact Summit

    The ongoing India AI Impact Summit, a four-day event attracting 250,000 visitors, aims to boost AI investment and innovation in India. It features participation from major AI and tech leaders including OpenAI, Anthropic, Nvidia, Microsoft, Google, and prominent Indian figures like Reliance Chairman Mukesh Ambani. Key announcements include India earmarking a significant fund to invest in AI and advanced manufacturing startups nationwide. OpenAI CEO Sam Altman highlighted India as the second-largest user of ChatGPT globally, with a large student base. Several Indian AI startups secured major investments: Blackstone acquired a majority stake in Neysa, which plans to raise $600 million in debt and expand GPU deployment, while Bengaluru-based C2i raised funds for data center power solutions. Anthropic announced its first Indian office in Bengaluru and plans to deploy AI tools in telecommunications, reflecting India’s growing role in the AI ecosystem. Industry perspectives at the summit revealed concerns and opportunities: HCL’s CEO Vineet Nayyar emphasized profitability

    energy, AI-infrastructure, data-centers, advanced-manufacturing, smart-glasses, telecommunications, Indian-startups
  • UAE’s G42 teams up with Cerebras to deploy 8 exaflops of compute in India

    Abu Dhabi-based technology firm G42 has partnered with U.S. chipmaker Cerebras to deploy a new supercomputer system in India delivering 8 exaflops of computing power. The system will be hosted in India, complying with local data residency, security, and compliance regulations, and aims to provide advanced AI computing resources to educational institutions, government bodies, and small and medium enterprises. This initiative is designed to bolster India’s sovereign AI infrastructure, enabling local researchers and innovators to develop AI technologies while maintaining full data sovereignty. The project also involves Abu Dhabi’s Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) and India’s Centre for Development of Advanced Computing (C-DAC). This deployment marks a significant enhancement in India’s computational capacity and AI capabilities, accelerating the training and inference of large-scale AI models tailored to Indian needs. The collaboration builds on previous efforts such as the release of a Hindi-English large language model based on Meta’s Llama 3.1 by MBZUAI.

    energy, AI-infrastructure, supercomputing, data-centers, cloud-computing, India-technology, sovereign-AI
  • General Catalyst commits $5B to India over five years

    General Catalyst, a Silicon Valley venture firm managing over $43 billion in assets, has committed to investing $5 billion in India over the next five years, significantly increasing its previous allocation of $500 million to $1 billion. This expansion follows its merger with local venture firm Venture Highway and aims to bolster startups in artificial intelligence, healthcare, defense technology, fintech, and consumer technology. The announcement was made at the India AI Impact Summit in New Delhi, highlighting India's emergence as a major AI investment hub with ambitions to attract over $200 billion in AI infrastructure investments within two years. The firm’s CEO, Hemant Taneja, emphasized India's potential to build the next generation of global platform companies, citing the country’s large digital infrastructure, vast market, and skilled talent pool as key advantages. General Catalyst focuses on large-scale AI deployment rather than frontier model development and has already invested in Indian startups across various sectors, including Zepto and Jeh Aerospace. The firm plans to support companies from early stages through public…

    energy, AI-infrastructure, data-centers, cloud-computing, investment, technology-startups, India
  • OpenAI taps Tata for 100MW AI data center capacity in India, eyes 1GW

    OpenAI has entered a strategic partnership with India’s Tata Group to secure 100 megawatts (MW) of AI-ready data center capacity, with plans to scale up to 1 gigawatt (GW) over time. This collaboration, part of OpenAI’s Stargate project, aims to build advanced AI infrastructure and accelerate enterprise AI adoption globally, with India as a key growth market. OpenAI will be the first customer of Tata Consultancy Services’ (TCS) HyperVault data center business, leveraging the local capacity to run its most advanced AI models domestically. This will reduce latency for Indian users and ensure compliance with data residency and security regulations, which is critical for enterprises handling sensitive data under India’s data localization laws. Beyond infrastructure, the partnership includes deploying ChatGPT Enterprise across Tata’s workforce, starting with hundreds of thousands of TCS employees, marking one of the largest enterprise AI rollouts globally. TCS will also use OpenAI’s Codex tools to standardize AI-native software

    energy, data-centers, AI-infrastructure, OpenAI, Tata-Group, enterprise-AI, power-consumption
  • Meta signs multiyear NVIDIA deal to power next-gen AI data centers

    Meta and NVIDIA have entered a multiyear, multigenerational partnership to develop hyperscale AI infrastructure across Meta’s on-premises data centers and cloud environments. This collaboration involves deploying large volumes of NVIDIA CPUs—including Arm-based Grace and future Vera processors—and millions of Blackwell and Rubin GPUs to scale AI training and inference systems that power Meta’s global platforms. The partnership emphasizes deep codesign across compute, networking, and software, integrating NVIDIA Spectrum-X Ethernet switches into Meta’s Facebook Open Switching System to enhance network efficiency and throughput for AI clusters. The goal is to build data centers optimized for training large frontier AI models and running inference at scale, supporting personalization and recommendation systems used by billions of Meta users. The deal also focuses on improving energy efficiency and performance per watt, with Meta undertaking the first large-scale rollout of Grace CPUs and planning potential large-scale adoption of Vera CPUs by 2027. Meta will deploy NVIDIA GB300-based systems and unify its architecture across data centers and NVIDIA Cloud Partner environments…

    energy, AI-infrastructure, data-centers, NVIDIA-GPUs, energy-efficiency, networking-technology, Arm-CPUs
  • AI-driven nuclear push targets 2x faster builds, 50% lower costs

    The Idaho National Laboratory (INL) and NVIDIA have partnered to leverage artificial intelligence (AI) to accelerate the deployment of advanced nuclear reactors in the U.S., aiming to halve development timelines and reduce operational costs by over 50%. This collaboration, part of the Department of Energy’s Genesis Mission, focuses on creating a scientific computing platform to drive energy innovation and national security. The initiative, codenamed Prometheus, employs AI tools—including generative AI models, digital twins, and agent-based workflows—to streamline the design, licensing, manufacturing, construction, and operation of nuclear reactors. By automating traditionally time-consuming engineering processes while maintaining human oversight, the partnership expects to at least double the speed of nuclear reactor deployment. Central to the effort is the use of digital twins trained on decades of nuclear data and experimental operations at INL, enabling simulation and validation of reactor systems before physical construction. NVIDIA contributes its AI infrastructure and GPU-accelerated computing to enhance nuclear simulation codes, reducing computation times and expanding modeling…

    energy, nuclear-energy, artificial-intelligence, advanced-reactors, digital-twins, AI-infrastructure, energy-innovation
  • India bids to attract over $200B in AI infrastructure investment by 2028

    India aims to attract over $200 billion in artificial intelligence (AI) infrastructure investment by 2028, positioning itself as a global hub for AI computing and applications. Announced by IT Minister Ashwini Vaishnaw at the government-backed AI Impact Summit in New Delhi, the initiative includes tax incentives, state-backed venture capital, and policy support to draw more global AI value chain activities to the country. Major U.S. tech firms like Amazon, Google, and Microsoft have already committed about $70 billion to expand AI and cloud infrastructure in India, providing a strong foundation for further investment. The government’s efforts also feature long-term tax relief for export-oriented cloud services and a ₹100 billion ($1.1 billion) venture program targeting high-risk sectors such as AI and advanced manufacturing. India plans to expand its shared AI compute capacity under the IndiaAI Mission by adding 20,000 GPUs to the existing 38,000, with a second phase focusing on research, innovation, and broader access…

    energy, AI-infrastructure, data-centers, cloud-computing, investment, technology-policy, advanced-manufacturing
  • Adani pledges $100B to build AI data centers as India seeks bigger role in the global AI race

    The Adani Group has announced a $100 billion investment over the next decade to build AI-specialized data centers across India, signaling the country’s intent to become a significant player in the global AI infrastructure race. These data centers will be powered by renewable energy and are expected to catalyze an additional $150 billion in related investments, creating a $250 billion AI infrastructure ecosystem by 2035. The initiative aligns with India’s expanding digital economy and renewable energy capacity, positioning the nation as an attractive destination for AI infrastructure development. Adani plans to leverage its existing data-center platform and partnerships with global tech giants like Google and Microsoft, with large-scale campuses already underway in Visakhapatnam and Noida, and future projects planned for Hyderabad and Pune. Central to Adani’s strategy is the integration of renewable energy, with the company’s 30-gigawatt Khavda renewable project supplying carbon-neutral power to the data centers. The group also intends to invest $55 billion in expanding renewable…

    energy, data-centers, AI-infrastructure, renewable-energy, battery-storage, India, Adani-Group
  • As AI data centers hit power limits, Peak XV backs Indian startup C2i to fix the bottleneck

    The article highlights a significant shift in AI data center scaling challenges, where power efficiency, rather than raw compute capacity, is becoming the primary bottleneck. Peak XV Partners has invested $15 million in a Series A round for C2i Semiconductors, an Indian startup developing integrated, plug-and-play power solutions aimed at reducing energy losses in AI infrastructure. C2i’s technology focuses on redesigning power delivery from the data center grid directly to GPUs, addressing inefficiencies in power conversion that currently waste 15-20% of energy. By integrating power conversion, control, and packaging into a unified platform, C2i aims to cut end-to-end energy losses by about 10%, which translates to substantial savings in power consumption, cooling costs, and overall operational expenses. Founded in 2024 by former Texas Instruments executives, C2i is preparing to validate its first silicon designs with major data center operators and hyperscalers by mid-2026.

    energy, data-centers, power-efficiency, AI-infrastructure, semiconductors, energy-consumption, power-conversion
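    The loss figures above can be turned into concrete numbers. A minimal sketch, assuming a hypothetical 100 MW facility and reading "cut end-to-end energy losses by about 10%" as a relative reduction of the losses; both the facility size and that reading are my assumptions, while the 15-20% loss range comes from the article:

    ```python
    # Illustrative savings from the conversion-loss figures quoted above.
    # ASSUMPTIONS: hypothetical 100 MW facility; "about 10%" read as a
    # relative cut of the losses. The 15-20% loss range is from the article.
    facility_mw = 100.0
    loss_fraction = 0.175    # midpoint of the quoted 15-20% range
    relative_cut = 0.10      # "cut end-to-end energy losses by about 10%"

    lost_mw = facility_mw * loss_fraction  # power lost in conversion today
    saved_mw = lost_mw * relative_cut      # power recovered after the cut

    print(f"Conversion losses today: {lost_mw:.1f} MW")  # 17.5 MW
    print(f"Recovered by the cut:    {saved_mw:.2f} MW") # 1.75 MW
    ```

    Even under the conservative relative reading, each recovered megawatt also avoids the cooling load that the wasted power would have generated, which is where the claimed operational savings compound.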
  • Meta starts construction on $10B, 1GW AI data hub in Indiana

    Meta has commenced construction on a massive $10 billion data center campus in Lebanon, Indiana, designed to deliver 1 gigawatt (GW) of capacity. This represents one of Meta’s largest infrastructure investments and will be its second data center in Indiana. The facility aims to support both Meta’s core digital platforms and rapidly expanding AI workloads, reflecting the growing demand for high-density, gigawatt-scale computing infrastructure. The campus is planned with long-term scalability and technological evolution in mind, allowing Meta to consolidate services and AI operations without needing separate facilities. The project is expected to create over 4,000 construction jobs at its peak and approximately 300 permanent positions once operational. Meta is also investing in the local community through workforce development initiatives via the Boone County Career Collaborative and committing $1 million annually for 20 years to assist local families with energy bills. Additionally, Meta will fund improvements to local infrastructure, including water systems, roads, and utilities, with over $120 million earmarked for these upgrades

    energy, data-center, AI-infrastructure, clean-energy, water-conservation, sustainability, energy-efficiency
  • xAI lays out interplanetary ambitions in public all-hands

    xAI publicly shared a 45-minute all-hands meeting video revealing key updates about its organizational changes, product roadmap, and future ambitions. CEO Elon Musk explained that recent layoffs, which included a significant portion of the founding team, were part of a necessary restructuring to support rapid company growth. The company is now organized into four main teams focused on the Grok chatbot, app coding system, Imagine video generator, and the Macrohard project—a broad initiative aimed at simulating computer use and modeling corporations, with ambitions for AI-designed rocket engines. The meeting also highlighted xAI and X platform metrics, with X reportedly surpassing $1 billion in annual recurring subscription revenue, driven by a holiday marketing push. The Imagine tool is said to generate 50 million videos daily and over 6 billion images in the past month, though some of this content includes controversial AI-generated explicit images. The most notable vision Musk shared involves space-based AI infrastructure, including a moon-based factory for AI satellites and a lunar mass driver

    energy, AI-satellites, space-based-data-centers, lunar-mass-driver, interplanetary-energy-capture, AI-infrastructure, renewable-energy-sources
  • Benchmark raises $225M in special funds to double down on Cerebras

    Cerebras Systems, an AI chipmaker known for its uniquely large Wafer Scale Engine chip, recently raised $1 billion in new funding at a valuation nearly three times higher than six months prior. A significant portion of this round—at least $225 million—came from Benchmark Capital, one of Cerebras’ earliest investors. Benchmark created two special funds, named ‘Benchmark Infrastructure,’ specifically to support this investment, reflecting strong confidence in Cerebras’ technology and growth potential. The company’s flagship chip, measuring about 8.5 inches per side and containing 4 trillion transistors, is nearly the size of an entire silicon wafer, enabling 900,000 specialized cores to work in parallel. This design allows AI inference tasks to run over 20 times faster than traditional GPU-based systems by eliminating data transfer bottlenecks. Cerebras is gaining traction in the AI infrastructure market, highlighted by a recent multi-year deal with OpenAI to provide 750 megawatts of computing power.

    materials, semiconductor, AI-chip, wafer-scale-engine, computing-power, AI-infrastructure, processor-technology
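    The chip specs quoted above yield some quick derived numbers. These ratios follow directly from the article's figures (8.5 inches per side, 4 trillion transistors, 900,000 cores); only the inch-to-centimeter conversion is added:

    ```python
    # Derived ratios from the Wafer Scale Engine specs quoted above.
    side_cm = 8.5 * 2.54     # 8.5 inches per side, converted to cm
    area_cm2 = side_cm ** 2  # square die area

    transistors = 4_000_000_000_000
    cores = 900_000

    print(f"Die area: ~{area_cm2:.0f} cm^2")                          # ~466 cm^2
    print(f"Transistors per core: ~{transistors / cores / 1e6:.1f}M") # ~4.4M
    ```

    For comparison, a conventional reticle-limited GPU die is under about 8.5 cm^2, which is the scale gap behind the "nearly the size of an entire silicon wafer" description.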
  • Exclusive: Positron raises $230M Series B to take on Nvidia’s AI chips

    Positron, a semiconductor startup based in Reno, has raised $230 million in a Series B funding round led by investors including the Qatar Investment Authority (QIA), the country’s sovereign wealth fund. This funding comes amid growing efforts by hyperscalers and AI companies, such as OpenAI, to reduce dependence on Nvidia’s AI chips. Qatar’s investment aligns with its broader strategy to develop sovereign AI infrastructure and position itself as a leading AI services hub in the Middle East, supported by a $20 billion AI infrastructure initiative announced in September. Positron, founded three years ago, has now raised over $300 million in total capital, including a previous $75 million round last year. The company’s first-generation chip, Atlas, manufactured in Arizona, reportedly matches the performance of Nvidia’s H100 GPUs while consuming less than a third of the power. Positron focuses on AI inference hardware—used to run AI models in real-world applications—rather than training large language models.

    energy, semiconductor, AI-chips, AI-infrastructure, computing-power, inference-hardware, technology-startups
  • Elon Musk links SpaceX and xAI in a record-setting merger to boost AI

    SpaceX has officially acquired xAI, merging two of Elon Musk’s leading ventures to form a potentially world-leading private company. This union combines SpaceX’s expertise in rockets and satellites with xAI’s rapid advancements in artificial intelligence, aligning with growing global demand for computing power. Musk highlighted this merger as a significant new phase in their joint mission, emphasizing the strategic focus on leveraging AI to advance space operations. The deal reflects the substantial valuations of both companies—SpaceX at approximately $800 billion and xAI at around $230 billion—underscoring strong investor confidence in space and AI innovation. Financial pressures in the AI sector, particularly the high costs of powering and cooling large-scale AI models, have driven the integration. By bringing xAI under its umbrella, SpaceX gains greater control over AI development and deployment, while xAI benefits from SpaceX’s infrastructure, capital, and launch capabilities. Musk noted that relocating AI computing efforts to space could address the immense power and cooling demands of terrestrial data centers

    energy, artificial-intelligence, SpaceX, data-centers, computing-power, satellite-technology, AI-infrastructure
  • SpaceX seeks approval for solar-powered orbital data centers for AI

    SpaceX, led by Elon Musk, has filed a request with the FCC to launch up to one million solar-powered satellites designed to serve as orbital data centers for artificial intelligence (AI). These satellites would leverage constant solar energy and natural vacuum cooling in low-Earth orbit (500-2,000 km altitude) to overcome the significant electricity and water consumption challenges faced by terrestrial AI infrastructure. The move aims to reduce environmental impact and operational costs while enabling AI growth beyond the limitations of Earth’s power grids. This filing coincides with SpaceX’s ongoing talks to merge with Musk’s AI startup, xAI, potentially positioning SpaceX ahead of competitors like Google, Meta, and OpenAI. The project’s feasibility depends heavily on SpaceX’s Starship rocket, which promises dramatically lower launch costs and the capacity to deliver millions of tons of payload to orbit annually. By securing FCC approval for a large satellite fleet, SpaceX aims to meet the anticipated demand from a billion AI users and establish space as the most

    energy, solar-power, orbital-data-centers, SpaceX, AI-infrastructure, satellites, Starship-rockets
  • India offers zero taxes through 2047 to lure global AI workloads

    India has introduced a significant tax incentive to attract global AI workloads by offering foreign cloud providers zero taxes through 2047 on revenues from services sold outside India, provided these services are run from Indian data centers. Announced by Finance Minister Nirmala Sitharaman in the annual budget, this tax holiday aims to position India as a competitive hub for AI computing investment amid a global surge in demand for cloud infrastructure. The budget also includes a 15% cost-plus safe harbour tax provision for Indian data-center operators serving related foreign entities. However, sales to Indian customers will be taxed domestically through local resellers. This move aligns with major investments by global tech giants such as Google, Microsoft, and Amazon, who have collectively pledged tens of billions of dollars to expand AI and cloud infrastructure in India. Domestic players like Digital Connexion and Adani Group are also investing heavily in large-scale AI-focused data center projects, signaling strong interest from both international and local investors. Despite these positive developments, challenges remain,

    energy, data-centers, AI-infrastructure, cloud-computing, India, investment, power-shortages
  • World's first fleet drilling robot cuts data center build times

    DEWALT, a U.S.-based power equipment maker owned by Stanley Black & Decker, has partnered with August Robotics to introduce the world’s first fleet-capable robot designed for downward concrete drilling. This robotic system targets a critical bottleneck in data center construction by automating the labor-intensive task of drilling thousands of precision holes needed to anchor server racks and support overhead mechanical, electrical, and plumbing systems. The robot operates autonomously and can work in fleets, allowing multiple units to drill simultaneously across large sites. According to DEWALT, the system drills up to 10 times faster than traditional methods, potentially reducing overall construction timelines by as much as 80 weeks while improving jobsite safety and cutting costs per hole. The robotic drilling system is already being piloted with one of the world’s largest hyperscalers and has completed work across 10 data center construction phases, achieving 99.97 percent accuracy in hole location and depth over more than 90,000 drilled holes. This high

    robotics, construction-automation, data-center, drilling-robot, autonomous-robots, AI-infrastructure, fleet-robotics
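The accuracy figures above lend themselves to a quick sanity check; the calculation below is an illustrative reading of the numbers, assuming "accuracy" means the fraction of holes within location and depth tolerance.

```python
# What 99.97% accuracy over 90,000+ drilled holes implies.
# Assumes "accuracy" = fraction of holes within location/depth spec
# (an interpretation, not a claim from DEWALT's own definition).
holes_drilled = 90_000
accuracy = 0.9997

out_of_spec = holes_drilled * (1 - accuracy)
print(f"~{out_of_spec:.0f} holes outside tolerance out of {holes_drilled:,}")
```

On that reading, roughly 27 of the 90,000 holes would have fallen outside tolerance.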
  • OpenAI signs deal, worth $10 billion, for compute from Cerebras

    OpenAI has entered a multi-year agreement with AI chipmaker Cerebras, securing 750 megawatts of compute power from 2026 through 2028 in a deal valued at over $10 billion. This partnership aims to accelerate AI processing speeds, enabling faster response times for OpenAI’s customers by leveraging Cerebras’s specialized AI chips, which the company claims outperform traditional GPU-based systems like those from Nvidia. The enhanced compute capacity is expected to support real-time AI inference, which Cerebras CEO Andrew Feldman likens to the transformative impact broadband had on the internet. Cerebras, which gained prominence following the AI surge sparked by ChatGPT’s 2022 launch, has been expanding despite postponing its IPO multiple times. The company is reportedly in talks to raise an additional $1 billion at a $22 billion valuation. OpenAI’s strategy involves diversifying its compute infrastructure to optimize performance across different workloads, with Cerebras providing a dedicated low-latency inference solution. This collaboration is

    energy, AI-chips, compute-power, data-centers, high-performance-computing, semiconductor-technology, AI-infrastructure
  • Microsoft pledges water-positive AI data centers, full power payments

    Microsoft has launched its Community First AI Infrastructure initiative to address environmental and economic concerns linked to the rapid expansion of its U.S. AI data centers. The company commits to preventing increases in residential electricity prices and avoiding strain on local water supplies caused by its facilities. Key pledges include paying electricity rates that fully cover the costs imposed by data centers, funding necessary grid upgrades, and collaborating early with utilities to plan power needs. Microsoft has already supported nearly eight gigawatts of new electricity generation in the Midwest, exceeding its current regional consumption, and aims to push for rate structures that prevent residential customers from subsidizing data center growth. On water usage, Microsoft plans to reduce data center water use intensity by 40% by 2030, relying on closed-loop cooling systems and minimizing potable water use. The company will fund water infrastructure improvements where local systems face capacity limits and has committed over $25 million for water and sewer upgrades near a Virginia data center. Additionally, Microsoft pledges to replenish more water than it

    energy, data-centers, AI-infrastructure, water-conservation, electricity-grid, sustainable-technology, Microsoft
  • Microsoft announces glut of new data centers but says it won’t let your electricity bill go up

    Microsoft has announced a significant expansion of its AI data center infrastructure, reaffirming its commitment to build new facilities despite growing local opposition and activism against data center projects across the U.S. In response to community concerns, the company pledged a “community-first” approach, promising to be a “good neighbor” by ensuring that its electricity consumption does not increase local residents’ power bills. Microsoft plans to collaborate closely with utility companies and regulatory bodies to pay rates that fully cover its share of the local grid’s costs, thereby preventing the financial burden from being passed on to residential customers. Additionally, Microsoft committed to creating jobs in the communities hosting its data centers and minimizing water usage, addressing two major points of contention around data center development. These promises come amid heightened political and public scrutiny, with numerous activist groups mobilizing against data center expansions and some projects already canceled or delayed due to community backlash. The company’s assurances also align with recent statements from political leaders emphasizing the importance of protecting consumers from increased utility costs linked to

    energy, data-centers, electricity, infrastructure, sustainability, Microsoft, AI-infrastructure
  • Mark Zuckerberg says Meta is launching its own AI infrastructure initiative

    Meta is launching a major AI infrastructure initiative aimed at significantly expanding its capacity to support advanced AI models and products. CEO Mark Zuckerberg announced plans to build tens of gigawatts of power capacity this decade, scaling to hundreds of gigawatts over time, emphasizing that this infrastructure will be a strategic advantage for the company. This expansion reflects the growing energy demands of AI technologies, which could lead to a substantial increase in electricity consumption in the U.S. over the next decade. To lead this effort, Zuckerberg named three key executives: Santosh Janardhan, head of global infrastructure, who will oversee technical architecture, software, silicon development, and data center operations; Daniel Gross, who will manage long-term capacity strategy, supplier partnerships, and business planning; and Dina Powell McCormick, responsible for government relations and financing. This initiative places Meta in direct competition with other tech giants like Microsoft and Alphabet, who are also investing heavily in AI-ready cloud infrastructure.

    energy, AI-infrastructure, data-centers, power-consumption, cloud-computing, Meta, technology-investment
  • Meta makes nuclear reactor history with 6.6 GW energy deal to power AI

    Meta has made a historic move by securing up to 6.6 gigawatts (GW) of nuclear energy through agreements with Oklo, TerraPower, and Vistra, positioning itself as one of the largest corporate purchasers of nuclear power in U.S. history. This energy will provide the reliable, carbon-free electricity needed to power Meta’s next-generation AI infrastructure, including its Prometheus supercluster in Ohio. The initiative reflects Meta’s strategic shift toward advanced nuclear technologies to meet the substantial energy demands of AI development, aiming to support America’s leadership in AI while promoting clean energy. The partnerships cover three key areas: TerraPower, founded by Bill Gates, will develop Natrium reactors generating up to 690 MW by 2032, with rights to additional units totaling 2.8 GW plus 1.2 GW of energy storage; Oklo will advance an advanced nuclear campus in Ohio with up to 1.2 GW of power from Aurora Powerhouse fast reactors by 2030;

    energy, nuclear-energy, Meta, AI-infrastructure, TerraPower, Oklo, clean-energy
  • Beating the bottleneck: how Point2 plans to unleash AI performance

    The article discusses Point2 Technology’s innovative approach to overcoming a critical bottleneck in AI infrastructure: the limitations of data movement within computing systems. As AI workloads grow rapidly, GPUs have advanced significantly faster than the physical interconnects (cables and connections) that link them, causing bandwidth, power, and latency challenges in data centers. Point2’s solution, led by CEO Dr. Sean Park and explained by VP David Kuo, is a novel interconnect technology called E-Tube, which transmits radio frequency (RF) signals through a plastic waveguide rather than relying on traditional copper cables or optical fibers. This approach avoids copper’s physical limitations and the high power, cost, and reliability issues associated with optics. Point2’s RF-over-plastic technology offers substantial advantages for the dominant data center use case of short-range connections (10–20 meters). Unlike optics, which is designed for long distances but comes with significant penalties, E-Tube behaves like copper in terms of economics and ease of

    materials, energy, AI-infrastructure, data-transmission, radio-frequency, plastic-waveguide, interconnect-technology
  • Panasonic’s AI Strategy Enters the Implementation Phase: Real-World Impact for Better Future Showcased at CES 2026 - CleanTechnica

    At CES 2026, Panasonic Group showcased the real-world implementation of its AI strategy, initially announced the previous year, under the theme “The Future We Make.” The exhibition highlighted Panasonic’s advancements in AI infrastructure, particularly focusing on data centers, AI-based B2B solutions, and environmentally focused Green Transformation (GX) technologies. These innovations address the growing computational demands and operational challenges of AI data centers, including stable power supply, heat management, uninterrupted operation, and cybersecurity. Panasonic demonstrated several key technologies to support data center evolution. These include high-performance liquid cooling pumps and compressors designed to efficiently manage heat generated by high-density AI servers, improving lifespan and reducing environmental impact through compatibility with next-generation refrigerants. Additionally, Panasonic Energy offers energy storage systems integrated into server racks to stabilize power supply, provide backup during outages, and optimize energy use with peak shaving functions. The company also developed highly reliable components like conductive polymer aluminum electrolytic capacitors to enhance power circuit stability and performance under demanding conditions, supporting

    energy, AI-infrastructure, data-centers, cooling-technology, power-supply, environmental-solutions, Panasonic
  • New open-source map charts the scale of US AI datacenters buildout

    A non-profit research institute, Epoch AI, is using open-source intelligence—including satellite imagery, construction permits, and regulatory filings—to map the rapid expansion of AI datacenters across the United States. Their interactive map provides detailed estimates on cost, ownership, and power consumption of these facilities, offering rare transparency into an industry growing faster than public oversight. For example, Epoch AI highlights Meta’s “Prometheus” datacenter in New Albany, Ohio, estimating it has cost $18 billion and consumes 691 megawatts of power, reflecting Meta’s strategic pivot toward AI infrastructure. Epoch AI’s methodology centers on analyzing cooling infrastructure visible in satellite images, as modern AI datacenters generate extreme heat requiring extensive external cooling units. By counting and measuring fans and cooling systems, they estimate energy use, which in turn informs compute capacity and construction cost estimates. However, these estimates carry uncertainty due to variable fan configurations and speeds. The map currently covers about 15% of global AI compute capacity as of November

    energy, datacenters, AI-infrastructure, power-consumption, cooling-systems, satellite-imagery, energy-efficiency
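Epoch AI's fan-counting methodology can be sketched as a back-of-the-envelope calculation. The per-unit heat-rejection figure, utilization factor, and PUE below are illustrative assumptions for the sketch, not Epoch AI's published parameters.

```python
# Rough facility power estimate from satellite-visible cooling units.
# All constants here are illustrative assumptions, not Epoch AI's values.

def estimate_it_power_mw(num_cooling_units: int,
                         heat_rejected_per_unit_kw: float = 400.0,
                         utilization: float = 0.8) -> float:
    """Heat rejected by external coolers approximates the IT power dissipated."""
    return num_cooling_units * heat_rejected_per_unit_kw * utilization / 1000.0

def estimate_facility_power_mw(it_power_mw: float, pue: float = 1.2) -> float:
    """Facility draw = IT load times power usage effectiveness (PUE)."""
    return it_power_mw * pue

# With 1,800 hypothetical visible cooling units, the estimate lands near
# the ~691 MW the article cites for Meta's Prometheus site.
it_load = estimate_it_power_mw(1800)
print(round(estimate_facility_power_mw(it_load), 1))  # ~691 MW
```

The uncertainty the article mentions (variable fan configurations and speeds) shows up here as the sensitivity of the result to `heat_rejected_per_unit_kw` and `utilization`, neither of which is directly observable from imagery.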
  • Credo Releases 2025 Environmental, Social, and Governance (ESG) Report - CleanTechnica

    Credo Technology Group Holding Ltd (NASDAQ: CRDO), a leader in secure, high-speed connectivity solutions, has published its 2025 Environmental, Social, and Governance (ESG) Report, detailing its progress on key ESG priorities. The report emphasizes Credo’s commitment to responsible growth through strong governance, accountability, and innovation focused on energy-efficient product development. In 2025, Credo advanced connectivity technologies that reduce waste and power consumption, particularly supporting AI data centers and hyperscale environments. The company also enhanced its Code of Business Conduct and Ethics, expanded employee health, safety, and professional development programs, and grew community partnerships via its Credo Cares initiative. Credo’s product portfolio reflects its leadership in energy-efficient interconnect solutions for data center infrastructure, addressing increasing operational demands while minimizing environmental impact. The company’s innovations include Serializer/Deserializer (SerDes) and Digital Signal Processor (DSP) technologies that enable faster, more reliable, and scalable connectivity for optical and electrical Ethernet applications ranging

    energy, connectivity-solutions, data-centers, energy-efficiency, AI-infrastructure, high-speed-connectivity, sustainable-technology
  • The year data centers went from backend to center stage

    The article highlights the dramatic rise in public awareness and activism surrounding data centers in the United States as of 2025. Once largely invisible and confined to the tech industry, data centers have become a focal point of protests and political debate due to their rapid expansion driven by the booming AI and cloud computing sectors. Over the past year, 142 activist groups across 24 states have mobilized against new data center developments, citing concerns about environmental impact, energy consumption, and strain on local power grids. This surge in activism reflects the industry's exponential growth, with construction spending on data centers increasing by 331% since 2021, fueled by major tech companies like Google, Meta, Microsoft, and Amazon, as well as government initiatives promoting AI infrastructure. The backlash is evident nationwide, with communities in Michigan, Wisconsin, and Southern California actively opposing proposed data centers, often on environmental and quality-of-life grounds. Activists like Danny Candejas of MediaJustice report growing grassroots organizing efforts, suggesting that resistance to

    energy, data-centers, cloud-computing, AI-infrastructure, power-grid, technology-activism, tech-industry
  • Alphabet to buy Intersect Power to bypass energy grid bottlenecks

    Alphabet, Google's parent company, has agreed to acquire Intersect Power, a developer of data centers and clean energy projects, including taking on the company’s debt. This acquisition aims to help Alphabet expand its power generation capacity to support new data centers without depending on local utilities, which are currently struggling to meet the growing energy demands driven by AI companies. Alphabet had previously held a minority stake in Intersect Power following a strategic funding round led by Google and TPG Rise Climate, targeting $20 billion in total investment by 2030. The deal covers Intersect Power’s future development projects but excludes its existing operations, which will be acquired by other investors and managed separately. Intersect’s upcoming data parks, located near renewable energy sources like wind, solar, and battery storage, are expected to begin operations late next year and be fully completed by 2027. Google will be the primary user of these facilities, though the campuses are designed as industrial parks that can also host other companies’ AI chip operations.

    energy, clean-energy, data-centers, power-generation, renewable-energy, battery-storage, AI-infrastructure
  • Full Page Open Letter Calls on Amazon, Google, Meta, & Microsoft to Stop Fueling Climate Change with Data Center Demands - CleanTechnica

    A full-page open letter published in the Indianapolis Star urges the CEOs of Amazon, Google, Meta, and Microsoft to power their expanding data centers with clean energy rather than fossil fuels. The letter highlights that these tech giants, as major electricity customers, should pressure utilities to commit to no new natural gas plants and to retire coal plants promptly. This call comes amid a surge of AI data center proposals in Indiana, where utilities have responded by planning new gas plants or delaying coal plant closures, actions that could increase energy costs for local residents and businesses. The letter is supported by various environmental and community organizations, including the Sierra Club, Hoosier Environmental Council, and Amazon Employees for Climate Justice. Representatives from these groups emphasize that continued reliance on fossil fuels for powering data centers undermines the companies’ own climate commitments and unfairly burdens Indiana communities with higher energy bills and pollution. They stress the urgent need for Big Tech to invest in renewable energy infrastructure to create a more efficient, resilient, and affordable electric grid,

    energy, data-centers, climate-change, renewable-energy, decarbonization, AI-infrastructure, clean-energy
  • Google’s answer to the AI arms race — promote the guy behind its data center tech

    Google has appointed Amin Vahdat as its chief technologist for AI infrastructure, a newly created role reporting directly to CEO Sundar Pichai. This move underscores the critical importance of AI infrastructure as Google plans to significantly increase its capital expenditures by the end of 2025. Vahdat, a computer scientist with a PhD from UC Berkeley, has been instrumental in building Google’s AI backbone over the past 15 years, focusing on large-scale computer efficiency. Before joining Google in 2010, he held academic positions at Duke University and UC San Diego. Vahdat’s contributions include leading the development of Google’s seventh-generation TPU (Ironwood), which delivers 42.5 exaflops of compute power—far surpassing the world’s top supercomputer at the time. He has also overseen the creation of the Jupiter network, a high-speed internal network with bandwidth capable of supporting simultaneous video calls for the entire global population, and has played a key role in Google’s

    energy, data-centers, AI-infrastructure, TPU-chips, cloud-computing, network-technology, server-management
  • Environmental groups call for halt to new data center construction

    Environmental groups, including Food and Water Watch, Friends of the Earth, and Greenpeace, are urging Congress to impose a national moratorium on the approval and construction of new data centers. Their concerns center on the rapidly increasing electricity and water consumption driven by the expansion of data centers supporting AI and cryptocurrency activities. They warn that this growth is largely unregulated and threatens economic, environmental, climate, and water security across the United States. Electricity prices have already seen significant increases this year, with the most substantial impacts expected in states like Virginia, Pennsylvania, Ohio, Illinois, and New Jersey, where data center capacity is projected to grow the most. Energy demand from data centers is anticipated to nearly triple from 40 gigawatts today to 106 gigawatts by 2035, with much of this expansion occurring in rural areas. The rapid growth of data centers has sparked public protests, such as those at DTE’s headquarters in Detroit, where the utility seeks approval to supply electricity to a 1.

    energy, data-centers, electricity-consumption, environmental-impact, AI-infrastructure, renewable-energy, energy-demand
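The projected jump from 40 GW today to 106 GW by 2035 can be checked with simple compound-growth arithmetic. The ten-year horizon below assumes a 2025 baseline, which the article implies but does not state.

```python
# Implied annual growth rate if data center demand rises from 40 GW
# to 106 GW over ten years (2025-2035, an assumed timeframe).
base_gw, future_gw, years = 40.0, 106.0, 10

growth_factor = future_gw / base_gw          # 2.65x, i.e. "nearly triple"
cagr = growth_factor ** (1 / years) - 1      # compound annual growth rate

print(f"{growth_factor:.2f}x overall, {cagr:.1%} per year")
# prints: 2.65x overall, 10.2% per year
```

A roughly 10% compound annual growth rate is consistent with the "nearly triple" characterization in the article.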
  • How small modular reactors work and why they matter in AI energy surge

    The article discusses the rapidly increasing electricity demand from data centers driven by artificial intelligence (AI) infrastructure, which is projected to grow about 15% annually through 2030, far outpacing other sectors. This surge has intensified the search for stable, carbon-free power sources in the U.S., with nuclear energy gaining renewed attention. Among nuclear options, small modular reactors (SMRs) are highlighted as promising due to their smaller size, factory-based manufacturing, and ability to be sited closer to energy consumers, reducing transmission losses. Over 80 SMR designs are in development globally, with some near-term deployable models expected to begin construction before 2030 and commercial operation by the mid-2030s. However, long-term radioactive waste management plans remain unresolved. SMRs occupy a middle ground between large conventional nuclear reactors and microreactors, typically producing up to 300 megawatts of electricity from reactor cores about 3 meters wide and 6 meters tall, on sites around

    energy, small-modular-reactors, nuclear-energy, carbon-free-power, data-centers, AI-infrastructure, electricity-consumption
  • AWS is spending $50B to build AI infrastructure for the US government

    Amazon Web Services (AWS) has announced a $50 billion investment to build specialized AI high-performance computing infrastructure tailored for U.S. government agencies. This initiative aims to significantly enhance federal access to AWS AI services, including Amazon SageMaker, model customization tools, Amazon Bedrock, model deployment, and Anthropic’s Claude chatbot. The project will add 1.3 gigawatts of computing power, with construction of new data centers expected to begin in 2026. AWS CEO Matt Garman emphasized that this investment will transform how federal agencies utilize supercomputing, accelerating critical missions such as cybersecurity and drug discovery, while removing technological barriers that have previously limited government AI adoption. AWS has a long history of working with the U.S. government, having started building cloud infrastructure for federal use in 2011. It launched the first air-gapped commercial cloud for classified workloads in 2014 and introduced the AWS Secret Region in 2017, which supports all security classification levels. This new AI infrastructure

    energy, AI-infrastructure, cloud-computing, high-performance-computing, government-technology, data-centers, supercomputing
  • India’s TCS gets TPG to fund half of $2B AI data center project

    Tata Consultancy Services (TCS) has partnered with private equity firm TPG to secure $1 billion funding for the first half of a $2 billion multi-year project called “HyperVault,” aimed at building a network of gigawatt-scale, liquid-cooled, high-density AI data centers across India. This initiative addresses the country’s significant gap between its large data generation—nearly 20% of global data—and its limited data center capacity, which currently accounts for only about 3% of the global total. The new data centers will support advanced AI workloads and are designed to meet the growing demand for AI compute power amid rapid adoption of AI technologies in India. However, the project faces challenges related to resource constraints, including water scarcity, power supply, and land availability, especially in urban hubs like Mumbai, Bengaluru, and Chennai where data center concentration is high. Liquid cooling, while necessary for managing the heat from power-intensive AI GPUs, raises concerns about water usage, with estimates suggesting a

    energy, data-centers, AI-infrastructure, liquid-cooling, power-consumption, water-scarcity, cloud-computing
  • Anthropic announces $50 billion data center plan

    Anthropic announced a significant $50 billion partnership with U.K.-based neocloud provider Fluidstack to build new data centers across Texas and New York, scheduled to come online throughout 2026. This investment aims to support the intense compute demands of Anthropic’s Claude AI models and advance AI capabilities that can accelerate scientific discovery and solve complex problems. CEO Dario Amodei emphasized the need for robust infrastructure to sustain frontier AI development. While Anthropic’s $50 billion commitment is substantial, it is smaller compared to competitors’ infrastructure investments, such as Meta’s $600 billion data center plan over three years and the $500 billion Stargate partnership involving SoftBank, OpenAI, and Oracle. The surge in AI infrastructure spending has raised concerns about a potential AI bubble. The deal also marks a major milestone for Fluidstack, a relatively young neocloud company founded in 2017, which has quickly become a preferred vendor in the AI sector with partnerships including Meta, Black Forest Labs, and

    energy, data-centers, cloud-computing, AI-infrastructure, compute-power, neocloud, technology-investment
  • A better way of thinking about the AI bubble 

    The article discusses the concept of an AI bubble, emphasizing that tech bubbles need not be catastrophic but rather reflect overinvestment where supply outpaces demand. A key challenge in assessing the AI bubble lies in the mismatch between the rapid development of AI software and the slow, complex process of building and powering data centers. Since data centers take years to complete and depend on evolving technologies in energy, semiconductors, and power transmission, predicting future supply needs is difficult. Large-scale investments are already underway, with companies like Oracle, SoftBank, and Meta committing hundreds of billions of dollars to AI infrastructure, highlighting the scale of current bets on AI’s growth. Despite this massive investment, demand for AI services remains uncertain. A recent McKinsey survey shows that while most companies use AI in some capacity, few have integrated it extensively or seen significant business impact, indicating many are still cautious about scaling AI adoption. Infrastructure challenges also pose risks: Microsoft CEO Satya Nadella noted that data center space, rather than chip

    energy, data-centers, AI-infrastructure, semiconductor-design, power-transmission, cloud-services, technology-investment
  • OpenAI asked Trump administration to expand Chips Act tax credit to cover data centers

    OpenAI has formally requested that the U.S. government expand the scope of the Advanced Manufacturing Investment Credit (AMIC), part of the Biden administration’s Chips Act, to include not only semiconductor fabrication but also electrical grid components, AI servers, and AI data centers. In a letter from OpenAI’s chief global affairs officer Chris Lehane to the White House’s science and technology policy director Michael Kratsios, the company argued that broadening AMIC coverage would reduce capital costs, lower investment risks, and attract private funding to accelerate AI infrastructure development in the U.S. Additionally, OpenAI urged the government to speed up permitting and environmental reviews for such projects and to establish a strategic reserve of critical raw materials like copper, aluminum, and rare earth minerals necessary for AI infrastructure. The letter, initially sent in late October but gaining wider attention following public comments by OpenAI executives, clarifies that while the company discussed loan guarantees in the context of semiconductor fabs, it does not seek government backstops or

    energy, data-centers, semiconductor-fabrication, AI-infrastructure, raw-materials, Chips-Act, government-policy
  • Google plans orbital AI data centers powered directly by sunlight

    Google has announced Project Suncatcher, an ambitious research initiative aiming to develop orbital AI data centers powered directly by solar energy. The project envisions constellations of satellites equipped with Google’s Tensor Processing Units (TPUs) operating in sun-synchronous low-Earth orbits to harness nearly continuous sunlight, enabling highly scalable AI computing beyond Earth’s energy and resource constraints. These satellites would be interconnected via high-bandwidth free-space optical links, potentially reaching multi-terabit per second data transfer rates, to form a tightly clustered “AI constellation” capable of handling large-scale machine learning workloads. Key technical challenges addressed include maintaining high data transmission rates between satellites flying just hundreds of meters apart using dense wavelength-division multiplexing (DWDM) and spatial multiplexing, as well as ensuring radiation resilience of the compute hardware. Google’s TPU v6e chips have demonstrated strong resistance to radiation in proton beam tests. The project is still in early research stages, with plans to launch two prototype satellites by early 202

    energy, solar-power, satellite-technology, AI-infrastructure, space-based-computing, machine-learning, optical-communication
  • LG founder’s grandson, production firm partner up to bring AI to filmmaking

    The article discusses a new joint venture called Utopai East, formed between investment firm Stock Farm Road (SFR) and AI film and television production company Utopai Studios, aimed at integrating AI technologies into filmmaking. SFR, co-founded by Brian Koo (grandson of LG Group’s founder) and Amin Badr-El-Din, provides capital, industry expertise, and contacts, while Utopai contributes AI technology, workflow, and infrastructure. The venture focuses on building the necessary data centers and energy infrastructure to support AI-driven content production, with plans to co-produce films and TV shows and expand Korean intellectual property to international audiences. Production will start using existing infrastructure, with the first AI-assisted content expected next year. Both partners emphasize that AI is intended to enhance creativity and efficiency rather than replace human roles such as writing, directing, and acting. They highlight that all AI models and datasets are fully licensed to respect creators’ rights. The goal is to use AI to lower costs,

    energy, AI-infrastructure, data-centers, filmmaking-technology, AI-in-media, production-efficiency, Korean-IP
  • Microsoft inks $9.7B deal with Australia’s IREN for AI cloud capacity

    Microsoft has secured a significant $9.7 billion, five-year contract with Australia-based IREN to expand its AI cloud computing capacity. This deal grants Microsoft access to advanced compute infrastructure equipped with Nvidia GB300 GPUs, which will be deployed in phases through 2026 at IREN’s facility in Childress, Texas, designed to support up to 750 megawatts of capacity. Separately, IREN is investing about $5.8 billion in GPUs and equipment from Dell to support this infrastructure expansion. The agreement follows Microsoft’s recent launch of AI models optimized for reasoning, agentic AI systems, and multi-modal generative AI, reflecting the company's efforts to meet growing demand for AI services. Microsoft has also previously acquired approximately 200,000 Nvidia GB300 GPUs for data centers in Europe and the U.S. IREN, originally a bitcoin-mining firm, has pivoted successfully to AI workloads, leveraging its extensive GPU resources. CEO Daniel Roberts anticipates that the Microsoft contract will utilize only

    energy, cloud-computing, AI-infrastructure, GPUs, data-centers, Microsoft, Nvidia
  • Meta has an AI product problem

    Meta is investing heavily in AI, spending billions on talent and infrastructure, including building two massive data centers and planning up to $600 billion in U.S. infrastructure spending over three years. This aggressive investment led to a $7 billion year-over-year increase in operating expenses and nearly $20 billion in capital expenses in the latest quarter. Despite these expenditures, Meta has yet to generate significant revenue from its AI efforts, causing investor concern and a sharp decline in its stock price—dropping 12% and wiping out over $200 billion in market value shortly after earnings were reported. During the earnings call, CEO Mark Zuckerberg emphasized that the spending was just beginning and framed it as necessary to develop frontier AI models with novel capabilities that could unlock massive future opportunities. However, he was unable to provide concrete revenue forecasts or product timelines, leaving analysts and investors uncertain about the near-term payoff. Unlike competitors such as Google, Nvidia, and OpenAI—who also invest heavily in AI but have fast-growing, revenue-gener

    energy, data-centers, AI-infrastructure, capital-expenditure, compute-resources, Meta, technology-investment
  • Nvidia expands AI ties with Hyundai, Samsung, SK, Naver

    Nvidia CEO Jensen Huang is visiting South Korea to announce expanded collaborations with major Korean technology companies—Hyundai Motor, Samsung, SK Group, and Naver—alongside the South Korean government. This partnership aims to significantly boost South Korea’s AI infrastructure and physical AI capabilities, with the country securing over 260,000 of Nvidia’s latest GPUs. Approximately 50,000 GPUs will support public initiatives, including a national AI data center, while the remaining GPUs will be allocated to leading companies to drive AI innovation in manufacturing and industry-specific AI model development. This move follows recent U.S. technology agreements with Japan and South Korea to enhance cooperation on emerging technologies such as AI, semiconductors, quantum computing, biotech, and 6G. Key collaborations include Samsung and Nvidia’s joint effort to build an AI Megafactory that integrates AI across semiconductor, mobile device, and robotics manufacturing using over 50,000 Nvidia GPUs and the Omniverse platform. They are also co-developing AI

    AI, robotics, smart-factories, autonomous-mobility, semiconductor-manufacturing, AI-infrastructure, GPU-technology
  • Space data centers could satiate 165% surge in AI power hunger

    Researchers from NTU Singapore have proposed placing data centers in low Earth orbit (LEO) as a sustainable solution to meet the rapidly growing energy demands of AI computing. These space-based data centers would leverage the natural radiative cooling of the cold space environment and harness virtually unlimited solar energy, enabling net-zero carbon emissions. This approach addresses the challenges faced on Earth, such as high real estate costs in dense urban areas like Singapore and the significant energy and water consumption required for cooling terrestrial data centers. The team outlined two deployment strategies: orbital edge data centers, which process raw data on satellites equipped with AI accelerators to reduce transmission loads, and orbital cloud data centers, consisting of satellite constellations with servers, broadband links, solar panels, and radiative coolers to perform advanced computing tasks from space. Importantly, these concepts rely on existing launch and satellite technologies, making them feasible today. Given projections that AI-driven energy demand could surge by 165% by 2030, this innovative use of

    energy, solar-energy, data-centers, space-technology, sustainable-computing, AI-infrastructure, radiative-cooling
  • Benjamin Lee on why AI needs better infrastructure, not just bigger models

    Benjamin Lee, a professor of Electrical and Systems Engineering at the University of Pennsylvania, emphasizes that the rapid growth of AI requires smarter infrastructure and energy-aware design rather than just bigger models. Lee’s expertise spans hardware design, infrastructure strategy, and energy policy, and he highlights the unsustainable pace at which data centers are expanding—often outstripping the availability of clean energy. He stresses that energy consumption must be treated as a core design metric in AI development, not an afterthought, to ensure long-term sustainability. Lee traces his career motivation back to an undergraduate course on computer organization that revealed the complexities of hardware-software interaction, leading him to focus on energy efficiency in computing. He points out a common misconception among engineers and policymakers: the belief that current AI applications like chatbots justify massive infrastructure investments. Instead, he argues that tech companies are building energy and data center infrastructure with future, yet-to-be-imagined AI capabilities in mind. While there was initial optimism about powering data centers with renewables

    energy, AI-infrastructure, data-centers, energy-efficiency, sustainable-computing, processor-architecture, renewable-energy
  • NVIDIA partners with Uber to deploy AVs starting in 2027 - The Robot Report

    NVIDIA has announced a strategic partnership with Uber to deploy a large-scale level 4 autonomous vehicle (AV) mobility network starting in 2027. This network will leverage Uber’s robotaxi and autonomous delivery fleets, powered by NVIDIA’s DRIVE AGX Hyperion 10 platform and DRIVE AV software, which are designed to enable software-defined, level 4-ready vehicles. NVIDIA aims to support Uber in scaling its autonomous fleet to 100,000 vehicles globally over time, with development involving NVIDIA, Uber, and other ecosystem partners. Additionally, the companies are collaborating on a data factory accelerated by NVIDIA’s Cosmos world foundation model to curate and process data critical for AV development. The NVIDIA DRIVE AGX Hyperion 10 platform serves as a modular, customizable reference architecture combining a production computer and sensor suite that automakers can use to build level 4-capable vehicles. It features the NVIDIA DRIVE AGX Thor system-on-a-chip based on the Blackwell architecture, delivering over 2,000 FP4

    robot, autonomous-vehicles, NVIDIA-DRIVE-AGX, level-4-autonomy, robotaxi, AI-infrastructure, mobility-network
  • Nscale inks massive AI infrastructure deal with Microsoft

    Nscale, an AI cloud provider founded in 2024, has secured a major deal to deploy approximately 200,000 Nvidia GB300 GPUs across data centers in Europe and the U.S. This deployment will occur through Nscale’s own operations and a joint venture with investor Aker. Key locations include a Texas data center leased by Ionic Digital, which will receive 104,000 GPUs over 12 to 18 months, with plans to expand capacity to 1.2 gigawatts. Additional deployments include 12,600 GPUs at the Start Campus in Sines, Portugal (starting Q1 2026), 23,000 GPUs at Nscale’s Loughton, England campus (starting 2027), and 52,000 GPUs at Microsoft’s AI campus in Narvik, Norway. This deal builds on prior collaborations with Microsoft and Aker involving data centers in Norway and the UK. Josh Payne, Nscale’s founder and CEO, emphasized that this agreement positions Nscale as

    energy, AI-infrastructure, data-centers, GPUs, sustainability, cloud-computing, technology-investment
  • Meta partners up with Arm to scale AI efforts

    Meta has partnered with semiconductor design company Arm to enhance its AI systems amid a significant infrastructure expansion. The collaboration will see Meta’s ranking and recommendation systems transition to Arm’s technology, leveraging Arm’s strengths in low-power, efficient AI deployments. Meta’s head of infrastructure, Santosh Janardhan, emphasized that this partnership aims to scale AI innovation to over 3 billion users. Arm CEO Rene Haas highlighted the focus on performance-per-watt efficiency as critical for the next era of AI. This multi-year partnership coincides with Meta’s massive investments in AI infrastructure, including projects like “Prometheus,” a data center expected to deliver multiple gigawatts of power by 2027 in Ohio, and “Hyperion,” a 2,250-acre data center campus in Louisiana projected to provide 5 gigawatts of computational power by 2030. Unlike other recent AI infrastructure deals, Meta and Arm are not exchanging ownership stakes or physical infrastructure. This contrasts with Nvidia’s extensive investments in AI firms such

    energy, AI-infrastructure, data-centers, semiconductor, power-consumption, cloud-computing, Meta
  • Google to invest $15B in Indian AI infrastructure hub

    Google announced a $15 billion investment to establish a 1-gigawatt data center and AI hub in Visakhapatnam, Andhra Pradesh, India, over the next five years through 2030. This marks Google's largest investment in India and its biggest outside the U.S. The AI hub will be part of a global network spanning 12 countries and will offer a full suite of AI solutions, including custom Tensor Processing Units (TPUs), access to AI models like Gemini, and support for consumer services such as Google Search, YouTube, Gmail, and Google Ads. Google is partnering with Indian telecom Bharti Airtel and AdaniConneX to build the data center and subsea cable infrastructure, positioning Visakhapatnam as a global connectivity hub and digital backbone for India. The investment comes amid growing Indian government efforts to promote local alternatives to U.S. tech giants like Google, with initiatives encouraging “swadeshi” or “made in India” products and services. Despite these

    energy, data-center, AI-infrastructure, cloud-computing, subsea-cable, connectivity-hub, India-investment
  • The billion-dollar infrastructure deals powering the AI boom

    The article highlights the massive investment and infrastructure buildup fueling the current AI boom, emphasizing the enormous computing power required to train and run AI models. Nvidia CEO Jensen Huang estimates that $3 to $4 trillion will be spent on AI infrastructure by 2030, with major tech companies like Microsoft, Meta, Oracle, Google, and OpenAI leading the charge. The piece details key deals, starting with Microsoft’s landmark $1 billion investment in OpenAI in 2019, which established Microsoft as OpenAI’s exclusive cloud provider and laid the groundwork for a partnership now valued at nearly $14 billion. Although OpenAI has recently diversified its cloud partnerships, this model of close collaboration between AI firms and cloud providers has become standard, with companies like Anthropic partnering with Amazon and Google Cloud acting as primary computing partners for other AI ventures. Oracle’s emergence as a major AI infrastructure player is underscored by its unprecedented deals with OpenAI, including a $30 billion cloud services contract revealed in mid-2025

    energy, AI-infrastructure, cloud-computing, data-centers, Nvidia, Microsoft-Azure, OpenAI
  • AMD to supply 6GW of compute capacity to OpenAI in chip deal worth tens of billions

    AMD has entered a multi-year chip supply agreement with OpenAI that could generate tens of billions in revenue and significantly boost AMD’s presence in the AI sector. Under the deal, AMD will provide OpenAI with 6 gigawatts of compute capacity using multiple generations of its Instinct GPUs, beginning with the Instinct MI450 GPU, which is expected to be deployed in the second half of 2026. AMD claims the MI450 will outperform comparable Nvidia GPUs through hardware and software enhancements developed with OpenAI’s collaboration. Currently, OpenAI already uses AMD’s MI355X and MI300X GPUs for AI inference tasks due to their high memory capacity and bandwidth. In addition to supplying chips, AMD has granted OpenAI the option to purchase up to 160 million shares of AMD stock, representing a 10% stake. The stock vesting is tied to the deployment milestones of the compute capacity and AMD’s stock price, with the final tranche vesting if AMD shares reach $600. Following the

    energy, AI-compute, GPUs, data-centers, chip-supply, semiconductor, AI-infrastructure
  • OpenAI ropes in Samsung, SK Hynix to source memory chips for Stargate

    OpenAI has entered into agreements with South Korean memory chip giants Samsung Electronics and SK Hynix to supply DRAM wafers for its Stargate AI infrastructure project and to build AI data centers in South Korea. The deals, formalized through letters of intent following a high-profile meeting involving OpenAI CEO Sam Altman and South Korean leadership, will see Samsung and SK Hynix scale production to deliver up to 900,000 high-bandwidth memory DRAM chips monthly—more than doubling the current industry capacity for such chips. This move is part of OpenAI’s broader strategy to rapidly expand its compute capacity for AI development. These agreements come amid a flurry of recent investments and partnerships aimed at boosting OpenAI’s compute power. Notably, Nvidia committed to providing OpenAI access to over 10 gigawatts of AI training compute, while OpenAI also partnered with SoftBank, Oracle, and SK Telecom to increase its total compute capacity to 7 gigawatts and develop AI data centers

    materials, memory-chips, DRAM, AI-infrastructure, data-centers, Samsung, SK-Hynix
  • NVIDIA unveils brain-and-body stack to train next-gen humanoids

    NVIDIA has introduced a comprehensive robotics stack aimed at advancing humanoid robot development by integrating physics simulation, AI reasoning, and infrastructure within its Isaac Lab platform. Central to this update are the open-source, GPU-accelerated Newton Physics Engine and the Isaac GR00T N1.6 robot foundation model. Newton, co-developed with Google DeepMind and Disney Research and managed by the Linux Foundation, enables highly realistic simulations of complex physical interactions—such as walking on uneven terrain or handling fragile objects—facilitating safer and more reliable transfer of robot skills from simulation to real-world environments. Early adopters include leading academic and industry robotics groups. Isaac GR00T N1.6 incorporates NVIDIA’s Cosmos Reason, a vision-language reasoning model designed for physical AI, which enhances humanoid robots’ ability to interpret ambiguous instructions, leverage prior knowledge, and generalize across tasks. This model supports simultaneous movement and object manipulation, tackling advanced challenges like opening heavy doors. Developers can fine-tune GR00T

    robotics, humanoid-robots, NVIDIA-Isaac, Newton-Physics-Engine, AI-infrastructure, robot-simulation, physical-AI
  • The billion-dollar infrastructure deals powering the AI boom

    The article highlights the massive investments and infrastructure developments fueling the current AI boom, emphasizing the enormous computing power required to run advanced AI models. Nvidia CEO Jensen Huang estimates that $3 to $4 trillion will be spent on AI infrastructure by 2030, with major tech companies like Microsoft, Meta, Oracle, Google, and OpenAI leading the charge. Central to this surge was Microsoft’s initial $1 billion investment in OpenAI in 2019, which positioned Microsoft as OpenAI’s exclusive cloud provider and laid the groundwork for a partnership that has grown to nearly $14 billion. Although OpenAI has recently diversified its cloud partnerships, this model of exclusive or primary cloud provider relationships has become common, with companies like Anthropic partnering with Amazon and Google Cloud acting as primary computing partners for various AI firms. Oracle has emerged as a major player in AI infrastructure through unprecedented deals with OpenAI, including a $30 billion cloud services contract revealed in 2025 and a staggering $300 billion five-year compute power

    energy, AI-infrastructure, cloud-computing, data-centers, Nvidia, Microsoft-Azure, OpenAI
  • What’s behind the massive AI data center headlines?

    The article discusses the recent surge in massive AI data center investments in Silicon Valley, driven primarily by the needs of OpenAI and its partners. Nvidia announced significant infrastructure commitments, while OpenAI revealed plans to expand capacity through collaborations with Oracle and Softbank, adding gigawatts of new power to support future versions of ChatGPT. These individual deals are enormous, but collectively they highlight Silicon Valley’s intense efforts to provide OpenAI with the computational resources required to train and operate increasingly powerful AI models. OpenAI also introduced a new AI feature called Pulse, which operates independently of the ChatGPT app and is currently available only to its $200-per-month Pro subscribers due to limited server capacity. The company aims to expand such features to a broader user base but is constrained by the availability of AI data centers. The article raises the question of whether the hundreds of billions of dollars being invested in AI infrastructure to support OpenAI’s ambitions are justified by the value of features like Pulse. The piece also alludes to broader

    energy, data-centers, AI-infrastructure, power-consumption, cloud-computing, server-capacity, Silicon-Valley-investments
  • OpenAI is building five new Stargate data centers with Oracle and SoftBank

    OpenAI is expanding its AI infrastructure by building five new Stargate data centers in collaboration with Oracle and SoftBank. Three of these centers are being developed with Oracle and are located in Shackelford County, Texas; Doña Ana County, New Mexico; and an undisclosed Midwest location. The remaining two centers are being developed with SoftBank, situated in Lordstown, Ohio, and Milam County, Texas. This expansion is part of OpenAI’s broader strategy to enhance its capacity for training and deploying more advanced AI models. Additionally, OpenAI recently announced a deal to acquire AI processors from a chipmaker, which will support further development of its AI data center network. The new Stargate data centers underscore OpenAI’s commitment to scaling its infrastructure to meet growing computational demands.

    energy, data-centers, AI-infrastructure, chipmakers, technology-partnerships, cloud-computing, energy-efficiency
  • NVIDIA investing $100B in OpenAI data centers for next-gen AI

    OpenAI and NVIDIA have entered a landmark partnership, with NVIDIA committing up to $100 billion to build massive AI data centers that will deploy at least 10 gigawatts of compute power using millions of NVIDIA GPUs. The first gigawatt of this capacity is expected to go live in the second half of 2026 on NVIDIA’s upcoming Vera Rubin platform. NVIDIA CEO Jensen Huang described the collaboration as a “next leap forward” for both companies, highlighting that the 10 gigawatts equate to roughly 4 to 5 million GPUs—double the number shipped by NVIDIA last year. This massive infrastructure investment underscores the deep ties between the two companies and their joint efforts to power the next era of AI intelligence. OpenAI CEO Sam Altman emphasized that compute infrastructure is central to OpenAI’s mission and will form the foundation of the future economy. He noted the challenge of balancing research, product development, and scaling infrastructure, promising significant developments in the coming months. OpenAI cofounder Greg

    energy, data-centers, AI-infrastructure, NVIDIA, OpenAI, GPUs, compute-power
  • The billion-dollar infrastructure deals powering the AI boom

    The article highlights the massive financial investments and infrastructure developments fueling the current AI boom, emphasizing the enormous computing power required to run advanced AI models. Nvidia CEO Jensen Huang projects that $3 to $4 trillion will be spent on AI infrastructure by 2030, with significant contributions from AI companies themselves. Major tech players such as Microsoft, OpenAI, Meta, Oracle, Google, and Amazon are heavily investing in cloud services, data centers, and specialized hardware to support AI training and deployment. These efforts are straining power grids and pushing the limits of existing data center capacities. A pivotal moment in the AI infrastructure race was Microsoft’s initial $1 billion investment in OpenAI, which secured Microsoft as OpenAI’s exclusive cloud provider and laid the groundwork for a partnership that has since grown to nearly $14 billion. Although OpenAI has recently diversified its cloud partnerships, this model of exclusive or primary cloud provider deals has become common, with Amazon investing $8 billion in Anthropic and Nvidia committing $100 billion to

    energy, AI-infrastructure, cloud-computing, data-centers, power-grids, Nvidia, Microsoft-Azure
  • Big Tech Dreams of Putting Data Centers in Space

    The article discusses the growing energy demands and environmental impacts of terrestrial data centers, particularly those supporting artificial intelligence, which could increase electricity consumption by 165% by 2030 and rely heavily on fossil fuels. In response, prominent tech figures like OpenAI CEO Sam Altman, Jeff Bezos, and Eric Schmidt are exploring the concept of placing data centers in space to leverage continuous solar power and reduce pollution on Earth. Altman envisions ambitious projects such as a Dyson sphere of data centers around the sun, though such megastructures face enormous resource and feasibility challenges. More immediate efforts are underway by startups like Starcloud, Axiom, and Lonestar Data Systems, which have secured funding to develop space-based data center technologies. Scientific advances support the potential viability of orbital data centers. Caltech professor Ali Hajimiri, involved in the Space Solar Power Project, has patented concepts for space-based computational systems and proposed lightweight solar power solutions that could generate electricity more cheaply than Earth-based systems. However, significant

    energy, data-centers, space-technology, solar-power, AI-infrastructure, sustainability, space-based-energy
  • NVIDIA invests $5B in Intel, launches joint AI and PC chip venture

    NVIDIA is investing $5 billion in Intel, becoming one of its largest shareholders and forming a strategic partnership to jointly develop future data center and PC chips. This collaboration aims to combine Intel’s x86 CPU architecture with NVIDIA’s AI and GPU technologies, with Intel building custom CPUs for NVIDIA’s AI infrastructure and manufacturing x86 system-on-chips integrated with NVIDIA RTX GPU chiplets for high-performance personal computers. The deal provides a significant boost to Intel, which has struggled in recent years, as evidenced by a 23% surge in its stock price following the announcement. The partnership leverages the strengths of both companies: Intel’s foundational x86 architecture, manufacturing capabilities, and advanced packaging, alongside NVIDIA’s AI leadership and CUDA architecture. Analysts view NVIDIA’s involvement as a pivotal moment for Intel, repositioning it from an AI laggard to a key player in AI infrastructure. The collaboration also has competitive implications, potentially challenging rivals like AMD and TSMC, which currently manufactures NVIDIA’s top processors. The

    semiconductors, AI-chips, NVIDIA, Intel, data-centers, PC-processors, AI-infrastructure
  • Why the Oracle-OpenAI deal caught Wall Street by surprise

    The recent surprise deal between OpenAI and Oracle caught Wall Street off guard but underscores Oracle’s continuing significance in AI infrastructure despite its legacy status. OpenAI’s willingness to commit substantial funds—reportedly around $60 billion annually for compute and custom AI chip development—signals its aggressive scaling strategy and desire to diversify infrastructure providers to mitigate risk. Industry experts highlight that OpenAI is assembling a comprehensive global AI supercomputing foundation, which could give it a competitive edge. Oracle’s involvement, while unexpected to some given its perceived diminished role compared to cloud giants like Google, Microsoft, and AWS, is explained by its proven capabilities in delivering large-scale, high-performance infrastructure, including supporting TikTok’s U.S. operations. However, key details about the deal remain unclear, particularly regarding how OpenAI will finance and power its massive compute needs. The company is burning through billions annually despite growing revenues from ChatGPT and other products, raising questions about sustainability. Energy sourcing is a critical concern since data centers are projected to

    energy, AI-infrastructure, cloud-computing, supercomputing, data-centers, power-consumption, OpenAI
  • OpenAI to launch AI data center in Norway, its first in Europe

    OpenAI announced plans to launch Stargate Norway, its first AI data center in Europe, in partnership with British AI cloud infrastructure provider Nscale and Norwegian energy firm Aker. The data center will be a 50/50 joint venture between Nscale and Aker, with OpenAI as an off-taker purchasing capacity from the facility. Located near Narvik, Norway, the site will leverage the region’s abundant hydropower, cool climate, and mature industrial base to run entirely on renewable energy. The initial phase will deliver 230 megawatts (MW) of capacity, expandable to 290 MW, and is expected to operate 100,000 Nvidia GPUs by the end of 2026. The facility will incorporate advanced cooling technology and reuse excess heat to support low-carbon enterprises locally. This initiative aligns with Europe’s broader push for AI sovereignty, data sovereignty, and sustainable infrastructure, as the EU recently announced multi-billion euro investments to build AI factories and enhance compute power within the bloc.

    energy, data-center, AI-infrastructure, renewable-power, hydropower, liquid-cooling, Nvidia-GPUs
  • Meta to spend up to $72B on AI infrastructure in 2025 as compute arms race escalates

    Meta announced plans to dramatically increase its investment in AI infrastructure in 2025, with capital expenditures expected to reach between $66 billion and $72 billion—an increase of about $30 billion compared to the previous year. This spending surge will focus on expanding data centers, servers, and other physical infrastructure to support the company’s AI ambitions. Meta expects this aggressive investment trend to continue into 2026, emphasizing that developing leading AI infrastructure will be a core competitive advantage for building superior AI models and products. Key projects include two major AI superclusters, such as the Prometheus cluster in Ohio, which aims to achieve 1 gigawatt of compute power by 2026. The company’s infrastructure expansion has raised concerns locally, with some projects, like the one in Newton County, Georgia, reportedly causing water shortages for residents due to high resource consumption. Additionally, Meta is investing heavily in talent acquisition, particularly for its new Superintelligence Labs unit, which focuses on AI research and development. CEO

    energy, AI-infrastructure, data-centers, compute-power, Meta, superclusters, capital-expenditure
  • AI May Gobble Up Every Available Electron In Its Quest To Sell Us More Stuff - CleanTechnica

    The article discusses the significant federal funding—$90 billion—pledged by the U.S. government, redirected from social programs and renewable energy subsidies, to support major tech companies like Google, Microsoft, Meta, and Amazon in building AI infrastructure. This investment aims to secure American dominance in artificial intelligence but raises concerns about the massive electricity demand such data centers will require. Analysts predict that by 2030, data centers could consume up to 10% or more of all U.S. electricity, potentially driving up energy costs for ordinary Americans by 50% or higher. The article critiques this allocation of resources amid ongoing social needs and questions the sustainability of such energy consumption. Additionally, the article highlights OpenAI’s continued expansion, including a $500 billion investment commitment to build 10 gigawatts of AI infrastructure, further emphasizing the scale of AI’s energy appetite. While some innovations, like the Energy Dome technology from an Italian startup partnering with Google, offer promising ways to store renewable energy for longer periods

    energy, AI-infrastructure, data-centers, electricity-consumption, renewable-energy, federal-funding, power-demand
  • Trump’s AI strategy trades guardrails for growth in race against China

    The Trump administration released its AI Action Plan, marking a significant departure from the Biden administration’s more cautious stance on AI risks. The new strategy prioritizes rapid AI infrastructure development, deregulation, and national security to compete with China, emphasizing growth over guardrails. Key elements include expanding data centers—even on federal lands and during critical energy grid periods—while downplaying efforts to mitigate AI-related harms. The plan also proposes workforce upskilling and local partnerships to create jobs tied to AI infrastructure, positioning these investments as essential to a “new golden age of human flourishing.” Authored by Trump’s technology and AI experts, many from Silicon Valley, the plan reflects input from over 10,000 public comments but remains a broad blueprint rather than a detailed roadmap. It includes efforts to limit state-level AI regulations by threatening to withhold federal funding and empowering the FCC to challenge state rules that affect communications infrastructure. On the federal level, the administration seeks to identify and remove regulations that impede AI innovation. Dereg

    energy, AI-infrastructure, data-centers, deregulation, technology-policy, national-security, innovation
  • Sam Altman-backed Oklo to cool AI data centers with new nuclear tech

    Oklo, a nuclear technology company backed by Sam Altman, has partnered with Vertiv, a leader in digital infrastructure, to develop an integrated power and cooling system for hyperscale and colocation data centers. This system will leverage Oklo’s small modular reactors (SMRs) to generate steam and electricity, combined with Vertiv’s thermal management technology, aiming to optimize both power and cooling efficiently and sustainably. The collaboration seeks to address common data center challenges such as high energy demand, reliance on power grids, and environmental impact by providing a reliable, carbon-free energy source that can be located near data centers for improved performance and scalability. The partnership comes amid the rapid growth of AI and high-performance computing, which significantly increases power consumption in data centers. Oklo’s SMRs are designed for flexibility and quick adaptation to changing energy needs, enabling continuous, stable power supply critical for data center operations. By integrating power generation and cooling solutions from the outset, Oklo and Vertiv aim to enhance energy efficiency

    energy, nuclear-energy, data-centers, cooling-technology, small-modular-reactors, AI-infrastructure, power-efficiency
  • Meta is reportedly using actual tents to build data centers

    Meta is accelerating its efforts to build AI infrastructure by using unconventional methods to construct data centers quickly. According to reports, the company is employing actual tents and ultra-light structures, along with prefabricated power and cooling modules, to expedite the deployment of computing capacity. This approach prioritizes speed over aesthetics or redundancy, reflecting Meta’s urgent need to catch up with competitors like OpenAI, xAI, and Google in the race for superintelligence technology. One notable project is Meta’s Hyperion data center, which a company spokesperson confirmed will be located in Louisiana. The facility is expected to reach a capacity of 2 gigawatts by 2030, underscoring Meta’s commitment to rapidly scaling its AI compute resources. The absence of traditional backup generators, such as diesel units, further highlights the focus on swift, efficient construction rather than conventional data center design norms. Overall, Meta’s strategy signals a shift toward innovative, speed-driven infrastructure development to support its AI ambitions.

    energy, data-centers, Meta, AI-infrastructure, power-modules, cooling-technology, supercomputing
  • Zuckerberg bets big on AI with first gigawatt superclusters plan

    Meta Platforms, led by CEO Mark Zuckerberg, is making a significant investment in artificial intelligence infrastructure by planning to build some of the world’s largest AI superclusters. The company announced that its first supercluster, Prometheus, will launch in 2026, with additional multi-gigawatt clusters like Hyperion—designed to scale up to five gigawatts of compute capacity—also in development. These superclusters aim to handle massive AI model training workloads, helping Meta compete with rivals such as OpenAI and Google in areas like generative AI, computer vision, and robotics. According to industry reports, Meta is on track to be the first AI lab to deploy a supercluster exceeding one gigawatt, marking a major escalation in the AI arms race. Alongside infrastructure expansion, Meta is aggressively investing in AI talent and research. The company recently launched Meta Superintelligence Labs, led by former Scale AI CEO Alexandr Wang and ex-GitHub chief Nat Friedman, consolidating top AI talent.

    energy, AI-superclusters, Meta, high-performance-computing, data-centers, gigawatt-scale-computing, AI-infrastructure
  • OpenAI’s planned data center in Abu Dhabi would be bigger than Monaco

    energy, data-center, AI-infrastructure, power-consumption, Abu-Dhabi, OpenAI, G42