Articles tagged with "AI-infrastructure"
Elon Musk links SpaceX and xAI in a record-setting merger to boost AI
SpaceX has officially acquired xAI, merging two of Elon Musk’s leading ventures to form one of the world’s most valuable private companies. This union combines SpaceX’s expertise in rockets and satellites with xAI’s rapid advancements in artificial intelligence, aligning with growing global demand for computing power. Musk highlighted this merger as a significant new phase in their joint mission, emphasizing the strategic focus on leveraging AI to advance space operations. The deal reflects the substantial valuations of both companies—SpaceX at approximately $800 billion and xAI at around $230 billion—underscoring strong investor confidence in space and AI innovation. Financial pressures in the AI sector, particularly the high costs of powering and cooling large-scale AI models, have driven the integration. By bringing xAI under its umbrella, SpaceX gains greater control over AI development and deployment, while xAI benefits from SpaceX’s infrastructure, capital, and launch capabilities. Musk noted that relocating AI computing efforts to space could address the immense power and cooling demands of terrestrial data centers.
Tags: energy, artificial-intelligence, SpaceX, data-centers, computing-power, satellite-technology, AI-infrastructure

SpaceX seeks approval for solar-powered orbital data centers for AI
SpaceX, led by Elon Musk, has filed a request with the FCC to launch up to one million solar-powered satellites designed to serve as orbital data centers for artificial intelligence (AI). These satellites would leverage constant solar energy and natural vacuum cooling in low-Earth orbit (500-2,000 km altitude) to overcome the significant electricity and water consumption challenges faced by terrestrial AI infrastructure. The move aims to reduce environmental impact and operational costs while enabling AI growth beyond the limitations of Earth’s power grids. This filing coincides with SpaceX’s ongoing talks to merge with Musk’s AI startup, xAI, potentially positioning SpaceX ahead of competitors like Google, Meta, and OpenAI. The project’s feasibility depends heavily on SpaceX’s Starship rocket, which promises dramatically lower launch costs and the capacity to deliver millions of tons of payload to orbit annually. By securing FCC approval for a large satellite fleet, SpaceX aims to meet the anticipated demand from a billion AI users and establish space as the most…
Tags: energy, solar-power, orbital-data-centers, SpaceX, AI-infrastructure, satellites, Starship-rockets

India offers zero taxes through 2047 to lure global AI workloads
India has introduced a significant tax incentive to attract global AI workloads by offering foreign cloud providers zero taxes through 2047 on revenues from services sold outside India, provided these services are run from Indian data centers. Announced by Finance Minister Nirmala Sitharaman in the annual budget, this tax holiday aims to position India as a competitive hub for AI computing investment amid a global surge in demand for cloud infrastructure. The budget also includes a 15% cost-plus safe harbour tax provision for Indian data-center operators serving related foreign entities. However, sales to Indian customers will be taxed domestically through local resellers. This move aligns with major investments by global tech giants such as Google, Microsoft, and Amazon, who have collectively pledged tens of billions of dollars to expand AI and cloud infrastructure in India. Domestic players like Digital Connexion and Adani Group are also investing heavily in large-scale AI-focused data center projects, signaling strong interest from both international and local investors. Despite these positive developments, challenges remain…
Tags: energy, data-centers, AI-infrastructure, cloud-computing, India, investment, power-shortages

World's first fleet drilling robot cuts data center build times
DEWALT, a U.S.-based power equipment maker owned by Stanley Black & Decker, has partnered with August Robotics to introduce the world’s first fleet-capable robot designed for downward concrete drilling. This robotic system targets a critical bottleneck in data center construction by automating the labor-intensive task of drilling thousands of precision holes needed to anchor server racks and support overhead mechanical, electrical, and plumbing systems. The robot operates autonomously and can work in fleets, allowing multiple units to drill simultaneously across large sites. According to DEWALT, the system drills up to 10 times faster than traditional methods, potentially reducing overall construction timelines by as much as 80 weeks while improving jobsite safety and cutting costs per hole. The robotic drilling system is already being piloted with one of the world’s largest hyperscalers and has completed work across 10 data center construction phases, achieving 99.97 percent accuracy in hole location and depth over more than 90,000 drilled holes.
Tags: robotics, construction-automation, data-center, drilling-robot, autonomous-robots, AI-infrastructure, fleet-robotics

OpenAI signs deal, worth $10 billion, for compute from Cerebras
OpenAI has entered a multi-year agreement with AI chipmaker Cerebras, securing 750 megawatts of compute power from 2026 through 2028 in a deal valued at over $10 billion. This partnership aims to accelerate AI processing speeds, enabling faster response times for OpenAI’s customers by leveraging Cerebras’s specialized AI chips, which the company claims outperform traditional GPU-based systems like those from Nvidia. The enhanced compute capacity is expected to support real-time AI inference, which Cerebras CEO Andrew Feldman likens to the transformative impact broadband had on the internet. Cerebras, which gained prominence following the AI surge sparked by ChatGPT’s 2022 launch, has been expanding despite postponing its IPO multiple times. The company is reportedly in talks to raise an additional $1 billion at a $22 billion valuation. OpenAI’s strategy involves diversifying its compute infrastructure to optimize performance across different workloads, with Cerebras providing a dedicated low-latency inference solution.
Tags: energy, AI-chips, compute-power, data-centers, high-performance-computing, semiconductor-technology, AI-infrastructure

Microsoft pledges water-positive AI data centers, full power payments
Microsoft has launched its Community First AI Infrastructure initiative to address environmental and economic concerns linked to the rapid expansion of its U.S. AI data centers. The company commits to preventing increases in residential electricity prices and avoiding strain on local water supplies caused by its facilities. Key pledges include paying electricity rates that fully cover the costs imposed by data centers, funding necessary grid upgrades, and collaborating early with utilities to plan power needs. Microsoft has already supported nearly eight gigawatts of new electricity generation in the Midwest, exceeding its current regional consumption, and aims to push for rate structures that prevent residential customers from subsidizing data center growth. On water usage, Microsoft plans to reduce data center water use intensity by 40% by 2030, relying on closed-loop cooling systems and minimizing potable water use. The company will fund water infrastructure improvements where local systems face capacity limits and has committed over $25 million for water and sewer upgrades near a Virginia data center. Additionally, Microsoft pledges to replenish more water than it consumes.
Tags: energy, data-centers, AI-infrastructure, water-conservation, electricity-grid, sustainable-technology, Microsoft

Microsoft announces glut of new data centers but says it won’t let your electricity bill go up
Microsoft has announced a significant expansion of its AI data center infrastructure, reaffirming its commitment to build new facilities despite growing local opposition and activism against data center projects across the U.S. In response to community concerns, the company pledged a “community-first” approach, promising to be a “good neighbor” by ensuring that its electricity consumption does not increase local residents’ power bills. Microsoft plans to collaborate closely with utility companies and regulatory bodies to pay rates that fully cover its share of the local grid’s costs, thereby preventing the financial burden from being passed on to residential customers. Additionally, Microsoft committed to creating jobs in the communities hosting its data centers and minimizing water usage, addressing two major points of contention around data center development. These promises come amid heightened political and public scrutiny, with numerous activist groups mobilizing against data center expansions and some projects already canceled or delayed due to community backlash. The company’s assurances also align with recent statements from political leaders emphasizing the importance of protecting consumers from increased utility costs linked to data centers.
Tags: energy, data-centers, electricity, infrastructure, sustainability, Microsoft, AI-infrastructure

Mark Zuckerberg says Meta is launching its own AI infrastructure initiative
Meta is launching a major AI infrastructure initiative aimed at significantly expanding its capacity to support advanced AI models and products. CEO Mark Zuckerberg announced plans to build tens of gigawatts of power capacity this decade, scaling to hundreds of gigawatts over time, emphasizing that this infrastructure will be a strategic advantage for the company. This expansion reflects the growing energy demands of AI technologies, which could lead to a substantial increase in electricity consumption in the U.S. over the next decade. To lead this effort, Zuckerberg named three key executives: Santosh Janardhan, head of global infrastructure, who will oversee technical architecture, software, silicon development, and data center operations; Daniel Gross, who will manage long-term capacity strategy, supplier partnerships, and business planning; and Dina Powell McCormick, responsible for government relations and financing. This initiative places Meta in direct competition with other tech giants like Microsoft and Alphabet, who are also investing heavily in AI-ready cloud infrastructure.
Tags: energy, AI-infrastructure, data-centers, power-consumption, cloud-computing, Meta, technology-investment

Meta makes nuclear reactor history with 6.6 GW energy deal to power AI
Meta has made a historic move by securing up to 6.6 gigawatts (GW) of nuclear energy through agreements with Oklo, TerraPower, and Vistra, positioning itself as one of the largest corporate purchasers of nuclear power in U.S. history. This energy will provide the reliable, carbon-free electricity needed to power Meta’s next-generation AI infrastructure, including its Prometheus supercluster in Ohio. The initiative reflects Meta’s strategic shift toward advanced nuclear technologies to meet the substantial energy demands of AI development, aiming to support America’s leadership in AI while promoting clean energy. The partnerships cover three key areas: TerraPower, founded by Bill Gates, will develop Natrium reactors generating up to 690 MW by 2032, with rights to additional units totaling 2.8 GW plus 1.2 GW of energy storage; Oklo will advance an advanced nuclear campus in Ohio with up to 1.2 GW of power from Aurora Powerhouse fast reactors by 2030;…
Tags: energy, nuclear-energy, Meta, AI-infrastructure, TerraPower, Oklo, clean-energy

Beating the bottleneck: how Point2 plans to unleash AI performance
The article discusses Point2 Technology’s innovative approach to overcoming a critical bottleneck in AI infrastructure: the limitations of data movement within computing systems. As AI workloads grow rapidly, GPUs have advanced significantly faster than the physical interconnects (cables and connections) that link them, causing bandwidth, power, and latency challenges in data centers. Point2’s solution, led by CEO Dr. Sean Park and explained by VP David Kuo, is a novel interconnect technology called E-Tube, which transmits radio frequency (RF) signals through a plastic waveguide rather than relying on traditional copper cables or optical fibers. This approach avoids copper’s physical limitations and the high power, cost, and reliability issues associated with optics. Point2’s RF-over-plastic technology offers substantial advantages for the dominant data center use case of short-range connections (10–20 meters). Unlike optics, which is designed for long distances but comes with significant penalties, E-Tube behaves like copper in terms of economics and ease of…
Tags: materials, energy, AI-infrastructure, data-transmission, radio-frequency, plastic-waveguide, interconnect-technology

Panasonic’s AI Strategy Enters the Implementation Phase: Real-World Impact for Better Future Showcased at CES 2026 - CleanTechnica
At CES 2026, Panasonic Group showcased the real-world implementation of its AI strategy, initially announced the previous year, under the theme “The Future We Make.” The exhibition highlighted Panasonic’s advancements in AI infrastructure, particularly focusing on data centers, AI-based B2B solutions, and environmentally focused Green Transformation (GX) technologies. These innovations address the growing computational demands and operational challenges of AI data centers, including stable power supply, heat management, uninterrupted operation, and cybersecurity. Panasonic demonstrated several key technologies to support data center evolution. These include high-performance liquid cooling pumps and compressors designed to efficiently manage heat generated by high-density AI servers, improving lifespan and reducing environmental impact through compatibility with next-generation refrigerants. Additionally, Panasonic Energy offers energy storage systems integrated into server racks to stabilize power supply, provide backup during outages, and optimize energy use with peak shaving functions. The company also developed highly reliable components like conductive polymer aluminum electrolytic capacitors to enhance power circuit stability and performance under demanding conditions, supporting…
Tags: energy, AI-infrastructure, data-centers, cooling-technology, power-supply, environmental-solutions, Panasonic

New open-source map charts the scale of US AI datacenters buildout
A non-profit research institute, Epoch AI, is using open-source intelligence—including satellite imagery, construction permits, and regulatory filings—to map the rapid expansion of AI datacenters across the United States. Their interactive map provides detailed estimates on cost, ownership, and power consumption of these facilities, offering rare transparency into an industry growing faster than public oversight. For example, Epoch AI highlights Meta’s “Prometheus” datacenter in New Albany, Ohio, estimating it has cost $18 billion and consumes 691 megawatts of power, reflecting Meta’s strategic pivot toward AI infrastructure. Epoch AI’s methodology centers on analyzing cooling infrastructure visible in satellite images, as modern AI datacenters generate extreme heat requiring extensive external cooling units. By counting and measuring fans and cooling systems, they estimate energy use, which in turn informs compute capacity and construction cost estimates. However, these estimates carry uncertainty due to variable fan configurations and speeds. The map currently covers about 15% of global AI compute capacity as of November…
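The fan-counting methodology described above can be sketched in a few lines. The Python below is a hedged illustration only: every constant (heat rejected per fan, PUE, per-accelerator wattage, build cost per megawatt) is a placeholder assumption for the sketch, not Epoch AI's actual calibration.

```python
# Illustrative sketch of a fan-counting estimation pipeline: visible cooling
# capacity -> power draw -> accelerator count -> construction cost.
# All constants below are assumed values, not Epoch AI's published figures.

def estimate_datacenter(fan_count: int,
                        kw_per_fan: float = 150.0,       # assumed heat rejected per cooling fan (kW)
                        pue: float = 1.2,                # assumed power usage effectiveness
                        watts_per_gpu: float = 1000.0,   # assumed draw per accelerator (W)
                        cost_per_mw_usd: float = 30e6):  # assumed build cost per MW of IT load
    """Roughly estimate power, accelerator count, and cost from visible cooling fans."""
    heat_rejected_kw = fan_count * kw_per_fan       # total heat the fans can dump
    it_load_mw = heat_rejected_kw / pue / 1000.0    # IT load behind that heat
    total_power_mw = it_load_mw * pue               # facility draw incl. cooling overhead
    accelerators = round(it_load_mw * 1e6 / watts_per_gpu)
    cost_billion_usd = it_load_mw * cost_per_mw_usd / 1e9
    return {"total_power_mw": round(total_power_mw, 1),
            "accelerators": accelerators,
            "cost_billion_usd": round(cost_billion_usd, 2)}

print(estimate_datacenter(fan_count=5500))
```

With roughly 5,500 visible fans and these assumed constants, the sketch lands in the same ballpark as the Prometheus estimates quoted above (hundreds of megawatts, tens of billions of dollars), which is the spirit of the method: coarse, but built from auditable inputs.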
Tags: energy, datacenters, AI-infrastructure, power-consumption, cooling-systems, satellite-imagery, energy-efficiency

Credo Releases 2025 Environmental, Social, and Governance (ESG) Report - CleanTechnica
Credo Technology Group Holding Ltd (NASDAQ: CRDO), a leader in secure, high-speed connectivity solutions, has published its 2025 Environmental, Social, and Governance (ESG) Report, detailing its progress on key ESG priorities. The report emphasizes Credo’s commitment to responsible growth through strong governance, accountability, and innovation focused on energy-efficient product development. In 2025, Credo advanced connectivity technologies that reduce waste and power consumption, particularly supporting AI data centers and hyperscale environments. The company also enhanced its Code of Business Conduct and Ethics, expanded employee health, safety, and professional development programs, and grew community partnerships via its Credo Cares initiative. Credo’s product portfolio reflects its leadership in energy-efficient interconnect solutions for data center infrastructure, addressing increasing operational demands while minimizing environmental impact. The company’s innovations include Serializer/Deserializer (SerDes) and Digital Signal Processor (DSP) technologies that enable faster, more reliable, and scalable connectivity for optical and electrical Ethernet applications ranging…
Tags: energy, connectivity-solutions, data-centers, energy-efficiency, AI-infrastructure, high-speed-connectivity, sustainable-technology

The year data centers went from backend to center stage
The article highlights the dramatic rise in public awareness and activism surrounding data centers in the United States as of 2025. Once largely invisible and confined to the tech industry, data centers have become a focal point of protests and political debate due to their rapid expansion driven by the booming AI and cloud computing sectors. Over the past year, 142 activist groups across 24 states have mobilized against new data center developments, citing concerns about environmental impact, energy consumption, and strain on local power grids. This surge in activism reflects the industry's exponential growth, with construction spending on data centers increasing by 331% since 2021, fueled by major tech companies like Google, Meta, Microsoft, and Amazon, as well as government initiatives promoting AI infrastructure. The backlash is evident nationwide, with communities in Michigan, Wisconsin, and Southern California actively opposing proposed data centers, often on environmental and quality-of-life grounds. Activists like Danny Candejas of MediaJustice report growing grassroots organizing efforts, suggesting that resistance to…
Tags: energy, data-centers, cloud-computing, AI-infrastructure, power-grid, technology-activism, tech-industry

Alphabet to buy Intersect Power to bypass energy grid bottlenecks
Alphabet, Google's parent company, has agreed to acquire Intersect Power, a developer of data centers and clean energy projects, including taking on the company’s debt. This acquisition aims to help Alphabet expand its power generation capacity to support new data centers without depending on local utilities, which are currently struggling to meet the growing energy demands driven by AI companies. Alphabet had previously held a minority stake in Intersect Power following a strategic funding round led by Google and TPG Rise Climate, targeting $20 billion in total investment by 2030. The deal covers Intersect Power’s future development projects but excludes its existing operations, which will be acquired by other investors and managed separately. Intersect’s upcoming data parks, located near renewable energy sources like wind, solar, and battery storage, are expected to begin operations late next year and be fully completed by 2027. Google will be the primary user of these facilities, though the campuses are designed as industrial parks that can also host other companies’ AI chip operations.
Tags: energy, clean-energy, data-centers, power-generation, renewable-energy, battery-storage, AI-infrastructure

Full Page Open Letter Calls on Amazon, Google, Meta, & Microsoft to Stop Fueling Climate Change with Data Center Demands - CleanTechnica
A full-page open letter published in the Indianapolis Star urges the CEOs of Amazon, Google, Meta, and Microsoft to power their expanding data centers with clean energy rather than fossil fuels. The letter highlights that these tech giants, as major electricity customers, should pressure utilities to commit to no new natural gas plants and to retire coal plants promptly. This call comes amid a surge of AI data center proposals in Indiana, where utilities have responded by planning new gas plants or delaying coal plant closures, actions that could increase energy costs for local residents and businesses. The letter is supported by various environmental and community organizations, including the Sierra Club, Hoosier Environmental Council, and Amazon Employees for Climate Justice. Representatives from these groups emphasize that continued reliance on fossil fuels for powering data centers undermines the companies’ own climate commitments and unfairly burdens Indiana communities with higher energy bills and pollution. They stress the urgent need for Big Tech to invest in renewable energy infrastructure to create a more efficient, resilient, and affordable electric grid…
Tags: energy, data-centers, climate-change, renewable-energy, decarbonization, AI-infrastructure, clean-energy

Google’s answer to the AI arms race — promote the guy behind its data center tech
Google has appointed Amin Vahdat as its chief technologist for AI infrastructure, a newly created role reporting directly to CEO Sundar Pichai. This move underscores the critical importance of AI infrastructure as Google plans to significantly increase its capital expenditures by the end of 2025. Vahdat, a computer scientist with a PhD from UC Berkeley, has been instrumental in building Google’s AI backbone over the past 15 years, focusing on efficiency in large-scale computing. Before joining Google in 2010, he held academic positions at Duke University and UC San Diego. Vahdat’s contributions include leading the development of Google’s seventh-generation TPU (Ironwood), which delivers 42.5 exaflops of compute power—far surpassing the world’s top supercomputer at the time. He has also overseen the creation of the Jupiter network, a high-speed internal network with bandwidth capable of supporting simultaneous video calls for the entire global population, and has played a key role in Google’s…
Tags: energy, data-centers, AI-infrastructure, TPU-chips, cloud-computing, network-technology, server-management

Environmental groups call for halt to new data center construction
Environmental groups, including Food and Water Watch, Friends of the Earth, and Greenpeace, are urging Congress to impose a national moratorium on the approval and construction of new data centers. Their concerns center on the rapidly increasing electricity and water consumption driven by the expansion of data centers supporting AI and cryptocurrency activities. They warn that this growth is largely unregulated and threatens economic, environmental, climate, and water security across the United States. Electricity prices have already seen significant increases this year, with the most substantial impacts expected in states like Virginia, Pennsylvania, Ohio, Illinois, and New Jersey, where data center capacity is projected to grow the most. Energy demand from data centers is anticipated to nearly triple from 40 gigawatts today to 106 gigawatts by 2035, with much of this expansion occurring in rural areas. The rapid growth of data centers has sparked public protests, such as those at DTE’s headquarters in Detroit, where the utility seeks approval to supply electricity to a 1.…
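The demand figures quoted above can be sanity-checked with a two-line calculation. This assumes the 106 GW projection sits roughly ten years out from today's 40 GW; the horizon length is an assumption for the arithmetic, not a figure from the article.

```python
# Implied compound annual growth rate (CAGR) of data center energy demand,
# using the article's 40 GW (today) and 106 GW (2035) figures.
current_gw, projected_gw, years = 40.0, 106.0, 10  # 10-year horizon assumed
cagr = (projected_gw / current_gw) ** (1 / years) - 1
print(f"Implied compound annual growth: {cagr:.1%}")  # roughly 10% per year
```

At roughly 10% a year, demand does indeed nearly triple over the decade, consistent with the article's framing.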
Tags: energy, data-centers, electricity-consumption, environmental-impact, AI-infrastructure, renewable-energy, energy-demand

How small modular reactors work and why they matter in AI energy surge
The article discusses the rapidly increasing electricity demand from data centers driven by artificial intelligence (AI) infrastructure, which is projected to grow about 15% annually through 2030, far outpacing other sectors. This surge has intensified the search for stable, carbon-free power sources in the U.S., with nuclear energy gaining renewed attention. Among nuclear options, small modular reactors (SMRs) are highlighted as promising due to their smaller size, factory-based manufacturing, and ability to be sited closer to energy consumers, reducing transmission losses. Over 80 SMR designs are in development globally, with some near-term deployable models expected to begin construction before 2030 and commercial operation by the mid-2030s. However, long-term radioactive waste management plans remain unresolved. SMRs occupy a middle ground between large conventional nuclear reactors and microreactors, typically producing up to 300 megawatts of electricity from reactor cores about 3 meters wide and 6 meters tall, on sites around…
Tags: energy, small-modular-reactors, nuclear-energy, carbon-free-power, data-centers, AI-infrastructure, electricity-consumption

AWS is spending $50B to build AI infrastructure for the US government
Amazon Web Services (AWS) has announced a $50 billion investment to build specialized AI high-performance computing infrastructure tailored for U.S. government agencies. This initiative aims to significantly enhance federal access to AWS AI services, including Amazon SageMaker, model customization tools, Amazon Bedrock, model deployment, and Anthropic’s Claude chatbot. The project will add 1.3 gigawatts of computing power, with construction of new data centers expected to begin in 2026. AWS CEO Matt Garman emphasized that this investment will transform how federal agencies utilize supercomputing, accelerating critical missions such as cybersecurity and drug discovery, while removing technological barriers that have previously limited government AI adoption. AWS has a long history of working with the U.S. government, having started building cloud infrastructure for federal use in 2011. It launched the first air-gapped commercial cloud for classified workloads in 2014 and introduced the AWS Secret Region in 2017, which supports all security classification levels.
Tags: energy, AI-infrastructure, cloud-computing, high-performance-computing, government-technology, data-centers, supercomputing

India’s TCS gets TPG to fund half of $2B AI data center project
Tata Consultancy Services (TCS) has partnered with private equity firm TPG to secure $1 billion funding for the first half of a $2 billion multi-year project called “HyperVault,” aimed at building a network of gigawatt-scale, liquid-cooled, high-density AI data centers across India. This initiative addresses the country’s significant gap between its large data generation—nearly 20% of global data—and its limited data center capacity, which currently accounts for only about 3% of the global total. The new data centers will support advanced AI workloads and are designed to meet the growing demand for AI compute power amid rapid adoption of AI technologies in India. However, the project faces challenges related to resource constraints, including water scarcity, power supply, and land availability, especially in urban hubs like Mumbai, Bengaluru, and Chennai where data center concentration is high. Liquid cooling, while necessary for managing the heat from power-intensive AI GPUs, raises concerns about water usage, with estimates suggesting a…
Tags: energy, data-centers, AI-infrastructure, liquid-cooling, power-consumption, water-scarcity, cloud-computing

Anthropic announces $50 billion data center plan
Anthropic announced a significant $50 billion partnership with U.K.-based neocloud provider Fluidstack to build new data centers across Texas and New York, scheduled to come online throughout 2026. This investment aims to support the intense compute demands of Anthropic’s Claude AI models and advance AI capabilities that can accelerate scientific discovery and solve complex problems. CEO Dario Amodei emphasized the need for robust infrastructure to sustain frontier AI development. While Anthropic’s $50 billion commitment is substantial, it is smaller compared to competitors’ infrastructure investments, such as Meta’s $600 billion data center plan over three years and the $500 billion Stargate partnership involving SoftBank, OpenAI, and Oracle. The surge in AI infrastructure spending has raised concerns about a potential AI bubble. The deal also marks a major milestone for Fluidstack, a relatively young neocloud company founded in 2017, which has quickly become a preferred vendor in the AI sector with partnerships including Meta, Black Forest Labs, and…
Tags: energy, data-centers, cloud-computing, AI-infrastructure, compute-power, neocloud, technology-investment

A better way of thinking about the AI bubble
The article discusses the concept of an AI bubble, emphasizing that tech bubbles need not be catastrophic but rather reflect overinvestment where supply outpaces demand. A key challenge in assessing the AI bubble lies in the mismatch between the rapid development of AI software and the slow, complex process of building and powering data centers. Since data centers take years to complete and depend on evolving technologies in energy, semiconductors, and power transmission, predicting future supply needs is difficult. Large-scale investments are already underway, with companies like Oracle, SoftBank, and Meta committing hundreds of billions of dollars to AI infrastructure, highlighting the scale of current bets on AI’s growth. Despite this massive investment, demand for AI services remains uncertain. A recent McKinsey survey shows that while most companies use AI in some capacity, few have integrated it extensively or seen significant business impact, indicating many are still cautious about scaling AI adoption. Infrastructure challenges also pose risks: Microsoft CEO Satya Nadella noted that data center space, rather than chip supply, has become the limiting factor.
Tags: energy, data-centers, AI-infrastructure, semiconductor-design, power-transmission, cloud-services, technology-investment

OpenAI asked Trump administration to expand Chips Act tax credit to cover data centers
OpenAI has formally requested that the U.S. government expand the scope of the Advanced Manufacturing Investment Credit (AMIC), part of the Biden administration’s Chips Act, to include not only semiconductor fabrication but also electrical grid components, AI servers, and AI data centers. In a letter from OpenAI’s chief global affairs officer Chris Lehane to the White House’s science and technology policy director Michael Kratsios, the company argued that broadening AMIC coverage would reduce capital costs, lower investment risks, and attract private funding to accelerate AI infrastructure development in the U.S. Additionally, OpenAI urged the government to speed up permitting and environmental reviews for such projects and to establish a strategic reserve of critical raw materials like copper, aluminum, and rare earth minerals necessary for AI infrastructure. The letter, initially sent in late October but gaining wider attention following public comments by OpenAI executives, clarifies that while the company discussed loan guarantees in the context of semiconductor fabs, it does not seek government backstops or…
Tags: energy, data-centers, semiconductor-fabrication, AI-infrastructure, raw-materials, Chips-Act, government-policy

Google plans orbital AI data centers powered directly by sunlight
Google has announced Project Suncatcher, an ambitious research initiative aiming to develop orbital AI data centers powered directly by solar energy. The project envisions constellations of satellites equipped with Google’s Tensor Processing Units (TPUs) operating in sun-synchronous low-Earth orbits to harness nearly continuous sunlight, enabling highly scalable AI computing beyond Earth’s energy and resource constraints. These satellites would be interconnected via high-bandwidth free-space optical links, potentially reaching multi-terabit per second data transfer rates, to form a tightly clustered “AI constellation” capable of handling large-scale machine learning workloads. Key technical challenges addressed include maintaining high data transmission rates between satellites flying just hundreds of meters apart using dense wavelength-division multiplexing (DWDM) and spatial multiplexing, as well as ensuring radiation resilience of the compute hardware. Google’s TPU v6e chips have demonstrated strong resistance to radiation in proton beam tests. The project is still in early research stages, with plans to launch two prototype satellites by early 202…
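The multi-terabit figure is straightforward to reproduce as back-of-the-envelope arithmetic: DWDM multiplies a per-wavelength rate by the channel count, and spatial multiplexing multiplies again by the number of parallel lanes. The channel count, lane count, and per-wavelength rate below are illustrative assumptions, not parameters published for Project Suncatcher.

```python
# Aggregate inter-satellite link rate from wavelength and spatial multiplexing.
# All three parameters are assumed values for illustration only.
wavelengths = 64          # assumed DWDM channels per optical link
spatial_lanes = 4         # assumed parallel spatial-multiplexing lanes
gbps_per_channel = 100.0  # assumed per-wavelength data rate (Gb/s)

aggregate_tbps = wavelengths * spatial_lanes * gbps_per_channel / 1000
print(f"Aggregate link rate: {aggregate_tbps} Tb/s")  # 25.6 Tb/s under these assumptions
```

Even with these modest assumed parameters, the aggregate rate lands well into the multi-terabit range the article describes; real designs trade off channel count, lane count, and per-channel rate against optical power and pointing accuracy.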
Tags: energy, solar-power, satellite-technology, AI-infrastructure, space-based-computing, machine-learning, optical-communication

LG founder’s grandson, production firm partner up to bring AI to filmmaking
The article discusses a new joint venture called Utopai East, formed between investment firm Stock Farm Road (SFR) and AI film and television production company Utopai Studios, aimed at integrating AI technologies into filmmaking. SFR, co-founded by Brian Koo (grandson of LG Group’s founder) and Amin Badr-El-Din, provides capital, industry expertise, and contacts, while Utopai contributes AI technology, workflow, and infrastructure. The venture focuses on building the necessary data centers and energy infrastructure to support AI-driven content production, with plans to co-produce films and TV shows and expand Korean intellectual property to international audiences. Production will start using existing infrastructure, with the first AI-assisted content expected next year. Both partners emphasize that AI is intended to enhance creativity and efficiency rather than replace human roles such as writing, directing, and acting. They highlight that all AI models and datasets are fully licensed to respect creators’ rights. The goal is to use AI to lower costs,
Tags: energy, AI-infrastructure, data-centers, filmmaking-technology, AI-in-media, production-efficiency, Korean-IP

Microsoft inks $9.7B deal with Australia’s IREN for AI cloud capacity
Microsoft has secured a significant $9.7 billion, five-year contract with Australia-based IREN to expand its AI cloud computing capacity. This deal grants Microsoft access to advanced compute infrastructure equipped with Nvidia GB300 GPUs, which will be deployed in phases through 2026 at IREN’s facility in Childress, Texas, designed to support up to 750 megawatts of capacity. Separately, IREN is investing about $5.8 billion in GPUs and equipment from Dell to support this infrastructure expansion. The agreement follows Microsoft’s recent launch of AI models optimized for reasoning, agentic AI systems, and multi-modal generative AI, reflecting the company's efforts to meet growing demand for AI services. Microsoft has also previously acquired approximately 200,000 Nvidia GB300 GPUs for data centers in Europe and the U.S. IREN, originally a bitcoin-mining firm, has pivoted successfully to AI workloads, leveraging its extensive GPU resources. CEO Daniel Roberts anticipates that the Microsoft contract will utilize only
Tags: energy, cloud-computing, AI-infrastructure, GPUs, data-centers, Microsoft, Nvidia

Meta has an AI product problem
Meta is investing heavily in AI, spending billions on talent and infrastructure, including building two massive data centers and planning up to $600 billion in U.S. infrastructure spending over three years. This aggressive investment led to a $7 billion year-over-year increase in operating expenses and nearly $20 billion in capital expenses in the latest quarter. Despite these expenditures, Meta has yet to generate significant revenue from its AI efforts, causing investor concern and a sharp decline in its stock price—dropping 12% and wiping out over $200 billion in market value shortly after earnings were reported. During the earnings call, CEO Mark Zuckerberg emphasized that the spending was just beginning and framed it as necessary to develop frontier AI models with novel capabilities that could unlock massive future opportunities. However, he was unable to provide concrete revenue forecasts or product timelines, leaving analysts and investors uncertain about the near-term payoff. Unlike competitors such as Google, Nvidia, and OpenAI—who also invest heavily in AI but have fast-growing, revenue-gener
Tags: energy, data-centers, AI-infrastructure, capital-expenditure, compute-resources, Meta, technology-investment

Nvidia expands AI ties with Hyundai, Samsung, SK, Naver
Nvidia CEO Jensen Huang is visiting South Korea to announce expanded collaborations with major Korean technology companies—Hyundai Motor, Samsung, SK Group, and Naver—alongside the South Korean government. This partnership aims to significantly boost South Korea’s AI infrastructure and physical AI capabilities, with the country securing over 260,000 of Nvidia’s latest GPUs. Approximately 50,000 GPUs will support public initiatives, including a national AI data center, while the remaining GPUs will be allocated to leading companies to drive AI innovation in manufacturing and industry-specific AI model development. This move follows recent U.S. technology agreements with Japan and South Korea to enhance cooperation on emerging technologies such as AI, semiconductors, quantum computing, biotech, and 6G. Key collaborations include Samsung and Nvidia’s joint effort to build an AI Megafactory that integrates AI across semiconductor, mobile device, and robotics manufacturing using over 50,000 Nvidia GPUs and the Omniverse platform. They are also co-developing AI
Tags: AI, robotics, smart-factories, autonomous-mobility, semiconductor-manufacturing, AI-infrastructure, GPU-technology

Space data centers could satiate 165% surge in AI power hunger
Researchers from NTU Singapore have proposed placing data centers in low Earth orbit (LEO) as a sustainable solution to meet the rapidly growing energy demands of AI computing. These space-based data centers would leverage the natural radiative cooling of the cold space environment and harness virtually unlimited solar energy, enabling net-zero carbon emissions. This approach addresses the challenges faced on Earth, such as high real estate costs in dense urban areas like Singapore and the significant energy and water consumption required for cooling terrestrial data centers. The team outlined two deployment strategies: orbital edge data centers, which process raw data on satellites equipped with AI accelerators to reduce transmission loads, and orbital cloud data centers, consisting of satellite constellations with servers, broadband links, solar panels, and radiative coolers to perform advanced computing tasks from space. Importantly, these concepts rely on existing launch and satellite technologies, making them feasible today. Given projections that AI-driven energy demand could surge by 165% by 2030, this innovative use of
Tags: energy, solar-energy, data-centers, space-technology, sustainable-computing, AI-infrastructure, radiative-cooling

Benjamin Lee on why AI needs better infrastructure, not just bigger models
Benjamin Lee, a professor of Electrical and Systems Engineering at the University of Pennsylvania, emphasizes that the rapid growth of AI requires smarter infrastructure and energy-aware design rather than just bigger models. Lee’s expertise spans hardware design, infrastructure strategy, and energy policy, and he highlights the unsustainable pace at which data centers are expanding—often outstripping the availability of clean energy. He stresses that energy consumption must be treated as a core design metric in AI development, not an afterthought, to ensure long-term sustainability. Lee traces his career motivation back to an undergraduate course on computer organization that revealed the complexities of hardware-software interaction, leading him to focus on energy efficiency in computing. He points out a common misconception among engineers and policymakers: the belief that current AI applications like chatbots justify massive infrastructure investments. Instead, he argues that tech companies are building energy and data center infrastructure with future, yet-to-be-imagined AI capabilities in mind. While there was initial optimism about powering data centers with renewables
Tags: energy, AI-infrastructure, data-centers, energy-efficiency, sustainable-computing, processor-architecture, renewable-energy

NVIDIA partners with Uber to deploy AVs starting in 2027 - The Robot Report
NVIDIA has announced a strategic partnership with Uber to deploy a large-scale level 4 autonomous vehicle (AV) mobility network starting in 2027. This network will leverage Uber’s robotaxi and autonomous delivery fleets, powered by NVIDIA’s DRIVE AGX Hyperion 10 platform and DRIVE AV software, which are designed to enable software-defined, level 4-ready vehicles. NVIDIA aims to support Uber in scaling its autonomous fleet to 100,000 vehicles globally over time, with development involving NVIDIA, Uber, and other ecosystem partners. Additionally, the companies are collaborating on a data factory accelerated by NVIDIA’s Cosmos world foundation model to curate and process data critical for AV development. The NVIDIA DRIVE AGX Hyperion 10 platform serves as a modular, customizable reference architecture combining a production computer and sensor suite that automakers can use to build level 4-capable vehicles. It features the NVIDIA DRIVE AGX Thor system-on-a-chip based on the Blackwell architecture, delivering over 2,000 FP4
Tags: robot, autonomous-vehicles, NVIDIA-DRIVE-AGX, level-4-autonomy, robotaxi, AI-infrastructure, mobility-network

Nscale inks massive AI infrastructure deal with Microsoft
Nscale, an AI cloud provider founded in 2024, has secured a major deal to deploy approximately 200,000 Nvidia GB300 GPUs across data centers in Europe and the U.S. This deployment will occur through Nscale’s own operations and a joint venture with investor Aker. Key locations include a Texas data center leased by Ionic Digital, which will receive 104,000 GPUs over 12 to 18 months, with plans to expand capacity to 1.2 gigawatts. Additional deployments include 12,600 GPUs at the Start Campus in Sines, Portugal (starting Q1 2026), 23,000 GPUs at Nscale’s Loughton, England campus (starting 2027), and 52,000 GPUs at Microsoft’s AI campus in Narvik, Norway. This deal builds on prior collaborations with Microsoft and Aker involving data centers in Norway and the UK. Josh Payne, Nscale’s founder and CEO, emphasized that this agreement positions Nscale as
Tags: energy, AI-infrastructure, data-centers, GPUs, sustainability, cloud-computing, technology-investment

Meta partners up with Arm to scale AI efforts
Meta has partnered with semiconductor design company Arm to enhance its AI systems amid a significant infrastructure expansion. The collaboration will see Meta’s ranking and recommendation systems transition to Arm’s technology, leveraging Arm’s strengths in low-power, efficient AI deployments. Meta’s head of infrastructure, Santosh Janardhan, emphasized that this partnership aims to scale AI innovation to over 3 billion users. Arm CEO Rene Haas highlighted the focus on performance-per-watt efficiency as critical for the next era of AI. This multi-year partnership coincides with Meta’s massive investments in AI infrastructure, including projects like “Prometheus,” a data center expected to deliver multiple gigawatts of power by 2027 in Ohio, and “Hyperion,” a 2,250-acre data center campus in Louisiana projected to provide 5 gigawatts of computational power by 2030. Unlike other recent AI infrastructure deals, Meta and Arm are not exchanging ownership stakes or physical infrastructure. This contrasts with Nvidia’s extensive investments in AI firms such
Tags: energy, AI-infrastructure, data-centers, semiconductor, power-consumption, cloud-computing, Meta

Google to invest $15B in Indian AI infrastructure hub
Google announced a $15 billion investment to establish a 1-gigawatt data center and AI hub in Visakhapatnam, Andhra Pradesh, India, over the next five years through 2030. This marks Google's largest investment in India and its biggest outside the U.S. The AI hub will be part of a global network spanning 12 countries and will offer a full suite of AI solutions, including custom Tensor Processing Units (TPUs), access to AI models like Gemini, and support for consumer services such as Google Search, YouTube, Gmail, and Google Ads. Google is partnering with Indian telecom Bharti Airtel and AdaniConneX to build the data center and subsea cable infrastructure, positioning Visakhapatnam as a global connectivity hub and digital backbone for India. The investment comes amid growing Indian government efforts to promote local alternatives to U.S. tech giants like Google, with initiatives encouraging “swadeshi” or “made in India” products and services. Despite these
Tags: energy, data-center, AI-infrastructure, cloud-computing, subsea-cable, connectivity-hub, India-investment

The billion-dollar infrastructure deals powering the AI boom
The article highlights the massive investment and infrastructure buildup fueling the current AI boom, emphasizing the enormous computing power required to train and run AI models. Nvidia CEO Jensen Huang estimates that $3 to $4 trillion will be spent on AI infrastructure by 2030, with major tech companies like Microsoft, Meta, Oracle, Google, and OpenAI leading the charge. The piece details key deals, starting with Microsoft’s landmark $1 billion investment in OpenAI in 2019, which established Microsoft as OpenAI’s exclusive cloud provider and laid the groundwork for a partnership now valued at nearly $14 billion. Although OpenAI has recently diversified its cloud partnerships, this model of close collaboration between AI firms and cloud providers has become standard, with companies like Anthropic partnering with Amazon and Google Cloud acting as primary computing partners for other AI ventures. Oracle’s emergence as a major AI infrastructure player is underscored by its unprecedented deals with OpenAI, including a $30 billion cloud services contract revealed in mid-2025
Tags: energy, AI-infrastructure, cloud-computing, data-centers, Nvidia, Microsoft-Azure, OpenAI

AMD to supply 6GW of compute capacity to OpenAI in chip deal worth tens of billions
AMD has entered a multi-year chip supply agreement with OpenAI that could generate tens of billions in revenue and significantly boost AMD’s presence in the AI sector. Under the deal, AMD will provide OpenAI with 6 gigawatts of compute capacity using multiple generations of its Instinct GPUs, beginning with the Instinct MI450 GPU, which is expected to be deployed in the second half of 2026. AMD claims the MI450 will outperform comparable Nvidia GPUs through hardware and software enhancements developed with OpenAI’s collaboration. Currently, OpenAI already uses AMD’s MI355X and MI300X GPUs for AI inference tasks due to their high memory capacity and bandwidth. In addition to supplying chips, AMD has granted OpenAI the option to purchase up to 160 million shares of AMD stock, representing a 10% stake. The stock vesting is tied to the deployment milestones of the compute capacity and AMD’s stock price, with the final tranche vesting if AMD shares reach $600. Following the
Tags: energy, AI-compute, GPUs, data-centers, chip-supply, semiconductor, AI-infrastructure

OpenAI ropes in Samsung, SK Hynix to source memory chips for Stargate
OpenAI has entered into agreements with South Korean memory chip giants Samsung Electronics and SK Hynix to supply DRAM wafers for its Stargate AI infrastructure project and to build AI data centers in South Korea. The deals, formalized through letters of intent following a high-profile meeting involving OpenAI CEO Sam Altman and South Korean leadership, will see Samsung and SK Hynix scale production to deliver up to 900,000 high-bandwidth memory DRAM chips monthly—more than doubling the current industry capacity for such chips. This move is part of OpenAI’s broader strategy to rapidly expand its compute capacity for AI development. These agreements come amid a flurry of recent investments and partnerships aimed at boosting OpenAI’s compute power. Notably, Nvidia committed to providing OpenAI access to over 10 gigawatts of AI training compute, while OpenAI also partnered with SoftBank, Oracle, and SK Telecom to increase its total compute capacity to 7 gigawatts and develop AI data centers
Tags: materials, memory-chips, DRAM, AI-infrastructure, data-centers, Samsung, SK-Hynix

NVIDIA unveils brain-and-body stack to train next-gen humanoids
NVIDIA has introduced a comprehensive robotics stack aimed at advancing humanoid robot development by integrating physics simulation, AI reasoning, and infrastructure within its Isaac Lab platform. Central to this update are the open-source, GPU-accelerated Newton Physics Engine and the Isaac GR00T N1.6 robot foundation model. Newton, co-developed with Google DeepMind and Disney Research and managed by the Linux Foundation, enables highly realistic simulations of complex physical interactions—such as walking on uneven terrain or handling fragile objects—facilitating safer and more reliable transfer of robot skills from simulation to real-world environments. Early adopters include leading academic and industry robotics groups. Isaac GR00T N1.6 incorporates NVIDIA’s Cosmos Reason, a vision-language reasoning model designed for physical AI, which enhances humanoid robots’ ability to interpret ambiguous instructions, leverage prior knowledge, and generalize across tasks. This model supports simultaneous movement and object manipulation, tackling advanced challenges like opening heavy doors. Developers can fine-tune GR00T
Tags: robotics, humanoid-robots, NVIDIA-Isaac, Newton-Physics-Engine, AI-infrastructure, robot-simulation, physical-AI

What’s behind the massive AI data center headlines?
The article discusses the recent surge in massive AI data center investments in Silicon Valley, driven primarily by the needs of OpenAI and its partners. Nvidia announced significant infrastructure commitments, while OpenAI revealed plans to expand capacity through collaborations with Oracle and SoftBank, adding gigawatts of new power to support future versions of ChatGPT. These individual deals are enormous, but collectively they highlight Silicon Valley’s intense efforts to provide OpenAI with the computational resources required to train and operate increasingly powerful AI models. OpenAI also introduced a new AI feature called Pulse, which operates independently of the ChatGPT app and is currently available only to its $200-per-month Pro subscribers due to limited server capacity. The company aims to expand such features to a broader user base but is constrained by the availability of AI data centers. The article raises the question of whether the hundreds of billions of dollars being invested in AI infrastructure to support OpenAI’s ambitions are justified by the value of features like Pulse. The piece also alludes to broader
Tags: energy, data-centers, AI-infrastructure, power-consumption, cloud-computing, server-capacity, Silicon-Valley-investments

OpenAI is building five new Stargate data centers with Oracle and SoftBank
OpenAI is expanding its AI infrastructure by building five new Stargate data centers in collaboration with Oracle and SoftBank. Three of these centers are being developed with Oracle and are located in Shackelford County, Texas; Doña Ana County, New Mexico; and an undisclosed Midwest location. The remaining two centers are being developed with SoftBank, situated in Lordstown, Ohio, and Milam County, Texas. This expansion is part of OpenAI’s broader strategy to enhance its capacity for training and deploying more advanced AI models. Additionally, OpenAI recently announced a deal to acquire AI processors from a chipmaker, which will support further development of its AI data center network. The new Stargate data centers underscore OpenAI’s commitment to scaling its infrastructure to meet growing computational demands.
Tags: energy, data-centers, AI-infrastructure, chipmakers, technology-partnerships, cloud-computing, energy-efficiency

NVIDIA investing $100B in OpenAI data centers for next-gen AI
OpenAI and NVIDIA have entered a landmark partnership, with NVIDIA committing up to $100 billion to build massive AI data centers that will deploy at least 10 gigawatts of compute power using millions of NVIDIA GPUs. The first gigawatt of this capacity is expected to go live in the second half of 2026 on NVIDIA’s upcoming Vera Rubin platform. NVIDIA CEO Jensen Huang described the collaboration as a “next leap forward” for both companies, highlighting that the 10 gigawatts equate to roughly 4 to 5 million GPUs—double the number shipped by NVIDIA last year. This massive infrastructure investment underscores the deep ties between the two companies and their joint efforts to power the next era of AI intelligence. OpenAI CEO Sam Altman emphasized that compute infrastructure is central to OpenAI’s mission and will form the foundation of the future economy. He noted the challenge of balancing research, product development, and scaling infrastructure, promising significant developments in the coming months. OpenAI cofounder Greg
Tags: energy, data-centers, AI-infrastructure, NVIDIA, OpenAI, GPUs, compute-power

Big Tech Dreams of Putting Data Centers in Space
The article discusses the growing energy demands and environmental impacts of terrestrial data centers, particularly those supporting artificial intelligence, which could increase electricity consumption by 165% by 2030 and rely heavily on fossil fuels. In response, prominent tech figures like OpenAI CEO Sam Altman, Jeff Bezos, and Eric Schmidt are exploring the concept of placing data centers in space to leverage continuous solar power and reduce pollution on Earth. Altman envisions ambitious projects such as a Dyson sphere of data centers around the sun, though such megastructures face enormous resource and feasibility challenges. More immediate efforts are underway by startups like Starcloud, Axiom, and Lonestar Data Systems, which have secured funding to develop space-based data center technologies. Scientific advances support the potential viability of orbital data centers. Caltech professor Ali Hajimiri, involved in the Space Solar Power Project, has patented concepts for space-based computational systems and proposed lightweight solar power solutions that could generate electricity more cheaply than Earth-based systems. However, significant
Tags: energy, data-centers, space-technology, solar-power, AI-infrastructure, sustainability, space-based-energy

NVIDIA invests $5B in Intel, launches joint AI and PC chip venture
NVIDIA is investing $5 billion in Intel, becoming one of its largest shareholders and forming a strategic partnership to jointly develop future data center and PC chips. This collaboration aims to combine Intel’s x86 CPU architecture with NVIDIA’s AI and GPU technologies, with Intel building custom CPUs for NVIDIA’s AI infrastructure and manufacturing x86 system-on-chips integrated with NVIDIA RTX GPU chiplets for high-performance personal computers. The deal provides a significant boost to Intel, which has struggled in recent years, as evidenced by a 23% surge in its stock price following the announcement. The partnership leverages the strengths of both companies: Intel’s foundational x86 architecture, manufacturing capabilities, and advanced packaging, alongside NVIDIA’s AI leadership and CUDA architecture. Analysts view NVIDIA’s involvement as a pivotal moment for Intel, repositioning it from an AI laggard to a key player in AI infrastructure. The collaboration also has competitive implications, potentially challenging rivals like AMD and TSMC, which currently manufactures NVIDIA’s top processors. The
Tags: semiconductors, AI-chips, NVIDIA, Intel, data-centers, PC-processors, AI-infrastructure

Why the Oracle-OpenAI deal caught Wall Street by surprise
The recent surprise deal between OpenAI and Oracle caught Wall Street off guard but underscores Oracle’s continuing significance in AI infrastructure despite its legacy status. OpenAI’s willingness to commit substantial funds—reportedly around $60 billion annually for compute and custom AI chip development—signals its aggressive scaling strategy and desire to diversify infrastructure providers to mitigate risk. Industry experts highlight that OpenAI is assembling a comprehensive global AI supercomputing foundation, which could give it a competitive edge. Oracle’s involvement, while unexpected to some given its perceived diminished role compared to cloud giants like Google, Microsoft, and AWS, is explained by its proven capabilities in delivering large-scale, high-performance infrastructure, including supporting TikTok’s U.S. operations. However, key details about the deal remain unclear, particularly regarding how OpenAI will finance and power its massive compute needs. The company is burning through billions annually despite growing revenues from ChatGPT and other products, raising questions about sustainability. Energy sourcing is a critical concern since data centers are projected to
Tags: energy, AI-infrastructure, cloud-computing, supercomputing, data-centers, power-consumption, OpenAI

OpenAI to launch AI data center in Norway, its first in Europe
OpenAI announced plans to launch Stargate Norway, its first AI data center in Europe, in partnership with British AI cloud infrastructure provider Nscale and Norwegian energy firm Aker. The data center will be a 50/50 joint venture between Nscale and Aker, with OpenAI as an off-taker purchasing capacity from the facility. Located near Narvik, Norway, the site will leverage the region’s abundant hydropower, cool climate, and mature industrial base to run entirely on renewable energy. The initial phase will deliver 230 megawatts (MW) of capacity, expandable to 290 MW, and is expected to operate 100,000 Nvidia GPUs by the end of 2026. The facility will incorporate advanced cooling technology and reuse excess heat to support low-carbon enterprises locally. This initiative aligns with Europe’s broader push for AI sovereignty, data sovereignty, and sustainable infrastructure, as the EU recently announced multi-billion euro investments to build AI factories and enhance compute power within the bloc.
Tags: energy, data-center, AI-infrastructure, renewable-power, hydropower, liquid-cooling, Nvidia-GPUs

Meta to spend up to $72B on AI infrastructure in 2025 as compute arms race escalates
Meta announced plans to dramatically increase its investment in AI infrastructure in 2025, with capital expenditures expected to reach between $66 billion and $72 billion—an increase of about $30 billion compared to the previous year. This spending surge will focus on expanding data centers, servers, and other physical infrastructure to support the company’s AI ambitions. Meta expects this aggressive investment trend to continue into 2026, emphasizing that developing leading AI infrastructure will be a core competitive advantage for building superior AI models and products. Key projects include two major AI superclusters, such as the Prometheus cluster in Ohio, which aims to achieve 1 gigawatt of compute power by 2026. The company’s infrastructure expansion has raised concerns locally, with some projects, like the one in Newton County, Georgia, reportedly causing water shortages for residents due to high resource consumption. Additionally, Meta is investing heavily in talent acquisition, particularly for its new Superintelligence Labs unit, which focuses on AI research and development. CEO
Tags: energy, AI-infrastructure, data-centers, compute-power, Meta, superclusters, capital-expenditure

AI May Gobble Up Every Available Electron In Its Quest To Sell Us More Stuff - CleanTechnica
The article discusses the $90 billion in federal funding the U.S. government has pledged, redirected from social programs and renewable energy subsidies, to support major tech companies like Google, Microsoft, Meta, and Amazon in building AI infrastructure. This investment aims to secure American dominance in artificial intelligence but raises concerns about the massive electricity demand such data centers will require. Analysts predict that by 2030, data centers could consume 10% or more of all U.S. electricity, potentially driving up energy costs for ordinary Americans by 50% or more. The article critiques this allocation of resources amid ongoing social needs and questions the sustainability of such energy consumption. It also highlights OpenAI’s continued expansion, including a $500 billion investment commitment to build 10 gigawatts of AI infrastructure, further underscoring the scale of AI’s energy appetite. Some innovations, like the Energy Dome technology from an Italian startup partnering with Google, offer promising ways to store renewable energy for longer periods
Tags: energy, AI-infrastructure, data-centers, electricity-consumption, renewable-energy, federal-funding, power-demand

Trump’s AI strategy trades guardrails for growth in race against China
The Trump administration released its AI Action Plan, marking a significant departure from the Biden administration’s more cautious stance on AI risks. The new strategy prioritizes rapid AI infrastructure development, deregulation, and national security to compete with China, emphasizing growth over guardrails. Key elements include expanding data centers—even on federal lands and during critical energy grid periods—while downplaying efforts to mitigate AI-related harms. The plan also proposes workforce upskilling and local partnerships to create jobs tied to AI infrastructure, positioning these investments as essential to a “new golden age of human flourishing.” Authored by Trump’s technology and AI experts, many from Silicon Valley, the plan reflects input from over 10,000 public comments but remains a broad blueprint rather than a detailed roadmap. It includes efforts to limit state-level AI regulations by threatening to withhold federal funding and empowering the FCC to challenge state rules that affect communications infrastructure. On the federal level, the administration seeks to identify and remove regulations that impede AI innovation. Dereg
Tags: energy, AI-infrastructure, data-centers, deregulation, technology-policy, national-security, innovation

Sam Altman-backed Oklo to cool AI data centers with new nuclear tech
Oklo, a nuclear technology company backed by Sam Altman, has partnered with Vertiv, a leader in digital infrastructure, to develop an integrated power and cooling system for hyperscale and colocation data centers. The system will pair Oklo's small modular reactors (SMRs), which generate steam and electricity, with Vertiv's thermal management technology, aiming to deliver both power and cooling efficiently and sustainably. The collaboration seeks to address common data center challenges such as high energy demand, reliance on power grids, and environmental impact by providing a reliable, carbon-free energy source that can be sited near data centers for improved performance and scalability. The partnership comes amid the rapid growth of AI and high-performance computing, which is significantly increasing power consumption in data centers. Oklo's SMRs are designed for flexibility and quick adaptation to changing energy needs, enabling the continuous, stable power supply critical for data center operations. By integrating power generation and cooling solutions from the outset, Oklo and Vertiv aim to enhance energy efficiency.
energy, nuclear-energy, data-centers, cooling-technology, small-modular-reactors, AI-infrastructure, power-efficiency
Meta is reportedly using actual tents to build data centers
Meta is accelerating its efforts to build AI infrastructure by using unconventional methods to construct data centers quickly. According to reports, the company is employing actual tents and ultra-light structures, along with prefabricated power and cooling modules, to expedite the deployment of computing capacity. This approach prioritizes speed over aesthetics or redundancy, reflecting Meta’s urgent need to catch up with competitors like OpenAI, xAI, and Google in the race for superintelligence technology. One notable project is Meta’s Hyperion data center, which a company spokesperson confirmed will be located in Louisiana. The facility is expected to reach a capacity of 2 gigawatts by 2030, underscoring Meta’s commitment to rapidly scaling its AI compute resources. The absence of traditional backup generators, such as diesel units, further highlights the focus on swift, efficient construction rather than conventional data center design norms. Overall, Meta’s strategy signals a shift toward innovative, speed-driven infrastructure development to support its AI ambitions.
energy, data-centers, Meta, AI-infrastructure, power-modules, cooling-technology, supercomputing
Zuckerberg bets big on AI with first gigawatt superclusters plan
Meta Platforms, led by CEO Mark Zuckerberg, is making a significant investment in artificial intelligence infrastructure by planning to build some of the world's largest AI superclusters. The company announced that its first supercluster, Prometheus, will launch in 2026, with additional multi-gigawatt clusters like Hyperion, designed to scale up to five gigawatts of compute capacity, also in development. These superclusters aim to handle massive AI model training workloads, helping Meta compete with rivals such as OpenAI and Google in areas like generative AI, computer vision, and robotics. According to industry reports, Meta is on track to be the first AI lab to deploy a supercluster exceeding one gigawatt, marking a major escalation in the AI arms race. Alongside infrastructure expansion, Meta is aggressively investing in AI talent and research. The company recently launched Meta Superintelligence Labs, led by former Scale AI CEO Alexandr Wang and ex-GitHub chief Nat Friedman, consolidating top AI talent.
energy, AI-superclusters, Meta, high-performance-computing, data-centers, gigawatt-scale-computing, AI-infrastructure
OpenAI's planned data center in Abu Dhabi would be bigger than Monaco
energy, data-center, AI-infrastructure, power-consumption, Abu-Dhabi, OpenAI, G42