Articles tagged with "high-performance-computing"
New US supercomputer to speed up nuclear reactor modeling, deployment
Idaho National Laboratory (INL) has launched its latest supercomputer, Teton, which quadruples the lab’s high-performance computing capacity and ranks as the 85th most powerful supercomputer globally according to TOP500. Teton, powered by AMD’s EPYC 9005 “Turin” processors and featuring 1,024 compute nodes with nearly 400,000 CPU cores, delivers 20.8 quadrillion calculations per second—four times the performance of its predecessor, Sawtooth, while occupying only one-third the physical space. This upgrade lets researchers run complex modeling and simulation codes far faster, cutting computational tasks from days to hours. Designed specifically to accelerate nuclear reactor design and deployment, Teton supports the US Department of Energy’s Nuclear Science User Facilities (NSUF) by providing the computational power necessary for advanced reactor research. Its capabilities facilitate thousands of simulations to create Reduced Order Models (ROMs), which serve as accurate digital twins for optimizing reactor designs and speeding up deployment.
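The article does not spell out how the ROMs are built, but a common recipe is proper orthogonal decomposition (POD): snapshots from many full-order simulation runs are compressed via an SVD into a small basis that stands in for the expensive model. The sketch below is a minimal, generic POD reduction in Python with NumPy; the snapshot matrix is random stand-in data and the 99% energy cutoff is an arbitrary illustrative choice, not INL's actual pipeline.

```python
import numpy as np

# Hypothetical snapshot matrix: each column is one full-order simulation
# state (e.g., a temperature field with 100,000 cells); 500 runs total.
n_cells, n_runs = 100_000, 500
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((n_cells, n_runs))  # stand-in for real data

# Proper orthogonal decomposition: SVD of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

# Keep enough modes to capture 99% of the snapshot energy.
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1
basis = U[:, :r]  # reduced basis: n_cells x r, with r far below n_cells

# A new full-order state can now be approximated by just r coefficients:
state = snapshots[:, 0]
coeffs = basis.T @ state          # project: r numbers instead of 100,000
reconstruction = basis @ coeffs   # lift back to the full mesh
print(r, np.linalg.norm(state - reconstruction) / np.linalg.norm(state))
```

Evaluating the reduced model is then orders of magnitude cheaper than the full simulation, which is what makes sweep-style design optimization with a digital twin tractable.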
Tags: energy, nuclear-energy, supercomputer, high-performance-computing, reactor-modeling, simulation, AI

Taiwan builds 20-qubit quantum computer in domestic R&D push
Researchers at Taiwan’s Academia Sinica have developed a 20-qubit superconducting quantum computer entirely through domestic research and fabrication efforts, marking a significant advance over their earlier 5-qubit system introduced in 2023. This new platform, now accessible to local researchers, demonstrates Taiwan’s capability to produce larger-scale, stable quantum chips suitable for complex quantum simulations and testing. The project leveraged semiconductor manufacturing expertise to overcome challenges in qubit uniformity, coupling precision, and interference, employing techniques like laser trimming and chip stacking to enhance performance and reduce crosstalk. A major breakthrough of the 20-qubit system is the substantial increase in qubit coherence time—from 15–30 microseconds in the previous model to 530 microseconds—allowing quantum states to remain stable for longer periods, which is critical for practical quantum computing. This improvement reflects tighter control over fabrication, packaging, and noise reduction, addressing the sensitivity of superconducting qubits to electromagnetic disturbances. Academia Sinica plans to further develop the platform.
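To see roughly what the jump in coherence time buys, a crude single-exponential decay model is enough: the surviving coherence after time t falls off as exp(-t/T). The sketch below compares the two coherence times quoted in the article for an assumed 1,000-gate circuit with a hypothetical 50 ns gate time; real devices distinguish relaxation (T1) from dephasing (T2), so treat this strictly as a back-of-the-envelope comparison.

```python
import numpy as np

# Simplified single-exponential decoherence model: the off-diagonal
# (coherence) term of a qubit's density matrix decays as exp(-t / T_coh).
def surviving_coherence(t_us: float, t_coh_us: float) -> float:
    return np.exp(-t_us / t_coh_us)

gate_time_us = 0.05          # assumed 50 ns per gate, illustrative only
depth = 1000                 # a 1,000-gate-deep circuit
elapsed = gate_time_us * depth

for t_coh in (30.0, 530.0):  # old vs. new coherence times from the article
    print(f"T_coh = {t_coh:>5} us -> coherence left after "
          f"{elapsed:.0f} us circuit: {surviving_coherence(elapsed, t_coh):.3f}")
# 30 us  -> only ~0.19 of the coherence survives the circuit
# 530 us -> ~0.91 survives, leaving far more room for useful computation
```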
Tags: quantum-computing, superconducting-qubits, quantum-chip-fabrication, material-discovery, high-performance-computing, semiconductor-manufacturing, quantum-simulation

Microsoft debuts Maia 200 AI chip promising 3x inference performance
Microsoft has unveiled the Maia 200, its second-generation AI chip designed specifically to optimize inference workloads—the continuous process of serving AI responses—addressing the rising costs of running large AI models at scale. Building on the Maia 100 introduced in 2023, the Maia 200 significantly boosts performance, featuring over 100 billion transistors and delivering more than 10 petaflops of compute at 4-bit precision (around 5 petaflops at 8-bit). The chip emphasizes speed, stability, and power efficiency, incorporating a large amount of fast SRAM memory to reduce latency during repeated queries, which is critical for handling spikes in user traffic in AI services like chatbots and copilots. Microsoft has deployed the Maia 200 in data centers in Iowa, with plans for further deployment in Arizona. Strategically, Maia 200 represents Microsoft's effort to reduce dependence on NVIDIA, the dominant player in AI hardware, by offering competitive performance and an alternative ecosystem. Microsoft claims the Maia 200 delivers roughly three times the inference performance of its predecessor.
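The headline figures (over 10 petaflops at 4-bit precision, roughly half that at 8-bit) reflect the standard trade-off: halving operand width roughly doubles arithmetic throughput but coarsens the number format. A minimal symmetric integer quantizer makes the accuracy cost of int4 versus int8 visible; this is a generic illustration, not Microsoft's quantization scheme.

```python
import numpy as np

def quantize_symmetric(x: np.ndarray, bits: int) -> np.ndarray:
    """Minimal symmetric integer quantization (illustrative, not Maia's scheme)."""
    qmax = 2 ** (bits - 1) - 1              # e.g., 7 for int4, 127 for int8
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale                        # dequantized approximation

rng = np.random.default_rng(0)
weights = rng.standard_normal(10_000).astype(np.float32)

for bits in (8, 4):
    approx = quantize_symmetric(weights, bits)
    print(f"int{bits}: mean abs error = {np.abs(weights - approx).mean():.4f}")
# Lower precision roughly doubles throughput on hardware that supports it,
# at the price of coarser weights -- which is why inference chips chase 4-bit.
```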
Tags: AI-chip, semiconductor, Microsoft-Maia-200, AI-inference, high-performance-computing, cloud-infrastructure, TSMC-3nm-technology

US system to cut nuclear fusion simulation time from months to real-time
The Princeton Plasma Physics Laboratory (PPPL) has introduced STELLAR-AI, a new computing platform designed to drastically reduce the time required for nuclear fusion simulations from months to real-time. By integrating artificial intelligence (AI) with high-performance computing, STELLAR-AI connects computing resources directly to experimental devices, enabling real-time data analysis during fusion experiments. The platform’s hardware architecture combines CPUs for standard tasks, GPUs for AI model training, and quantum processing units (QPUs) to handle complex calculations beyond the capabilities of traditional computers. A key experimental partner is the National Spherical Torus Experiment-Upgrade (NSTX-U), which will benefit from a digital twin model to simulate experiments virtually before physical testing. STELLAR-AI supports the U.S. Department of Energy’s Fusion Science and Technology Roadmap, aiming to accelerate the commercialization of fusion power plants through AI-driven design and optimization. Projects under this initiative include StellFoundry, which uses AI to speed up the design of stellarators.
Tags: energy, nuclear-fusion, AI-in-energy, high-performance-computing, fusion-simulation, fusion-energy-research, plasma-physics

US: Award-winning Monte Carlo code optimizes nuclear reactor designs
OpenMC is a powerful, open-source Monte Carlo simulation software developed collaboratively by the US Department of Energy’s Argonne National Laboratory and MIT. Recently awarded an R&D 100 Award, OpenMC enables researchers to conduct detailed virtual experiments that accelerate innovation in both nuclear fission and fusion reactor designs. By simulating the behavior of neutrons and photons within complex systems, the software helps predict fuel consumption rates and radiation damage, allowing developers to optimize reactor safety and performance without costly physical prototypes. A key strength of OpenMC lies in its ability to leverage high-performance computing resources, including exascale supercomputers like Aurora and Frontier, to perform simulations with unprecedented detail and speed. Its open-source nature fosters widespread adoption and collaboration among universities, private companies, and international researchers. Beyond advancing nuclear energy technologies, OpenMC also supports applications in used nuclear fuel management and radiation protection for medical and space environments. The software’s flexible interface and compatibility with diverse hardware—from personal laptops to supercomputers—make it a broadly accessible tool for the nuclear research community.
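OpenMC is scripted through a Python API, which is much of what makes it portable from laptops to exascale machines. Below is a minimal sketch of a k-eigenvalue (criticality) calculation for a bare fuel sphere; the enrichment, density, radius, and particle counts are arbitrary illustration values, not a meaningful reactor design.

```python
import openmc

# Minimal OpenMC k-eigenvalue problem: a bare sphere of enriched uranium.
fuel = openmc.Material(name="fuel")
fuel.add_nuclide("U235", 0.05)   # illustrative atom fractions
fuel.add_nuclide("U238", 0.95)
fuel.set_density("g/cm3", 10.0)

sphere = openmc.Sphere(r=12.0, boundary_type="vacuum")
core = openmc.Cell(fill=fuel, region=-sphere)
geometry = openmc.Geometry([core])

settings = openmc.Settings()
settings.batches = 100       # total fission-source generations
settings.inactive = 10       # generations discarded while the source converges
settings.particles = 10_000  # neutron histories per generation

model = openmc.Model(geometry=geometry,
                     materials=openmc.Materials([fuel]),
                     settings=settings)
model.run()  # writes a statepoint file containing k-effective and tallies
```

The same script scales from a laptop to a supercomputer simply by running under MPI with more particles per batch, which is the portability the article alludes to.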
Tags: energy, nuclear-energy, Monte-Carlo-simulation, OpenMC, high-performance-computing, fusion-research, reactor-design

OpenAI signs deal worth $10 billion for compute from Cerebras
OpenAI has entered a multi-year agreement with AI chipmaker Cerebras, securing 750 megawatts of compute power from 2026 through 2028 in a deal valued at over $10 billion. This partnership aims to accelerate AI processing speeds, enabling faster response times for OpenAI’s customers by leveraging Cerebras’s specialized AI chips, which the company claims outperform traditional GPU-based systems like those from Nvidia. The enhanced compute capacity is expected to support real-time AI inference, which Cerebras CEO Andrew Feldman likens to the transformative impact broadband had on the internet. Cerebras, which gained prominence following the AI surge sparked by ChatGPT’s 2022 launch, has been expanding despite postponing its IPO multiple times. The company is reportedly in talks to raise an additional $1 billion at a $22 billion valuation. OpenAI’s strategy involves diversifying its compute infrastructure to optimize performance across different workloads, with Cerebras providing a dedicated low-latency inference solution. This collaboration is the latest step in that diversification.
Tags: energy, AI-chips, compute-power, data-centers, high-performance-computing, semiconductor-technology, AI-infrastructure

Korea's high-performance computing cluster to get 100-qubit quantum system
A Maryland-based company, IonQ, has finalized an agreement to deliver its next-generation 100-qubit Tempo quantum system to the Korea Institute of Science and Technology Information (KISTI). This system will be integrated into KISTI-6 (“HANGANG”), South Korea’s largest high-performance computing (HPC) cluster, marking the country’s first hybrid quantum-classical onsite integration. The compute cluster will be accessible remotely via a secure private cloud environment, enabling researchers, universities, and enterprises across South Korea to utilize the quantum computing resources. KISTI will lead the development and operation of a quantum computing service platform aimed at supporting both academic and enterprise applications. IonQ has been designated as the primary quantum technology provider, with Megazone Cloud assisting as a leading cloud service and infrastructure provider. This collaboration is seen as a significant advancement for South Korea’s quantum computing capabilities, enabling groundbreaking research and innovation in sectors such as healthcare, finance, and materials science. Both IonQ and KISTI have highlighted the partnership’s significance for the country’s quantum ecosystem.
Tags: quantum-computing, high-performance-computing, South-Korea-technology, quantum-systems, cloud-computing, research-innovation, materials-science

Spectra supercomputer tests adaptive chips for nuclear security
Sandia National Laboratories has introduced Spectra, a prototype supercomputer developed in partnership with NextSilicon, designed to revolutionize national security simulations through adaptive, efficiency-focused computing. Unlike traditional CPU and GPU-based systems, Spectra employs 128 Maverick-2 dual-die accelerators—experimental chips that dynamically analyze and prioritize code tasks in real time to enhance performance while reducing power consumption. This approach aims to improve the speed and efficiency of complex simulations critical to maintaining the safety and reliability of the U.S. nuclear deterrent without underground testing. Spectra is the second system under Sandia’s Vanguard program, which tests cutting-edge technologies for potential large-scale deployment. Following the success of Astra, the first Vanguard machine that validated Arm processors for scientific workloads, Spectra seeks to demonstrate the viability of intelligence-driven, adaptive computing architectures. Early benchmarks, including HPCG, LAMMPS, and SPARTA, have shown promising results without requiring users to rewrite applications, potentially lowering the cost and complexity of adoption.
Tags: energy, supercomputing, adaptive-chips, nuclear-security, high-performance-computing, power-efficiency, national-laboratories

Can Your Wave Energy Technology Survive the Ocean? - CleanTechnica
The article discusses SEA-Stack, an innovative, free, open-source modeling tool designed to help developers rapidly assess and optimize floating wave energy technologies and other water-based devices. SEA-Stack integrates multiple wave energy modeling capabilities into a single, user-friendly platform, enabling quick simulations ranging from simple design assessments to complex analyses that incorporate intricate ocean physics. Leveraging high-performance computing and machine learning, SEA-Stack is significantly faster—10 to 100 times—than previous tools and can process the latest wave energy data, making it a versatile "Swiss Army knife" for wave energy developers and related marine technology fields. Wave energy devices have strong potential to contribute to a secure and resilient power system by harnessing predictable ocean wave energy, but they face significant engineering challenges due to the harsh ocean environment. Traditional testing methods are costly and risky, as prototypes can fail or underperform when exposed to real ocean conditions. Existing modeling tools are limited in their ability to simulate critical features such as flexible device components and collisions.
Tags: energy, wave-energy, renewable-energy, ocean-technology, energy-modeling, high-performance-computing, machine-learning

AWS is spending $50B to build AI infrastructure for the US government
Amazon Web Services (AWS) has announced a $50 billion investment to build specialized AI high-performance computing infrastructure tailored for U.S. government agencies. This initiative aims to significantly enhance federal access to AWS AI services, including Amazon SageMaker for model customization, Amazon Bedrock for model deployment, and Anthropic’s Claude chatbot. The project will add 1.3 gigawatts of computing power, with construction of new data centers expected to begin in 2026. AWS CEO Matt Garman emphasized that this investment will transform how federal agencies utilize supercomputing, accelerating critical missions such as cybersecurity and drug discovery, while removing technological barriers that have previously limited government AI adoption. AWS has a long history of working with the U.S. government, having started building cloud infrastructure for federal use in 2011. It launched the first air-gapped commercial cloud for classified workloads in 2014 and introduced the AWS Secret Region in 2017, which supports all security classification levels. This new AI infrastructure builds on that foundation.
Tags: energy, AI-infrastructure, cloud-computing, high-performance-computing, government-technology, data-centers, supercomputing

World’s fastest supercomputer runs record-breaking fluid simulation for rocket testing
Researchers at Lawrence Livermore National Laboratory (LLNL) have leveraged the exascale supercomputer El Capitan to perform the largest-ever fluid dynamics simulation, surpassing one quadrillion degrees of freedom in a single computational fluid dynamics (CFD) problem. The simulation modeled turbulent rocket exhaust flows from multiple engines firing simultaneously, a scenario relevant to modern rocket designs like SpaceX’s Super Heavy booster. Using a novel shock-regularization technique called Information Geometric Regularization (IGR), developed by a team including professors from Georgia Tech and NYU, the researchers achieved an 80-fold speedup over previous methods, reduced memory usage by 25 times, and cut energy consumption by more than five times. The simulation utilized all 11,136 nodes and over 44,500 AMD Instinct MI300A Accelerated Processing Units on El Capitan, and was extended to Oak Ridge National Laboratory’s Frontier supercomputer. This breakthrough sets a new benchmark for exascale CFD performance and memory efficiency.
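The headline numbers imply a striking amount of state per device. A quick back-of-the-envelope check, assuming four MI300A APUs per El Capitan node (11,136 × 4 = 44,544, consistent with the article's "over 44,500") and one single-precision value per degree of freedom:

```python
# Rough scale check for the El Capitan run (illustrative assumptions only).
dof = 1.0e15                  # ~1 quadrillion degrees of freedom (article)
apus = 11_136 * 4             # assumed 4 MI300A APUs per node -> 44,544

dof_per_apu = dof / apus
print(f"{dof_per_apu:.2e} DOF per APU")          # ~2.2e10 values per device

# At 4 bytes per value (single precision), one copy of the state needs:
print(f"{dof_per_apu * 4 / 1e9:.0f} GB per APU") # ~90 GB of each APU's HBM
```

That a single state copy nearly fills each accelerator's memory is exactly why the 25x memory reduction from IGR mattered as much as the raw speedup.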
Tags: energy, supercomputer, fluid-dynamics, rocket-simulation, high-performance-computing, computational-fluid-dynamics, energy-efficiency

Exaflop simulations boost accuracy in quantum materials research
Researchers from the University of Southern California and Lawrence Berkeley National Laboratory have achieved groundbreaking exascale computing speeds to simulate electron behavior in complex quantum materials with unprecedented accuracy. Utilizing three of the world’s most powerful supercomputers—Aurora at Argonne, Frontier at Oak Ridge, and Perlmutter at Berkeley Lab—they pushed the BerkeleyGW open-source code to exceed one exaflop on Frontier and 0.7 exaflops on Aurora. This scale of computation enables detailed modeling of many-body quantum interactions, such as electron-phonon coupling, which are critical for understanding phenomena like superconductivity, conductivity, and optical responses in materials. A key innovation was the development of GW perturbation theory (GWPT) within BerkeleyGW, allowing the integration of quantum interactions into a unified framework and significantly improving simulation fidelity beyond traditional density functional theory (DFT) methods. Aurora’s large memory capacity enabled simulations involving tens of thousands of atoms, previously unattainable at this scale. The team’s decade of development work on BerkeleyGW underpins these results.
Tags: quantum-materials, exascale-computing, electron-phonon-coupling, superconductivity, BerkeleyGW, high-performance-computing, materials-science

Two supercomputers featuring NVIDIA Blackwell land in Japan by 2026
Japan’s RIKEN research institute plans to enhance its scientific computing infrastructure with two new supercomputers powered by NVIDIA’s latest Blackwell-generation GPUs, expected to be operational by spring 2026. Together, these systems will house 2,140 NVIDIA GPUs and focus on advancing AI-driven research, high-performance computing, and quantum technology development. The first supercomputer, equipped with 1,600 GPUs on the GB200 NVL4 platform and connected via NVIDIA’s Quantum-X800 InfiniBand networking, will support AI-accelerated scientific workflows in life sciences, materials research, climate forecasting, manufacturing, and laboratory automation. This system aims to accelerate large-scale AI model training and simulations critical to these fields. The second machine, featuring 540 NVIDIA Blackwell GPUs with the same architecture and networking, is dedicated to quantum computing research. It will not function as a quantum computer but will accelerate the development of quantum algorithms, hybrid quantum-classical simulations, and software to improve quantum hardware usability.
Tags: supercomputing, AI, materials-science, quantum-computing, NVIDIA-Blackwell, high-performance-computing, scientific-research

Germany launches 42,000-core ‘Otus’ supercomputer for green research
Germany has launched the ‘Otus’ supercomputer at Paderborn University’s Center for Parallel Computing (PC2), featuring over 42,000 processor cores, 108 GPUs, and a five-petabyte storage system. Developed in partnership with Lenovo and AMD, Otus aims to advance scientific research nationwide by enabling complex simulations that address fundamental and applied challenges, such as atomic-level physical and chemical processes, optimizing shipping routes, improving solar cell efficiency, and developing energy-efficient AI methods. Researchers across Germany can access the system through a competitive proposal process, with the supercomputer operating continuously throughout the year. A key highlight of Otus is its commitment to sustainability: it runs entirely on renewable electricity, uses an indirect free cooling system for year-round efficiency, and repurposes waste heat to warm university buildings. This eco-friendly design contributed to Paderborn University ranking fifth on the global Green500 list of the most energy-efficient supercomputers. Lenovo and AMD emphasized the project’s blend of high performance and sustainability.
Tags: energy, supercomputing, renewable-energy, energy-efficiency, green-technology, high-performance-computing, sustainable-technology

6-gigawatt handshake: AMD joins OpenAI’s trillion-dollar AI plan
OpenAI has entered a landmark multi-year agreement with AMD to deploy up to 6 gigawatts of AMD Instinct GPUs, marking one of the largest GPU deployment deals in AI history. The partnership will start with a 1-gigawatt rollout of AMD’s upcoming MI450 GPUs in late 2026 and scale to 6 gigawatts over multiple hardware generations, powering OpenAI’s future AI models and services. This collaboration builds on their existing relationship involving AMD’s MI300X and MI350X GPUs, with both companies committing to jointly advance AI hardware and software through shared technical expertise. Following the announcement, AMD’s stock surged nearly 24%, reflecting strong market confidence. A significant component of the deal includes an equity arrangement whereby OpenAI received a warrant for up to 160 million AMD shares, potentially giving OpenAI about a 10% stake in AMD if fully exercised. The warrant vests in stages tied to deployment milestones and AMD’s stock price. The exact financial terms have not been disclosed.
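The "about a 10%" figure is easy to sanity-check. Assuming roughly 1.62 billion AMD shares outstanding at the time of the announcement (an approximation, not a number from the article):

```python
# Rough check of the "about 10%" stake (share count is approximate).
warrant_shares = 160e6            # warrant for up to 160 million shares
amd_shares_outstanding = 1.62e9   # assumed approximate AMD share count

stake_simple = warrant_shares / amd_shares_outstanding
stake_diluted = warrant_shares / (amd_shares_outstanding + warrant_shares)
print(f"pre-dilution:  ~{stake_simple:.1%}")   # ~9.9%
print(f"post-dilution: ~{stake_diluted:.1%}")  # ~9.0%
```

Either way the warrant lands near the reported 10%, with the exact figure depending on whether the newly issued shares are counted in the denominator.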
Tags: energy, AI-hardware, GPUs, AMD, OpenAI, high-performance-computing, AI-compute-capacity

New Microsoft datacenter mimics 'one massive AI supercomputer'
Microsoft has unveiled Fairwater, a new datacenter in Mt. Pleasant, Wisconsin, designed to function as “one massive AI supercomputer” with 10 times the performance of today’s fastest supercomputers. Spanning 315 acres and comprising three buildings totaling 1.2 million square feet, Fairwater is built specifically to power AI workloads using hundreds of thousands of NVIDIA GPUs interconnected in high-density clusters. The facility employs NVIDIA GB200 servers with 72 GPUs per rack linked via NVLink for high-bandwidth communication and pooled memory, enabling processing speeds of up to 865,000 tokens per second. This architecture allows the datacenter to operate as a single global supercomputer rather than isolated machines, minimizing network latency through a two-story layout that reduces physical distances between racks. In addition to Fairwater, Microsoft is constructing similar hyperscale AI datacenters in Narvik, Norway, and the U.K., with plans to use NVIDIA’s upcoming GB300 chips. The Wisconsin facility features a closed-loop liquid cooling system.
Tags: energy, datacenter, AI-supercomputer, NVIDIA-GPUs, high-performance-computing, liquid-cooling, Microsoft-Fairwater

The titans of tech: Top 10 most powerful supercomputers of 2025
As of June 2025, the global landscape of supercomputing is dominated by exascale machines primarily based in the United States, with significant new entries from Europe and continued presence from cloud and industrial sectors. Leading the pack is El Capitan at Lawrence Livermore National Laboratory, boasting 1.742 exaFLOPS on the LINPACK benchmark and demonstrating balanced performance across scientific workloads. Following closely is Frontier at Oak Ridge National Laboratory, the first-ever exascale supercomputer, now ranked second with 1.353 exaFLOPS, maintaining its role in advanced scientific research. Aurora at Argonne National Laboratory rounds out the top three, achieving just over 1 exaFLOPS and designed to integrate simulation with AI-driven science applications. Europe's fastest system, Germany’s JUPITER Booster, marks a significant milestone by entering the top tier with 793.4 petaFLOPS, powered by NVIDIA Grace-Hopper superchips and InfiniBand networking.
Tags: energy, materials, supercomputers, exascale-computing, high-performance-computing, AMD-EPYC, AI-simulation

El Capitan transforms complex physics into jaw-dropping detail
The El Capitan supercomputer, currently the world’s fastest, has revolutionized the simulation of extreme physics events by producing unprecedentedly detailed and high-resolution models. Developed for scientists at Lawrence Livermore National Laboratory (LLNL), El Capitan can simulate phenomena such as shock waves and fluid mixing with remarkable clarity, capturing sub-micron details that traditional computers often miss. For example, researchers used it to model the impact of shock waves on a tin surface, revealing how the metal melts and ejects tiny droplets, including the influence of microscopic surface scratches. This level of fidelity, enabled by advanced physics models and a fine computational mesh, is crucial for advancing applications in physics, national defense, and fusion energy research. A key focus of the research was the Kelvin-Helmholtz instability, a complex fluid dynamic phenomenon occurring when fluids of different densities interact turbulently under extreme conditions. Using LLNL’s MARBL multiphysics code, El Capitan simulated how shock waves interact with minute surface ripples.
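For reference, the textbook linear-stability result for the Kelvin-Helmholtz instability (two incompressible fluids in relative shear, neglecting gravity and surface tension) gives the growth rate below. The MARBL runs solve the full compressible multiphysics problem, so this is only the idealized limit, but it shows why fine meshes matter: perturbations at larger wavenumber k grow faster.

```latex
% Idealized Kelvin--Helmholtz growth rate for densities \rho_1, \rho_2
% and shear velocities U_1, U_2 (no gravity, no surface tension):
\[
  \sigma(k) = k \, \frac{\sqrt{\rho_1 \rho_2}}{\rho_1 + \rho_2} \,
              \lvert U_1 - U_2 \rvert
\]
% Growth scales linearly with k, so the smallest resolved scratches and
% ripples seed the fastest-growing modes -- hence the sub-micron mesh.
```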
Tags: energy, supercomputing, physics-simulation, fusion-energy, shock-waves, high-performance-computing, materials-science

History of GPU: 1979 arcade chips that boosted gaming, crypto, and AI
The history of GPUs traces back to 1979 arcade machines like Namco’s Galaxian, which featured dedicated graphics hardware capable of independently handling multicolored sprites and tile-map backgrounds without CPU intervention. This innovation proved commercially successful and established specialized graphics chips as essential for immersive interactive experiences. The evolution continued through home consoles such as Atari’s 2600 and Nintendo’s systems, which balanced hardware limitations with clever design, while high-end applications like military flight simulators demonstrated the high cost of advanced visuals before purpose-built GPUs became widespread. The consumer 3D graphics revolution was catalyzed in 1996 by 3dfx’s Voodoo 1 card, which significantly boosted PC gaming performance by offloading 3D rendering from the CPU. This sparked rapid competition, with ATI and NVIDIA advancing the technology. NVIDIA’s 1999 GeForce 256 marked a pivotal moment by integrating transform, lighting, rasterization, and pixel shading into a single chip, coining the term “GPU.”
Tags: robot, AI, GPU, high-performance-computing, autonomous-vehicles, graphics-hardware, cryptocurrency-mining

US supercomputer trains AI for faster nuclear plant licensing
The Oak Ridge National Laboratory (ORNL), under the U.S. Department of Energy, has partnered with AI company Atomic Canyon to accelerate the nuclear power plant licensing process using artificial intelligence. This collaboration, formalized at the Nuclear Opportunities Workshop, aims to leverage ORNL’s Frontier supercomputer—the world’s fastest—to train AI models that can efficiently review and analyze the extensive technical documentation required for nuclear licensing. By utilizing high-performance computing and AI-driven simulations, the partnership seeks to both ensure the safety of nuclear plant designs and significantly reduce the traditionally lengthy licensing timelines overseen by the U.S. Nuclear Regulatory Commission (NRC). Atomic Canyon developed specialized AI models called FERMI, trained on 53 million pages of nuclear documents from the NRC’s ADAMS database, enabling intelligent search and rapid retrieval of relevant information. This approach is intended to streamline regulatory compliance and reporting, helping meet ambitious government deadlines for new nuclear plant approvals. The initiative reflects a broader resurgence in nuclear energy as a reliable, clean power source.
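The article describes FERMI's job as intelligent search and rapid retrieval over tens of millions of pages. The standard pattern for such systems is embedding-based retrieval, sketched below; this assumes the sentence-transformers package and an off-the-shelf model, and illustrates the general pattern only, not Atomic Canyon's actual pipeline.

```python
# Generic embedding-based document retrieval (illustrative pattern only).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf model

corpus = [  # stand-in snippets; a real index would hold millions of pages
    "Emergency core cooling system surveillance requirements ...",
    "Containment isolation valve leak-rate testing procedure ...",
    "Spent fuel pool criticality safety analysis summary ...",
]
doc_vecs = model.encode(corpus, normalize_embeddings=True)

query = "How is leak-rate testing performed on containment valves?"
q_vec = model.encode([query], normalize_embeddings=True)[0]

scores = doc_vecs @ q_vec        # cosine similarity (vectors are normalized)
best = int(np.argmax(scores))
print(scores, "->", corpus[best])  # highest score: the leak-rate document
```

At production scale the dot products run against an approximate-nearest-neighbor index rather than a dense matrix, and training domain-specific embeddings is where a supercomputer like Frontier comes in.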
Tags: energy, nuclear-energy, artificial-intelligence, supercomputer, nuclear-licensing, high-performance-computing, energy-technology

UK powers on supercomputer that runs 21 quintillion operations/sec
The UK has officially powered on its most powerful publicly accessible AI supercomputer, Isambard-AI, located at the University of Bristol. Named after engineer Isambard Kingdom Brunel, the £225 million system can perform 21 exaFLOPs (21 quintillion floating-point operations per second), making it a significant asset for British AI research. Although it ranks 11th globally in processing power, Isambard-AI is a major step for the UK, supporting public-sector projects aimed at addressing climate change, enhancing NHS services, and driving medical and technological innovation. The supercomputer operates primarily on nuclear-powered electricity and costs nearly £1 million monthly to run, with the government emphasizing its long-term benefits, including regional development through AI Growth Zones in Scotland and Wales. Isambard-AI is already enabling impactful research projects, such as developing AI models to predict human behavior in real time using wearable cameras, which could improve safety in high-risk environments like construction sites and crowd management during large public events.
Tags: energy, supercomputer, AI, nuclear-power, high-performance-computing, UK-technology, computational-power

$20 million AI system Nexus to fast-track scientific innovation in US
The U.S. National Science Foundation has awarded $20 million to Georgia Tech and partners to build Nexus, a cutting-edge AI supercomputer designed to accelerate scientific innovation nationwide. Expected to be operational by spring 2026, Nexus will deliver over 400 quadrillion operations per second, with 330 terabytes of memory and 10 petabytes of flash storage. This computing power surpasses the combined calculation capacity of 8 billion humans and is tailored specifically for artificial intelligence and high-performance computing workloads. Nexus aims to address complex challenges in fields such as drug discovery, clean energy, climate modeling, and robotics. Unlike traditional supercomputers, Nexus emphasizes broad accessibility and user-friendly interfaces, allowing researchers from diverse institutions across the U.S. to apply for access through the NSF. The system will be part of a national collaboration linking Georgia Tech with the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign via a high-speed network, creating a shared infrastructure to democratize AI tools.
Tags: AI, supercomputing, robotics-innovation, clean-energy, high-performance-computing, scientific-discovery, artificial-intelligence

Germany's JUPITER becomes fourth fastest supercomputer in the world
Germany’s JUPITER supercomputer, located at the Jülich Supercomputing Centre (JSC), has become the fourth fastest supercomputer globally and the fastest in Europe. This achievement was supported by a collaboration with Georgia Tech, where Assistant Professor Spencer Bryngelson accessed JUPITER through the JUPITER Research and Early Access Program (JUREAP). Bryngelson’s Multi-Component Flow Code (MFC) was tested on JUPITER to study the behavior of droplets subjected to high-velocity shockwaves, a complex fluid dynamics problem with significant engineering implications, especially for supersonic and hypersonic aerospace applications. The simulations revealed how droplets deform and break apart under shockwaves, providing valuable insights that help reduce risks and costs associated with physical testing. The MFC project, part of the broader Exascale Multiphysics Flows (ExaMFlow) collaboration between Georgia Tech and JSC, demonstrated strong performance on JUPITER’s key components.
Tags: energy, supercomputing, high-performance-computing, simulations, aerospace-engineering, fluid-dynamics, exascale-computing

Zuckerberg bets big on AI with first gigawatt superclusters plan
Meta Platforms, led by CEO Mark Zuckerberg, is making a significant investment in artificial intelligence infrastructure by planning to build some of the world’s largest AI superclusters. The company announced that its first supercluster, Prometheus, will launch in 2026, with additional multi-gigawatt clusters like Hyperion—designed to scale up to five gigawatts of compute capacity—also in development. These superclusters aim to handle massive AI model training workloads, helping Meta compete with rivals such as OpenAI and Google in areas like generative AI, computer vision, and robotics. According to industry reports, Meta is on track to be the first AI lab to deploy a supercluster exceeding one gigawatt, marking a major escalation in the AI arms race. Alongside infrastructure expansion, Meta is aggressively investing in AI talent and research. The company recently launched Meta Superintelligence Labs, led by former Scale AI CEO Alexandr Wang and ex-GitHub chief Nat Friedman, consolidating top AI talent.
Tags: energy, AI-superclusters, Meta, high-performance-computing, data-centers, gigawatt-scale-computing, AI-infrastructure

Dell unveils AI supercomputing system with Nvidia's advanced chips
Dell has unveiled a powerful AI supercomputing system built on Nvidia’s latest GB300 platform, marking the industry’s first deployment of such systems. Delivered to CoreWeave, an AI cloud service provider, these systems feature Dell Integrated Racks equipped with 72 Blackwell Ultra GPUs, 36 Arm-based 72-core Grace CPUs, and 36 BlueField DPUs per rack. Designed for maximum AI training and inference performance, these high-power systems require liquid cooling. CoreWeave, which counts top AI firms like OpenAI among its clients, benefits from the enhanced capabilities of the GB300 chips to accelerate training and deployment of larger, more complex AI models. This deployment underscores the growing competitive gap in AI infrastructure, where access to cutting-edge chips like Nvidia’s GB300 series offers significant advantages amid rapidly increasing AI training demands and tightening U.S. export controls on high-end AI chips. The rapid upgrade from the previous GB200 platform to GB300 within seven months highlights the fast pace of innovation in AI infrastructure.
Tags: energy, supercomputing, AI-chips, Nvidia-GB300, data-centers, liquid-cooling, high-performance-computing

High-Performance Computing Advanced More Than 425 Energy Research Projects in 2024 - CleanTechnica
In 2024, the National Renewable Energy Laboratory (NREL) completed the full deployment of Kestrel, a high-performance computing (HPC) system under the U.S. Department of Energy’s Office of Energy Efficiency and Renewable Energy. Kestrel delivers approximately 56 petaflops of computing power, significantly accelerating energy research by enabling advanced simulations and analyses through artificial intelligence and machine learning. This supercomputer supported over 425 energy innovation projects across 13 funding areas, facilitating breakthroughs in energy research, materials science, and forecasting. Key projects highlighted in NREL’s Advanced Computing Annual Report for FY 2024 include the use of Questaal, a suite of electronic structure software that solves quantum physics equations with high fidelity to address complex chemical and solid-state system questions. Another notable project, funded by the Bioenergy Technologies Office, used Kestrel to model lignocellulosic biopolymer assemblies in Populus wood, helping researchers understand the molecular interactions responsible for biomass resilience.
Tags: energy, high-performance-computing, renewable-energy, materials-science, bioenergy, molecular-modeling, artificial-intelligence

Japan connects quantum and classical in historic supercomputing first
Japan has unveiled the world’s most advanced quantum–classical hybrid computing system by integrating IBM’s latest 156-qubit Heron quantum processor with its flagship Fugaku supercomputer. This historic installation, located in Kobe and operated by Japan’s national research lab RIKEN, represents the first IBM Quantum System Two deployed outside the U.S. The Heron processor offers a tenfold improvement in quality and speed over its predecessor, enabling it to run quantum circuits beyond the reach of classical brute-force simulations. This fusion of quantum and classical computing marks a significant step toward “quantum-centric supercomputing,” where the complementary strengths of both paradigms are harnessed to solve complex problems. The direct, low-latency connection between Heron and Fugaku allows for instruction-level coordination, facilitating the development of practical quantum-classical hybrid algorithms. Researchers at RIKEN plan to apply this system primarily to challenges in chemistry and materials science, aiming to pioneer high-performance computing workflows that benefit both scientific research and industry.
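The payoff of instruction-level, low-latency coupling is the variational family of hybrid algorithms, in which a classical optimizer repeatedly tunes the parameters of a quantum circuit. The sketch below shows that loop's skeleton in plain Python; the quantum evaluation is faked with the closed form <Z> = cos(theta) for a single RY(theta) qubit so the example stays self-contained, whereas a real deployment would dispatch each evaluation to the Heron QPU.

```python
import numpy as np

# Skeleton of a variational quantum-classical loop. The "QPU call" is a
# stand-in: for RY(theta)|0>, measuring Z gives exactly <Z> = cos(theta).
def qpu_expectation(theta: float) -> float:
    return np.cos(theta)  # placeholder for executing the circuit on hardware

theta = 2.0   # arbitrary starting parameter
lr = 0.2      # classical optimizer step size
for step in range(50):
    # Parameter-shift rule: an exact gradient from two extra circuit runs.
    grad = 0.5 * (qpu_expectation(theta + np.pi / 2)
                  - qpu_expectation(theta - np.pi / 2))
    theta -= lr * grad  # classical update between quantum executions

print(theta, qpu_expectation(theta))  # converges toward <Z> = -1 at theta = pi
```

Each optimizer step needs fresh circuit executions, so round-trip latency between the classical and quantum sides sits directly on the critical path, which is exactly what the Heron-Fugaku integration is built to shrink.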
Tags: quantum-computing, supercomputing, hybrid-computing, materials-science, high-performance-computing, IBM-Quantum, RIKEN

Fujitsu to design Japan’s zetta-class supercomputer that’s 1000 times more powerful
Japanese technology company Fujitsu has been selected by the RIKEN research institute to design FugakuNext, Japan’s next-generation flagship supercomputer. Building on the success of Fugaku, which debuted in 2020 and achieved 442 petaFLOPS performance, FugakuNext aims to be a zetta-class supercomputer with performance approximately 1000 times greater than current systems. The project reflects Japan’s strategic focus on integrating AI with scientific simulations and real-time data, a concept known as “AI for Science,” to maintain leadership in science and innovation. The design phase, including the overall system, computer nodes, and CPU components, will continue until February 2026, with a total budget for the build expected to be around $750 million. Fujitsu will utilize its advanced CPUs, specifically the FUJITSU-MONAKA and its successor MONAKA-X, to power FugakuNext. These CPUs are engineered for high performance and energy efficiency and will power FugakuNext’s compute nodes.
Tags: energy, supercomputer, Fujitsu, AI, high-performance-computing, CPU, scientific-simulations

World’s 5th most efficient supercomputer runs 100% on green energy
Paderborn University’s new supercomputer, Otus, has achieved the rank of fifth most energy-efficient supercomputer globally on the Green500 list, which benchmarks high-performance computing systems based on energy efficiency rather than raw speed. Otus, developed by Lenovo and pro-com Datensysteme GmbH, features 142,656 processor cores, 108 GPUs, AMD’s latest ‘Turin’ processors, and a five-petabyte IBM Spectrum Scale file system. It operates entirely on renewable energy, uses indirect free cooling for year-round efficiency, and repurposes its exhaust heat to warm buildings, underscoring its sustainability credentials. The supercomputer is expected to be fully operational by the third quarter of 2025. Otus nearly doubles the computing power of its predecessor, Noctua, enabling it to handle a wide range of CPU-intensive tasks such as atomic simulations and quantum computing. Its expandable architecture supports up to 100 field-programmable gate arrays (FPGAs).
Tags: energy, supercomputer, green-energy, high-performance-computing, renewable-energy, energy-efficiency, sustainable-technology