Articles tagged with "supercomputer"
World’s fastest supercomputer runs record-breaking fluid simulation for rocket testing
Researchers at Lawrence Livermore National Laboratory (LLNL) have leveraged the exascale supercomputer El Capitan to perform the largest-ever fluid dynamics simulation, surpassing one quadrillion degrees of freedom in a single computational fluid dynamics (CFD) problem. The simulation modeled turbulent rocket exhaust flows from multiple engines firing simultaneously, a scenario relevant to modern rocket designs like SpaceX’s Super Heavy booster. Using a novel shock-regularization technique called Information Geometric Regularization (IGR), developed by a team including professors from Georgia Tech and NYU, the researchers achieved an 80-fold speedup over previous methods, a 25-fold reduction in memory usage, and a more than fivefold cut in energy consumption. The simulation utilized all 11,136 nodes and over 44,500 AMD Instinct MI300A Accelerated Processing Units on El Capitan, and the work was extended on Oak Ridge National Laboratory’s Frontier supercomputer. The breakthrough sets a new benchmark for exascale CFD performance and memory efficiency.
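For a sense of scale, here is a rough back-of-envelope check of the quoted figures; the per-cell variable count is an assumption typical of compressible-flow solvers (density, three momentum components, energy), not a number from the article:

```python
# Back-of-envelope scale check for the El Capitan CFD run (illustrative only).
TOTAL_DOF = 1e15          # "over one quadrillion degrees of freedom" (article)
VARS_PER_CELL = 5         # assumed conserved variables per grid cell
APUS = 44_500             # MI300A APUs used on El Capitan (article)

cells = TOTAL_DOF / VARS_PER_CELL
print(f"Implied grid size: ~{cells:.1e} cells")        # ~2.0e+14 cells
print(f"DOF per APU:       ~{TOTAL_DOF / APUS:.1e}")   # ~2.2e+10 each
```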
Tags: energy, supercomputer, fluid-dynamics, rocket-simulation, high-performance-computing, computational-fluid-dynamics, energy-efficiency

NVIDIA, Oracle team up to build US’ biggest AI supercomputer
NVIDIA and Oracle have partnered with the U.S. Department of Energy (DOE) to build the nation’s largest AI supercomputer, named Solstice, featuring 100,000 NVIDIA Blackwell GPUs. Alongside Solstice, a companion system called Equinox with 10,000 GPUs will also be deployed at Argonne National Laboratory. Together, these systems will deliver a combined 2,200 exaflops of AI performance, making them the most powerful AI infrastructure developed for the DOE. They aim to accelerate scientific research and innovation across diverse fields such as climate science, healthcare, materials science, and national security by enabling researchers to train advanced AI models using NVIDIA’s Megatron-Core library and TensorRT inference software. This initiative is part of the DOE’s public-private partnership model to reinforce U.S. technological leadership in AI and supercomputing. The collaboration is expected to enhance R&D productivity and foster breakthroughs by integrating these supercomputers with DOE experimental facilities like the Advanced Photon Source.
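The per-GPU arithmetic behind the headline number works out as sketched below; note that "AI exaflops" for Blackwell-class systems are typically quoted at low precision (e.g. FP4, often with sparsity), which is an assumption here, not a detail the article states:

```python
# Rough per-GPU arithmetic for the Solstice + Equinox figures (illustrative).
total_gpus = 100_000 + 10_000   # Solstice + Equinox (article)
total_ai_exaflops = 2_200       # combined AI performance (article)

per_gpu_pflops = total_ai_exaflops * 1_000 / total_gpus
print(f"~{per_gpu_pflops:.0f} AI petaFLOPS per GPU")   # ~20 PFLOPS/GPU
```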
Tags: energy, supercomputer, AI, Department-of-Energy, NVIDIA, Oracle, scientific-research

China’s compact AI server claims 90% lower power consumption
China’s Guangdong Institute of Intelligent Science and Technology (GDIIST) has unveiled BIE-1, a compact AI supercomputer roughly the size of a mini refrigerator that reduces power consumption by 90% compared to traditional supercomputers. Developed in collaboration with Zhuhai Hengqin Neogenint Technology and Suiren Medical Technology, BIE-1 integrates 1,152 CPU cores, 4.8 terabytes of DDR5 memory, and 204 terabytes of storage. It employs brain-inspired neural networks and AI algorithms to deliver advanced computational capabilities, including high-speed training and inference across multiple data types such as text, images, and speech. The device operates quietly, maintains CPU temperatures below 70°C, and runs efficiently on a standard household power socket. The BIE-1’s design addresses the challenges of traditional supercomputers, which require large physical spaces and consume massive amounts of energy for both computing and cooling. Its portability and low power usage make it suitable for deployment outside dedicated data centers.
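A quick read of the quoted spec sheet, using only the article’s figures (the derived ratios are illustrative, not published numbers):

```python
# Reading the BIE-1 spec sheet (illustrative arithmetic).
cpu_cores = 1_152
memory_tb = 4.8       # DDR5 memory (article)
storage_tb = 204      # storage (article)

print(f"Memory per core: ~{memory_tb * 1024 / cpu_cores:.1f} GB")  # ~4.3 GB
print(f"Storage/memory:  {storage_tb / memory_tb:.1f}x")           # 42.5x
```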
Tags: energy, AI-computing, supercomputer, low-power-consumption, sustainable-technology, Guangdong-Institute-of-Intelligent-Science-and-Technology, compact-server

World’s fastest supercomputer shows how black holes shape galaxies
Scientists have utilized Frontier, the world’s fastest supercomputer located at Oak Ridge National Laboratory, to simulate how supermassive black holes influence the stability and evolution of galaxy clusters over billions of years. By modeling a black hole with a billion solar masses at the center of a galaxy cluster with the mass of a quadrillion Suns, researchers tracked the activity of black hole jets and their impact on the surrounding environment. These jets, which move at speeds up to 5% of the speed of light in the simulation, inject heat, dust, and gas into the cluster, regulating energy and preventing the collapse of these massive cosmic structures. The simulation required immense computational resources, including 700,000 node hours and over 17,000 GPUs, highlighting the unique capability of Frontier to handle such large-scale astrophysical problems. The study revealed new insights into the formation of gas filaments around galaxy clusters, phenomena previously observed but never successfully reproduced in simulations. These filaments arise from the turbulence created by interactions between cold and hot gases in the cluster.
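Converting the quoted compute budget into wall-clock time gives a feel for the run’s scale; the GPUs-per-node figure below is Frontier’s published layout (4 AMD MI250X per node), an assumption the article itself does not state:

```python
# Node-hours to wall-clock time (illustrative).
node_hours = 700_000
gpus_used = 17_000
gpus_per_node = 4                  # Frontier node configuration (assumed)

nodes = gpus_used / gpus_per_node  # ~4,250 nodes
wall_hours = node_hours / nodes
print(f"~{nodes:.0f} nodes for ~{wall_hours:.0f} hours "
      f"(~{wall_hours / 24:.0f} days)")   # roughly a week of runtime
```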
Tags: energy, supercomputer, black-holes, astrophysics, galaxy-clusters, simulation, computational-science

Supercomputer drives 500x brighter X-rays to boost battery research
Researchers at Argonne National Laboratory have combined the upgraded Advanced Photon Source (APS) with the Aurora exascale supercomputer to significantly accelerate battery research. The APS upgrade boosts X-ray beam brightness by up to 500 times, enabling unprecedented real-time, high-resolution imaging of battery materials during charge and discharge cycles. This allows scientists to observe atomic-level changes, structural defects, and electronic states of key cathode elements such as nickel, cobalt, and manganese, providing deeper insights into battery performance and degradation. Aurora complements APS by handling massive data processing and AI-driven analysis, with over 60,000 GPUs capable of performing more than one quintillion calculations per second. A high-speed terabit-per-second connection between APS and Aurora facilitates real-time data transfer and experiment feedback, enabling rapid adjustments and optimization. Argonne envisions an autonomous research loop where AI models like AuroraGPT analyze data instantly, predict outcomes, and recommend new materials to test, potentially reducing battery development timelines from years to weeks or days.
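To see what a terabit-per-second link buys in practice, consider the transfer time for a large imaging run; the dataset size below is a hypothetical example, not a figure from the article:

```python
# What a 1 Tb/s APS-to-Aurora link means in practice (illustrative).
link_tbps = 1.0                        # terabits per second (article)
dataset_tb = 50.0                      # hypothetical 50-terabyte imaging run

seconds = dataset_tb * 8 / link_tbps   # terabytes -> terabits
print(f"Moving {dataset_tb:.0f} TB takes ~{seconds:.0f} s "
      f"(~{seconds / 60:.1f} min) at line rate")   # ~400 s
```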
Tags: energy, battery-research, supercomputer, AI, materials-science, energy-storage, Advanced-Photon-Source

Tesla’s Dojo, a timeline
The article chronicles the development and evolution of Tesla’s Dojo supercomputer, a critical component in Elon Musk’s vision to transform Tesla from just an automaker into a leading AI company focused on full self-driving technology. First mentioned in 2019, Dojo was introduced as a custom-built supercomputer designed to train neural networks using vast amounts of video data from Tesla’s fleet. Over the years, Musk and Tesla have highlighted Dojo’s potential to significantly improve the speed and efficiency of AI training, with ambitions for it to surpass traditional GPU-based systems. Tesla officially announced Dojo in 2021, unveiling its D1 chip and plans for an AI cluster comprising thousands of these chips. By 2022, Tesla demonstrated tangible progress with Dojo, including load testing of its hardware and showcasing AI-generated imagery powered by the system. The company aimed to complete a full Exapod cluster by early 2023 and planned multiple such clusters to scale its AI capabilities.
Tags: robot, AI, supercomputer, Tesla-Dojo, self-driving-cars, neural-networks, D1-chip

Tesla Dojo: the rise and fall of Elon Musk’s AI supercomputer
Tesla’s Dojo supercomputer, once heralded by Elon Musk as a cornerstone of the company’s AI ambitions, has been officially shut down as of August 2025. Originally designed to train Tesla’s Full Self-Driving (FSD) neural networks and support autonomous vehicle and humanoid robot development, Dojo was central to Musk’s vision of Tesla as more than just an automaker. Despite years of hype and investment, the project was abruptly ended after Tesla decided that its second-generation Dojo 2 supercluster, based on in-house D2 chips, was “an evolutionary dead end.” This decision came shortly after Tesla signed a deal to source next-generation AI6 chips from Samsung, signaling a strategic pivot away from self-reliant hardware development toward leveraging external partners for chip design. The shutdown also involved disbanding the Dojo team and the departure of key personnel, including project lead Peter Bannon and about 20 employees who left to start their own AI chip company, DensityAI.
Tags: robot, AI, autonomous-vehicles, Tesla, supercomputer, self-driving-technology, semiconductor

Tesla drops Dojo supercomputer as Musk turns to Nvidia, Samsung chips
Tesla has officially discontinued its in-house Dojo supercomputer project, which aimed to develop custom AI training chips to enhance autonomous driving and reduce reliance on external chipmakers. The decision follows several key departures from the Dojo team, including project head Peter Bannon. CEO Elon Musk explained that maintaining two distinct AI chip designs was inefficient, leading Tesla to refocus efforts on developing the AI5 and AI6 chips. These next-generation chips will be produced in partnership with Samsung’s new Texas factory, with production of AI5 chips expected to start by the end of 2026. The Dojo project was initially central to Tesla’s strategy to build proprietary AI infrastructure for self-driving cars, robots, and data centers, and involved significant investment in top chip-design talent. However, the initiative faced persistent delays and setbacks, with prominent leaders like Jim Keller and Ganesh Venkataramanan having left previously. Many former Dojo team members have moved to a stealth startup, DensityAI, which is pursuing similar AI chip goals.
Tags: robot, AI-chips, Tesla, Nvidia, Samsung, autonomous-driving, supercomputer

Tesla shuts down Dojo, the AI training supercomputer that Musk said would be key to full self-driving
Tesla is shutting down its Dojo AI training supercomputer project and disbanding the team behind it, marking a significant shift in the company’s strategy for developing in-house chips and hardware for full self-driving technology. Peter Bannon, the Dojo lead, is leaving Tesla, and remaining team members will be reassigned to other data center and compute projects. This move follows the departure of about 20 former Dojo employees who have founded a new startup, DensityAI, which aims to build chips, hardware, and software for AI-powered data centers used in robotics, AI agents, and automotive applications. The decision to end Dojo comes amid Tesla’s ongoing efforts to position itself as an AI and robotics company, despite setbacks such as a limited robotaxi launch in Austin that faced criticism for problematic driving behavior. CEO Elon Musk had previously touted Dojo as central to Tesla’s AI ambitions and full self-driving goals, emphasizing its capacity to process vast amounts of video data.
Tags: robot, AI, Tesla, autonomous-vehicles, AI-chips, supercomputer, robotics

Germany: World’s largest brain-like supercomputer to aid drug research
Germany’s SpiNNcloud has partnered with Leipzig University to deploy the world’s largest brain-inspired supercomputer specifically designed for drug discovery and personalized medicine research. The system, based on the second-generation SpiNNaker hardware, comprises 4,320 chips and around 650,000 ARM-based cores, enabling the simulation of at least 10.5 billion neurons. This architecture allows for massively parallel processing of small, heterogeneous workloads, making it highly efficient for screening billions of molecules in silico—up to 20 billion molecules in under an hour, which is 100 times faster than traditional CPU clusters. The SpiNNcloud system’s design emphasizes energy efficiency and scalability, using 48 SpiNNaker2 chips per server board, each with 152 ARM cores and specialized accelerators. This results in performance that is 18 times more energy-efficient than current GPU-based systems, addressing power consumption and cooling challenges common in high-performance computing. The brain-inspired architecture supports dynamic sparsity and extreme parallelism.
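The quoted figures are internally consistent, as a quick sanity check shows (all inputs are taken directly from the article; the derived throughput is illustrative):

```python
# Sanity-checking the SpiNNcloud figures quoted above (illustrative).
chips = 4_320
cores_per_chip = 152      # ARM cores per SpiNNaker2 chip (article)
chips_per_board = 48      # chips per server board (article)
molecules = 20e9          # screened "in under an hour" (article)

print(f"Total cores:   {chips * cores_per_chip:,}")          # 656,640 (~650k)
print(f"Server boards: {chips // chips_per_board}")          # 90 boards
print(f"Throughput:    ~{molecules / 3600:.1e} molecules/s") # ~5.6e+06 per second
```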
Tags: energy, supercomputer, AI, brain-inspired-computing, low-power-processors, drug-discovery, personalized-medicine

US supercomputer trains AI for faster nuclear plant licensing
The Oak Ridge National Laboratory (ORNL), under the U.S. Department of Energy, has partnered with AI company Atomic Canyon to accelerate the nuclear power plant licensing process using artificial intelligence. This collaboration, formalized at the Nuclear Opportunities Workshop, aims to leverage ORNL’s Frontier supercomputer—the world’s fastest—to train AI models that can efficiently review and analyze the extensive technical documentation required for nuclear licensing. By utilizing high-performance computing and AI-driven simulations, the partnership seeks to both ensure the safety of nuclear plant designs and significantly reduce the traditionally lengthy licensing timelines overseen by the U.S. Nuclear Regulatory Commission (NRC). Atomic Canyon developed specialized AI models called FERMI, trained on 53 million pages of nuclear documents from the NRC’s ADAMS database, enabling intelligent search and rapid retrieval of relevant information. This approach is intended to streamline regulatory compliance and reporting, helping meet ambitious government deadlines for new nuclear plant approvals. The initiative reflects a broader resurgence in nuclear energy as a reliable, clean power source.
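The article does not describe how FERMI’s search works internally. As a generic illustration of the kind of embedding-based document retrieval described, here is a minimal sketch using the sentence-transformers package; the model name and sample documents are hypothetical placeholders, not Atomic Canyon’s actual implementation:

```python
# Generic semantic search over document snippets -- NOT FERMI's actual design.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose model

corpus = [  # hypothetical licensing-document snippets
    "Reactor containment inspection report, cycle 12.",
    "Emergency diesel generator surveillance test results.",
    "Spent fuel pool cooling system design basis.",
]
corpus_emb = model.encode(corpus, convert_to_tensor=True)

query_emb = model.encode("diesel generator testing", convert_to_tensor=True)
hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.2f}  {corpus[hit['corpus_id']]}")
```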
Tags: energy, nuclear-energy, artificial-intelligence, supercomputer, nuclear-licensing, high-performance-computing, energy-technology

UK powers on supercomputer that runs 21 quintillion operations/sec
The UK has officially powered on its most powerful publicly accessible AI supercomputer, Isambard-AI, located at the University of Bristol. Named after engineer Isambard Kingdom Brunel, the £225 million system can perform 21 exaFLOPs (21 quintillion floating-point operations per second), making it a significant asset for British AI research. Although it ranks 11th globally in processing power, Isambard-AI is a major step for the UK, supporting public-sector projects aimed at addressing climate change, enhancing NHS services, and driving medical and technological innovation. The supercomputer operates primarily on nuclear-powered electricity and costs nearly £1 million monthly to run, with the government emphasizing its long-term benefits, including regional development through AI Growth Zones in Scotland and Wales. Isambard-AI is already enabling impactful research projects, such as developing AI models to predict human behavior in real time using wearable cameras, which could improve safety in high-risk environments like construction sites and aid crowd management at large events.
Tags: energy, supercomputer, AI, nuclear-power, high-performance-computing, UK-technology, computational-power

Despite Protests, Elon Musk Secures Air Permit for xAI
Elon Musk’s xAI data center in Memphis has been granted an air permit by the Shelby County Health Department to continue operating its gas turbines, which power the company’s Grok chatbot. This permit was issued despite significant community opposition and an impending lawsuit alleging violations of the Clean Air Act. The xAI facility, located in the predominantly Black Boxtown neighborhood—a historically pollution-burdened area—uses mobile gas turbines that emit harmful pollutants like nitrogen oxides. Residents and local leaders, including State Rep. Justin Pearson, have raised concerns about the public health impact of these emissions, describing the situation as a public health emergency. xAI began operating the turbines before obtaining the necessary permits, leading to legal challenges from the NAACP and the Southern Environmental Law Center (SELC), which argue that the company violated environmental regulations by failing to secure permits and allowing unchecked pollution. The newly issued permit allows xAI to operate 15 turbines until 2027, though reports and aerial footage suggest the company has at times run more turbines than the permit covers.
Tags: energy, gas-turbines, air-permit, pollution, clean-air-act, supercomputer, emissions

Fujitsu to design Japan’s zetta-class supercomputer that’s 1000 times more powerful
Japanese technology company Fujitsu has been selected by the RIKEN research institute to design FugakuNext, Japan’s next-generation flagship supercomputer. Building on the success of Fugaku, which debuted in 2020 and achieved 442 petaFLOPS performance, FugakuNext aims to be a zetta-class supercomputer with performance approximately 1000 times greater than current systems. The project reflects Japan’s strategic focus on integrating AI with scientific simulations and real-time data, a concept known as “AI for Science,” to maintain leadership in science and innovation. The design phase, including the overall system, computer nodes, and CPU components, will continue until February 2026, with a total budget for the build expected to be around $750 million. Fujitsu will utilize its advanced CPUs, specifically the FUJITSU-MONAKA and its successor MONAKA-X, to power FugakuNext. These CPUs are engineered for high performance and energy efficiency.
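Putting “zetta-class” in perspective with the article’s own figures (illustrative arithmetic only):

```python
# What "1000x Fugaku" means in raw units (illustrative).
fugaku_pflops = 442      # Fugaku's 2020 benchmark result (article)
target_factor = 1_000    # "approximately 1000 times greater" (article)

target_pflops = fugaku_pflops * target_factor
print(f"Target: ~{target_pflops:,} PFLOPS "
      f"= ~{target_pflops / 1e6:.2f} zettaFLOPS")   # ~0.44 zettaFLOPS
```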
Tags: energy, supercomputer, Fujitsu, AI, high-performance-computing, CPU, scientific-simulations

World’s 5th most efficient supercomputer runs 100% on green energy
Paderborn University’s new supercomputer, Otus, has achieved the rank of fifth most energy-efficient supercomputer globally on the Green500 list, which benchmarks high-performance computing systems based on energy efficiency rather than raw speed. Otus, developed by Lenovo and pro-com Datensysteme GmbH, features 142,656 processor cores, 108 GPUs, AMD’s latest ‘Turin’ processors, and a five-petabyte IBM Spectrum Scale file system. It operates entirely on renewable energy, uses indirect free cooling for year-round efficiency, and repurposes its exhaust heat to warm buildings, underscoring its sustainability credentials. The supercomputer is expected to be fully operational by the third quarter of 2025. Otus nearly doubles the computing power of its predecessor, Noctua, enabling it to handle a wide range of CPU-intensive tasks such as atomic simulations and quantum computing. Its expandable architecture supports up to 100 field-programmable gate arrays (FPGAs).
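The Green500 metric itself is simple: sustained LINPACK performance divided by average power draw, in gigaFLOPS per watt. The sketch below shows the calculation with hypothetical placeholder numbers, not Otus’s actual submission figures:

```python
# How a Green500 efficiency figure is computed (illustrative).
rmax_gflops = 5.0e6    # hypothetical sustained performance (5 PFLOPS)
power_watts = 80_000   # hypothetical average power during the run (80 kW)

efficiency = rmax_gflops / power_watts
print(f"~{efficiency:.1f} GFLOPS/W")   # top Green500 entries exceed 60 GFLOPS/W
```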
Tags: energy, supercomputer, green-energy, high-performance-computing, renewable-energy, energy-efficiency, sustainable-technology

Elon Musk’s 200,000-GPU supercomputer
Tags: energy, GPU, supercomputer, AI, Tesla, power-consumption, environmental-impact