Articles tagged with "supercomputing"
World’s first exascale supercomputer speeds plant research with new AI
Scientists at Oak Ridge National Laboratory (ORNL) have developed a novel computational method called Distributed Cross-Channel Hierarchical Aggregation (D-CHAG) that significantly enhances the processing of complex hyperspectral plant imaging data. This approach doubles the analysis speed while reducing memory usage by 75%, overcoming a major bottleneck in handling the vast data generated by hyperspectral imaging systems, which capture hundreds of light wavelengths to reveal detailed information about plant health and stress. By distributing workloads across multiple GPUs and employing a staged, hierarchical aggregation of spectral data, D-CHAG enables faster AI training on larger models without sacrificing image resolution or biological detail. The breakthrough was demonstrated using plant data from ORNL’s Advanced Plant Phenotyping Laboratory and weather datasets on Frontier, the world’s first exascale supercomputer. This advancement allows AI models to measure plant traits such as photosynthetic activity directly from images, replacing slow manual methods and accelerating crop innovation. The technology supports DOE initiatives like the Genesis Mission and OPAL, which
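The summary does not describe D-CHAG's implementation, but the pattern it sketches — each GPU first condenses its own slice of the spectral channels, and the partial results are then combined in stages rather than gathered all at once — can be illustrated with a toy sketch. Everything below (the shapes, the two-feature summary, the averaging merge) is an illustrative assumption, not ORNL's method.

```python
# Conceptual sketch of staged, hierarchical aggregation of hyperspectral
# channels across workers. This is NOT the D-CHAG implementation described
# in the article -- just an illustration of the general idea: each worker
# reduces its own slice of the spectral bands first, then partial results
# are combined pairwise, so no single worker ever holds the full cube.
import numpy as np

def local_aggregate(cube_slice):
    """Stage 1: each worker condenses its channel slice (H, W, c_i) into
    a small set of summary features (here, mean and std per pixel)."""
    return np.stack([cube_slice.mean(axis=-1), cube_slice.std(axis=-1)], axis=-1)

def hierarchical_merge(parts):
    """Stage 2: combine partial aggregates pairwise (tree reduction) instead
    of gathering everything onto one worker at once."""
    while len(parts) > 1:
        merged = []
        for i in range(0, len(parts) - 1, 2):
            merged.append((parts[i] + parts[i + 1]) / 2.0)   # simple averaging merge
        if len(parts) % 2:                                    # carry odd worker forward
            merged.append(parts[-1])
        parts = merged
    return parts[0]

# Toy hyperspectral cube: 64x64 pixels, 256 spectral bands, split over 8 "GPUs".
cube = np.random.rand(64, 64, 256).astype(np.float32)
per_device = np.array_split(cube, 8, axis=-1)                # each device sees ~32 bands
partials = [local_aggregate(s) for s in per_device]          # stage 1, in parallel in practice
features = hierarchical_merge(partials)                      # stage 2, tree-style reduction
print(features.shape)  # (64, 64, 2) -- compact features instead of 256 raw bands
```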
Tags: energy, AI, supercomputing, plant-research, hyperspectral-imaging, bioenergy, computational-methods
China's new AI science network to challenge Trump’s Genesis Mission
China has launched a powerful new artificial intelligence system capable of autonomously conducting advanced scientific research by directly accessing the country’s national supercomputing infrastructure. Officially unveiled on December 23, this AI platform operates at a national scale and is available to over a thousand institutional users across China. Unlike traditional research tools, it can independently plan and execute complex scientific workflows—breaking down tasks, allocating computing resources, running simulations, analyzing data, and generating reports with minimal human intervention. This dramatically accelerates research processes, reducing tasks that once took a full day to about an hour, and currently supports nearly 100 workflows in fields such as materials science, biotechnology, and industrial AI. At the core of this initiative is China’s National Supercomputing Network (SCNet), a high-speed digital backbone linking over 30 supercomputing centers nationwide. Launched in 2023 and rapidly expanded since, SCNet integrates supercomputing and intelligent computing resources to enable large-scale AI deployment for scientific research. Chinese officials
Tags: AI, supercomputing, materials-science, scientific-research, biotechnology, artificial-intelligence, computational-science
UK achieves 1,000 times faster 5D plasma modeling for nuclear fusion
Scientists from the UK Atomic Energy Authority (UKAEA), Johannes Kepler University Linz (JKU), and Emmi AI have developed GyroSwin, an AI-powered tool that models fusion plasma turbulence up to 1,000 times faster than traditional 5D gyrokinetic simulations. These simulations, which track plasma behavior across three spatial dimensions plus two velocity dimensions, are crucial for designing fusion reactors but typically require days on supercomputers. GyroSwin uses machine learning to learn the underlying plasma dynamics, enabling accurate predictions in seconds while preserving key physical features such as turbulent fluctuations and sheared plasma flows, which are essential for meaningful scientific interpretation. This breakthrough addresses a major bottleneck in fusion research by drastically reducing computational time and cost, facilitating millions of simulations needed to optimize future fusion power plants like the UK’s Spherical Tokamak for Energy Production (STEP). By accelerating the modeling of plasma turbulence, GyroSwin supports the development of practical fusion energy—a clean, virtually limitless power
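GyroSwin's architecture is not detailed here, but the surrogate-modeling pattern it relies on — run the expensive solver a limited number of times to generate training data, then fit a fast learned stand-in — can be sketched in a few lines. The fake solver and ridge-regression surrogate below are toy assumptions, not the actual 5D gyrokinetic workflow.

```python
# Minimal illustration of the "surrogate model" idea behind tools like GyroSwin:
# learn a cheap mapping from simulation inputs to a quantity of interest, so the
# expensive solver only has to be run to generate training data. The solver below
# is a fake placeholder, and the ridge regression is a toy stand-in for a learned
# 5D gyrokinetic surrogate.
import numpy as np

def expensive_gyrokinetic_solver(params):
    # Placeholder for a days-long 5D simulation; returns a scalar "heat flux".
    temp_grad, density_grad, shear = params
    return np.tanh(2.0 * temp_grad) * (1.0 + 0.5 * density_grad) - 0.3 * shear

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))                          # sampled plasma parameters
y = np.array([expensive_gyrokinetic_solver(p) for p in X])    # costly step, done offline

# Fit a cheap surrogate: ridge regression on simple polynomial features.
Phi = np.hstack([X, X**2, np.ones((len(X), 1))])
w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(Phi.shape[1]), Phi.T @ y)

def surrogate(params):
    p = np.asarray(params)
    phi = np.concatenate([p, p**2, [1.0]])
    return phi @ w                                            # microseconds instead of days

print(surrogate([0.5, 0.2, 0.1]), expensive_gyrokinetic_solver([0.5, 0.2, 0.1]))
```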
Tags: energy, fusion-energy, plasma-modeling, AI-in-energy, nuclear-fusion, supercomputing, machine-learning
Spectra supercomputer tests adaptive chips for nuclear security
Sandia National Laboratories has introduced Spectra, a prototype supercomputer developed in partnership with NextSilicon, designed to revolutionize national security simulations through adaptive, efficiency-focused computing. Unlike traditional CPU and GPU-based systems, Spectra employs 128 Maverick-2 dual-die accelerators—experimental chips that dynamically analyze and prioritize code tasks in real time to enhance performance while reducing power consumption. This approach aims to improve the speed and efficiency of complex simulations critical to maintaining the safety and reliability of the U.S. nuclear deterrent without underground testing. Spectra is the second system under Sandia’s Vanguard program, which tests cutting-edge technologies for potential large-scale deployment. Following the success of Astra, the first Vanguard machine that validated Arm processors for scientific workloads, Spectra seeks to demonstrate the viability of intelligence-driven, adaptive computing architectures. Early benchmarks, including HPCG, LAMMPS, and SPARTA, have shown promising results without requiring users to rewrite applications, potentially lowering the cost and complexity of
Tags: energy, supercomputing, adaptive-chips, nuclear-security, high-performance-computing, power-efficiency, national-laboratories
What is Genesis Mission, and how it speeds up US scientific research
The Genesis Mission is a comprehensive U.S. government initiative aimed at revolutionizing scientific research by integrating artificial intelligence (AI) as the central driver of discovery. Announced via a White House Executive Order, the mission seeks to unify the nation’s top supercomputers, extensive scientific datasets, and advanced AI systems into a single, secure platform called the American Science and Security Platform. This platform will enable AI agents to run simulations, analyze data, generate hypotheses, and even control robotic laboratories, creating an end-to-end architecture for accelerated scientific innovation across fields such as energy, biotechnology, materials science, and national security. At the heart of the mission is the Department of Energy’s (DOE) responsibility to develop the necessary computing infrastructure, leveraging exascale supercomputers like Frontier and Aurora to perform massive calculations and train scientific foundation models. The initiative builds on prior AI-driven breakthroughs—such as AlphaFold’s protein folding solution and AI-discovered antibiotics—demonstrating AI’s ability to process vast data,
Tags: energy, materials, artificial-intelligence, supercomputing, scientific-research, DOE, robotics
AWS is spending $50B to build AI infrastructure for the US government
Amazon Web Services (AWS) has announced a $50 billion investment to build specialized AI high-performance computing infrastructure tailored for U.S. government agencies. This initiative aims to significantly enhance federal access to AWS AI services, including Amazon SageMaker, model customization tools, Amazon Bedrock, model deployment, and Anthropic’s Claude chatbot. The project will add 1.3 gigawatts of computing power, with construction of new data centers expected to begin in 2026. AWS CEO Matt Garman emphasized that this investment will transform how federal agencies utilize supercomputing, accelerating critical missions such as cybersecurity and drug discovery, while removing technological barriers that have previously limited government AI adoption. AWS has a long history of working with the U.S. government, having started building cloud infrastructure for federal use in 2011. It launched the first air-gapped commercial cloud for classified workloads in 2014 and introduced the AWS Secret Region in 2017, which supports all security classification levels. This new AI infrastructure
Tags: energy, AI-infrastructure, cloud-computing, high-performance-computing, government-technology, data-centers, supercomputing
Two supercomputers featuring NVIDIA Blackwell land in Japan by 2026
Japan’s RIKEN research institute plans to enhance its scientific computing infrastructure with two new supercomputers powered by NVIDIA’s latest Blackwell-generation GPUs, expected to be operational by spring 2026. Together, these systems will house 2,140 NVIDIA GPUs and focus on advancing AI-driven research, high-performance computing, and quantum technology development. The first supercomputer, equipped with 1,600 GPUs on the GB200 NVL4 platform and connected via NVIDIA’s Quantum-X800 InfiniBand networking, will support AI-accelerated scientific workflows in life sciences, materials research, climate forecasting, manufacturing, and laboratory automation. This system aims to accelerate large-scale AI model training and simulations critical to these fields. The second machine, featuring 540 NVIDIA Blackwell GPUs with the same architecture and networking, is dedicated to quantum computing research. It will not function as a quantum computer but will accelerate the development of quantum algorithms, hybrid quantum-classical simulations, and software to improve quantum hardware usability. This
Tags: supercomputing, AI, materials-science, quantum-computing, NVIDIA-Blackwell, high-performance-computing, scientific-research
NVIDIA GPUs enable large-scale quantum chip modeling on supercomputer
Researchers from Lawrence Berkeley National Laboratory and the University of California have achieved a breakthrough by simulating a next-generation quantum microchip in unprecedented detail using the Perlmutter supercomputer. This simulation leveraged over 7,000 NVIDIA GPUs to model the chip’s electromagnetic wave propagation and performance before fabrication, enabling the identification of potential issues early in the design process. The chip, developed collaboratively by UC Berkeley’s Quantum Nanoelectronics Laboratory and Berkeley Lab’s Advanced Quantum Testbed, was modeled using ARTEMIS, an exascale computing tool developed under the DOE’s Exascale Computing Project. The simulation discretized the chip—a multilayer structure measuring just 10 mm by 10 mm by 0.3 mm with micron-scale features—into 11 billion grid cells and ran over a million time steps in seven hours, allowing evaluation of multiple circuit configurations within a day. This level of physical modeling at such scale is unprecedented and required nearly the full capacity of the Perlmutter system. The researchers plan to
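A back-of-the-envelope check on the quoted figures (11 billion cells, roughly a million time steps, seven hours, about 7,000 GPUs) gives a feel for the scale. The field count and precision used for the memory estimate are assumptions for illustration, not from the article.

```python
# Back-of-the-envelope numbers implied by the figures in the article.
cells  = 11e9          # grid cells (from the article)
steps  = 1e6           # time steps ("over a million", from the article)
wall_s = 7 * 3600      # 7 hours of wall-clock time
gpus   = 7000          # approximate GPU count (from the article)

updates_total   = cells * steps
updates_per_sec = updates_total / wall_s
per_gpu         = updates_per_sec / gpus
print(f"{updates_total:.2e} cell-updates total")
print(f"{updates_per_sec:.2e} cell-updates/s across the machine")
print(f"{per_gpu:.2e} cell-updates/s per GPU (order of magnitude)")

# Rough memory footprint, assuming 6 electromagnetic field components per cell
# stored in double precision (an assumption, not stated in the article):
bytes_per_cell = 6 * 8
print(f"~{cells * bytes_per_cell / 1e12:.1f} TB just for field storage")
```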
Tags: quantum-computing, GPU-acceleration, supercomputing, quantum-chips, chip-simulation, exascale-computing, microelectronics-materials
US scientists simulate advanced quantum chip using nearly 7,000 GPUs
A team of researchers from Lawrence Berkeley National Laboratory and the University of California, Berkeley, has successfully simulated a next-generation quantum microchip using nearly 7,200 NVIDIA GPUs on the Perlmutter supercomputer at the National Energy Research Scientific Computing Center. This full-scale physical simulation, conducted over 24 hours, represents a significant advancement in quantum hardware design by enabling scientists to predict chip performance, identify potential issues, and reduce errors before fabrication. The simulation utilized the exascale modeling tool ARTEMIS to capture detailed electromagnetic wave propagation and interactions within the chip, which measures just 10 millimeters square and 0.3 millimeters thick with micron-scale features. The simulation was unprecedented in scale and complexity, discretizing the chip into 11 billion grid cells and running over a million time steps in seven hours, allowing testing of multiple circuit configurations daily. Unlike typical simulations, this approach modeled the chip’s material composition, wiring, resonator geometry, and electromagnetic interactions in full-wave physical detail, including
Tags: quantum-computing, quantum-chip, GPU-simulation, supercomputing, advanced-materials, microelectronics, energy-research
Germany launches 42,000-core ‘Otus’ supercomputer for green research
Germany has launched the ‘Otus’ supercomputer at Paderborn University’s Center for Parallel Computing (PC2), featuring over 42,000 processor cores, 108 GPUs, and a five-petabyte storage system. Developed in partnership with Lenovo and AMD, Otus aims to advance scientific research nationwide by enabling complex simulations that address fundamental and applied challenges, such as atomic-level physical and chemical processes, optimizing shipping routes, improving solar cell efficiency, and developing energy-efficient AI methods. Researchers across Germany can access the system through a competitive proposal process, with the supercomputer operating continuously throughout the year. A key highlight of Otus is its commitment to sustainability: it runs entirely on renewable electricity, uses an indirect free cooling system for year-round efficiency, and repurposes waste heat to warm university buildings. This eco-friendly design contributed to Paderborn University ranking fifth on the global Green500 list of the most energy-efficient supercomputers. Lenovo and AMD emphasized the project’s blend of high performance
Tags: energy, supercomputing, renewable-energy, energy-efficiency, green-technology, high-performance-computing, sustainable-technology
TX-GAIN: MIT supercomputer to power generative AI breakthroughs
MIT’s Lincoln Laboratory Supercomputing Center (LLSC) has unveiled TX-GAIN, the most powerful AI supercomputer at a U.S. university, designed primarily to advance generative AI and accelerate scientific research across diverse fields. With a peak performance of 2 exaflops, TX-GAIN ranks on the TOP500 list and stands as the leading AI system in the Northeast. Unlike traditional AI focused on classification tasks, TX-GAIN excels in generating new outputs and supports applications such as radar signature evaluation, supplementing weather data, anomaly detection in network traffic, and exploring chemical interactions for drug and material design. TX-GAIN’s computational power enables modeling of significantly larger and more complex protein interactions, marking a breakthrough for biological defense research. It also fosters collaboration, notably with the Department of the Air Force-MIT AI Accelerator, to prototype and scale AI technologies for military applications. Housed in an energy-efficient data center in Holyoke, Massachusetts, the LLSC supports thousands of researchers working on
Tags: energy, supercomputing, AI, scientific-research, energy-efficiency, generative-AI, materials-research
Anti-Trump Protesters Take Aim at ‘Naive’ US-UK AI Deal
Thousands of protesters gathered in central London to oppose President Donald Trump’s second state visit to the UK, with many expressing broader concerns about the UK government’s recent AI deal with the US. The demonstrators included environmental activists who criticized the deal’s lack of transparency, particularly regarding the involvement of tech companies and the environmental impact of expanding data centers. Central to the deal is the British startup Nscale, which plans to build more data centers expected to generate over $68 billion in revenue in six years, despite concerns about their high energy and water consumption and local opposition. Critics, including Nick Dearden of Global Justice Now and the Stop Trump Coalition, argue that the deal has been presented as beneficial without sufficient public scrutiny. They worry that the UK government may have conceded regulatory controls, such as digital services taxes and antitrust measures, to US tech giants, potentially strengthening monopolies rather than fostering sovereign British AI development or job creation. Protesters fear that the deal primarily serves the interests of large US corporations rather
Tags: IoT, AI, data-centers, energy-consumption, supercomputing, technology-policy, environmental-impact
Why the Oracle-OpenAI deal caught Wall Street by surprise
The recent surprise deal between OpenAI and Oracle caught Wall Street off guard but underscores Oracle’s continuing significance in AI infrastructure despite its legacy status. OpenAI’s willingness to commit substantial funds—reportedly around $60 billion annually for compute and custom AI chip development—signals its aggressive scaling strategy and desire to diversify infrastructure providers to mitigate risk. Industry experts highlight that OpenAI is assembling a comprehensive global AI supercomputing foundation, which could give it a competitive edge. Oracle’s involvement, while unexpected to some given its perceived diminished role compared to cloud giants like Google, Microsoft, and AWS, is explained by its proven capabilities in delivering large-scale, high-performance infrastructure, including supporting TikTok’s U.S. operations. However, key details about the deal remain unclear, particularly regarding how OpenAI will finance and power its massive compute needs. The company is burning through billions annually despite growing revenues from ChatGPT and other products, raising questions about sustainability. Energy sourcing is a critical concern since data centers are projected to
Tags: energy, AI-infrastructure, cloud-computing, supercomputing, data-centers, power-consumption, OpenAI
El Capitan transforms complex physics into jaw-dropping detail
The El Capitan supercomputer, currently the world’s fastest, has revolutionized the simulation of extreme physics events by producing unprecedentedly detailed and high-resolution models. Developed for scientists at Lawrence Livermore National Laboratory (LLNL), El Capitan can simulate phenomena such as shock waves and fluid mixing with remarkable clarity, capturing sub-micron details that traditional computers often miss. For example, researchers used it to model the impact of shock waves on a tin surface, revealing how the metal melts and ejects tiny droplets, including the influence of microscopic surface scratches. This level of fidelity, enabled by advanced physics models and a fine computational mesh, is crucial for advancing applications in physics, national defense, and fusion energy research. A key focus of the research was the Kelvin-Helmholtz instability, a complex fluid dynamic phenomenon occurring when fluids of different densities interact turbulently under extreme conditions. Using LLNL’s MARBL multiphysics code, El Capitan simulated how shockwaves interacting with minute surface rip
Tags: energy, supercomputing, physics-simulation, fusion-energy, shock-waves, high-performance-computing, materials-science
500 billion data points reveal how quakes could ripple through cities
Researchers led by David McCallen, in collaboration with Lawrence Berkeley and Oak Ridge national laboratories, are using supercomputers to develop highly advanced earthquake simulations that predict how seismic waves propagate and impact urban infrastructure. Their project, part of the Exascale Computing Project, has produced EQSIM, an earthquake simulation code that models earthquake dynamics with unprecedented detail by incorporating geological factors such as fault type, soil composition, and surface topography. These simulations reveal how seismic energy is amplified or dampened by local geology and how buildings and critical infrastructure like water and power systems might respond or fail during earthquakes. Using the Frontier supercomputer, which operates at exascale performance, the team can run simulations covering hundreds of kilometers with up to 500 billion grid points, generating massive datasets of about 3 petabytes per simulation. This computational power allows researchers to identify seismic "hot spots" where ground motions concentrate, varying significantly by location. Notably, the research has found that smaller earthquakes can sometimes
Tags: energy, earthquake-simulation, seismic-waves, infrastructure-resilience, supercomputing, geological-modeling, EQSIM
US supercomputer-backed AI aims to speed hunt for battery breakthroughs
Researchers at the University of Michigan, led by Venkat Viswanathan, are leveraging U.S. Department of Energy (DOE) supercomputers at Argonne National Laboratory to accelerate the discovery of new battery materials. Traditionally, battery material development relied heavily on intuition and incremental improvements based on a limited set of materials discovered mainly between 1975 and 1985. The team is now using foundational AI models trained on billions of molecules via powerful supercomputers like Polaris and Aurora to predict key properties such as conductivity, melting point, and flammability. This approach enables rapid, data-driven identification of promising electrolytes and electrode materials, which are critical for developing next-generation batteries that are more powerful, longer-lasting, and safer. These foundational AI models differ from traditional AI by possessing a broad understanding of molecular structures, allowing them to efficiently tackle specific tasks in battery design. The researchers employed a text-based molecular representation system called SMILES, enhanced by a new tool named SMIRK, to improve prediction
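The summary mentions SMILES, a text-based molecular representation, and SMIRK, a tokenization tool built on top of it. A hedged sketch of the general idea — split a SMILES string into tokens and map them to a property score — is below; the regex tokenizer and the stand-in "property" are illustrative assumptions, not the team's SMIRK tool or foundation model.

```python
# Toy sketch of the text-based molecular representation idea: SMILES strings
# are tokenized and fed to a model that predicts a property. The tokenizer
# regex and the "model" below are illustrative assumptions only.
import re

SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]|Br|Cl|Si|@@|[BCNOPSFIbcnops]|[=#\-\+\(\)\\/]|\d|%\d{2})"
)

def tokenize(smiles: str):
    """Split a SMILES string into chemically meaningful tokens."""
    return SMILES_TOKEN.findall(smiles)

def toy_property_score(smiles: str) -> float:
    """Stand-in 'property predictor': counts heteroatom tokens as a crude
    proxy. A real foundation model would map the token sequence through a
    trained network to conductivity, flammability, etc."""
    tokens = tokenize(smiles)
    hetero = sum(t in {"N", "O", "S", "F", "n", "o", "s"} for t in tokens)
    return hetero / max(len(tokens), 1)

for mol in ["CCO", "FC(F)(F)S(=O)(=O)[N-]S(=O)(=O)C(F)(F)F", "c1ccccc1"]:
    print(mol, tokenize(mol), round(toy_property_score(mol), 3))
```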
Tags: energy, battery-technology, AI-in-materials-science, supercomputing, battery-materials-discovery, electrode-materials, electrolytes
Building AI Foundation Models to Accelerate the Discovery of New Battery Materials - CleanTechnica
Researchers at the University of Michigan, leveraging the powerful supercomputers Aurora and Polaris at the Argonne Leadership Computing Facility (ALCF), are developing AI foundation models to accelerate the discovery of new battery materials. Traditionally, battery material discovery relied heavily on intuition and incremental improvements to a limited set of materials identified mainly between 1975 and 1985. The new AI-driven approach uses large, specialized foundation models trained on massive datasets of molecular structures to predict key properties such as conductivity, melting point, boiling point, and flammability. This enables a more efficient exploration of the vast chemical space—estimated to contain up to 10^60 possible molecular compounds—by focusing on promising candidates for battery electrolytes and electrodes. The team’s foundation model, trained on billions of molecules using text-based molecular representations (SMILES) and enhanced by a novel tool called SMIRK, allows for more precise and consistent learning of molecular structures. This approach helps overcome the limitations of traditional trial-and-error methods by providing
Tags: energy, materials, artificial-intelligence, battery-technology, molecular-design, supercomputing, battery-materials-discovery
Energy Storage Breakthroughs Enable a Strong & Secure Energy Landscape at Argonne - CleanTechnica
Researchers at the University of Michigan, leveraging the supercomputing resources at the U.S. Department of Energy’s Argonne National Laboratory, are pioneering the use of artificial intelligence (AI) foundation models to accelerate the discovery of advanced battery materials. Traditionally, battery material development relied heavily on intuition and incremental improvements to a limited set of materials discovered mainly between 1975 and 1985. The new AI-driven approach uses large, specialized models trained on massive datasets of molecular information to predict key properties such as conductivity, melting point, and flammability, enabling more targeted exploration of potential battery electrolytes and electrodes. The scale of possible molecular compounds—estimated at around 10^60—makes traditional trial-and-error methods impractical. The AI foundation models, trained on billions of known molecules, can efficiently navigate this vast chemical space by identifying promising candidates with desirable properties for next-generation batteries. In 2024, the team utilized Argonne’s Polaris supercomputer to train one of the largest chemical foundation models
Tags: energy, battery-materials, AI-in-energy, supercomputing, molecular-design, battery-electrolytes, battery-electrodes
World’s fastest supercomputer boosts US tsunami warning systems
US scientists at Lawrence Livermore National Laboratory (LLNL) have developed a real-time tsunami forecasting system powered by El Capitan, the world’s fastest supercomputer with a peak performance of 2.79 quintillion calculations per second. Utilizing over 43,500 AMD Instinct MI300A Accelerated Processing Units, the system solves complex acoustic-gravity wave propagation problems to create a detailed "digital twin" model of tsunami behavior. This model integrates real-time seafloor pressure sensor data with advanced physics-based simulations to infer earthquake-induced seafloor motion and predict tsunami wave propagation with quantified uncertainties, enabling rapid forecasts during actual events. The breakthrough hinges on solving a billion-parameter Bayesian inverse problem with unprecedented speed—less than 0.2 seconds—achieving a 10-billion-fold speedup compared to previous methods. This was made possible by leveraging El Capitan’s exascale computing power in an offline precomputation step, allowing subsequent rapid predictions on smaller GPU clusters. The system
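The offline-precomputation trick can be illustrated with a small linear-Gaussian Bayesian inverse problem: the expensive step assembles a data-to-estimate operator once, and live inference is then a single matrix-vector product. The sizes and forward operator below are toy assumptions, not the LLNL digital twin.

```python
# Conceptual sketch of the "precompute offline, infer in real time" pattern
# described for the tsunami system. Everything here is a toy stand-in.
import numpy as np

rng = np.random.default_rng(1)
n_params, n_sensors = 500, 60               # real system: ~1e9 parameters

G       = rng.normal(size=(n_sensors, n_params)) / np.sqrt(n_params)  # forward map
C_prior = np.eye(n_params)                  # prior covariance on seafloor motion
noise   = 1e-2

# --- Offline (expensive) step: build the data-to-posterior-mean operator ---
S = G @ C_prior @ G.T + noise * np.eye(n_sensors)
K = C_prior @ G.T @ np.linalg.inv(S)        # "Kalman gain"-style operator

# --- Online (fast) step: a new sensor reading arrives during an event ---
true_motion = rng.normal(size=n_params)
data = G @ true_motion + np.sqrt(noise) * rng.normal(size=n_sensors)
posterior_mean = K @ data                   # one matvec: effectively instantaneous
print("relative data misfit:",
      np.linalg.norm(G @ posterior_mean - data) / np.linalg.norm(data))
```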
Tags: energy, IoT, supercomputing, tsunami-warning-systems, digital-twin, sensor-data, real-time-forecasting
NASA supercomputer reveals how Greenland ice melt boosts ocean life
A recent NASA-backed study reveals that the massive annual melt of Greenland’s ice sheet—losing about 270 billion tons of ice each year—is unexpectedly boosting ocean life by stimulating phytoplankton growth. Using the advanced ECCO-Darwin computer model developed by NASA’s Jet Propulsion Laboratory and MIT, scientists simulated how glacial meltwater interacts with ocean waters. The freshwater runoff from glaciers like Jakobshavn creates turbulent plumes that lift vital nutrients such as iron and nitrate from deep waters to the sunlit surface, enhancing phytoplankton growth by an estimated 15% to 40% during summer months. This process helps explain satellite observations of a 57% increase in Arctic phytoplankton between 1998 and 2018. Phytoplankton, though microscopic, play a crucial role in the marine food web by absorbing carbon dioxide and serving as the base food source for krill and other small animals, which in turn support larger marine species. However, scientists
Tags: energy, climate-change, supercomputing, oceanography, glacial-melt, NASA, environmental-science
US nuclear research to be led by AI-powered fusion design system
Scientists at Lawrence Livermore National Laboratory (LLNL), in collaboration with Los Alamos and Sandia National Laboratories under the National Nuclear Security Administration (NNSA), have developed an AI-driven system called the Multi-Agent Design Assistant (MADA) to automate and accelerate the design of targets for inertial confinement fusion (ICF) experiments. MADA integrates large language models (LLMs) fine-tuned on internal simulation codes with high-performance computing to interpret natural language and hand-drawn diagrams, generating full simulation decks for LLNL’s 3D multiphysics code MARBL. This enables rapid exploration of fusion capsule designs by running thousands of simulations on supercomputers such as El Capitan, the world’s fastest, and Tuolumne. The AI system uses an Inverse Design Agent to convert human inputs into simulation parameters and a Job Management Agent to handle scheduling across HPC resources. This approach significantly compresses design cycles and expands the design space exploration from a handful of concepts to potentially thousands
Tags: energy, fusion-energy, artificial-intelligence, supercomputing, nuclear-research, inertial-confinement-fusion, high-energy-density-physics
US supercomputer models airflow to reduce jet drag and emissions
The U.S. Department of Energy’s Argonne National Laboratory is leveraging its Aurora supercomputer, one of the world’s first exascale machines capable of over a quintillion calculations per second, to advance aircraft design by modeling airflow around commercial airplanes. A research team from the University of Colorado Boulder employs Aurora’s immense computational power alongside machine learning techniques to simulate complex turbulent airflow, particularly around airplane vertical tails. These simulations aim to improve predictive models and inform the design of smaller, more efficient vertical tails that maintain effectiveness in challenging flight conditions, such as crosswinds with engine failure, thereby potentially reducing drag and emissions. The researchers use a tool called HONEE to conduct detailed airflow simulations that capture the chaotic nature of turbulence. These high-fidelity simulations train AI-based subgrid stress (SGS) models, which predict the effects of small-scale turbulent air movements often missed in lower-resolution models but critical for accurate airflow prediction. Unlike traditional turbulence modeling that relies on extensive offline data analysis, their approach integrates machine
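The workflow described — use high-fidelity simulation output to build targets for a subgrid-stress (SGS) model — can be illustrated in one dimension. The synthetic field, box filter, and polynomial fit below are toy assumptions, not HONEE or the team's neural SGS models.

```python
# Minimal 1D illustration of how subgrid-stress training data is built from a
# high-fidelity field: filter the field, form the exact SGS stress
# tau = <u*u> - <u>*<u>, then fit a cheap model that predicts tau from
# resolved quantities.
import numpy as np

rng = np.random.default_rng(2)
n, width = 4096, 16
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(7 * x + 1.0) + 0.05 * rng.normal(size=n)  # "DNS" field

def box_filter(f, w):
    kernel = np.ones(w) / w
    return np.convolve(np.concatenate([f[-w:], f, f[:w]]), kernel, mode="same")[w:-w]

u_bar  = box_filter(u, width)
uu_bar = box_filter(u * u, width)
tau    = uu_bar - u_bar**2                   # exact subgrid stress from the fine data

# Fit a toy "SGS model": predict tau from the resolved gradient magnitude.
dudx  = np.gradient(u_bar, x)
coeff = np.polyfit(np.abs(dudx), tau, deg=2)
tau_model = np.polyval(coeff, np.abs(dudx))
print("model/true correlation:", np.corrcoef(tau, tau_model)[0, 1].round(3))
```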
Tags: energy, supercomputing, machine-learning, aerospace-engineering, airflow-simulation, turbulence-modeling, exascale-computing
$20 million AI system Nexus to fast-track scientific innovation in US
The U.S. National Science Foundation has awarded $20 million to Georgia Tech and partners to build Nexus, a cutting-edge AI supercomputer designed to accelerate scientific innovation nationwide. Expected to be operational by spring 2026, Nexus will deliver over 400 quadrillion operations per second, with 330 terabytes of memory and 10 petabytes of flash storage. This computing power surpasses the combined calculation capacity of 8 billion humans and is tailored specifically for artificial intelligence and high-performance computing workloads. Nexus aims to address complex challenges in fields such as drug discovery, clean energy, climate modeling, and robotics. Unlike traditional supercomputers, Nexus emphasizes broad accessibility and user-friendly interfaces, allowing researchers from diverse institutions across the U.S. to apply for access through the NSF. The system will be part of a national collaboration linking Georgia Tech with the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign via a high-speed network, creating a shared infrastructure to democratize AI tools. Up
Tags: AI, supercomputing, robotics-innovation, clean-energy, high-performance-computing, scientific-discovery, artificial-intelligence
Germany's JUPITER becomes fourth fastest supercomputer in the world
Germany’s JUPITER supercomputer, located at the Jülich Supercomputing Centre (JSC), has become the fourth fastest supercomputer globally and the fastest in Europe. This achievement was supported by a collaboration with Georgia Tech, where Assistant Professor Spencer Bryngelson accessed JUPITER through the JUPITER Research and Early Access Program (JUREAP). Bryngelson’s Multi-Component Flow Code (MFC) was tested on JUPITER to study the behavior of droplets subjected to high-velocity shockwaves, a complex fluid dynamics problem with significant engineering implications, especially for supersonic and hypersonic aerospace applications. The simulations revealed how droplets deform and break apart under shockwaves, providing valuable insights that help reduce risks and costs associated with physical testing. The MFC project, part of the broader Exascale Multiphysics Flows (ExaMFlow) collaboration between Georgia Tech and JSC, demonstrated strong performance on JUPITER’s key components—the JUWELS
Tags: energy, supercomputing, high-performance-computing, simulations, aerospace-engineering, fluid-dynamics, exascale-computing
Meta is reportedly using actual tents to build data centers
Meta is accelerating its efforts to build AI infrastructure by using unconventional methods to construct data centers quickly. According to reports, the company is employing actual tents and ultra-light structures, along with prefabricated power and cooling modules, to expedite the deployment of computing capacity. This approach prioritizes speed over aesthetics or redundancy, reflecting Meta’s urgent need to catch up with competitors like OpenAI, xAI, and Google in the race for superintelligence technology. One notable project is Meta’s Hyperion data center, which a company spokesperson confirmed will be located in Louisiana. The facility is expected to reach a capacity of 2 gigawatts by 2030, underscoring Meta’s commitment to rapidly scaling its AI compute resources. The absence of traditional backup generators, such as diesel units, further highlights the focus on swift, efficient construction rather than conventional data center design norms. Overall, Meta’s strategy signals a shift toward innovative, speed-driven infrastructure development to support its AI ambitions.
Tags: energy, data-centers, Meta, AI-infrastructure, power-modules, cooling-technology, supercomputing
Dell unveils AI supercomputing system with Nvidia's advanced chips
Dell has unveiled a powerful AI supercomputing system built on Nvidia’s latest GB300 platform, marking the industry’s first deployment of such systems. Delivered to CoreWeave, an AI cloud service provider, these systems feature Dell Integrated Racks equipped with 72 Blackwell Ultra GPUs, 36 Arm-based 72-core Grace CPUs, and 36 BlueField DPUs per rack. Designed for maximum AI training and inference performance, these high-power systems require liquid cooling. CoreWeave, which counts top AI firms like OpenAI among its clients, benefits from the enhanced capabilities of the GB300 chips to accelerate training and deployment of larger, more complex AI models. This deployment underscores the growing competitive gap in AI infrastructure, where access to cutting-edge chips like Nvidia’s GB300 series offers significant advantages amid rapidly increasing AI training demands and tightening U.S. export controls on high-end AI chips. The rapid upgrade from the previous GB200 platform to GB300 within seven months highlights the fast pace of innovation and
Tags: energy, supercomputing, AI-chips, Nvidia-GB300, data-centers, liquid-cooling, high-performance-computing
US supercomputer unlocks nuclear salt reactor secrets with AI power
Scientists at Oak Ridge National Laboratory (ORNL) have developed a novel artificial intelligence (AI) framework that models the behavior of molten lithium chloride with quantum-level accuracy but in a fraction of the time required by traditional methods. Utilizing the Summit supercomputer, the machine-learning model predicts key thermodynamic properties of the salt in both liquid and solid states by training on a limited set of first-principles data. This approach dramatically reduces computational time from days to hours while maintaining high precision, addressing a major challenge in nuclear engineering related to understanding molten salts at extreme reactor temperatures. Molten salts are critical for advanced nuclear reactors as coolants, fuel solvents, and energy storage media due to their stability at high temperatures. However, their complex properties—such as melting point, heat capacity, and corrosion behavior—are difficult to measure or simulate accurately. ORNL’s AI-driven method bridges the gap between fast but less precise molecular dynamics and highly accurate but computationally expensive quantum simulations. This breakthrough enables faster, more reliable
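The summary is about predicting thermodynamic properties from simulation data. One standard route, shown here with synthetic numbers as a hedged illustration (not ORNL's model or data), is estimating heat capacity from energy fluctuations along a molecular-dynamics trajectory, C_v = Var(E) / (k_B T^2).

```python
# Once a (machine-learned) potential can produce energies along an MD
# trajectory, heat capacity follows from energy fluctuations. The
# "trajectory" below is synthetic noise standing in for MD output.
import numpy as np

k_B = 1.380649e-23          # J/K
T   = 900.0                 # K, roughly molten-salt operating range
rng = np.random.default_rng(3)

# Fake per-frame total energies (J) from an imagined MD run with an ML potential.
energies = 1.0e-18 + 2.0e-21 * rng.standard_normal(50_000)

C_v = energies.var() / (k_B * T**2)
print(f"heat capacity estimate: {C_v:.3e} J/K (for this toy system)")
```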
Tags: energy, AI, nuclear-reactors, molten-salts, machine-learning, supercomputing, materials-science
Japan connects quantum and classical in historic supercomputing first
Japan has unveiled the world’s most advanced quantum–classical hybrid computing system by integrating IBM’s latest 156-qubit Heron quantum processor with its flagship Fugaku supercomputer. This historic installation, located in Kobe and operated by Japan’s national research lab RIKEN, represents the first IBM Quantum System Two deployed outside the U.S. The Heron processor offers a tenfold improvement in quality and speed over its predecessor, enabling it to run quantum circuits beyond the reach of classical brute-force simulations. This fusion of quantum and classical computing marks a significant step toward “quantum-centric supercomputing,” where the complementary strengths of both paradigms are harnessed to solve complex problems. The direct, low-latency connection between Heron and Fugaku allows for instruction-level coordination, facilitating the development of practical quantum-classical hybrid algorithms. Researchers at RIKEN plan to apply this system primarily to challenges in chemistry and materials science, aiming to pioneer high-performance computing workflows that benefit both scientific research and industry
Tags: quantum-computing, supercomputing, hybrid-computing, materials-science, high-performance-computing, IBM-Quantum, RIKEN
Tiny quantum processor outshines classical AI in accuracy, energy use
Researchers led by the University of Vienna have demonstrated that a small-scale photonic quantum processor can outperform classical AI algorithms in machine learning classification tasks, marking a rare real-world example of quantum advantage with current hardware. Using a quantum photonic circuit developed at Italy’s Politecnico di Milano and a machine learning algorithm from UK-based Quantinuum, the team showed that the quantum system made fewer errors than classical counterparts. This experiment is one of the first to demonstrate practical quantum enhancement beyond simulations, highlighting specific scenarios where quantum computing provides tangible benefits. In addition to improved accuracy, the photonic quantum processor exhibited significantly lower energy consumption compared to traditional hardware, leveraging light-based information processing. This energy efficiency is particularly important as AI’s growing computational demands raise sustainability concerns. The findings suggest that even today’s limited quantum devices can enhance machine learning performance and energy efficiency, potentially guiding a future where quantum and classical AI technologies coexist symbiotically to push technological boundaries and promote greener, faster, and smarter AI solutions.
Tags: quantum-computing, photonic-quantum-processor, artificial-intelligence, energy-efficiency, machine-learning, quantum-machine-learning, supercomputing
Europe tames ‘elephant flows’ in 1.2 Tbit/s supercomputer trial
Europe achieved a record-breaking 1.2 terabit-per-second (Tbit/s) data transfer across 2,175 miles (3,500 kilometers) in a supercomputing trial involving CSC (IT Center for Science), SURF, and Nokia. The test demonstrated a quantum-safe, high-capacity fibre-optic connection between Amsterdam, Netherlands, and Kajaani, Finland, transferring both real research and synthetic data directly disk-to-disk. The data traversed five production research and education networks, including NORDUnet, Sunet, SIKT, and Funet, leveraging Nokia’s IP/MPLS routing and quantum-safe optical technology. Nokia’s Flexible Ethernet (FlexE) was key to managing “elephant flows,” or very large continuous data streams, proving the feasibility of ultra-fast, long-distance data transport critical for AI and high-performance computing (HPC). This milestone highlights the importance of resilient, scalable, and secure cross-border connectivity to support the exponential growth of research data, especially for AI model training and supercomputing workloads. The trial supports Europe’s ambitions for supercomputing infrastructure, such as the LUMI supercomputer in Kajaani and AI projects like GPT-nl, enabling seamless workflows across distributed data centers. The success of this multi-domain, high-throughput network test underscores the value of strategic partnerships and advanced digital backbones in driving scientific progress and preparing for future AI and HPC demands. Overall, the trial sets a new benchmark for operational long-distance data networks, providing critical insights into data transport and storage infrastructure. Stakeholders emphasized that despite geographical distances, reliable and scalable data connections are achievable and essential for Europe’s research ecosystem. Nokia and its partners are committed to continuing support for global research and education networks, ensuring they can scale confidently to meet the next generation of discovery and innovation.
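The quoted figures invite some quick arithmetic: how long a sustained 1.2 Tbit/s link takes to move a petabyte, and the fibre propagation delay over 3,500 km (the refractive index used below is an assumption).

```python
# Quick arithmetic on the figures quoted in the trial.
rate_bps = 1.2e12                        # 1.2 Tbit/s sustained
petabyte = 1e15 * 8                      # bits in 1 PB
print(f"1 PB transfer time: {petabyte / rate_bps / 60:.1f} minutes")

c          = 299_792_458                 # m/s, speed of light in vacuum
distance_m = 3_500e3                     # Amsterdam-Kajaani route length
latency_ms = distance_m / (c / 1.47) * 1e3   # ~1.47 refractive index assumed
print(f"one-way fibre latency: {latency_ms:.1f} ms")
```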
Tags: energy, supercomputing, AI, data-transfer, optical-networks, quantum-safe-technology, high-capacity-connectivity