Amazon introduces new Trainium3 chip offering 4x AI performance

Source: interestingengineering
Author: @IntEngineering
Published: 12/3/2025
Amazon Web Services (AWS) has unveiled its latest AI training chip, Trainium3, which offers more than four times the performance and memory of its predecessor, Trainium2, for both AI training and inference tasks. The chip is integrated into the Trainium3 UltraServer system, with each server containing 144 Trainium3 chips, and AWS lets customers link thousands of these servers, scaling up to one million chips per deployment, a tenfold increase over the previous generation.

Trainium3 systems are also designed for greater efficiency, consuming about 40% less power while delivering higher compute throughput, which helps reduce infrastructure strain and AI operating costs amid rapidly growing data center demand. Early adopters such as Anthropic and Karakuri have reported improved inference performance, faster iteration cycles, and lower compute costs.
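As a rough, illustrative check of the scaling figures above, the quoted numbers (144 chips per UltraServer, a one-million-chip deployment ceiling, and a tenfold increase over the prior generation) imply roughly 7,000 UltraServers per maximum deployment. The short Python sketch below is back-of-envelope arithmetic based only on the figures in this article, not an AWS specification or API.

import math

# Figures quoted in the article; these are assumptions for illustration, not AWS specs.
CHIPS_PER_ULTRASERVER = 144            # Trainium3 chips per UltraServer
MAX_CHIPS_PER_DEPLOYMENT = 1_000_000   # quoted per-deployment ceiling
GENERATION_SCALE_FACTOR = 10           # "tenfold increase from the previous generation"

# UltraServers needed to reach the quoted one-million-chip scale
ultraservers_needed = math.ceil(MAX_CHIPS_PER_DEPLOYMENT / CHIPS_PER_ULTRASERVER)
print(f"UltraServers per million-chip deployment: ~{ultraservers_needed:,}")

# Implied ceiling for the previous generation, given the stated tenfold increase
previous_gen_chips = MAX_CHIPS_PER_DEPLOYMENT // GENERATION_SCALE_FACTOR
print(f"Implied previous-generation chip ceiling: {previous_gen_chips:,}")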
Looking ahead, AWS previewed its next-generation processor, Trainium4, which will support NVIDIA's NVLink Fusion, a high-speed interconnect technology that enables tightly coupled AI compute.
Tags
energy, AI-chip, AWS-Trainium, data-center-efficiency, NVIDIA-NVLink, custom-silicon, AI-compute