Articles tagged with "AI-chip"
Amazon introduces new Trainium3 chip offering 4x AI performance
Amazon Web Services (AWS) has unveiled its latest AI training chip, Trainium3, which offers more than four times the performance and memory of its predecessor, Trainium2, for both AI training and inference tasks. The new chip is integrated into the Trainium3 UltraServer system, with each server containing 144 Trainium3 chips; AWS lets customers link thousands of these servers, scaling up to one million chips per deployment, a tenfold increase over the previous generation. Trainium3 systems are also designed for greater efficiency, consuming about 40% less power while delivering higher compute throughput, which helps reduce infrastructure strain and AI operating costs amid rapidly expanding data center demands. Early adopters such as Anthropic and Karakuri have reported improved inference performance, faster iteration cycles, and lower compute costs. Looking ahead, AWS previewed its next-generation processor, Trainium4, which will support NVIDIA's NVLink Fusion, a high-speed interconnect technology that enables tightly coupled AI compute.
Tags: energy, AI-chip, AWS-Trainium, data-center-efficiency, NVIDIA-NVLink, custom-silicon, AI-compute

Amazon releases an impressive new AI chip and teases a Nvidia-friendly roadmap
Amazon Web Services (AWS) has unveiled its latest AI training chip, Trainium3, along with the Trainium3 UltraServer system at its AWS re:Invent 2025 conference. Built on a 3-nanometer process, Trainium3 delivers significant improvements over its predecessor, offering more than four times the speed and memory capacity for AI training and inference. Each UltraServer can host 144 chips, and thousands of these servers can be linked to scale up to one million Trainium3 chips, representing a tenfold increase from the previous generation. Additionally, the new chips are 40% more energy efficient, aligning with AWS's goal to reduce operational costs and energy consumption while providing cost savings to AI cloud customers. Early adopters such as Anthropic, Karakuri, Splashmusic, and Decart have already reported substantial reductions in inference costs using Trainium3. Looking ahead, AWS teased the development of Trainium4, which promises another major performance boost and will support Nvidia's NVLink Fusion high-speed interconnect.
Tags: energy, AI-chip, cloud-computing, data-center, energy-efficiency, Nvidia, AWS

Tesla's Hail Mary — Signs of Progress vs. Historical Concerns - CleanTechnica
The article from CleanTechnica highlights Tesla's recent struggles with declining sales and a lack of successful new product launches since the Model Y, painting a somewhat bleak near-term outlook for the company. Despite these challenges, Tesla is pursuing ambitious, revolutionary projects rather than incremental improvements, reflecting a "Hail Mary" strategy under Elon Musk's leadership. This approach carries significant risk but also the potential for substantial growth and increased global influence if successful. Key developments include Tesla's plans to soon enable texting while driving with Full Self-Driving (FSD), pending safety reviews, and anticipated regulatory approvals for FSD in Japan and China by early 2026. Tesla is also expanding its Robotaxi pilot program across several U.S. cities, aiming for hundreds to thousands of vehicles in operation by year-end. The company recently introduced a new AI5 self-driving chip with improved performance and is progressing with the Tesla Semi factory and Tesla Insurance expansion. Additionally, Tesla has launched initiatives like the MultiPass charging program in Europe.
Tags: robot, autonomous-vehicles, Tesla, AI-chip, Robotaxi, self-driving-technology, electric-vehicles

Florida team builds chip to run AI tasks at 100-fold lower power cost
Researchers at the University of Florida have developed a novel silicon photonic chip that uses light, rather than solely electricity, to perform convolution operations—key computations in AI tasks such as image and pattern recognition. By integrating optical components like laser light and microscopic Fresnel lenses directly onto the chip, the device can execute these operations much faster and with significantly lower energy consumption. Tests demonstrated that the prototype achieved about 98% accuracy in classifying handwritten digits, comparable to conventional electronic chips, while operating at near-zero energy for these computations. A notable innovation of this chip is its ability to process multiple data streams simultaneously through wavelength multiplexing, using lasers of different colors passing through the lenses concurrently. This parallel processing capability enhances efficiency and throughput. The project, involving collaboration with UCLA and George Washington University, aligns with trends in the industry where companies like NVIDIA are already incorporating optical components into AI hardware. The researchers anticipate that chip-based optical computing will become integral to future AI systems, potentially enabling more sustainable scaling of AI technologies
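The wavelength-multiplexing idea can be pictured in ordinary software: several independent data streams pass through the same lens at once, just as the arrays below are pushed through the same kernel. This is an illustrative sketch only; the kernel, array sizes, and three-channel count are arbitrary choices, not details of the Florida chip.

```python
import numpy as np

def convolve2d_valid(image, kernel):
    """Plain 2D cross-correlation ('valid' mode, as used in CNN layers)
    computed via sliding windows."""
    kh, kw = kernel.shape
    windows = np.lib.stride_tricks.sliding_window_view(image, (kh, kw))
    return np.einsum("ijkl,kl->ij", windows, kernel)

# Toy horizontal edge-detection kernel.
edge_kernel = np.array([[1.0, -1.0]])

# Three independent "wavelengths" sharing the same optics: each stream is
# convolved with the identical kernel, independently of the others.
streams = [np.random.default_rng(i).random((8, 8)) for i in range(3)]
outputs = [convolve2d_valid(s, edge_kernel) for s in streams]
print([o.shape for o in outputs])  # each stream yields an (8, 7) result
```

On the photonic chip the three passes would happen simultaneously in different laser colors; in software they remain three separate computations, which is exactly the parallelism the optics buys.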
Tags: energy, AI-chip, optical-computing, silicon-photonics, energy-efficiency, machine-learning, semiconductor-materials

Can an AI chip that mimics the brain beat the data deluge?
The article discusses BrainChip's Akida processor, a neuromorphic AI chip inspired by the brain's energy-efficient event-driven processing. Unlike traditional AI chips that process every data frame regardless of changes, Akida leverages spiking neural networks to compute only when input signals exceed a threshold, significantly reducing redundant calculations. This approach exploits data sparsity by processing only changes between frames, leading to power savings of up to 100 times in scenarios with minimal activity, such as a static security camera feed. However, in highly dynamic scenes with frequent changes, these savings diminish. Akida's architecture uses a digital implementation of spiking neural networks, employing activation functions like ReLU to trigger computations selectively. This mimics biological neurons that fire only when stimulated beyond a threshold, enabling progressively fewer computations across network layers. Despite these efficiency gains, neuromorphic chips like Akida remain niche due to limitations such as 8-bit precision constraints and gaps in development tooling. While promising for edge devices constrained by power, such chips will need broader tooling and software support before mainstream adoption.
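The delta-based computation described above can be sketched in a few lines: recompute only the contributions of inputs whose change between frames exceeds a threshold, leaving the rest of the result untouched. Everything here (the threshold value, the single dense-weights layer) is illustrative and not Akida's actual implementation.

```python
import numpy as np

# Illustrative threshold below which an input change is ignored ("no spike").
THRESHOLD = 0.1

def event_driven_update(prev_frame, new_frame, weights, prev_output):
    """Update y = W @ x incrementally, touching only inputs that changed
    beyond the threshold: y_new = y_old + W @ (masked delta)."""
    delta = new_frame - prev_frame
    active = np.abs(delta) > THRESHOLD   # which inputs "spike"
    sparsity = 1.0 - active.mean()       # fraction of work skipped
    sparse_delta = np.where(active, delta, 0.0)
    return prev_output + weights @ sparse_delta, sparsity

rng = np.random.default_rng(0)
x0 = rng.random(64)
x1 = x0.copy()
x1[:4] += 1.0                            # only 4 of 64 inputs change
W = rng.random((16, 64))

y0 = W @ x0                              # dense pass on the first frame
y1, sparsity = event_driven_update(x0, x1, W, y0)
print(f"work skipped: {sparsity:.0%}")   # most inputs untouched
```

In a mostly static scene the masked delta is almost all zeros, which is where the claimed power savings come from; in a rapidly changing scene `active` is dense and the update degenerates to a full recompute.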
Tags: AI-chip, neuromorphic-computing, energy-efficiency, edge-devices, IoT-sensors, brain-inspired-technology, low-power-AI

Nvidia reportedly plans to release new AI chip designed for China
Nvidia is reportedly planning to release a new AI chip tailored specifically for the Chinese market, aiming to navigate around U.S. export restrictions on advanced semiconductor technology. The chip, expected as early as September, will be based on Nvidia’s Blackwell RTX Pro 6000 processor but modified to comply with current regulations. Notably, these China-specific chips will exclude high-bandwidth memory and NVLink, Nvidia’s proprietary high-speed communication interface, which are key features in its more advanced AI chips. This move reflects Nvidia’s determination to maintain its presence and sales in China despite tightening export controls. Nvidia CEO Jensen Huang recently indicated a potential impact on the company’s revenue and profit forecasts due to these restrictions, though this new product launch might mitigate some of those effects. Additional details from Nvidia were not provided at the time of reporting.
Tags: materials, AI-chip, semiconductor, Nvidia, technology, processor, hardware

Volkswagen Getting Xpeng Turing Chips Next - CleanTechnica
Volkswagen is set to adopt Xpeng’s new Turing AI chip for upcoming vehicle models, marking a shift from Nvidia’s Orin X chip previously used. These models, developed in collaboration with Xpeng, are planned for launch next year and will target the Chinese market. This partnership reflects Xpeng’s strategy to expand its business by leveraging its AI chip technology and attracting long-term partners, positioning itself as a technology leader beyond just manufacturing its own vehicles. Volkswagen and Xpeng are jointly developing two mid-class segment Volkswagen brand cars, combining their respective strengths. Volkswagen has also invested $700 million in Xpeng, underscoring its confidence in the startup’s technological capabilities amid a competitive automotive landscape. This collaboration highlights Volkswagen’s commitment to innovation and strategic partnerships to enhance its electric vehicle offerings in China.
Tags: robot, AI-chip, automotive-technology, Volkswagen, Xpeng, electric-vehicles, automotive-innovation

XPENG G7 Scores 10,000 Orders in Just 46 Minutes - CleanTechnica
XPENG's latest electric SUV, the G7, has made a strong market debut, securing 10,000 pre-orders within just 46 minutes of availability. Priced starting at RMB 235,800 (approximately $32,870), the G7 offers advanced features including an 800V electrical architecture, 5C superfast charging, and a CLTC-rated range of 702 kilometers (436 miles), though real-world range may be somewhat lower. The vehicle is positioned between XPENG's G6 and G9 models in terms of size and pricing, with dimensions of 4,892 mm in length and a wheelbase of 2,890 mm. A notable technological highlight of the G7 is its use of XPENG's new Turing AI chip in the Ultra trim, delivering over 2,200 TOPS of computing power and enabling Level 3 autonomous driving capabilities. The Max trim retains the Nvidia Orin X chip. The G7's combination of competitive pricing, long range, and advanced driver-assistance hardware positions it as a strong contender in China's crowded EV market.
Tags: electric-vehicles, AI-chip, superfast-charging, 800V-architecture, energy-storage, autonomous-driving, XPENG-G7

China unveils world's first non-binary AI chip for industry use
China has begun mass production of the world’s first non-binary AI chips, developed by Professor Li Hongge and his team at Beihang University. These chips integrate traditional binary logic with probabilistic computing through a novel Hybrid Stochastic Number (HSN) system, overcoming key limitations of current chip technologies such as high power consumption and poor compatibility with older systems. The new chips offer enhanced fault tolerance, power efficiency, and multitasking capabilities via system-on-chip (SoC) design and in-memory computing algorithms, making them suitable for applications in aviation, manufacturing, and smart control systems like touchscreens. The development leverages mature semiconductor fabrication processes, including a 110-nanometer process for initial touch and display chips and a 28 nm CMOS process for machine learning chips. The team’s innovations enable microsecond-level on-chip computing latency, balancing hardware acceleration with software flexibility. Future plans include creating specialized instructions and chip designs to further optimize hybrid probabilistic computing for complex AI tasks such as speech and image processing. Despite the promising advancements, challenges remain regarding compatibility and long-term reliability, indicating that widespread adoption and impact will require further development and validation.
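The probabilistic half of such a hybrid scheme resembles classic stochastic computing, where a value in [0, 1] is encoded as the density of 1s in a random bitstream and multiplication reduces to a single AND gate per bit. The sketch below illustrates that general technique only; the actual HSN format is not publicly specified at this level of detail.

```python
import random

def to_bitstream(p, n, rng):
    """Encode a value p in [0, 1] as n random bits with P(bit = 1) = p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def from_bitstream(bits):
    """Decode by measuring the fraction of 1s."""
    return sum(bits) / len(bits)

rng = random.Random(42)
n = 100_000                              # longer streams -> less noise
a, b = 0.6, 0.5

# ANDing two independent unipolar streams multiplies the encoded values,
# since P(x AND y) = P(x) * P(y) for independent bits.
prod = [x & y for x, y in zip(to_bitstream(a, n, rng), to_bitstream(b, n, rng))]
estimate = from_bitstream(prod)
print(f"{a} * {b} ~ {estimate:.3f}")     # close to 0.30, with sampling noise
```

The appeal for low-power hardware is that an entire multiplier collapses into one logic gate, and single bit-flips only nudge the result, which is the fault-tolerance property the summary mentions; the trade-off is precision, which falls off as 1/sqrt(n) with stream length.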
Tags: AI-chip, non-binary-computing, energy-efficiency, semiconductor-technology, machine-learning-chip, in-memory-computing, smart-control-systems

Brain-like thinking AI chip with 100x less energy use developed
Tags: energy, AI-chip, neuromorphic-computing, energy-efficiency, cybersecurity, on-device-processing, pattern-recognition

Huawei aims to take on Nvidia's H100 with new AI chip
Tags: Huawei, AI-chip, Nvidia, Ascend-910D, semiconductor, technology, China