Articles tagged with "AI-chips"
Microsoft won’t stop buying AI chips from Nvidia, AMD, even after launching its own, Nadella says
Microsoft has introduced its latest in-house AI chip, the Maia 200, deployed in one of its data centers with plans for broader rollout. Designed as an "AI inference powerhouse," Maia 200 is optimized for running AI models in production and reportedly outperforms competing chips, including Google's latest Tensor Processing Units (TPUs). This move aligns with a broader industry trend where cloud giants develop proprietary AI chips to address supply constraints and high costs associated with Nvidia’s latest hardware. Despite launching Maia 200, Microsoft CEO Satya Nadella emphasized that the company will continue purchasing AI chips from Nvidia and AMD, highlighting ongoing partnerships and mutual innovation. Nadella noted that vertical integration does not preclude using third-party components, reflecting a pragmatic approach to balancing in-house development with external technology. The Maia 200 chip will initially be used by Microsoft’s Superintelligence team, led by a DeepMind co-founder, to build advanced AI models aimed at reducing reliance on external providers like OpenAI and Anthropic.
Tags: energy, AI-chips, Microsoft, Nvidia, AMD, cloud-computing, AI-inference
AI chip startup Ricursive hits $4B valuation two months after launch
Ricursive Intelligence, an AI chip startup founded by former Google researchers Anna Goldie (CEO) and Azalia Mirhoseini (CTO), has rapidly achieved a $4 billion valuation just two months after its formal launch. The company raised $300 million in a Series A round led by Lightspeed, bringing its total funding to $335 million. Ricursive is developing an AI system capable of designing and autonomously improving AI chips, including creating its own silicon substrate layer to accelerate chip advancements. The founders’ prior work on reinforcement learning for chip layout design has been instrumental in four generations of Google’s TPU chips. Ricursive is part of a broader trend of startups focused on AI systems that self-improve hardware. Notably, it should not be confused with Recursive, another AI startup working on similar self-improving AI systems and reportedly also targeting a $4 billion valuation. Additionally, Naveen Rao’s Unconventional AI recently raised a $475 million seed round at a $4.5
Tags: AI-chips, semiconductor-materials, chip-design-automation, silicon-substrate, reinforcement-learning, AI-hardware, startup-funding
From invisibility cloaks to AI chips: Neurophos raises $110M to build tiny optical processors for inferencing
Neurophos, an Austin-based startup, has developed a groundbreaking “metasurface modulator” that enables tiny optical processors capable of performing matrix-vector multiplication—an essential operation for AI inferencing—much faster and more efficiently than traditional silicon-based GPUs and TPUs. By miniaturizing optical transistors to a scale about 10,000 times smaller than conventional optical components, Neurophos can fit thousands of these modulators on a single chip, significantly boosting computational speed and energy efficiency. This innovation addresses key challenges in photonic computing, such as large component size and high power consumption due to digital-analog conversions, positioning Neurophos’s optical processing units (OPUs) as a promising alternative to silicon chips in AI data centers. The company recently raised $110 million in a Series A funding round led by Bill Gates’ venture firm Gates Frontier, with participation from Microsoft’s M12 and other investors. CEO Dr. Patrick Bowen claims that Neurophos’s OPUs will outperform Nvidia
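The core operation Neurophos’s optical hardware targets is the same one that dominates AI inference on GPUs and TPUs: multiplying a weight matrix by a vector of activations. As a rough, illustrative reference point (not Neurophos code; the layer sizes are made up), a single dense layer's inference step looks like this in NumPy:

```python
import numpy as np

# Illustrative sketch only: one dense layer of inference is a
# matrix-vector multiplication followed by a nonlinearity. Optical
# processors such as the OPUs described above aim to compute the
# W @ x product in analog and in parallel, rather than as millions
# of sequential digital multiply-accumulate operations.
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 512))   # layer weights (hypothetical shape)
x = rng.standard_normal(512)           # input activations

y = np.maximum(W @ x, 0.0)             # matrix-vector product + ReLU
print(y.shape)                         # -> (1024,)
```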
Tags: materials, optical-processors, AI-chips, photonic-chips, energy-efficiency, metasurface-modulator, semiconductor-technology
A timeline of the US semiconductor market in 2025
The U.S. semiconductor industry experienced significant developments throughout 2025, marked by leadership changes, government interventions, and shifting international trade dynamics. Nvidia emerged as a dominant player, reporting record revenues driven largely by its data center business, and securing a non-exclusive licensing deal with chip maker Groq, including hiring Groq’s founder and acquiring $20 billion in assets. Despite challenges, Nvidia also navigated complex regulatory environments, including a reversal by the U.S. Department of Commerce allowing it and AMD to export advanced AI chips to China, although China imposed restrictions on domestic companies purchasing Nvidia chips and ruled that Nvidia violated antitrust laws related to a past acquisition. Intel made notable strides with the announcement of its Panther Lake processor, built on its advanced 18A semiconductor process and produced exclusively at its Arizona fab. The company also underwent leadership changes shortly after the U.S. government took an equity stake in Intel’s foundry program, a move aimed at securing domestic chip production amid tariff rumors and geopolitical tensions.
Tags: semiconductors, AI-chips, Nvidia, Intel, chip-manufacturing, semiconductor-industry, technology-tariffs
Elon Musk says Tesla’s restarted Dojo3 will be for ‘space-based AI compute’
Elon Musk announced that Tesla plans to restart development of its third-generation AI chip, Dojo3, but with a new focus on “space-based AI compute” rather than training self-driving models on Earth. This marks a strategic shift following Tesla’s shutdown of the original Dojo supercomputer project five months earlier, which included disbanding the Dojo team after the departure of its lead, Peter Bannon. At that time, Tesla had intended to rely more on external partners like Nvidia, AMD, and Samsung for AI compute and chip manufacturing. However, Musk’s recent statements suggest a renewed commitment to in-house chip development, highlighting that Tesla’s AI5 chip design is progressing well and that the upcoming AI7/Dojo3 chip will be geared toward operating AI data centers in space. Musk’s vision aligns with broader industry discussions about the limitations of Earth’s power grids and the potential benefits of off-planet data centers powered by constant solar energy. Tesla aims to rebuild its Dojo team
Tags: AI-chips, Tesla-Dojo, space-based-computing, energy-harvesting, semiconductor-technology, autonomous-driving, AI-hardware
The US imposes 25% tariff on Nvidia’s H200 AI chips headed to China
The U.S. government, under President Donald Trump, has imposed a 25% tariff on certain advanced AI semiconductors, including Nvidia’s H200 chips, when these are produced outside the U.S., pass through the U.S., and are then exported to countries like China. This tariff formalizes a key aspect of the Department of Commerce’s earlier decision to allow Nvidia to sell these chips to vetted Chinese customers starting in December. The tariff also affects chips from other companies, such as AMD’s MI325X. Despite the tariff, Nvidia welcomed the move, emphasizing that it enables American chip manufacturers to compete globally while supporting domestic jobs and manufacturing. China faces a complex situation in the global AI and semiconductor race, balancing its desire to develop a robust domestic chip industry with the need to access advanced foreign technology in the interim. The Chinese government is reportedly drafting regulations to control how many semiconductors Chinese companies can import, potentially allowing some purchases of Nvidia’s chips, which would mark a shift from previous
Tags: semiconductors, AI-chips, Nvidia, tariffs, semiconductor-industry, US-China-trade, advanced-technology
OpenAI signs deal, worth $10 billion, for compute from Cerebras
OpenAI has entered a multi-year agreement with AI chipmaker Cerebras, securing 750 megawatts of compute power from 2026 through 2028 in a deal valued at over $10 billion. This partnership aims to accelerate AI processing speeds, enabling faster response times for OpenAI’s customers by leveraging Cerebras’s specialized AI chips, which the company claims outperform traditional GPU-based systems like those from Nvidia. The enhanced compute capacity is expected to support real-time AI inference, which Cerebras CEO Andrew Feldman likens to the transformative impact broadband had on the internet. Cerebras, which gained prominence following the AI surge sparked by ChatGPT’s 2022 launch, has been expanding despite postponing its IPO multiple times. The company is reportedly in talks to raise an additional $1 billion at a $22 billion valuation. OpenAI’s strategy involves diversifying its compute infrastructure to optimize performance across different workloads, with Cerebras providing a dedicated low-latency inference solution. This collaboration is
Tags: energy, AI-chips, compute-power, data-centers, high-performance-computing, semiconductor-technology, AI-infrastructure
NVIDIA can now sell AI chips to China as US eases export rules
The U.S. Commerce Department has eased export restrictions on advanced AI chips to China, allowing companies like NVIDIA and AMD to apply for licenses to sell certain high-performance processors under strict conditions. This marks a significant shift from previous policies that largely rejected such exports outright. Under the new rules, chipmakers can seek approval to export processors like NVIDIA’s H200 and AMD’s MI325X on a case-by-case basis, provided they demonstrate no shortage of supply in the U.S. and certify that shipments will not detract from domestic needs. The policy also applies to Macau and restricts eligibility to chips below specific performance thresholds, while explicitly barring exports for military, intelligence, or weapons-related uses. The revised framework further limits exports to no more than 50% of the volume shipped domestically and requires rigorous customer verification and independent third-party testing before shipment. This approach aims to prevent advanced U.S. AI technology from enhancing China’s defense or intelligence capabilities while cautiously reopening commercial access. NVIDIA’s H200
Tags: AI-chips, semiconductor-export-controls, NVIDIA, advanced-processors, US-China-technology-trade, AI-hardware, chip-manufacturing
NVIDIA eyes $20 billion Groq deal as AI chip race grows, report says
NVIDIA has reportedly agreed to acquire AI chip startup Groq in a cash deal valued at $20 billion, which would be the largest acquisition in NVIDIA’s history and significantly expand its presence in specialized AI accelerator hardware. The deal follows Groq’s recent $750 million funding round at a $6.9 billion valuation, which included major investors such as BlackRock, Samsung, and Cisco. The acquisition covers Groq’s core assets but excludes its Groq Cloud business. Groq, founded in 2016 by former Google engineers including CEO Jonathan Ross, focuses on low-latency inference chips designed to accelerate large language model tasks, positioning itself as a challenger to NVIDIA’s GPUs and Google’s TPUs. This acquisition underscores NVIDIA’s broader strategy to deepen its influence across the AI hardware ecosystem amid growing demand for AI inference hardware. NVIDIA’s cash reserves have grown substantially, reaching $60.6 billion by October 2025, enabling aggressive investments and partnerships, including a planned $100 billion investment in OpenAI and
Tags: energy, AI-chips, NVIDIA, Groq, semiconductor, AI-hardware, accelerator-technology
Nvidia acquires AI chip challenger Groq for $20B, report says
Nvidia is reportedly acquiring AI chip startup Groq for $20 billion, as competition intensifies among tech companies to enhance their AI computing capabilities. While Nvidia’s GPUs have become the industry standard for AI processing, Groq has developed a distinct type of chip known as a language processing unit (LPU), which the company claims is ten times faster and consumes one-tenth the energy of traditional solutions. Groq’s CEO, Jonathan Ross, previously contributed to Google’s chip development efforts, including early work on its Tensor Processing Units (TPUs). Groq has experienced rapid growth, recently raising funds at a $6.9 billion valuation and expanding its user base to over 2 million developers, up from approximately 356,000 the previous year. The acquisition would strengthen Nvidia’s position in the AI hardware market by integrating Groq’s advanced chip technology. Nvidia has not yet provided an official comment on the reported deal.
Tags: energy, AI-chips, Nvidia, Groq, semiconductor-technology, language-processing-unit, computing-power
Department of Commerce may approve Nvidia H200 chip exports to China
The U.S. Department of Commerce is reportedly preparing to approve Nvidia’s export of advanced H200 AI chips to China, marking a potential shift in U.S. policy. These H200 chips are significantly more advanced than the H20 chips Nvidia previously developed specifically for the Chinese market. However, the approval would only allow the shipment of H200 chips that are about 18 months old. Nvidia has expressed support for this decision, emphasizing that it balances national interests and supports American manufacturing jobs. This development follows recent statements from Commerce Secretary Howard Lutnick indicating a pending decision on the matter. The potential approval comes amid ongoing tensions and legislative efforts to restrict advanced AI chip exports to China over national security concerns. Bipartisan lawmakers introduced the Secure and Feasible Exports (SAFE) Chips Act, which would impose a 30-month ban on exporting advanced AI chips to China, though the timing of a vote remains uncertain. Historically, the Trump administration had imposed export restrictions on chip companies like Nvidia, but also showed
Tags: materials, semiconductor, AI-chips, Nvidia, chip-export, technology-trade, advanced-manufacturing
Andy Jassy says Amazon’s Nvidia competitor chip is already a multi-billion-dollar business
Amazon CEO Andy Jassy announced at the AWS re:Invent conference that the company’s AI chip business, centered on its Nvidia competitor Trainium, is already a multi-billion-dollar revenue run-rate enterprise. The current generation, Trainium2, boasts over one million chips in production and is used by more than 100,000 companies, powering the majority of usage on Amazon’s AI app development platform, Bedrock. Jassy emphasized that Trainium2 offers compelling price-performance advantages over other GPUs, making it a popular choice among AWS’s extensive cloud customer base. A significant portion of Trainium2’s revenue comes from Anthropic, a key AWS partner using over 500,000 Trainium2 chips in Project Rainier, Amazon’s large-scale AI server cluster designed to support Anthropic’s advanced model training needs. While other major AI players like OpenAI also use AWS, they primarily rely on Nvidia chips, underscoring the challenge of competing with Nvidia’s entrenched GPU technology and proprietary CUDA software
Tags: energy, AI-chips, cloud-computing, semiconductor-technology, Amazon-Trainium, Nvidia-competitor, data-centers
All the biggest news from AWS’ big tech show re:Invent 2025
At AWS re:Invent 2025, Amazon Web Services emphasized AI advancements focused on enterprise customization and autonomous AI agents. CEO Matt Garman highlighted a shift from AI assistants to AI agents capable of independently performing tasks and automating workflows, unlocking significant business value. Key announcements included expanded capabilities for AWS’s AgentCore platform, such as policy-setting features to control AI agent behavior, enhanced memory and logging functions, and 13 pre-built evaluation systems to help customers assess agent performance. AWS also introduced three new “Frontier agents” designed for coding, security reviews, and DevOps tasks, with preview versions already available. AWS unveiled its new AI training chip, Trainium3, promising up to 4x performance improvements and 40% lower energy use for AI training and inference. The company teased Trainium4, which will be compatible with Nvidia chips, signaling deeper integration with Nvidia technology. Additionally, AWS expanded its Nova AI model family with new text and multimodal models, alongside Nova Forge, a
Tags: energy, AI-chips, cloud-computing, AI-agents, Nvidia-compatibility, AI-training, AWS-re:Invent
Microsoft’s plan to fix its chip problem is, partly, to let OpenAI do the heavy lifting
Microsoft is addressing its semiconductor challenges by leveraging its partnership with OpenAI, which is developing custom AI chips in collaboration with Broadcom. Under a revised agreement, Microsoft has secured intellectual property rights to OpenAI’s chip designs and will have access to these innovations, allowing it to adopt and extend the technology for its own use. This move comes as Microsoft’s chip efforts have lagged behind competitors like Google and Amazon, making the partnership a pragmatic solution to accelerate its AI hardware capabilities. CEO Satya Nadella emphasized that Microsoft benefits from OpenAI’s system-level innovations, gaining a significant advantage without bearing the full burden of chip development. The agreement also grants Microsoft continued access to OpenAI’s AI models through 2032, though OpenAI retains exclusive rights to its consumer hardware products. This collaboration highlights the complexity and cost of building advanced AI chips, with Microsoft opting to rely on OpenAI’s expertise and a strategic contract to bolster its position in the AI hardware space.
Tags: semiconductors, AI-chips, Microsoft, OpenAI, chip-design, technology-collaboration, custom-hardware
IRON: Xpeng's humanoid robot uses solid-state battery for long life
At the 2025 AI Day in Guangzhou, Chinese company Xpeng unveiled the second-generation IRON humanoid robot, featuring significant upgrades in movement, control, and balance to mimic human behavior in dynamic environments. Standing 5 feet 10 inches tall and weighing 154 pounds, IRON combines advanced software with flexible mechanics, including 62 active joints and synthetic muscles modeled after the human spine, enabling fluid, natural motions such as walking, twisting, and balancing on uneven surfaces. Its curved head display forms an expressive face, while a lightweight all-solid-state battery provides long-lasting, safe energy without overheating. Powered by three Turing AI chips capable of 2,250 trillion operations per second, IRON integrates Xpeng’s Vision-Language-Action (VLA) system to instantly analyze visual and auditory inputs and respond appropriately. This allows the robot to perform tasks like answering questions, folding laundry, and guiding visitors. Its walking ability, trained on thousands of hours of human gait data, enables it
Tags: robot, humanoid-robot, solid-state-battery, AI-chips, synthetic-muscles, robotics, energy-storage
The brain may be the blueprint for the next computing frontier
The article discusses the rapid advancement of neuromorphic computing, a technology that models hardware on the brain’s neurons and spiking activity to achieve highly energy-efficient and low-latency data processing. Unlike traditional deep neural networks (DNNs) that rely on continuous numeric activations and consume significant power, spiking neural networks (SNNs) use asynchronous, event-driven spikes inspired by biological neurons. This approach enables dramatic reductions in energy use and processing time; for instance, Intel’s Loihi chips reportedly perform AI inference 50 times faster and with 100 times less energy than conventional CPUs and GPUs, while IBM’s TrueNorth chip achieves unprecedented energy efficiency at 400 billion operations per second per watt. However, SNNs currently face challenges in accuracy and training tool maturity compared to traditional AI models. The global race to develop neuromorphic hardware is intensifying, with major players like Intel and IBM in the US leading early efforts through chips such as Loihi and TrueNorth, and startups
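To make the contrast with conventional networks concrete, a minimal, hypothetical leaky integrate-and-fire neuron can be sketched in a few lines of Python: rather than producing a continuous activation at every step the way a DNN unit does, it accumulates input and emits a discrete spike only when a threshold is crossed, which is the event-driven behavior that lets neuromorphic hardware stay largely idle, and frugal with energy, when inputs are quiet. This is a toy illustration of the principle, not code for Loihi or TrueNorth:

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron: returns a 0/1 spike per time step."""
    membrane = 0.0
    spikes = []
    for current in inputs:
        membrane = membrane * leak + current   # integrate input, with leak
        if membrane >= threshold:              # fire only when threshold is crossed
            spikes.append(1)
            membrane = 0.0                     # reset after spiking
        else:
            spikes.append(0)
    return spikes

# Mostly-quiet input produces mostly-zero output: no spikes, no downstream work.
print(lif_neuron([0.0, 0.6, 0.6, 0.0, 0.0, 1.2, 0.1]))  # e.g. [0, 0, 1, 0, 0, 1, 0]
```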
Tags: energy, neuromorphic-computing, spiking-neural-networks, AI-chips, brain-inspired-hardware, energy-efficiency, edge-computing
Nvidia becomes first public company worth $5 trillion
Nvidia has become the first public company to reach a $5 trillion market capitalization, driven primarily by its dominant position in the AI chip market. The company’s shares surged over 5.6% following news that U.S. President Donald Trump planned to discuss Nvidia’s Blackwell chips with Chinese President Xi Jinping. Nvidia CEO Jensen Huang highlighted the company’s expectation of $500 billion in AI chip sales and emphasized expansion into sectors such as security, energy, and science, which will require thousands of Nvidia GPUs. Additionally, Nvidia is investing in enabling AI-native 5G-Advanced and 6G networks through its platforms, further solidifying its role in the AI infrastructure ecosystem. This milestone comes just three months after Nvidia first surpassed a $4 trillion valuation, with its stock rising more than 50% in 2025 due to strong demand for its GPUs used in data centers for training large language models and AI inference. Nvidia’s GPUs remain scarce and highly sought after, supporting the growing infrastructure needed
Tags: energy, AI-chips, GPUs, data-centers, Nvidia, 5G-networks, 6G-networks
Intel unveils 18A chips in major push to revive US semiconductor edge
Intel has unveiled its most advanced processors to date—the Core Ultra series 3 (codenamed Panther Lake) and Xeon 6+—built on its cutting-edge 18A semiconductor process. Panther Lake targets consumer and commercial AI PCs, gaming, and edge computing, featuring a scalable multi-chiplet architecture with up to 16 new performance and efficient cores, delivering over 50% faster CPU performance than its predecessor. It also includes an Intel Arc GPU with up to 12 Xe cores for 50% faster graphics and supports AI acceleration up to 180 TOPS. Additionally, Intel is expanding Panther Lake’s reach into robotics and edge applications through a new AI software suite and reference board. Xeon 6+, Intel’s first 18A-based server processor, is designed for hyperscale data centers and cloud providers, offering up to 288 efficient cores and a 17% increase in instructions per cycle, with availability expected in early 2026. The 18A process represents a
Tags: semiconductors, Intel-18A, AI-chips, robotics, edge-computing, energy-efficiency, materials-engineering
Wall Street analysts explain how AMD’s own stock will pay for OpenAI’s billions in chip purchases
AMD and OpenAI have announced an expanded partnership in which OpenAI will assist AMD in refining its Instinct GPUs—AMD’s competitor to Nvidia chips—and commit to purchasing 6 gigawatts of compute capacity over several years. The deal is valued in the billions, but rather than paying with cash, OpenAI will use AMD stock to finance its purchases. AMD has granted OpenAI up to 160 million stock warrants, which vest as certain milestones are met, including significant increases in AMD’s stock price. For example, the final tranche requires AMD’s market cap to reach around $1 trillion, implying a potential value of about $100 billion for OpenAI’s stake if all conditions are met and shares are held without selling. UBS analyst Timothy Arcuri suggests that OpenAI will likely sell portions of its AMD stock over time to cover its GPU purchases, effectively making this a financing arrangement for AMD. Despite the unconventional structure, the deal serves as a strong validation of AMD’s AI GPU capabilities,
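For a rough sense of where the roughly $100 billion figure comes from, the arithmetic is simply the warrant count multiplied by the share price implied by a $1 trillion market cap. The share count below is an assumed, illustrative number rather than a figure from the article, so treat this as a back-of-the-envelope sketch:

```python
# Hypothetical back-of-the-envelope check of the ~$100B stake value.
warrants = 160_000_000            # warrants granted to OpenAI (from the article)
shares_outstanding = 1.62e9       # assumed AMD share count, for illustration only
target_market_cap = 1.0e12        # final-tranche milestone: ~$1 trillion

implied_share_price = target_market_cap / shares_outstanding   # ~$617 per share
stake_value = warrants * implied_share_price                   # ~$99 billion
print(f"~${stake_value / 1e9:.0f}B")
```

The warrants reportedly carry only a nominal exercise price, which is why the sketch ignores the cost of exercising them.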
Tags: energy, AI-chips, AMD, OpenAI, GPUs, semiconductor-materials, compute-capacity
NVIDIA invests $5B in Intel, launches joint AI and PC chip venture
NVIDIA is investing $5 billion in Intel, becoming one of its largest shareholders and forming a strategic partnership to jointly develop future data center and PC chips. This collaboration aims to combine Intel’s x86 CPU architecture with NVIDIA’s AI and GPU technologies, with Intel building custom CPUs for NVIDIA’s AI infrastructure and manufacturing x86 system-on-chips integrated with NVIDIA RTX GPU chiplets for high-performance personal computers. The deal provides a significant boost to Intel, which has struggled in recent years, as evidenced by a 23% surge in its stock price following the announcement. The partnership leverages the strengths of both companies: Intel’s foundational x86 architecture, manufacturing capabilities, and advanced packaging, alongside NVIDIA’s AI leadership and CUDA architecture. Analysts view NVIDIA’s involvement as a pivotal moment for Intel, repositioning it from an AI laggard to a key player in AI infrastructure. The collaboration also has competitive implications, potentially challenging rivals like AMD and TSMC, which currently manufactures NVIDIA’s top processors. The
Tags: semiconductors, AI-chips, NVIDIA, Intel, data-centers, PC-processors, AI-infrastructure
China tells its tech companies they can’t buy AI chips from Nvidia
China’s Cyberspace Administration has officially banned domestic tech companies, including major players like ByteDance and Alibaba, from purchasing Nvidia’s AI chips, specifically the RTX Pro 6000D, a server GPU designed for the Chinese market. This move follows earlier guidance from Beijing discouraging companies from buying Nvidia chips and urging support for local AI chip manufacturers. Nvidia’s chips are widely regarded as some of the most advanced globally, making this ban a significant setback for China’s tech ecosystem, despite efforts by companies like Huawei and Alibaba to develop indigenous AI hardware. Nvidia CEO Jensen Huang expressed disappointment but acknowledged the broader geopolitical tensions between China and the U.S. He emphasized Nvidia’s willingness to support Chinese companies if permitted. The ban comes amid a complex backdrop of U.S. export controls: the Trump administration initially restricted Nvidia’s chip sales to China in April, causing substantial revenue losses for Nvidia. Although restrictions were partially eased later, including a controversial revenue-sharing proposal with the U.S. government, Nvidia has yet to resume significant sales
Tags: semiconductors, AI-chips, Nvidia, China-tech-market, semiconductor-industry, chip-manufacturing, technology-regulations
A timeline of the US semiconductor market in 2025
The U.S. semiconductor market in 2025 has experienced significant developments amid geopolitical tensions and industry shifts, largely driven by the strategic importance of AI chip technology. Nvidia reported a record quarter in August, with a notable 56% year-over-year revenue growth in its data center business, underscoring its strong market position despite broader industry turmoil. Meanwhile, Intel underwent major changes: the U.S. government took an equity stake in the company’s foundry program to maintain control, and Japanese conglomerate SoftBank also acquired a strategic stake. Intel further restructured by spinning out its telecom chip business and consolidating operations to improve efficiency, including halting projects in Germany and Poland and planning workforce reductions. Political dynamics have heavily influenced the semiconductor landscape. President Donald Trump announced potential tariffs on the industry, though none had been implemented by early September, and publicly criticized Intel CEO Lip-Bu Tan amid concerns over Tan’s ties to China. Tan met with Trump to discuss Intel’s role in revitalizing U.S
Tags: materials, semiconductor, AI-chips, Intel, Nvidia, chip-manufacturing, technology-industry
After falling behind in generative AI, IBM and AMD look to quantum for an edge
IBM and AMD are collaborating to develop a commercially viable quantum computing architecture as a strategic move to regain competitiveness after lagging behind in the generative AI market. Their joint effort aims to create a scalable and open-source quantum system, making advanced quantum computing more accessible to researchers and developers. This initiative targets complex real-world applications such as drug and materials discovery, optimization, and logistics. By leveraging AMD’s AI-specialized chips and IBM’s expertise in quantum technology, the partnership seeks to position both companies as key infrastructure providers in the evolving tech landscape. IBM’s CEO, Arvind Krishna, emphasized the transformative potential of quantum computing to simulate the natural world and represent information in fundamentally new ways, highlighting the significance of this venture for future technological advancements.
Tags: materials, quantum-computing, AI-chips, IBM, AMD, drug-discovery, optimization
SoftBank makes $2B investment in Intel
Japanese conglomerate SoftBank has agreed to invest $2 billion in Intel by purchasing common stock at $23 per share, signaling a strong commitment to advanced semiconductor technology and manufacturing in the United States. The deal, announced after market hours on August 18, 2025, led to a more than 5% increase in Intel’s share price. SoftBank Group Chairman and CEO Masayoshi Son emphasized that the investment reflects confidence in the expansion of U.S.-based semiconductor manufacturing, with Intel playing a pivotal role, especially amid growing interest in AI chip development. This investment serves as a significant validation for Intel, which has faced competitive pressures from companies like Nvidia and is currently undergoing a restructuring under new CEO Lip-Bu Tan. Intel is focusing on streamlining its semiconductor business, particularly its client and data center segments, while reducing workforce in its Intel Foundry division. The deal also aligns with SoftBank’s renewed focus on the U.S. market and AI technologies, complementing its recent activities such
Tags: semiconductors, AI-chips, Intel, SoftBank, advanced-technology, semiconductor-manufacturing, data-centers
AI chip shipments from US had secret location trackers: Report
A recent report reveals that some AI chip shipments from the United States to other countries were covertly equipped with location tracking devices. These trackers were reportedly placed in shipments deemed at high risk of illegal diversion to China, aiming to enforce US export restrictions on advanced AI chips. The devices were typically hidden within server packaging from manufacturers like Dell and Super Micro, which include chips from Nvidia and AMD. While the exact parties responsible for installing the trackers and the precise points along the shipping routes remain unclear, US agencies such as the Department of Commerce’s Bureau of Industry and Security, Homeland Security Investigations, and the FBI are suspected to be involved. This tactic aligns with longstanding US law enforcement practices to monitor shipments and prevent unauthorized technology transfers to restricted countries. The use of these trackers comes amid ongoing US efforts to limit China’s access to cutting-edge AI technology, which is crucial for innovations in electric vehicles, semiconductors, and aerospace. Since 2022, the US has restricted sales of advanced chips from Nvidia and
Tags: IoT, AI-chips, export-controls, semiconductor-tracking, supply-chain-security, advanced-technology, location-trackers
Elon Musk confirms shutdown of Tesla Dojo, ‘an evolutionary dead end’
Elon Musk has confirmed the shutdown of Tesla’s Dojo supercomputer project, describing it as “an evolutionary dead end” after the company decided to consolidate its AI chip development efforts. Initially, Tesla developed the first Dojo supercomputer using a combination of Nvidia GPUs and in-house D1 chips, with plans for a second-generation Dojo 2 powered by a D2 chip. However, Tesla has shelved the D2 chip and the broader Dojo 2 project to focus resources on its AI5 and AI6 chips. The AI5 chip is designed primarily for Tesla’s Full Self-Driving (FSD) system, while the AI6 chip aims to support both onboard inference for autonomous driving and humanoid robots, as well as large-scale AI training. Musk explained that it makes more sense to integrate many AI5/AI6 chips on a single board to reduce network complexity and costs, a configuration he referred to as “Dojo 3.” This strategic pivot reflects Tesla’s
Tags: robot, AI-chips, Tesla-Dojo, autonomous-vehicles, self-driving-technology, AI-training, humanoid-robots
Robotaxi Falls Into Construction Pit, Tesla Dojo Done - CleanTechnica
The article from CleanTechnica highlights two recent developments that may signal challenges in the advancement of robotaxi technology. First, a Baidu Apollo Go robotaxi in China fell into a construction pit while carrying a paying passenger, despite visible barriers and warning signs. Fortunately, the passenger was unharmed but had to be rescued by local residents. This incident has raised public concerns about the readiness and safety of robotaxis, potentially undermining confidence in the technology despite generally positive overall performance statistics. Secondly, Tesla has disbanded its Dojo supercomputer engineering team, effectively ending its in-house development of AI chips for autonomous driving. Tesla had previously touted Dojo as a critical component for perfecting its Full Self Driving (FSD) system and even considered monetizing the supercomputer’s capabilities. Now, Tesla will rely more heavily on external partners like Nvidia, AMD, and Samsung for computing needs. While this shift may not drastically impact Tesla’s stock, it reflects the high costs and technical challenges Tesla faces in
Tags: robotics, autonomous-vehicles, robotaxi, Tesla-Dojo, AI-chips, autonomous-driving, Tesla
Tesla drops Dojo supercomputer as Musk turns to Nvidia, Samsung chips
Tesla has officially discontinued its in-house Dojo supercomputer project, which aimed to develop custom AI training chips to enhance autonomous driving and reduce reliance on external chipmakers. The decision follows several key departures from the Dojo team, including project head Peter Bannon. CEO Elon Musk explained that maintaining two distinct AI chip designs was inefficient, leading Tesla to refocus efforts on developing the AI5 and AI6 chips. These next-generation chips will be produced in partnership with Samsung’s new Texas factory, with production of AI5 chips expected to start by the end of 2026. The Dojo project was initially central to Tesla’s strategy to build proprietary AI infrastructure for self-driving cars, robots, and data centers, involving significant investment in top chip architects. However, the initiative faced persistent delays and setbacks, with prominent leaders like Jim Keller and Ganesh Venkataramanan having left previously. Many former Dojo team members have moved to a stealth startup, DensityAI, which is pursuing similar AI chip goals
Tags: robot, AI-chips, Tesla, Nvidia, Samsung, autonomous-driving, supercomputer
Tesla shuts down Dojo, the AI training supercomputer that Musk said would be key to full self-driving
Tesla is shutting down its Dojo AI training supercomputer project and disbanding the team behind it, marking a significant shift in the company’s strategy for developing in-house chips and hardware for full self-driving technology. Peter Bannon, the Dojo lead, is leaving Tesla, and remaining team members will be reassigned to other data center and compute projects. This move follows the departure of about 20 former Dojo employees who have founded a new startup, DensityAI, which aims to build chips, hardware, and software for AI-powered data centers used in robotics, AI agents, and automotive applications. The decision to end Dojo comes amid Tesla’s ongoing efforts to position itself as an AI and robotics company, despite setbacks such as a limited robotaxi launch in Austin that faced criticism for problematic driving behavior. CEO Elon Musk had previously touted Dojo as central to Tesla’s AI ambitions and full self-driving goals, emphasizing its capacity to process vast amounts of video data. However, since mid-202
Tags: robot, AI, Tesla, autonomous-vehicles, AI-chips, supercomputer, robotics
Trump says he’ll announce semiconductor and chip tariffs
President Donald Trump announced plans to impose tariffs on semiconductors and chips as early as next week, though specific details about these tariffs have not yet been disclosed. This move could significantly disrupt U.S. hardware and AI companies, which rely heavily on semiconductor imports. Despite the U.S. producing only a small portion of the world’s chips, it remains home to many leading semiconductor companies. Efforts to boost domestic chip manufacturing have been underway since the 2022 CHIPS Act, which allocated $52 billion in subsidies to increase U.S. production capacity, with companies like Intel investing in new manufacturing facilities. The tariff announcement coincides with ongoing deliberations over AI chip export restrictions. The Trump administration has criticized the Biden administration’s multi-tiered export control rules, which were set to take effect in May and would limit sales of advanced AI semiconductors to certain countries for national security reasons. In July, the Trump administration released a policy framework emphasizing the need for chip export restrictions but without detailed proposals. Recent reports suggest the Trump administration
Tags: semiconductors, chip-tariffs, semiconductor-manufacturing, AI-chips, export-restrictions, chip-industry, technology-policy
China cites ‘backdoor safety risk’ in Nvidia’s H20 AI chip; company denies allegation
Chinese authorities have summoned Nvidia over alleged security vulnerabilities in its H20 AI chip, citing “serious security risks” and concerns about potential backdoors that could allow remote access or tracking. The Cyberspace Administration of China (CAC) questioned Nvidia representatives and requested documentation to clarify these issues. Nvidia has denied the allegations, affirming that their chips contain no such backdoors. This investigation comes amid stalled trade talks between Washington and Beijing and could delay Nvidia’s efforts to resume sales of the H20 chip in China, complicating the company’s market position. The scrutiny of Nvidia’s H20 chip aligns with China’s broader strategy to reduce reliance on U.S. semiconductor technology and promote domestic alternatives, such as Huawei’s Ascend 910C chip, which is gaining traction for AI workloads. The H20 was designed to comply with U.S. export restrictions, and its sales resumption was seen as a potential breakthrough in easing trade tensions. However, the current probe and regulatory uncertainty highlight ongoing geopolitical and
Tags: semiconductors, AI-chips, cybersecurity, Nvidia, China-tech-market, trade-restrictions, semiconductor-alternatives
Amazon CEO wants to put ads in your Alexa+ conversations
Amazon CEO Andy Jassy envisions integrating advertising into conversations with Alexa+, the company’s advanced AI-powered digital assistant. Currently available to millions of users, Alexa+ enhances natural, multi-turn interactions and is offered free to Prime subscribers, with an additional $20 monthly subscription tier. Jassy indicated that future subscription models might include an ad-free option, while advertising could serve as a tool to help users discover products and generate revenue. This approach marks a significant shift from Amazon’s limited existing Alexa ads, which have so far been confined to occasional visual or audio spots on devices like the Echo Show. Amazon’s push into AI and advertising comes amid substantial investment in AI infrastructure, including a 90% year-over-year increase in capital expenditures to $31.4 billion in Q2 2025, aimed at developing proprietary AI chips and data centers. While AWS revenue grew 18%, Amazon seeks new revenue streams to support these costs. However, challenges remain: Alexa+’s rollout has faced mixed reviews,
Tags: IoT, smart-assistants, Alexa, AI-advertising, Amazon, voice-technology, AI-chips
Tesla signs $16.5B deal with Samsung to make AI chips
Tesla has entered a $16.5 billion agreement with Samsung to manufacture its next-generation AI6 chips, which are designed to power a wide range of Tesla technologies, from its Full Self-Driving (FSD) system to Optimus humanoid robots and AI training in data centers. Samsung’s new Texas fabrication plant will be dedicated to producing these AI6 chips, marking a significant expansion in Tesla’s chip manufacturing capabilities. Elon Musk also mentioned that Tesla is collaborating with TSMC for its AI5 chips, which have recently completed design and will initially be produced in TSMC’s Taiwan and Arizona facilities. Samsung already produces Tesla’s AI4 chip, and this new deal represents a major boost for Samsung’s chip-making ambitions after previous struggles to secure large clients. Musk indicated that Tesla’s spending on Samsung chips could exceed the initial $16.5 billion deal, with actual production output expected to be several times higher. Additionally, Tesla will assist Samsung in optimizing manufacturing efficiency at the Texas fab,
Tags: robot, AI-chips, Tesla, Samsung, autonomous-driving, humanoid-robots, semiconductor-manufacturing
Tesla confirms $16.5 billion Samsung deal for next-gen chip supply
Samsung Electronics has secured a $16.5 billion semiconductor supply deal with Tesla to produce next-generation AI chips, confirmed by both Samsung’s regulatory filing and Elon Musk’s social media announcement. The contract, effective from July 26, 2024, through December 31, 2033, involves Samsung’s new Texas semiconductor fabrication plant dedicated to manufacturing Tesla’s AI6 chips. Musk highlighted the strategic importance of this partnership, noting that Samsung currently produces AI4 chips while TSMC handles AI5 chips, with Tesla collaborating closely with Samsung to optimize manufacturing efficiency. Although Samsung has kept full contract details confidential to protect trade secrets, the deal’s scale and duration underscore its significance. This agreement represents a major boost for Samsung’s foundry business, which has been striving to catch up with competitors like TSMC in the rapidly growing AI chip market. Samsung is advancing its semiconductor technology, including plans for mass production of 2-nanometer chips that offer improved speed and energy efficiency—technology expected to
Tags: energy, materials, semiconductor, AI-chips, Tesla, Samsung, manufacturing
A timeline of the US semiconductor market in 2025
The U.S. semiconductor industry in 2025 has experienced significant upheaval amid the intensifying global AI competition. Intel, under new CEO Lip-Bu Tan, focused on restructuring and efficiency, canceling projects in Germany and Poland, consolidating test operations, and planning substantial layoffs of up to 20% in certain units. Intel also made key leadership hires to pivot back to an engineering-driven approach. Meanwhile, AMD expanded its AI hardware capabilities through acquisitions, including companies specializing in AI inference chips and software adaptation to compete more directly with Nvidia’s dominance. On the policy front, the Trump administration introduced an AI Action Plan emphasizing chip export controls and allied coordination, though specific restrictions remained undefined. Nvidia faced challenges due to U.S. export licensing requirements on AI chips, leading the company to exclude China-related revenue from forecasts and file applications to resume chip sales there, including launching a China-specific RTX Pro chip. The U.S. also grappled with national security concerns over AI chip sales to the UAE and
Tags: semiconductors, AI-chips, Intel, Nvidia, chip-export-controls, semiconductor-industry, rare-earth-elements
Smuggled NVIDIA chips flood China despite US export crackdown
A Financial Times investigation reveals that despite the U.S. government's export controls introduced in April 2025 banning NVIDIA’s China-specific H20 AI chips, over $1 billion worth of smuggled NVIDIA B200 and other restricted chips have flooded the Chinese market. These chips are openly sold on Chinese social media platforms like Douyin and Xiaohongshu, often alongside other high-end NVIDIA products, and are purchased by local data center suppliers serving major AI firms. The black market emerged rapidly after the export ban, with sellers even promising access to next-generation B300 chips ahead of official launches. NVIDIA maintains that it does not sell restricted chips to Chinese customers and does not support unauthorized deployments, emphasizing that datacenters require official service and support. CEO Jensen Huang has downplayed the extent of chip diversion and criticized export controls as ineffective, arguing they may accelerate China’s independent AI hardware development, potentially undermining U.S. leadership. The U.S. government is pressuring allies like Singapore, where arrests
Tags: semiconductor, AI-chips, NVIDIA, export-controls, black-market, data-centers, chip-smuggling
Trump’s AI Action Plan aims to block chip exports to China but lacks key details
The Trump administration’s recently released AI Action Plan aims to maintain U.S. leadership in AI technology while preventing adversaries, particularly China, from benefiting from American innovations. Central to the plan is strengthening export controls on AI chips through “creative approaches,” including working with government agencies and the AI industry to develop chip location verification features and establishing enforcement mechanisms for export restrictions. The plan emphasizes the need for international alignment with allies to impose strong export controls and prevent backfilling, using tools like the Foreign Direct Product Rule and secondary tariffs. However, the plan lacks detailed strategies on how these goals will be achieved, especially regarding coordination with global allies and specific enforcement measures. Instead, it outlines foundational steps for future sustainable export guidelines rather than immediate policy implementations. This ambiguity reflects ongoing uncertainty, as the administration has shown inconsistent export restriction policies recently, such as rescinding previous Biden-era rules and fluctuating stances on semiconductor exports to China. Upcoming executive orders expected around July 23 may focus more on organizing government efforts than
Tags: energy, semiconductors, AI-chips, export-controls, technology-policy, international-trade, chip-manufacturing
Instead of selling to Meta, AI chip startup FuriosaAI signed a huge customer
South Korean AI chip startup FuriosaAI recently announced a partnership to supply its AI chip, RNGD, to enterprises using LG AI Research’s EXAONE platform, a next-generation hybrid AI model optimized for large language models (LLMs). This collaboration targets multiple sectors including electronics, finance, telecommunications, and biotechnology. The deal follows FuriosaAI’s decision to reject Meta’s $800 million acquisition offer three months prior, citing disagreements over post-acquisition strategy and organizational structure rather than price. FuriosaAI’s CEO June Paik emphasized the company’s commitment to remaining independent and advancing sustainable AI computing. The partnership with LG AI Research is significant as it represents a rare endorsement of a competitor to Nvidia by a major enterprise. FuriosaAI’s RNGD chip demonstrated 2.25 times better inference performance and greater energy efficiency compared to competitive GPUs when running LG’s EXAONE models. Unlike general-purpose GPUs, FuriosaAI’s hardware is specifically designed for AI computing, lowering total cost of ownership while
Tags: AI-chips, FuriosaAI, LG-AI-Research, energy-efficiency, AI-computing, semiconductor-materials, AI-hardware
Nvidia’s resumption of H20 chip sales related to rare earth element trade talks
Nvidia recently reversed its June decision to halt sales of its H20 AI chip to China, filing an application to resume these sales. This move is closely linked to ongoing U.S.-China trade discussions concerning rare earth elements (REEs), such as lanthanum and cerium, which are predominantly mined in China and are essential for technologies including electric vehicle batteries. U.S. Commerce Secretary Howard Lutnick indicated that Nvidia’s chip sales resumption is part of broader negotiations around these critical materials, emphasizing that China will not receive Nvidia’s most advanced technology. The decision has sparked controversy, with some U.S. lawmakers, including Congressman Raja Krishnamoorthi, criticizing it as inconsistent with prior export control policies aimed at protecting advanced technology from foreign adversaries. However, Lutnick downplayed these concerns, assuring that the chips sold to China are not among Nvidia’s top-tier products. This development follows rumors that Nvidia was seeking ways to comply with U.S. export regulations while continuing business in China
Tags: energy, rare-earth-elements, Nvidia, semiconductor-chips, AI-chips, trade-talks, export-controls
Nvidia is set to resume China chip sales after months of regulatory whiplash
Nvidia has announced it is filing applications to resume sales of its H20 artificial intelligence chips to China after several months of regulatory uncertainty. The H20 chip, designed for AI inference tasks rather than training new models, is currently the most powerful AI processor Nvidia can legally export to China under U.S. export controls. Alongside the H20, Nvidia is introducing a new “RTX Pro” chip tailored specifically for the Chinese market, which the company says complies fully with regulations and is suited for digital manufacturing applications like smart factories and logistics. The regulatory back-and-forth began in April when the Trump administration imposed restrictions on sales of high-performance chips, including the H20, potentially costing Nvidia $15 to $16 billion in revenue from Chinese customers. However, after Nvidia CEO Jensen Huang attended a high-profile dinner at Mar-a-Lago and pledged increased U.S. investments and jobs, the administration paused the ban. This episode highlights the ongoing tension between U.S. national security concerns aimed at limiting China’s
Tags: materials, semiconductor, AI-chips, Nvidia, China-tech-market, export-controls, digital-manufacturing
Nvidia boss dismisses China military chip use, cites US tech risk
Nvidia CEO Jensen Huang has downplayed concerns that China’s military could effectively use American AI chips, citing export restrictions and the risk of sanctions as major deterrents. Speaking ahead of a planned visit to China, Huang argued that Chinese military institutions would avoid dependence on US-origin hardware like Nvidia’s advanced A100 and H100 GPUs due to the possibility of supply cutoffs. His comments come amid ongoing US efforts to limit Beijing’s access to cutting-edge semiconductor technologies, which Washington views as critical to national security. Despite Huang’s reassurances, US lawmakers remain wary. Senators Jim Banks and Elizabeth Warren have formally urged Huang not to engage with Chinese military-linked entities or firms circumventing US export controls, such as DeepSeek, a Chinese AI company accused of indirectly sourcing Nvidia chips to support military and intelligence projects. The bipartisan concern reflects broader fears over the dual-use nature of high-end GPUs, which power both civilian AI applications and sophisticated military systems like battlefield automation and electronic warfare. Meanwhile, Nvidia faces complex geopolitical challenges
Tags: semiconductors, AI-chips, Nvidia, military-technology, export-controls, US-China-relations, technology-security
Nvidia becomes first $4 trillion company as AI demand explodes
Nvidia has become the first publicly traded company to reach a $4 trillion market capitalization, driven by soaring demand for its AI chips. The semiconductor giant's stock surged to a record $164 per share, marking a rapid valuation increase from $1 trillion in June 2023 to $4 trillion in just over two years—faster than tech giants Apple and Microsoft, which have also surpassed $3 trillion valuations. Nvidia now holds the largest weight in the S&P 500 at 7.3%, surpassing Apple and Microsoft, and its market value exceeds the combined stock markets of Canada and Mexico as well as all publicly listed UK companies. This historic rise is fueled by the global tech industry's race to develop advanced AI models, all heavily reliant on Nvidia’s high-performance chips. Major players like Microsoft, Meta, Google, Amazon, and OpenAI depend on Nvidia hardware for AI training and inference tasks. The launch of Nvidia’s next-generation Blackwell chips, designed for massive AI workloads, has intensified
Tags: robot, AI-chips, autonomous-systems, Nvidia, semiconductor, data-centers, artificial-intelligence
Dell unveils AI supercomputing system with Nvidia's advanced chips
Dell has unveiled a powerful AI supercomputing system built on Nvidia’s latest GB300 platform, marking the industry’s first deployment of such systems. Delivered to CoreWeave, an AI cloud service provider, these systems feature Dell Integrated Racks equipped with 72 Blackwell Ultra GPUs, 36 Arm-based 72-core Grace CPUs, and 36 BlueField DPUs per rack. Designed for maximum AI training and inference performance, these high-power systems require liquid cooling. CoreWeave, which counts top AI firms like OpenAI among its clients, benefits from the enhanced capabilities of the GB300 chips to accelerate training and deployment of larger, more complex AI models. This deployment underscores the growing competitive gap in AI infrastructure, where access to cutting-edge chips like Nvidia’s GB300 series offers significant advantages amid rapidly increasing AI training demands and tightening U.S. export controls on high-end AI chips. The rapid upgrade from the previous GB200 platform to GB300 within seven months highlights the fast pace of innovation and
Tags: energy, supercomputing, AI-chips, Nvidia-GB300, data-centers, liquid-cooling, high-performance-computing
ChatGPT: Everything you need to know about the AI-powered chatbot
ChatGPT, OpenAI’s AI-powered text-generating chatbot, has rapidly grown since its launch to reach 300 million weekly active users. In 2024, OpenAI made significant strides with new generative AI offerings and the highly anticipated launch of its OpenAI platform, despite facing internal executive departures and legal challenges related to copyright infringement and its shift toward a for-profit model. As of 2025, OpenAI is contending with perceptions of losing ground in the AI race, while working to strengthen ties with Washington and secure one of the largest funding rounds in history. Recent updates in 2025 include OpenAI’s strategic use of Google’s AI chips alongside Nvidia GPUs to power its products, marking a diversification in hardware. A new MIT study raised concerns that ChatGPT usage may impair critical thinking by showing reduced brain engagement compared to traditional writing methods. The ChatGPT iOS app saw 29.6 million downloads in the past month, highlighting its massive popularity. OpenAI also launched o3
Tags: energy, artificial-intelligence, OpenAI, GPUs, AI-chips, power-consumption, machine-learning
A timeline of the US semiconductor market in 2025
The U.S. semiconductor market in the first half of 2025 has experienced significant turbulence amid the ongoing AI technology race. Intel underwent major leadership changes with Lip-Bu Tan appointed CEO, who quickly initiated organizational restructuring including planned layoffs of 15-20% in certain units and efforts to spin off non-core businesses such as its telecom chip division. Meanwhile, AMD aggressively expanded its AI hardware capabilities through acquisitions, including the teams behind Untether AI and Enosemi, a silicon photonics startup, positioning itself to challenge Nvidia’s dominance in AI chip technology. Nvidia faced considerable challenges due to U.S. government-imposed AI chip export restrictions, particularly on its H20 AI chips, which led to a projected $8 billion revenue loss in Q2 and a decision to exclude China-related revenue forecasts going forward. The U.S. government’s AI chip export policies have been contentious, with the Biden administration’s proposed AI Diffusion Rule ultimately abandoned in May, and the Trump administration signaling a different regulatory
Tags: materials, semiconductor-industry, AI-chips, Intel, Nvidia, AMD, chip-export-restrictions
Taiwan places export controls on Huawei and SMIC
Taiwan has imposed export controls on Chinese technology companies Huawei and SMIC, restricting their access to critical resources needed for AI chip production. The Taiwanese International Trade Administration has classified certain high-tech commodities as strategic, requiring government approval for any shipments to these companies. This move effectively limits Huawei and SMIC’s ability to obtain Taiwanese plant construction technologies, materials, and equipment. The export controls are part of a broader effort by Taiwan to address national security concerns and combat arms proliferation. On June 10, the administration added over 600 entities from countries including Russia, Pakistan, Iran, Myanmar, and mainland China—among them Huawei and SMIC—to its restricted entity list. This development could significantly hinder China’s progress in developing advanced AI semiconductors.
Tags: materials, semiconductors, export-controls, AI-chips, high-tech-commodities, Taiwan-trade, supply-chain-security
Anthropic suggests tweaks to proposed US AI chip export controls
Tags: AI-export-controls, Anthropic, US-government, AI-chips, national-security, technology-competition, China