Articles tagged with "Nvidia"
Microsoft won’t stop buying AI chips from Nvidia, AMD, even after launching its own, Nadella says
Microsoft has introduced its first in-house AI chip, the Maia 200, deployed in one of its data centers with plans for broader rollout. Designed as an "AI inference powerhouse," Maia 200 is optimized for running AI models in production and reportedly outperforms competing chips, including Google's latest Tensor Processing Units (TPUs). This move aligns with a broader industry trend where cloud giants develop proprietary AI chips to address supply constraints and high costs associated with Nvidia’s latest hardware. Despite launching Maia 200, Microsoft CEO Satya Nadella emphasized that the company will continue purchasing AI chips from Nvidia and AMD, highlighting ongoing partnerships and mutual innovation. Nadella noted that vertical integration does not preclude using third-party components, reflecting a pragmatic approach to balancing in-house development with external technology. The Maia 200 chip will initially be used by Microsoft’s Superintelligence team, led by a former Google DeepMind co-founder, to build advanced AI models aimed at reducing reliance on external providers like OpenAI and Anthropic
Tags: energy, AI-chips, Microsoft, Nvidia, AMD, cloud-computing, AI-inference

Nvidia invests $2B to help debt-ridden CoreWeave add 5GW of AI compute
Nvidia has invested $2 billion in CoreWeave, a cloud computing company specializing in AI, to help it expand its AI compute capacity by 5 gigawatts. This strategic investment comes as CoreWeave faces financial challenges but has successfully capitalized on the AI boom by acquiring several AI startups and expanding partnerships, including with major hyperscalers like OpenAI, Meta, and Microsoft. As part of the deal, Nvidia will assist CoreWeave in acquiring land and power for new data centers and integrate Nvidia’s latest technologies—such as the upcoming Blackwell GPU architecture, Bluefield storage systems, and Vera CPUs—into CoreWeave’s platform. The partnership also involves incorporating Nvidia’s AI software and architecture into Nvidia’s reference designs for cloud and enterprise customers, strengthening CoreWeave’s position in the competitive AI infrastructure market. Following the announcement, CoreWeave’s shares rose by over 15%, signaling investor confidence. For Nvidia, this deal represents another significant move to maintain its leadership role
Tags: energy, AI-compute, data-centers, Nvidia, cloud-computing, semiconductor-chips, AI-architecture

A timeline of the US semiconductor market in 2025
The U.S. semiconductor industry experienced significant developments throughout 2025, marked by leadership changes, government interventions, and shifting international trade dynamics. Nvidia emerged as a dominant player, reporting record revenues driven largely by its data center business, and securing a non-exclusive licensing deal with chip maker Groq, including hiring Groq’s founder and acquiring $20 billion in assets. Despite challenges, Nvidia also navigated complex regulatory environments, including a reversal by the U.S. Department of Commerce allowing it and AMD to export advanced AI chips to China, although China imposed restrictions on domestic companies purchasing Nvidia chips and ruled that Nvidia violated antitrust laws related to a past acquisition. Intel made notable strides with the announcement of its Panther Lake processor, built on its advanced 18A semiconductor process and produced exclusively at its Arizona fab. The company also underwent leadership changes shortly after the U.S. government took an equity stake in Intel’s foundry program, a move aimed at securing domestic chip production amid tariff rumors and geopolitical tensions.
Tags: semiconductors, AI-chips, Nvidia, Intel, chip-manufacturing, semiconductor-industry, technology-tariffs

The US imposes 25% tariff on Nvidia’s H200 AI chips headed to China
The U.S. government, under President Donald Trump, has imposed a 25% tariff on certain advanced AI semiconductors, including Nvidia’s H200 chips, when these are produced outside the U.S., pass through the U.S., and are then exported to countries like China. This tariff formalizes a key aspect of the Department of Commerce’s earlier decision to allow Nvidia to sell these chips to vetted Chinese customers starting in December. The tariff also affects chips from other companies, such as AMD’s MI325X. Despite the tariff, Nvidia welcomed the move, emphasizing that it enables American chip manufacturers to compete globally while supporting domestic jobs and manufacturing. China faces a complex situation in the global AI and semiconductor race, balancing its desire to develop a robust domestic chip industry with the need to access advanced foreign technology in the interim. The Chinese government is reportedly drafting regulations to control how many semiconductors Chinese companies can import, potentially allowing some purchases of Nvidia’s chips, which would mark a shift from previous
Tags: semiconductors, AI-chips, Nvidia, tariffs, semiconductor-industry, US-China-trade, advanced-technology

CES 2026: Everything revealed, from Nvidia’s debuts to AMD’s new chips to Razer’s AI oddities
CES 2026 in Las Vegas showcased a wide array of consumer tech innovations, with AI remaining the central theme, particularly emphasizing physical AI and robotics. Major companies like Nvidia, AMD, and Ford highlighted advancements that integrate AI into tangible applications. Nvidia introduced its new Rubin computing architecture, designed to handle the growing computational demands of AI, set to replace the Blackwell architecture later in 2026. The company also unveiled AI models and tools aimed at autonomous vehicles, reinforcing its vision to make its infrastructure a foundational platform for generalist robots. AMD’s keynote, delivered by CEO Lisa Su, focused on expanding AI capabilities through personal computing with the Ryzen AI 400 Series processors and featured collaborations with prominent AI figures and companies such as OpenAI and Luma AI. Ford announced an AI assistant to be integrated into its app ahead of a 2027 vehicle rollout, built on large language models and hosted via Google Cloud, though specific user experience details remain sparse. Additionally, Caterpillar partnered with Nvidia to introduce the
Tags: robot, AI, autonomous-vehicles, Nvidia, AMD, CES-2026, robotics

CES 2026: Everything revealed, from Nvidia’s debuts to AMD’s new chips to Razer’s AI oddities
CES 2026 in Las Vegas showcased major advancements and announcements, with AI remaining a central theme across the event. Nvidia unveiled its new Rubin computing architecture, designed to replace the Blackwell architecture later this year, offering enhanced speed and storage to meet growing AI computational demands. Nvidia also highlighted its AI model for autonomous vehicles and tools like Alpamayo, aiming to extend AI’s reach into robotics and physical-world applications. Meanwhile, AMD’s CEO Lisa Su presented the company’s latest Ryzen AI 400 Series processors, emphasizing the expansion of AI capabilities in personal computing, supported by partnerships with AI leaders such as OpenAI and Luma AI. Beyond the headline tech reveals, CES featured a variety of intriguing and unconventional products and initiatives. Ford introduced an AI assistant integrated into its app, planned for vehicle deployment in 2027, leveraging Google Cloud and large language models, though specific user experience details remain sparse. Caterpillar partnered with Nvidia to develop the “Cat AI Assistant” for automated construction equipment, alongside using
Tags: robot, AI, autonomous-vehicles, Nvidia, AMD, processors, CES-2026

Caterpillar taps Nvidia to bring AI to its construction equipment
Caterpillar is advancing the integration of artificial intelligence and automation into its construction equipment through a collaboration with semiconductor leader Nvidia. The company is piloting an AI assistive system called “Cat AI” on its Cat 306 CR Mini Excavator, showcased at CES. This system, built on a fleet of AI agents, enables machine operators to ask questions, access resources, receive safety tips, and schedule maintenance, all while working on-site. A key advantage of Cat AI is its ability to collect and transmit extensive operational data—Caterpillar’s machines send about 2,000 messages per second—providing actionable insights without requiring operators to be tied to laptops. In addition to AI assistive technology, Caterpillar is experimenting with digital twins of construction sites using Nvidia’s Omniverse simulation platform. These digital models help optimize scheduling and accurately estimate material needs by leveraging the rich data collected from the equipment. Caterpillar already operates fully autonomous vehicles and views these pilot programs as foundational steps toward broader automation across
Tags: robot, AI, construction-equipment, automation, Nvidia, digital-twins, autonomous-vehicles

Nvidia wants to be the Android of generalist robotics
At CES 2026, Nvidia unveiled a comprehensive robotics ecosystem aimed at becoming the default platform for generalist robotics, analogous to Android’s role in smartphones. This ecosystem includes new open foundation models—such as Cosmos Transfer 2.5, Cosmos Predict 2.5, a vision language model (VLM), and Isaac GR00T N1.6—that enable robots to reason, plan, and adapt across diverse tasks and environments, moving beyond narrow, task-specific bots. Nvidia also introduced Isaac Lab-Arena, an open-source simulation framework designed to safely and efficiently test robotic capabilities in virtual environments, addressing the high cost and risk of physical validation. Supporting this ecosystem is Nvidia OSMO, an open-source command center that integrates workflows from data generation to training across desktop and cloud platforms. To power these innovations, Nvidia launched the Jetson T4000 graphics card, delivering 1200 teraflops of AI compute with efficient power consumption, targeting cost-effective on-device processing. Nvidia is
Tags: robotics, AI, Nvidia, simulation, edge-computing, robot-foundation-models, Jetson-Thor

Nvidia launches Alpamayo, open AI models that allow autonomous vehicles to ‘think like a human’
Nvidia has introduced Alpamayo, a new suite of open-source AI models, simulation tools, and datasets aimed at advancing autonomous vehicle (AV) capabilities by enabling them to reason through complex driving scenarios like humans. Central to this release is Alpamayo 1, a 10-billion-parameter vision language action (VLA) model that employs chain-of-thought reasoning to break down problems step-by-step and select the safest driving actions, even in rare or unfamiliar situations such as traffic light outages. This model’s code is publicly available on Hugging Face, allowing developers to fine-tune it for various applications, including simpler driving systems, auto-labeling video data, and decision evaluators. Nvidia also encourages combining real and synthetic data generated via its Cosmos platform to enhance training and testing. Alongside Alpamayo 1, Nvidia is releasing an extensive open dataset comprising over 1,700 hours of driving data from diverse geographies and conditions, focusing on rare and complex scenarios. To support
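The summary notes that Alpamayo 1’s code and weights are published on Hugging Face for developers to fine-tune. Below is a minimal sketch of pulling such a checkpoint with the huggingface_hub client; the repository ID is a placeholder, since the exact repo name is not given here.

```python
# Minimal sketch: downloading an open model checkpoint from the Hugging Face Hub.
# NOTE: "nvidia/alpamayo-1" is a hypothetical repo ID, not taken from the article;
# substitute the repository name Nvidia actually publishes before running.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="nvidia/alpamayo-1",   # placeholder repo ID
    local_dir="./alpamayo-1",      # where the checkpoint files land locally
)
print(f"Checkpoint downloaded to {local_path}")
```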
Tags: robot, autonomous-vehicles, AI-models, simulation-tools, Nvidia, open-source-AI, physical-robots

Nvidia acquires AI chip challenger Groq for $20B, report says
Nvidia is reportedly acquiring AI chip startup Groq for $20 billion, as competition intensifies among tech companies to enhance their AI computing capabilities. While Nvidia’s GPUs have become the industry standard for AI processing, Groq has developed a distinct type of chip known as a language processing unit (LPU), which Groq claims is ten times faster and consumes one-tenth the energy of traditional solutions. Groq’s CEO, Jonathan Ross, has a background in chip design, having helped develop Google’s Tensor Processing Unit (TPU). Groq has experienced rapid growth, recently raising funds at a $6.9 billion valuation and expanding its user base to over 2 million developers, up from approximately 356,000 the previous year. The acquisition would strengthen Nvidia’s position in the AI hardware market by integrating Groq’s advanced chip technology. Nvidia has not yet provided an official comment on the reported deal.
Tags: energy, AI-chips, Nvidia, Groq, semiconductor-technology, language-processing-unit, computing-power

Nvidia bulks up open source offerings with an acquisition and new open AI models
Nvidia is strengthening its presence in open source AI through two major initiatives: the acquisition of SchedMD and the release of a new family of open AI models. SchedMD, founded in 2010 by the original developers of the widely used open source workload management system Slurm, has been a long-term partner of Nvidia. The acquisition, with undisclosed terms, aims to leverage SchedMD’s technology as critical infrastructure for generative AI, enabling Nvidia to accelerate access to diverse computing systems. Nvidia plans to continue investing in this technology to support AI development at scale. In addition to the acquisition, Nvidia introduced the Nemotron family of open AI models, which it claims to be the most efficient open models for building accurate AI agents. This lineup includes the Nemotron 3 Nano for targeted tasks, Nemotron 3 Super for multi-agent AI applications, and Nemotron 3 Ultra for more complex tasks. Nvidia’s CEO Jensen Huang emphasized that Nemotron represents a move toward open innovation,
Tags: robot, AI-models, Nvidia, open-source-AI, generative-AI, workload-management, GPUs

SoftBank and Nvidia reportedly in talks to fund Skild AI at $14B, nearly tripling its value
SoftBank Group and Nvidia are reportedly negotiating to lead a funding round exceeding $1 billion for Skild AI, a robotics software startup, at a valuation of $14 billion. This potential investment would nearly triple Skild AI’s valuation from its previous $4.7 billion mark in May 2025, when it raised $500 million with participation from SoftBank, LG Technology Ventures, Samsung, Nvidia, and others. Unlike many robotics startups that focus on proprietary hardware, Skild AI develops a robot-agnostic foundational model called Skild Brain, designed to be adaptable across various robot types and applications. The company demonstrated this model in July with robots performing tasks such as picking up dishes and climbing stairs, and has formed strategic partnerships with LG CNS and Hewlett Packard Enterprise to expand its ecosystem. The growing investor interest in AI-driven robotics is reflected in other recent large funding rounds within the sector. For example, Physical Intelligence, which also develops generalized robotic “brains,” and Figure, a humanoid robot
Tags: robotics, AI-robotics, SoftBank, Nvidia, robot-foundation-model, humanoid-robots, robot-agnostic-AI

Department of Commerce may approve Nvidia H200 chip exports to China
The U.S. Department of Commerce is reportedly preparing to approve Nvidia’s export of advanced H200 AI chips to China, marking a potential shift in U.S. policy. These H200 chips are significantly more advanced than the H20 chips Nvidia previously developed specifically for the Chinese market. However, the approval would only allow the shipment of H200 chips that are about 18 months old. Nvidia has expressed support for this decision, emphasizing that it balances national interests and supports American manufacturing jobs. This development follows recent statements from Commerce Secretary Howard Lutnick indicating a pending decision on the matter. The potential approval comes amid ongoing tensions and legislative efforts to restrict advanced AI chip exports to China over national security concerns. Bipartisan lawmakers introduced the Secure and Feasible Exports (SAFE) Chips Act, which would impose a 30-month ban on exporting advanced AI chips to China, though the timing of a vote remains uncertain. Historically, the Trump administration had imposed export restrictions on chip companies like Nvidia, but also showed
Tags: materials, semiconductor, AI-chips, Nvidia, chip-export, technology-trade, advanced-manufacturing

Amazon releases an impressive new AI chip and teases a Nvidia-friendly roadmap
Amazon Web Services (AWS) has unveiled its latest AI training chip, Trainium3, along with the Trainium3 UltraServer system at its AWS re:Invent 2025 conference. Built on a 3-nanometer process, Trainium3 delivers significant improvements over its predecessor, offering more than four times the speed and memory capacity for AI training and inference. Each UltraServer can host 144 chips, and thousands of these servers can be linked to scale up to one million Trainium3 chips, representing a tenfold increase from the previous generation. Additionally, the new chips are 40% more energy efficient, aligning with AWS’s goal to reduce operational costs and energy consumption while providing cost savings to AI cloud customers. Early adopters such as Anthropic, Karakuri, Splashmusic, and Decart have already reported substantial reductions in inference costs using Trainium3. Looking ahead, AWS teased the development of Trainium4, which promises another major performance boost and will support Nvidia’s
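As a rough back-of-the-envelope check on the scaling figures quoted above (144 chips per UltraServer, a ceiling of one million chips), linking "thousands" of servers does add up; a minimal sketch using only the numbers in the summary:

```python
# Rough consistency check using only the figures quoted in the summary above.
chips_per_ultraserver = 144          # Trainium3 chips hosted per UltraServer
target_cluster_size = 1_000_000      # stated ceiling of one million chips
ultraservers_needed = target_cluster_size / chips_per_ultraserver
print(f"~{ultraservers_needed:,.0f} UltraServers")  # ~6,944, i.e. thousands of linked servers
```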
Tags: energy, AI-chip, cloud-computing, data-center, energy-efficiency, Nvidia, AWS

Nvidia announces new open AI models and tools for autonomous driving research
Nvidia has unveiled new AI infrastructure and models aimed at advancing physical AI applications, particularly in robotics and autonomous vehicles. At the NeurIPS AI conference, the company introduced Alpamayo-R1, described as the first vision-language-action model specifically designed for autonomous driving research. This model integrates visual and textual data to enable vehicles to perceive their environment and make informed decisions, leveraging Nvidia’s existing Cosmos reasoning model family, which was initially launched in January 2025. Alpamayo-R1 is intended to help autonomous vehicles achieve level 4 autonomy—full self-driving capability within defined areas and conditions—by providing them with “common sense” reasoning to handle complex driving scenarios more like humans. In addition to the new model, Nvidia released the Cosmos Cookbook on GitHub, a comprehensive resource including step-by-step guides, inference tools, and post-training workflows to assist developers in customizing and training Cosmos models for various applications. This toolkit covers essential processes such as data curation, synthetic data generation, and model
Tags: robot, autonomous-vehicles, AI-models, Nvidia, physical-AI, autonomous-driving, vision-language-models

Nvidia, Deutsche Telekom strike €1B partnership for a data center in Munich
Nvidia and Deutsche Telekom have announced a €1 billion partnership to build a new data center in Munich, dubbed the “Industrial AI Cloud.” This facility will deploy over 1,000 Nvidia DGX B200 systems and RTX Pro Servers equipped with up to 10,000 Blackwell GPUs to deliver AI inferencing and related services to German companies while adhering to German data sovereignty laws. Early collaborators include Agile Robots, which will assist in server rack installation, and Perplexity, which plans to offer localized AI inferencing services. Deutsche Telekom will provide the physical infrastructure, while SAP will contribute its Business Technology platform and applications, targeting industrial use cases such as digital twins and physics-based simulations. The project aligns with broader European efforts to reduce dependence on foreign technology infrastructure and promote domestic AI capabilities, although funding for AI in the EU remains significantly lower than in the U.S. Unlike the EU’s AI gigafactory initiative, this data center is a separate endeavor expected to become operational in early 2026
Tags: robot, AI, data-center, industrial-AI, Deutsche-Telekom, Nvidia, digital-twins

Microsoft inks $9.7B deal with Australia’s IREN for AI cloud capacity
Microsoft has secured a significant $9.7 billion, five-year contract with Australia-based IREN to expand its AI cloud computing capacity. This deal grants Microsoft access to advanced compute infrastructure equipped with Nvidia GB300 GPUs, which will be deployed in phases through 2026 at IREN’s facility in Childress, Texas, designed to support up to 750 megawatts of capacity. Separately, IREN is investing about $5.8 billion in GPUs and equipment from Dell to support this infrastructure expansion. The agreement follows Microsoft’s recent launch of AI models optimized for reasoning, agentic AI systems, and multi-modal generative AI, reflecting the company's efforts to meet growing demand for AI services. Microsoft has also previously acquired approximately 200,000 Nvidia GB300 GPUs for data centers in Europe and the U.S. IREN, originally a bitcoin-mining firm, has pivoted successfully to AI workloads, leveraging its extensive GPU resources. CEO Daniel Roberts anticipates that the Microsoft contract will utilize only
Tags: energy, cloud-computing, AI-infrastructure, GPUs, data-centers, Microsoft, Nvidia

Nvidia becomes first public company worth $5 trillion
Nvidia has become the first public company to reach a $5 trillion market capitalization, driven primarily by its dominant position in the AI chip market. The company’s shares surged over 5.6% following news that U.S. President Donald Trump planned to discuss Nvidia’s Blackwell chips with Chinese President Xi Jinping. Nvidia CEO Jensen Huang highlighted the company’s expectation of $500 billion in AI chip sales and emphasized expansion into sectors such as security, energy, and science, which will require thousands of Nvidia GPUs. Additionally, Nvidia is investing in enabling AI-native 5G-Advanced and 6G networks through its platforms, further solidifying its role in the AI infrastructure ecosystem. This milestone comes just three months after Nvidia first surpassed a $4 trillion valuation, with its stock rising more than 50% in 2025 due to strong demand for its GPUs used in data centers for training large language models and AI inference. Nvidia’s GPUs remain scarce and highly sought after, supporting the growing infrastructure needed
Tags: energy, AI-chips, GPUs, data-centers, Nvidia, 5G-networks, 6G-networks

The billion-dollar infrastructure deals powering the AI boom
The article highlights the massive investment and infrastructure buildup fueling the current AI boom, emphasizing the enormous computing power required to train and run AI models. Nvidia CEO Jensen Huang estimates that $3 to $4 trillion will be spent on AI infrastructure by 2030, with major tech companies like Microsoft, Meta, Oracle, Google, and OpenAI leading the charge. The piece details key deals, starting with Microsoft’s landmark $1 billion investment in OpenAI in 2019, which established Microsoft as OpenAI’s exclusive cloud provider and laid the groundwork for a partnership now valued at nearly $14 billion. Although OpenAI has recently diversified its cloud partnerships, this model of close collaboration between AI firms and cloud providers has become standard, with companies like Anthropic partnering with Amazon and Google Cloud acting as primary computing partners for other AI ventures. Oracle’s emergence as a major AI infrastructure player is underscored by its unprecedented deals with OpenAI, including a $30 billion cloud services contract revealed in mid-2025
Tags: energy, AI-infrastructure, cloud-computing, data-centers, Nvidia, Microsoft-Azure, OpenAI

While OpenAI races to build AI data centers, Nadella reminds us that Microsoft already has them
Microsoft CEO Satya Nadella announced the deployment of the company’s first massive AI system—referred to as an AI “factory” by Nvidia—at Microsoft Azure’s global data centers. These systems consist of clusters with over 4,600 Nvidia GB300 rack computers equipped with the new Blackwell Ultra GPU chips, connected via Nvidia’s high-speed InfiniBand networking technology. Microsoft plans to deploy hundreds of thousands of these Blackwell Ultra GPUs worldwide, enabling the company to run advanced AI workloads, including those from its partner OpenAI. This announcement comes shortly after OpenAI secured significant data center deals and committed approximately $1 trillion in 2025 to build its own infrastructure. Microsoft emphasized that, unlike OpenAI’s ongoing build-out, it already operates extensive data centers in 34 countries, positioning itself as uniquely capable of supporting frontier AI demands today. The new AI systems are designed to handle next-generation AI models with hundreds of trillions of parameters. Further details on Microsoft’s AI infrastructure expansion are
Tags: energy, data-centers, AI-hardware, GPUs, cloud-computing, Nvidia, Microsoft-Azure

Even after Stargate, Oracle, Nvidia and AMD, OpenAI has more big deals coming soon, Sam Altman says
OpenAI has been actively securing large-scale infrastructure deals to support its rapidly growing AI model development, with major partnerships involving Nvidia, AMD, Oracle, and others. Nvidia has invested in OpenAI, becoming a shareholder, while AMD has granted OpenAI up to 10% of its stock in exchange for collaboration on next-generation AI GPUs. These deals include commitments for tens of gigawatts of AI data center capacity, such as OpenAI’s $500 billion Stargate deal with Oracle and SoftBank for U.S. facilities, and additional expansions in the UK and Europe. Nvidia is also preparing OpenAI for a future where it operates its own data centers, although the cost of such infrastructure—estimated at $50 to $60 billion per gigawatt—is currently beyond OpenAI’s direct financial capacity. OpenAI CEO Sam Altman emphasized that these partnerships are part of an aggressive infrastructure investment strategy to support more capable future AI models and products. Despite OpenAI’s revenue not yet approaching the scale of its
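Taking the figures quoted above at face value (an estimated $50 to $60 billion per gigawatt and the $500 billion Stargate commitment), a quick sanity check suggests that deal corresponds to roughly 8 to 10 gigawatts of capacity; a minimal sketch:

```python
# Quick sanity check relating the two cost figures quoted in the summary above.
stargate_commitment_usd = 500e9        # $500 billion Stargate deal with Oracle and SoftBank
cost_per_gw_usd = (50e9, 60e9)         # estimated $50-60 billion per gigawatt of capacity
implied_gw = [stargate_commitment_usd / c for c in cost_per_gw_usd]
print(f"Implied capacity: {implied_gw[1]:.1f} to {implied_gw[0]:.1f} GW")  # ~8.3 to 10.0 GW
```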
Tags: energy, AI-data-centers, Nvidia, AMD, OpenAI, cloud-computing, semiconductor-chips

The billion-dollar infrastructure deals powering the AI boom
The article highlights the massive investments and infrastructure developments fueling the current AI boom, emphasizing the enormous computing power required to run advanced AI models. Nvidia CEO Jensen Huang estimates that $3 to $4 trillion will be spent on AI infrastructure by 2030, with major tech companies like Microsoft, Meta, Oracle, Google, and OpenAI leading the charge. Central to this surge was Microsoft’s initial $1 billion investment in OpenAI in 2019, which positioned Microsoft as OpenAI’s exclusive cloud provider and laid the groundwork for a partnership that has grown to nearly $14 billion. Although OpenAI has recently diversified its cloud partnerships, this model of exclusive or primary cloud provider relationships has become common, with companies like Anthropic partnering with Amazon and Google Cloud acting as primary computing partners for various AI firms. Oracle has emerged as a major player in AI infrastructure through unprecedented deals with OpenAI, including a $30 billion cloud services contract revealed in 2025 and a staggering $300 billion five-year compute power
Tags: energy, AI-infrastructure, cloud-computing, data-centers, Nvidia, Microsoft-Azure, OpenAI

Alibaba bets big on AI with Nvidia tie-up, new data center plans
Alibaba is intensifying its focus on artificial intelligence, unveiling a major partnership with Nvidia, plans to expand its global data center network, and launching its most advanced AI models at the 2025 Apsara Conference. The collaboration with Nvidia will integrate Physical AI tools into Alibaba’s cloud platform, enhancing capabilities in data synthesis, model training, simulation, and testing for applications like robotics and autonomous driving. This move is part of Alibaba’s broader strategy to compete aggressively in the AI sector, which has driven its Hong Kong and U.S.-listed shares up nearly 10%. CEO Eddie Wu emphasized that Alibaba will increase its AI investment beyond the already committed 380 billion yuan ($53 billion). Alibaba also announced plans to open new data centers in Brazil, France, the Netherlands, and additional sites across Mexico, Japan, South Korea, Malaysia, and Dubai, expanding its existing network of 91 data centers in 29 regions. This expansion aims to meet growing demand from AI developers and enterprise customers worldwide, positioning Alibaba
Tags: AI, Nvidia, Data-Centers, Cloud-Computing, Robotics, Autonomous-Driving, Artificial-Intelligence

The billion-dollar infrastructure deals powering the AI boom
The article highlights the massive financial investments and infrastructure developments fueling the current AI boom, emphasizing the enormous computing power required to run advanced AI models. Nvidia CEO Jensen Huang projects that $3 to $4 trillion will be spent on AI infrastructure by 2030, with significant contributions from AI companies themselves. Major tech players such as Microsoft, OpenAI, Meta, Oracle, Google, and Amazon are heavily investing in cloud services, data centers, and specialized hardware to support AI training and deployment. These efforts are straining power grids and pushing the limits of existing data center capacities. A pivotal moment in the AI infrastructure race was Microsoft’s initial $1 billion investment in OpenAI, which secured Microsoft as OpenAI’s exclusive cloud provider and laid the groundwork for a partnership that has since grown to nearly $14 billion. Although OpenAI has recently diversified its cloud partnerships, this model of exclusive or primary cloud provider deals has become common, with Amazon investing $8 billion in Anthropic and Nvidia committing $100 billion to
Tags: energy, AI-infrastructure, cloud-computing, data-centers, power-grids, Nvidia, Microsoft-Azure

Nvidia eyes $500M investment into self-driving tech startup Wayve
Nvidia CEO Jensen Huang visited the UK with a commitment to invest £2 billion ($2.6 billion) to boost the country’s AI startup ecosystem, with a potential $500 million strategic investment targeted at Wayve, a UK-based self-driving technology startup. Wayve has signed a letter of intent with Nvidia to explore this investment as part of its next funding round, following Nvidia’s participation in Wayve’s $1.05 billion Series C round in May 2024. The investment is aligned with Nvidia’s broader AI startup funding initiative, which also involves venture capital firms like Accel and Balderton. Wayve is advancing its self-driving technology through a data-driven, self-learning approach that does not rely on high-definition maps, making it adaptable to existing vehicle sensors such as cameras and radar. Wayve’s autonomous driving platform, which has been developed in close collaboration with Nvidia since 2018, currently uses Nvidia GPUs in its Ford Mach E test vehicles. The company recently unveiled its third
Tags: robot, autonomous-vehicles, self-driving-technology, Nvidia, AI, machine-learning, automotive-technology

China tells its tech companies they can’t buy AI chips from Nvidia
China’s Cyberspace Administration has officially banned domestic tech companies, including major players like ByteDance and Alibaba, from purchasing Nvidia’s AI chips, specifically the RTX Pro 6000D server designed for the Chinese market. This move follows earlier guidance from Beijing discouraging companies from buying Nvidia chips in favor of local AI chip manufacturers. Nvidia’s chips are widely regarded as some of the most advanced globally, making this ban a significant setback for China’s tech ecosystem, despite efforts by companies like Huawei and Alibaba to develop indigenous AI hardware. Nvidia’s CEO Jensen Huang expressed disappointment but acknowledged the broader geopolitical tensions between China and the U.S. He emphasized Nvidia’s willingness to support Chinese companies if permitted. The ban comes amid a complex backdrop of U.S. export controls: the Trump administration initially restricted Nvidia’s chip sales to China in April, causing substantial revenue losses for Nvidia. Although restrictions were partially eased later, including a controversial revenue-sharing proposal with the U.S. government, Nvidia has yet to resume significant sales
Tags: semiconductors, AI-chips, Nvidia, China-tech-market, semiconductor-industry, chip-manufacturing, technology-regulations

A timeline of the US semiconductor market in 2025
The U.S. semiconductor market in 2025 has experienced significant developments amid geopolitical tensions and industry shifts, largely driven by the strategic importance of AI chip technology. Nvidia reported a record quarter in August, with a notable 56% year-over-year revenue growth in its data center business, underscoring its strong market position despite broader industry turmoil. Meanwhile, Intel underwent major changes: the U.S. government took an equity stake in the company’s foundry program to maintain control, and Japanese conglomerate SoftBank also acquired a strategic stake. Intel further restructured by spinning out its telecom chip business and consolidating operations to improve efficiency, including halting projects in Germany and Poland and planning workforce reductions. Political dynamics have heavily influenced the semiconductor landscape. President Donald Trump announced potential tariffs on the industry, though none had been implemented by early September, and publicly criticized Intel CEO Lip-Bu Tan amid concerns over Tan’s ties to China. Tan met with Trump to discuss Intel’s role in revitalizing U.S
Tags: materials, semiconductor, AI-chips, Intel, Nvidia, chip-manufacturing, technology-industry

Nvidia is latest investor to back AV startup Nuro in $203M funding round
Nvidia has joined a group of new investors backing autonomous vehicle startup Nuro in a $203 million Series E funding round. The round includes $97 million from new investors such as Icehouse Ventures, Kindred Ventures, Nvidia, and Pledge Ventures, alongside existing backer Baillie Gifford. Uber also participated, contributing a “multi-hundred-million dollar” investment as part of a broader partnership involving electric car maker Lucid. Nvidia’s involvement follows years of technical collaboration, with Nuro utilizing Nvidia GPUs and the Drive AGX Thor platform for its self-driving software development. The total Series E funding includes an earlier $106 million tranche announced in April, bringing Nuro’s total raised capital to $2.3 billion with a post-money valuation of $6 billion—a 30% decrease from its $8.6 billion valuation in 2021. Nuro has undergone significant strategic shifts amid challenging economic conditions and industry consolidation. After layoffs in 2022 and 2023,
Tags: robot, autonomous-vehicles, self-driving-technology, Nvidia, electric-vehicles, AI, mobility

Nvidia Cosmos Robot Trainer
Nvidia has announced Cosmos, a new simulation and reasoning platform designed to enhance AI, robotics, and autonomous vehicle development. Cosmos aims to enable smarter and faster training of AI models by providing advanced simulation environments that closely mimic real-world scenarios. This approach helps improve the accuracy and efficiency of AI systems used in robotics and autonomous technologies. The platform leverages Nvidia’s expertise in graphics processing and AI to create detailed, realistic simulations that facilitate better decision-making and reasoning capabilities in machines. By accelerating the training process and improving model robustness, Cosmos is expected to advance the development of intelligent robots and autonomous vehicles, ultimately contributing to safer and more reliable AI-driven systems.
Tags: robot, AI, Nvidia, autonomous-vehicles, simulation, robotics-training, artificial-intelligence

How a once-tiny research lab helped Nvidia become a $4 trillion company
The article chronicles the evolution of Nvidia’s research lab from a small group of about a dozen people in 2009, primarily focused on ray tracing, into a robust team of over 400 researchers that has been instrumental in transforming Nvidia from a video game GPU startup into a $4 trillion company driving the AI revolution. Bill Dally, who joined the lab after being persuaded by Nvidia leadership, expanded the lab’s focus beyond graphics to include circuit design and VLSI chip integration. Early on, the lab recognized the potential of AI and began developing specialized GPUs and software for AI applications well before the current surge in AI demand, positioning Nvidia as a leader in AI hardware. Currently, Nvidia’s research efforts are pivoting toward physical AI and robotics, aiming to develop the core technologies that will power future robots. This shift is exemplified by the work of Sanja Fidler, who joined Nvidia in 2018 to lead the Omniverse research lab in Toronto, focusing on simulation models for robotics and
Tags: robot, artificial-intelligence, Nvidia, GPUs, robotics-development, AI-hardware, technology-research

Tesla drops Dojo supercomputer as Musk turns to Nvidia, Samsung chips
Tesla has officially discontinued its in-house Dojo supercomputer project, which aimed to develop custom AI training chips to enhance autonomous driving and reduce reliance on external chipmakers. The decision follows several key departures from the Dojo team, including project head Peter Bannon. CEO Elon Musk explained that maintaining two distinct AI chip designs was inefficient, leading Tesla to refocus efforts on developing the AI5 and AI6 chips. These next-generation chips will be produced in partnership with Samsung’s new Texas factory, with production of AI5 chips expected to start by the end of 2026. The Dojo project was initially central to Tesla’s strategy to build proprietary AI infrastructure for self-driving cars, robots, and data centers, involving significant investment in top chip architects. However, the initiative faced persistent delays and setbacks, with prominent leaders like Jim Keller and Ganesh Venkataramanan having left previously. Many former Dojo team members have moved to a stealth startup, DensityAI, which is pursuing similar AI chip goals
Tags: robot, AI-chips, Tesla, Nvidia, Samsung, autonomous-driving, supercomputer

Two arrested for smuggling AI chips to China; Nvidia says no to kill switches
The U.S. Department of Justice arrested Chuan Geng and Shiwei Yang on August 2 in California for allegedly smuggling advanced AI chips to China through their company, ALX Solutions. They face charges under the Export Control Reform Act, which carries penalties of up to 20 years in prison. The DOJ indicated the chips involved were highly powerful GPUs designed specifically for AI applications, strongly suggesting Nvidia’s H100 GPUs. Evidence showed ALX Solutions shipped these chips to intermediaries in Singapore and Malaysia while receiving payments from entities in Hong Kong and China, apparently to circumvent U.S. export restrictions. In response, Nvidia emphasized its strict compliance with U.S. export controls and stated that any diverted products would lack service and support. The company also rejected recent U.S. government proposals to embed kill switches or backdoors in chips to prevent smuggling, arguing such measures would compromise security and trust in U.S. technology. Nvidia warned that creating vulnerabilities intentionally would benefit hackers and hostile actors, ultimately harming America
Tags: AI, semiconductors, Nvidia, export-control, chip-smuggling, technology-security, GPUs

China cites ‘backdoor safety risk’ in Nvidia’s H20 AI chip; company denies allegation
Chinese authorities have summoned Nvidia over alleged security vulnerabilities in its H20 AI chip, citing “serious security risks” and concerns about potential backdoors that could allow remote access or tracking. The Cyberspace Administration of China (CAC) questioned Nvidia representatives and requested documentation to clarify these issues. Nvidia has denied the allegations, affirming that their chips contain no such backdoors. This investigation comes amid stalled trade talks between Washington and Beijing and could delay Nvidia’s efforts to resume sales of the H20 chip in China, complicating the company’s market position. The scrutiny of Nvidia’s H20 chip aligns with China’s broader strategy to reduce reliance on U.S. semiconductor technology and promote domestic alternatives, such as Huawei’s Ascend 910C chip, which is gaining traction for AI workloads. The H20 was designed to comply with U.S. export restrictions, and its sales resumption was seen as a potential breakthrough in easing trade tensions. However, the current probe and regulatory uncertainty highlight ongoing geopolitical and
Tags: semiconductors, AI-chips, cybersecurity, Nvidia, China-tech-market, trade-restrictions, semiconductor-alternatives

A timeline of the US semiconductor market in 2025
The U.S. semiconductor industry in 2025 has experienced significant upheaval amid the intensifying global AI competition. Intel, under new CEO Lip-Bu Tan, focused on restructuring and efficiency, canceling projects in Germany and Poland, consolidating test operations, and planning substantial layoffs of up to 20% in certain units. Intel also made key leadership hires to pivot back to an engineering-driven approach. Meanwhile, AMD expanded its AI hardware capabilities through acquisitions, including companies specializing in AI inference chips and software adaptation to compete more directly with Nvidia’s dominance. On the policy front, the Trump administration introduced an AI Action Plan emphasizing chip export controls and allied coordination, though specific restrictions remained undefined. Nvidia faced challenges due to U.S. export licensing requirements on AI chips, leading the company to exclude China-related revenue from forecasts and file applications to resume chip sales there, including launching a China-specific RTX Pro chip. The U.S. also grappled with national security concerns over AI chip sales to the UAE and
Tags: semiconductors, AI-chips, Intel, Nvidia, chip-export-controls, semiconductor-industry, rare-earth-elements

Nvidia Breaks $4 Trillion Market Value Record
Nvidia has become the first publicly traded company to reach a $4 trillion market valuation, surpassing established tech giants such as Apple, Microsoft, and Google. Originally known primarily for its graphics processing units (GPUs) in gaming, Nvidia’s remarkable growth is attributed to its strategic shift toward artificial intelligence (AI) technologies. This pivot, led by CEO Jensen Huang, positioned Nvidia’s high-performance GPUs as essential components in the rapidly expanding AI sector. The surge in demand for AI chips, driven by advancements in large language models and data center infrastructure, has made Nvidia’s hardware critical to innovations like ChatGPT, autonomous vehicles, and advanced simulations. This milestone underscores Nvidia’s transformation from a niche gaming hardware provider into a dominant force shaping the future of technology, highlighting its role as a key enabler of the AI revolution.
Tags: robot, AI, autonomous-vehicles, GPUs, data-centers, artificial-intelligence, Nvidia

Nvidia’s resumption of H20 chip sales related to rare earth element trade talks
Nvidia recently reversed its June decision to halt sales of its H20 AI chip to China, filing an application to resume these sales. This move is closely linked to ongoing U.S.-China trade discussions concerning rare earth elements (REEs), such as lanthanum and cerium, which are predominantly mined in China and are essential for technologies including electric vehicle batteries. U.S. Commerce Secretary Howard Lutnick indicated that Nvidia’s chip sales resumption is part of broader negotiations around these critical materials, emphasizing that China will not receive Nvidia’s most advanced technology. The decision has sparked controversy, with some U.S. lawmakers, including Congressman Raja Krishnamoorthi, criticizing it as inconsistent with prior export control policies aimed at protecting advanced technology from foreign adversaries. However, Lutnick downplayed these concerns, assuring that the chips sold to China are not among Nvidia’s top-tier products. This development follows rumors that Nvidia was seeking ways to comply with U.S. export regulations while continuing business in China
Tags: energy, rare-earth-elements, Nvidia, semiconductor-chips, AI-chips, trade-talks, export-controls

Nvidia is set to resume China chip sales after months of regulatory whiplash
Nvidia has announced it is filing applications to resume sales of its H20 artificial intelligence chips to China after several months of regulatory uncertainty. The H20 chip, designed for AI inference tasks rather than training new models, is currently the most powerful AI processor Nvidia can legally export to China under U.S. export controls. Alongside the H20, Nvidia is introducing a new “RTX Pro” chip tailored specifically for the Chinese market, which the company says complies fully with regulations and is suited for digital manufacturing applications like smart factories and logistics. The regulatory back-and-forth began in April when the Trump administration imposed restrictions on sales of high-performance chips, including the H20, potentially costing Nvidia $15 to $16 billion in revenue from Chinese customers. However, after Nvidia CEO Jensen Huang attended a high-profile dinner at Mar-a-Lago and pledged increased U.S. investments and jobs, the administration paused the ban. This episode highlights the ongoing tension between U.S. national security concerns aimed at limiting China’s
Tags: materials, semiconductor, AI-chips, Nvidia, China-tech-market, export-controls, digital-manufacturing

Nvidia boss dismisses China military chip use, cites US tech risk
Nvidia CEO Jensen Huang has downplayed concerns that China’s military could effectively use American AI chips, citing export restrictions and the risk of sanctions as major deterrents. Speaking ahead of a planned visit to China, Huang argued that Chinese military institutions would avoid dependence on US-origin hardware like Nvidia’s advanced A100 and H100 GPUs due to the possibility of supply cutoffs. His comments come amid ongoing US efforts to limit Beijing’s access to cutting-edge semiconductor technologies, which Washington views as critical to national security. Despite Huang’s reassurances, US lawmakers remain wary. Senators Jim Banks and Elizabeth Warren have formally urged Huang not to engage with Chinese military-linked entities or firms circumventing US export controls, such as DeepSeek, a Chinese AI company accused of indirectly sourcing Nvidia chips to support military and intelligence projects. The bipartisan concern reflects broader fears over the dual-use nature of high-end GPUs, which power both civilian AI applications and sophisticated military systems like battlefield automation and electronic warfare. Meanwhile, Nvidia faces complex geopolitical challenges
Tags: semiconductors, AI-chips, Nvidia, military-technology, export-controls, US-China-relations, technology-security

Nvidia reportedly plans to release new AI chip designed for China
Nvidia is reportedly planning to release a new AI chip tailored specifically for the Chinese market, aiming to navigate around U.S. export restrictions on advanced semiconductor technology. The chip, expected as early as September, will be based on Nvidia’s Blackwell RTX Pro 6000 processor but modified to comply with current regulations. Notably, these China-specific chips will exclude high-bandwidth memory and NVLink, Nvidia’s proprietary high-speed communication interface, which are key features in its more advanced AI chips. This move reflects Nvidia’s determination to maintain its presence and sales in China despite tightening export controls. Nvidia CEO Jensen Huang recently indicated a potential impact on the company’s revenue and profit forecasts due to these restrictions, though this new product launch might mitigate some of those effects. Additional details from Nvidia were not provided at the time of reporting.
Tags: materials, AI-chip, semiconductor, Nvidia, technology, processor, hardware

Nvidia becomes first $4 trillion company as AI demand explodes
Nvidia has become the first publicly traded company to reach a $4 trillion market capitalization, driven by soaring demand for its AI chips. The semiconductor giant's stock surged to a record $164 per share, marking a rapid valuation increase from $1 trillion in June 2023 to $4 trillion in just over two years—faster than tech giants Apple and Microsoft, which have also surpassed $3 trillion valuations. Nvidia now holds the largest weight in the S&P 500 at 7.3%, surpassing Apple and Microsoft, and its market value exceeds the combined stock markets of Canada and Mexico as well as all publicly listed UK companies. This historic rise is fueled by the global tech industry's race to develop advanced AI models, all heavily reliant on Nvidia’s high-performance chips. Major players like Microsoft, Meta, Google, Amazon, and OpenAI depend on Nvidia hardware for AI training and inference tasks. The launch of Nvidia’s next-generation Blackwell chips, designed for massive AI workloads, has intensified
Tags: robot, AI-chips, autonomous-systems, Nvidia, semiconductor, data-centers, artificial-intelligence

Humanoid robots could soon build Nvidia chips at US Foxconn facility
Taiwanese manufacturing giant Foxconn is collaborating with Nvidia to develop humanoid robots intended for deployment at a new Foxconn facility in Houston, Texas. The plan, still under negotiation but expected to be finalized soon, aims to use these robots to assist in the production of Nvidia’s upcoming GB300 AI servers. If realized, this would mark the first time Nvidia products are developed with humanoid robot assistance. The Houston factory was chosen due to its new construction and ample space, facilitating the integration of advanced robotics to potentially increase production speed and reduce manufacturing costs. Foxconn is developing two types of humanoid robots, one with legs and another with a wheeled autonomous mobile robot (AMR) base, the latter being a more cost-effective option. These robots are expected to be operational by early next year, coinciding with the start of GB300 server production. This initiative aligns with broader industry trends, as companies like Mercedes-Benz and BMW have also tested humanoid robots on their production lines. Nvidia recently
Tags: humanoid-robots, Nvidia, Foxconn, AI-servers, robotics-manufacturing, autonomous-mobile-robots, robot-foundation-model

A timeline of the US semiconductor market in 2025
The U.S. semiconductor market in the first half of 2025 has experienced significant turbulence amid the ongoing AI technology race. Intel underwent major leadership changes with Lip-Bu Tan appointed CEO, who quickly initiated organizational restructuring including planned layoffs of 15-20% in certain units and efforts to spin off non-core businesses such as its telecom chip division. Meanwhile, AMD aggressively expanded its AI hardware capabilities through acquisitions, including the teams behind Untether AI and Enosemi, a silicon photonics startup, positioning itself to challenge Nvidia’s dominance in AI chip technology. Nvidia faced considerable challenges due to U.S. government-imposed AI chip export restrictions, particularly on its H20 AI chips, which led to a projected $8 billion revenue loss in Q2 and a decision to exclude China-related revenue forecasts going forward. The U.S. government’s AI chip export policies have been contentious, with the Biden administration’s proposed AI Diffusion Rule ultimately abandoned in May, and the Trump administration signaling a different regulatory
Tags: materials, semiconductor-industry, AI-chips, Intel, Nvidia, AMD, chip-export-restrictions

Huawei aims to take on Nvidia’s H100 with new AI chip
Tags: Huawei, AI-chip, Nvidia, Ascend-910D, semiconductor, technology, China