Articles tagged with "AI-hardware"
China's new cooling system can touch sub-zero in seconds to save AI
Chinese researchers have developed a novel pressure-driven chemical cooling technique that can rapidly supercool a liquid medium to sub-zero temperatures within 30 seconds. Unlike traditional cooling methods that rely on continuous energy input, such as fans or chilled-water loops, this process exploits the unusual pressure-dependent solubility of ammonium thiocyanate. When pressurized, a saturated salt solution forms; upon sudden pressure release, the salt re-dissolves endothermically, absorbing significant heat from the surroundings and producing a rapid temperature drop. This pressure-controlled endothermic dissolution delivers a short, high-power burst of cooling, ideal for managing sudden thermal spikes. The technology shows promise for energy-intensive AI data centers, where GPUs and other hardware generate intense heat and often face unpredictable surges during computationally demanding tasks. The rapid salt-cooling process could act as a thermal buffer, reducing peak cooling loads and potentially lowering the energy costs of traditional cooling systems, which can consume 30-50% of a data center's total power.
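As a rough illustration of why endothermic dissolution produces such a sharp temperature drop, the sketch below estimates the ideal temperature change when a salt dissolves in water. The heat-of-solution figure is an assumption chosen for illustration; the article does not give the actual value or its pressure dependence.

```python
# Back-of-the-envelope estimate of the temperature drop from an
# endothermic salt dissolution, in the spirit of the process described
# above. The heat of solution is an illustrative assumption, not a
# value from the research.

M_SALT = 76.12        # g/mol, molar mass of ammonium thiocyanate (NH4SCN)
DH_SOLUTION = 23_000  # J/mol, assumed endothermic heat of solution
C_WATER = 4.186       # J/(g*K), specific heat of liquid water

def temperature_drop(salt_grams: float, water_grams: float) -> float:
    """Ideal temperature drop when the salt dissolves, ignoring the
    salt's own heat capacity and any heat leaking in from outside."""
    moles = salt_grams / M_SALT
    heat_absorbed = moles * DH_SOLUTION             # joules pulled from the water
    return heat_absorbed / (water_grams * C_WATER)  # kelvin

# Dissolving 60 g of salt into 100 g of water:
print(f"dT = {temperature_drop(60, 100):.1f} K")  # ~43 K, enough to go sub-zero from room temperature
```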
energy, cooling-technology, data-centers, AI-hardware, chemical-cooling, thermal-management, ammonium-thiocyanate

AI chip startup Ricursive hits $4B valuation two months after launch
Ricursive Intelligence, an AI chip startup founded by former Google researchers Anna Goldie (CEO) and Azalia Mirhoseini (CTO), has rapidly achieved a $4 billion valuation just two months after its formal launch. The company raised $300 million in a Series A round led by Lightspeed, bringing its total funding to $335 million. Ricursive is developing an AI system capable of designing and autonomously improving AI chips, including creating its own silicon substrate layer to accelerate chip advancements. The founders' prior work on reinforcement learning for chip layout design has been instrumental in four generations of Google's TPU chips. Ricursive is part of a broader trend of startups focused on AI systems that self-improve hardware. Notably, it should not be confused with Recursive, another AI startup working on similar self-improving AI systems and reportedly also targeting a $4 billion valuation. Additionally, Naveen Rao's Unconventional AI recently raised a $475 million seed round at a $4.5 billion valuation.
AI-chips, semiconductor-materials, chip-design-automation, silicon-substrate, reinforcement-learning, AI-hardware, startup-funding

Not to be outdone by OpenAI, Apple is reportedly developing an AI wearable
Apple is reportedly developing an AI-powered wearable device in the form of a pin that users can attach to their clothing. According to a report by The Information, the device will feature two cameras (one standard lens and one wide-angle), three microphones, a physical button, a speaker, and a charging strip similar to Fitbit's design. The pin is described as a thin, flat, circular disc with an aluminum-and-glass shell, roughly the size of an AirTag but slightly thicker. Apple engineers are reportedly aiming to accelerate the development of this product to compete with OpenAI, which is expected to announce its own AI hardware device—possibly earbuds—later in 2026. The pin could potentially launch in 2027 with an initial production run of around 20 million units. This move signals growing interest and competition in the AI hardware market, as companies seek to integrate AI capabilities into wearable technology. However, consumer demand for such devices remains uncertain. The report references Humane, the startup founded by former Apple employees whose AI Pin was discontinued after weak sales.
IoT, AI-wearable, smart-devices, Apple, AI-hardware, wearable-technology, consumer-electronics

Elon Musk says Tesla's restarted Dojo3 will be for 'space-based AI compute'
Elon Musk announced that Tesla plans to restart development of its third-generation AI chip, Dojo3, but with a new focus on "space-based AI compute" rather than training self-driving models on Earth. This marks a strategic shift following Tesla's shutdown of the original Dojo supercomputer project five months earlier, which included disbanding the Dojo team after the departure of its lead, Peter Bannon. At that time, Tesla had intended to rely more on external partners like Nvidia, AMD, and Samsung for AI compute and chip manufacturing. However, Musk's recent statements suggest a renewed commitment to in-house chip development, highlighting that Tesla's AI5 chip design is progressing well and that the upcoming AI7/Dojo3 chip will be geared toward operating AI data centers in space. Musk's vision aligns with broader industry discussions about the limitations of Earth's power grids and the potential benefits of off-planet data centers powered by constant solar energy. Tesla aims to rebuild its Dojo team.
AI-chips, Tesla-Dojo, space-based-computing, energy-harvesting, semiconductor-technology, autonomous-driving, AI-hardware

NVIDIA can now sell AI chips to China as US eases export rules
The U.S. Commerce Department has eased export restrictions on advanced AI chips to China, allowing companies like NVIDIA and AMD to apply for licenses to sell certain high-performance processors under strict conditions. This marks a significant shift from previous policies that largely rejected such exports outright. Under the new rules, chipmakers can seek approval to export processors like NVIDIA's H200 and AMD's MI325X on a case-by-case basis, provided they demonstrate there is no shortage of supply in the U.S. and certify that shipments will not detract from domestic needs. The policy also applies to Macau and restricts eligibility to chips below specific performance thresholds, while explicitly barring exports for military, intelligence, or weapons-related uses. The revised framework further limits exports to no more than 50% of the volume shipped domestically and requires rigorous customer verification and independent third-party testing before shipment. This approach aims to prevent advanced U.S. AI technology from enhancing China's defense or intelligence capabilities while cautiously reopening commercial access.
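Because the licensing regime is essentially a checklist plus a volume cap, a toy encoding makes the logic easy to follow. The sketch below paraphrases the conditions as summarized above; the field names, structure, and pass/fail logic are illustrative, not the actual regulation.

```python
# Toy paraphrase of the export-license conditions described above.
# Illustrative only: the real rules are far more detailed.

from dataclasses import dataclass

@dataclass
class ExportApplication:
    below_performance_threshold: bool   # chip under the specified performance cap
    no_domestic_shortage_certified: bool
    military_or_intel_end_use: bool
    customer_verified: bool             # rigorous customer verification completed
    third_party_tested: bool            # independent testing done before shipment
    export_units: int
    domestic_units_shipped: int

def license_may_be_granted(app: ExportApplication) -> bool:
    """Case-by-case eligibility: every condition must hold, and exports
    may not exceed 50% of the volume shipped domestically."""
    if app.military_or_intel_end_use:
        return False
    within_volume_cap = app.export_units <= 0.5 * app.domestic_units_shipped
    return (app.below_performance_threshold
            and app.no_domestic_shortage_certified
            and app.customer_verified
            and app.third_party_tested
            and within_volume_cap)
```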
AI-chips, semiconductor-export-controls, NVIDIA, advanced-processors, US-China-technology-trade, AI-hardware, chip-manufacturing

The Top Engineering Stories of 2025
The year 2025 was marked by significant advancements and transformative events in engineering and technology. Key highlights included the implementation of tariffs by former President Trump on Chinese GPUs, which influenced global tech policy and supply chains. Technological breakthroughs spanned a wide range of fields, from humanoid robots like Tesla’s Optimus learning to run, to major progress in quantum computing, fusion energy, and space propulsion systems. These developments pushed the boundaries of what is physically and technologically possible. Additionally, 2025 saw record-setting advances in AI hardware and meaningful strides toward cleaner energy solutions and faster space travel. The convergence of these innovations demonstrated how engineering continued to reshape industries and global dynamics within a single year. Overall, 2025 stood out as a pivotal year that underscored the rapid pace of technological evolution and its impact on both Earth and space exploration.
robots, energy, AI-hardware, fusion-energy, electric-vehicles, quantum-computing, space-propulsion

NVIDIA eyes $20 billion Groq deal as AI chip race grows, report says
NVIDIA has reportedly agreed to acquire AI chip startup Groq in a cash deal valued at $20 billion, which would mark the largest acquisition in NVIDIA's history and significantly expand its presence in specialized AI accelerator hardware. The deal follows Groq's recent $750 million funding round at a $6.9 billion valuation, which included major investors such as BlackRock, Samsung, and Cisco. The acquisition covers Groq's core assets but excludes its Groq Cloud business. Groq, founded in 2016 by former Google engineers including CEO Jonathan Ross, focuses on low-latency inference chips designed to accelerate large language model tasks, positioning itself as a challenger to NVIDIA's GPUs and Google's TPUs. The acquisition underscores NVIDIA's broader strategy to deepen its influence across the AI hardware ecosystem amid growing demand for AI inference hardware. NVIDIA's cash reserves have grown substantially, reaching $60.6 billion by October 2025, enabling aggressive investments and partnerships, including a planned $100 billion investment in OpenAI.
energy, AI-chips, NVIDIA, Groq, semiconductor, AI-hardware, accelerator-technology

China's light-based AI chips beat NVIDIA GPUs at some tasks by 100x
Chinese researchers have developed new photonic (light-based) AI chips, such as ACCEL and LightGen, that reportedly outperform NVIDIA’s GPUs by over 100 times in speed and energy efficiency for specific generative AI tasks like video production, image synthesis, and low-light vision. Unlike traditional NVIDIA GPUs, which use electrons flowing through transistors to execute flexible, general-purpose computations, these photonic chips perform preset analog mathematical operations via optical interference. This approach enables extremely fast and power-efficient processing but limits their flexibility and applicability to narrowly defined AI workloads. ACCEL, developed by Tsinghua University, is a hybrid chip combining photonic and analog electronic components, capable of delivering 4.6 PetaFLOPS while consuming minimal power. LightGen, created by a collaboration between Shanghai Jiao Tong University and Tsinghua University, is a fully optical chip with over 2 million photonic neurons, excelling in tasks like image generation, style transfer, and 3D image manipulation.
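The trade-off described here, one-shot analog speed in exchange for fixed functionality, can be sketched in software. The NumPy snippet below is a conceptual analogue of a fixed-weight optical layer, not a model of ACCEL or LightGen: the weights are frozen at "fabrication" time and additive noise stands in for analog imperfection.

```python
# Conceptual software analogue of a fixed-function analog optical layer.
# Interference lets the hardware compute y = Wx in a single pass, but W
# is frozen into the optics and the result is noisy. Illustrative only.

import numpy as np

rng = np.random.default_rng(0)

# "Fabricated" weights: fixed at chip-design time, not reprogrammable.
W_OPTICAL = rng.standard_normal((64, 256)) * 0.1

def photonic_layer(x: np.ndarray, noise_std: float = 0.01) -> np.ndarray:
    """One analog pass: the whole matrix-vector product costs one
    'time of flight' rather than sequential multiply-accumulates;
    additive noise models analog error."""
    y = W_OPTICAL @ x
    return y + rng.normal(0.0, noise_std, size=y.shape)

x = rng.standard_normal(256)
exact = W_OPTICAL @ x
analog = photonic_layer(x)
print("relative error:", np.linalg.norm(analog - exact) / np.linalg.norm(exact))
```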
materials, photonic-chips, AI-hardware, semiconductor-technology, energy-efficiency, optical-computing, microchips

US engineers develop 3D chip that offers order-of-magnitude speed gains
US engineers from Stanford, Carnegie Mellon, University of Pennsylvania, and MIT, in collaboration with SkyWater Technology, have developed a novel 3D multilayer computer chip architecture that significantly outperforms traditional 2D chips. Unlike conventional flat chips where components and memory are spread out on a single surface, this new design stacks ultra-thin layers vertically, interconnected by dense vertical wiring that enables rapid data movement. This architecture effectively overcomes the "memory wall" bottleneck—where processing speed outpaces data delivery—by integrating memory and computation closely in a vertical arrangement, akin to elevators in a high-rise building facilitating fast travel between floors. Early hardware tests show the prototype chip achieves roughly a fourfold speed improvement over comparable 2D chips, while simulations of future versions with more layers predict up to a twelve-fold gain on real AI workloads, including those based on Meta's LLaMA model. The design also promises dramatic improvements in energy-delay product (EDP), balancing higher throughput with lower energy consumption.
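Energy-delay product is just energy multiplied by execution time, so gains in speed and efficiency compound. A quick worked example: the fourfold speedup is the article's measured figure, while the twofold energy reduction is an assumption for illustration.

```python
# Energy-delay product (EDP) = energy * delay; lower is better, and
# improvements in speed and energy multiply. The 4x speedup is from the
# article; the 2x energy reduction is an illustrative assumption.

def edp(energy: float, delay: float) -> float:
    return energy * delay

baseline = edp(energy=1.0, delay=1.0)          # normalized 2D chip
stacked  = edp(energy=1.0 / 2, delay=1.0 / 4)  # assumed 2x energy gain, measured 4x speed

print(f"EDP improvement: {baseline / stacked:.0f}x")  # 8x under these assumptions
```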
semiconductor, 3D-chip, AI-hardware, computer-architecture, vertical-integration, chip-innovation, memory-wall

US engineers develop 3D chip that offers order-of-magnitude speed gains
US engineers from Stanford, Carnegie Mellon, University of Pennsylvania, and MIT, in collaboration with SkyWater Technology, have developed a novel 3D multilayer computer chip architecture that significantly outperforms traditional 2D chips. Unlike flat chips where components are spread out on a single surface, this new design stacks ultra-thin layers vertically, connected by dense vertical wiring that enables rapid data movement akin to elevators in a high-rise building. Early hardware tests show the prototype achieves roughly four times the performance of comparable 2D chips, while simulations of taller versions with more layers predict up to a twelve-fold improvement on AI workloads, including those based on Meta's LLaMA model. This breakthrough addresses the longstanding "memory wall" bottleneck, in which data transfer, rather than raw computing speed, limits performance because too little memory sits close to the processing elements. By vertically integrating memory and computation, the chip drastically shortens data pathways, improving both throughput and energy efficiency. The team claims this architecture could realistically lead to 100- to 1,000-fold gains in future designs.
semiconductor, 3D-chip, AI-hardware, computer-architecture, vertical-integration, memory-wall, chip-innovation

Unconventional AI confirms its massive $475M seed round
Unconventional AI, a startup founded by Naveen Rao, former head of AI at Databricks, has secured $475 million in seed funding at a $4.5 billion valuation. The funding round was led by Andreessen Horowitz and Lightspeed Ventures, with additional investments from Lux Capital and DCVC. This initial raise is part of a larger planned round that could reach up to $1 billion. Although the final valuation is slightly below the $5 billion Rao initially aimed for, the company's value may increase if the full funding target is met. The startup aims to develop a new, energy-efficient computer specifically designed for AI applications, with Rao emphasizing a goal to achieve efficiency comparable to biological systems. Rao has a strong track record in AI and machine learning startups, having previously founded MosaicML, acquired by Databricks for $1.3 billion, and an earlier machine learning platform acquired by Intel for over $400 million. Unconventional AI's ambitious funding and vision position it as one of the best-funded new entrants in AI hardware.
energy, AI-hardware, energy-efficient-computing, startup-funding, semiconductor-technology, machine-learning, computer-architecture

Meta acquires AI device startup Limitless
Meta has acquired Limitless, an AI startup formerly known as Rewind, which developed an AI-powered pendant designed to record conversations and create searchable records. Following the acquisition, Limitless will cease sales of its hardware devices and maintain customer support for one year. Existing customers will temporarily be moved to an Unlimited Plan with no subscription fees, while some software functionality, including the original Rewind app, will be discontinued. Founded by Dan Siroker, co-founder of Optimizely, Limitless pivoted to AI hardware last year with its $99 pendant, a wearable device that could be clipped to clothing or worn as a necklace. The acquisition aligns with Meta's broader vision of integrating AI-enabled wearables, complementing its current focus on AR/AI glasses such as Ray-Ban Meta and Oakley Meta. Limitless expressed its commitment to supporting Meta's existing products rather than expanding the AI pendant market, citing increased competition from major players including Meta itself. The startup's founder highlighted the dramatic shift in the competitive landscape since the pendant launched.
IoT, AI-devices, wearable-technology, Meta-acquisition, AI-hardware, smart-wearables, personal-superintelligence

PowerLattice attracts investment from ex-Intel CEO Pat Gelsinger for its power-saving chiplet
PowerLattice, a startup founded in 2023 by veteran engineers from Qualcomm, NUVIA, and Intel, has developed a novel power delivery chiplet that reduces computer chip power consumption by over 50%. This innovation addresses the growing demand for energy-efficient semiconductor solutions amid the increasing compute capacity needs driven by AI workloads and large language models. The company recently emerged from stealth with a $25 million Series A funding round led by Playground Global and Celesta Capital, bringing total funding to $31 million. Pat Gelsinger, former Intel CEO and general partner at Playground Global, endorsed PowerLattice's technology, highlighting the team's expertise and the significance of their power delivery approach. PowerLattice's chiplet works by bringing power closer to the processor, thereby minimizing energy loss. The startup has reached a key milestone with its first batch of chiplets produced by TSMC and is currently undergoing testing with an unnamed manufacturer. The company plans to expand testing to other potential customers, including major chipmakers.
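The physics behind "bringing power closer to the processor" is straightforward: resistive loss in the delivery path scales as I²R, and modern accelerators draw hundreds of amps at sub-volt supplies, so even milliohms matter. A back-of-the-envelope sketch, with all numbers illustrative rather than PowerLattice measurements:

```python
# Illustrative I^2*R power-delivery loss. All figures are assumptions
# chosen to show the scaling, not PowerLattice data.

def delivery_loss(core_power_w: float, supply_v: float, path_mohm: float) -> float:
    current = core_power_w / supply_v          # I = P / V
    return current**2 * (path_mohm / 1000.0)   # P_loss = I^2 * R

# A 500 W accelerator core at 0.8 V draws 625 A.
board_level = delivery_loss(500, supply_v=0.8, path_mohm=0.20)  # long delivery path
on_package  = delivery_loss(500, supply_v=0.8, path_mohm=0.05)  # chiplet next to the die

print(f"loss, board-level delivery: {board_level:.0f} W")  # ~78 W
print(f"loss, on-package delivery:  {on_package:.0f} W")   # ~20 W
```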
energy, semiconductor, power-efficiency, chiplet, AI-hardware, PowerLattice, Pat-Gelsinger

Sam Altman says he doesn't want the government to bail out OpenAI if it fails
OpenAI CEO Sam Altman clarified that the company does not want or expect government bailouts if it fails, emphasizing that taxpayers should not be responsible for rescuing companies that make poor business decisions. This statement came after OpenAI CFO Sarah Friar initially suggested at a Wall Street Journal event that the U.S. government should "backstop" the company's infrastructure loans to reduce financing costs and ensure access to the latest computing chips. Friar later retracted this, stating OpenAI is not seeking such government guarantees. Altman acknowledged that while loan guarantees have been discussed in the context of supporting semiconductor manufacturing in the U.S., OpenAI itself has not formally applied for such support. The discussion around government involvement sparked responses from other industry figures, including former Trump AI advisor David Sacks, who affirmed that the U.S. government has no plans to bail out AI companies. He highlighted the competitive landscape with multiple major AI firms, suggesting that if one fails, others will fill the gap.
energy, data-centers, infrastructure, government-policy, AI-hardware, financing, semiconductor-chips

Kevin Rose's simple test for AI hardware — would you want to punch someone in the face who's wearing it?
Kevin Rose, a veteran investor and general partner at True Ventures, offers a straightforward yet insightful test for evaluating AI hardware investments: if wearing the device makes you want to "punch someone in the face," it's likely not worth investing in. Rose's perspective stems from his experience with wearables like Oura rings and his skepticism toward the current surge of AI wearables that often disregard social norms around privacy and emotional impact. He emphasizes that successful hardware must resonate emotionally and be socially acceptable, not just technologically advanced. Rose criticizes AI devices that are "always on" and intrusive, sharing a personal anecdote about abandoning Humane's AI Pin after it complicated a personal argument by recording conversations. Rose also warns about the broader societal implications of AI, comparing the current AI adoption phase to the early, reckless days of social media. He highlights concerns about AI's impact on reality perception, such as photo apps that erase real-world elements, potentially distorting memories and truth. With his own children, Rose navigates these technologies cautiously.
IoT, AI-hardware, wearable-technology, smart-devices, privacy-concerns, social-impact, AI-wearables

Ion-based artificial neurons mimic brain chemistry for AI computing
Researchers at USC have developed artificial neurons that physically replicate the electrochemical behavior of real brain cells, marking a significant advance toward more efficient, brain-like AI hardware. Unlike conventional neuromorphic chips that digitally simulate brain activity, these new neurons use actual chemical and electrical processes, specifically the movement of silver ions within a "diffusive memristor" structure. This approach mimics the brain's natural signaling, where electrical signals convert to chemical signals at synapses and back again, enabling each artificial neuron to occupy the space of just one transistor—dramatically reducing size and potentially increasing speed and efficiency. The innovation addresses a key limitation of current computing systems: energy inefficiency. While modern computers are powerful, they consume excessive energy and lack the efficiency of the human brain, which learns from few examples using only about 20 watts of power. By leveraging ion dynamics rather than electron flow, the USC team aims to create hardware that supports more efficient, hardware-based learning akin to biological brains.
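The behavior the USC device realizes physically (integrate incoming charge, fire past a threshold, then reset as the ions disperse) is abstractly the classic leaky integrate-and-fire neuron. A minimal software version for reference, as a conceptual analogue rather than a model of the memristor device:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the standard abstract
# model of the integrate/threshold/fire/reset cycle that the USC device
# implements physically with silver-ion motion. Conceptual only.

def lif_neuron(currents, leak=0.9, threshold=1.0):
    """Integrate an input-current trace; emit 1 when the membrane
    potential crosses threshold, then reset (akin to ion dispersal)."""
    v, spikes = 0.0, []
    for i in currents:
        v = leak * v + i              # leaky integration of input charge
        if v >= threshold:
            spikes.append(1)
            v = 0.0                   # reset after firing
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.9, 0.2]))  # -> [0, 0, 1, 0, 0, 1]
```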
artificial-neurons, neuromorphic-computing, ion-based-computing, energy-efficiency, AI-hardware, memristor-technology, brain-inspired-computing

Swiss startup opens access to world's first living computer for universities worldwide
Swiss startup FinalSpark is pioneering the development of the world's first living computer by using clusters of human brain cells called organoids to perform simple computational tasks. These organoids, derived from reprogrammed human skin cells and containing about 10,000 neurons each, are maintained in nutrient-rich solutions and connected to electrodes that translate neural activity into electrical signals analogous to digital binary code. Unlike traditional silicon-based processors, these living bioprocessors demonstrate basic learning behaviors and can be trained using dopamine to reinforce neural responses, mimicking biological learning processes. This approach promises vastly greater energy efficiency—biological neurons are estimated to be one million times more energy-efficient than artificial ones—potentially addressing the high power consumption challenges of current AI models. Despite these advances, maintaining living computers remains challenging due to the fragility of organoids, which lack blood vessels and cannot be rebooted once they die; their lifespan currently maxes out at about four months. Researchers observe a final burst of neural activity before organoid death.
biocomputing, living-computers, brain-organoids, energy-efficiency, neural-processors, AI-hardware, wetware-technology

OpenAI and Broadcom partner on AI hardware
OpenAI has announced a significant partnership with Broadcom to acquire 10 gigawatts of custom AI accelerator hardware. These AI accelerator racks are planned for deployment in OpenAI's and partner data centers from 2026 through 2029. By designing its own chips and systems, OpenAI aims to integrate insights from its advanced AI model development directly into the hardware, enhancing performance and intelligence capabilities. The financial terms of the deal were not disclosed, though the Financial Times has published an estimate of its value. This hardware agreement follows a series of major recent deals by OpenAI, including a multi-billion dollar arrangement with Nvidia for 10 gigawatts of hardware and a reportedly historic agreement with Oracle, which remains unconfirmed. These partnerships underscore OpenAI's strategic focus on securing substantial computing resources to support its AI research and product development efforts over the coming years.
energy, AI-hardware, data-centers, custom-chips, accelerator-racks, OpenAI, hardware-partnership

DGX Spark: NVIDIA unveils its smallest AI computer at $3,999
NVIDIA has launched the DGX Spark, touted as the world's smallest AI supercomputer, priced at $3,999. This compact 2.6-pound device integrates the new GB10 Grace Blackwell Superchip, which combines a 20-core Arm-based Grace CPU with a Blackwell GPU featuring CUDA cores equivalent to the RTX 5070 graphics card. Optimized for desktop AI development, the DGX Spark delivers up to 1,000 trillion operations per second using fifth-generation Tensor Cores and FP4 support, supported by NVLink-C2C interconnect technology for high-bandwidth CPU-GPU communication. It comes equipped with 128GB of shared LPDDR5x memory, 4TB NVMe storage, and connectivity options including USB-C, Wi-Fi 7, and HDMI, running on NVIDIA's Ubuntu-based DGX OS preloaded with AI tools. Designed for developers, researchers, and students, the DGX Spark enables local fine-tuning and deployment of large AI models.
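A quick way to see what 128 GB of unified memory and FP4 support buy: weight storage is roughly parameter count times bytes per parameter. The sketch below sizes weights only; activations and KV cache add real overhead, so treat it as an optimistic lower bound.

```python
# Rough check of which model sizes fit in 128 GB of unified memory at
# various precisions. Weights only; activations and KV cache are extra.

MEMORY_GB = 128

def weights_gb(params_billions: float, bits_per_param: int) -> float:
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

for params in (8, 70, 120, 200):
    for bits in (16, 8, 4):
        need = weights_gb(params, bits)
        verdict = "fits" if need <= MEMORY_GB else "too big"
        print(f"{params:>4}B @ {bits:>2}-bit: {need:6.1f} GB  ({verdict})")
```

At 4-bit precision even a 200-billion-parameter model's weights (about 100 GB) fit, which is why low-precision support matters so much for a desktop-class machine.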
robot, AI-computing, NVIDIA-DGX-Spark, AI-development, robotics-simulation, AI-hardware, edge-AI-computing

While OpenAI races to build AI data centers, Nadella reminds us that Microsoft already has them
Microsoft CEO Satya Nadella announced the deployment of the company's first massive AI system—referred to as an AI "factory" by Nvidia—at Microsoft Azure's global data centers. These systems consist of clusters of more than 4,600 Nvidia GB300 rack systems equipped with the new Blackwell Ultra GPUs, connected via Nvidia's high-speed InfiniBand networking technology. Microsoft plans to deploy hundreds of thousands of these Blackwell Ultra GPUs worldwide, enabling the company to run advanced AI workloads, including those from its partner OpenAI. This announcement comes shortly after OpenAI secured significant data center deals and committed approximately $1 trillion in 2025 to build its own infrastructure. Microsoft emphasized that, unlike OpenAI's ongoing build-out, it already operates extensive data centers in 34 countries, positioning itself as uniquely capable of supporting frontier AI demands today. The new AI systems are designed to handle next-generation AI models with hundreds of trillions of parameters.
energy, data-centers, AI-hardware, GPUs, cloud-computing, Nvidia, Microsoft-Azure

6-gigawatt handshake: AMD joins OpenAI's trillion-dollar AI plan
OpenAI has entered a landmark multi-year agreement with AMD to deploy up to 6 gigawatts of AMD Instinct GPUs, marking one of the largest GPU deployment deals in AI history. The partnership will start with a 1-gigawatt rollout of AMD's upcoming MI450 GPUs in late 2026 and scale to 6 gigawatts over multiple hardware generations, powering OpenAI's future AI models and services. This collaboration builds on their existing relationship involving AMD's MI300X and MI350X GPUs, with both companies committing to jointly advance AI hardware and software through shared technical expertise. Following the announcement, AMD's stock surged nearly 24%, reflecting strong market confidence. A significant component of the deal is an equity arrangement whereby OpenAI received a warrant for up to 160 million AMD shares, potentially giving OpenAI about a 10% stake in AMD if fully exercised. The warrant vests in stages tied to deployment milestones and AMD's stock price. The exact financial terms of the agreement have not been fully disclosed.
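The "about a 10% stake" figure is simple share arithmetic: AMD has on the order of 1.6 billion shares outstanding, so a 160 million-share warrant is roughly a tenth of the company. A quick check (the share count is an approximation, not from the article):

```python
# Sanity check on the reported ~10% stake. AMD's share count here is an
# approximation (~1.6B shares outstanding around the time of the deal).

warrant_shares = 160_000_000
amd_shares_outstanding = 1_600_000_000  # approximate

of_current = warrant_shares / amd_shares_outstanding
post_dilution = warrant_shares / (amd_shares_outstanding + warrant_shares)

print(f"vs. current share count: {of_current:.1%}")    # 10.0%
print(f"post-dilution stake:     {post_dilution:.1%}")  # ~9.1%
```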
energy, AI-hardware, GPUs, AMD, OpenAI, high-performance-computing, AI-compute-capacity

A year after filing to IPO, still-private Cerebras Systems raises $1.1B
Cerebras Systems, a Silicon Valley-based AI hardware company and competitor to Nvidia, raised $1.1 billion in a Series G funding round that values the company at $8.1 billion. This latest round, co-led by Fidelity and Atreides Management with participation from Tiger Global and others, brings Cerebras' total funding to nearly $2 billion since its 2015 founding. The company specializes in AI chips, hardware systems, and cloud services, and has experienced rapid growth driven by the AI inference services it launched in August 2024, which run trained AI models to generate outputs for customers. To support this growth, Cerebras opened five new data centers in 2025 across the U.S., with plans for further expansion in Montreal and Europe. Originally, Cerebras had filed for an IPO in September 2024 but faced regulatory delays due to a $335 million investment from Abu Dhabi-based G42, triggering a review by the Committee on Foreign Investment in the United States (CFIUS).
AI-hardware, semiconductor, data-centers, cloud-computing, AI-inference, technology-funding, Silicon-Valley-startups

Microsoft in-chip cooling breakthrough cuts GPU heat rise by 65%
Microsoft has developed a breakthrough in-chip microfluidic cooling technology that channels liquid coolant directly inside GPU chips to remove heat more efficiently. This approach carves microscopic grooves into the silicon, enabling coolant to flow in direct contact with hot spots, which reduces the maximum GPU temperature rise by up to 65% and outperforms traditional cold plate cooling systems by as much as three times. The system also leverages AI to identify heat patterns and direct cooling precisely where needed. Microsoft successfully demonstrated the technology by cooling a server running simulated Teams meetings and is now prioritizing reliability testing. The design was inspired by natural vein structures, with Microsoft collaborating with Swiss startup Corintis to create bio-inspired coolant channels that dissipate heat better than straight channels. The engineering challenge involved balancing channel depth for effective coolant flow without compromising silicon strength, developing leak-proof chip packaging, and integrating the etching process into chip manufacturing. Beyond individual chips, Microsoft envisions microfluidics playing a major role in datacenters by enabling more tightly packed, power-dense server designs.
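The headline number maps naturally onto thermal resistance: junction temperature rise is roughly power times junction-to-coolant thermal resistance, and etching channels into the silicon attacks the dominant term in that resistance. A worked example with illustrative numbers (only the 65% reduction comes from the article):

```python
# Junction temperature rise ~ power * thermal resistance (R_theta).
# The numbers below are illustrative; only the 65% reduction in
# temperature rise is from the article.

def junction_rise_c(power_w: float, r_theta_c_per_w: float) -> float:
    return power_w * r_theta_c_per_w

GPU_POWER = 700                  # W, illustrative high-end accelerator
R_COLD_PLATE = 0.05              # C/W, assumed conventional cold plate
R_IN_CHIP = R_COLD_PLATE * 0.35  # 65% lower temperature rise at equal power

print(f"cold plate: +{junction_rise_c(GPU_POWER, R_COLD_PLATE):.0f} C")  # +35 C
print(f"in-chip:    +{junction_rise_c(GPU_POWER, R_IN_CHIP):.0f} C")     # +12 C
```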
energy, cooling-technology, microfluidics, GPU-cooling, AI-hardware, semiconductor-materials, thermal-management

Preparing for your later-stage raise: Insider strategies from top investors at TechCrunch Disrupt 2025
The article highlights an upcoming session at TechCrunch Disrupt 2025, scheduled for October 29 at the Builders Stage in San Francisco, focused on strategies for securing later-stage funding. The session emphasizes that raising late-stage capital requires more than just meeting revenue goals; founders must craft compelling narratives, monitor key metrics, and cultivate long-term investor relationships. Attendees can expect practical advice, candid insights, and actionable frameworks from experienced investors and founders to better prepare for major funding rounds. The panel features three prominent experts: Andrea Thomaz, CEO and co-founder of Diligent Robotics, who offers a founder's perspective on building investor trust in AI hardware startups; Zeya Yang, partner at IVP with a background in AI-native startups and product leadership at major tech firms; and Lila Preston, head of growth equity at Generation Investment Management, known for scaling impact-driven companies with a global outlook. The article also promotes early registration for TechCrunch Disrupt 2025, highlighting the significant ticket savings available to early registrants.
robot, AI-hardware, social-robotics, collaborative-robotics, healthcare-robotics, venture-capital, startup-funding

Humanoids, AVs, and what's next in AI hardware with Waabi and Apptronik at TechCrunch Disrupt 2025
TechCrunch Disrupt 2025, taking place from October 27 to 29 at Moscone West in San Francisco, will feature a key session focused on the future of AI hardware, particularly in robotics and autonomous systems. The event will bring together over 10,000 startup and venture capital leaders to explore groundbreaking technologies and ideas. A highlight of the conference is a discussion with Raquel Urtasun, founder and CEO of Waabi, and Jeff Cardenas, co-founder and CEO of Apptronik, who will share insights on integrating AI with real-world physical systems such as autonomous vehicles and humanoid robots. The session will delve into the challenges and innovations involved in developing intelligent machines that operate safely and effectively in the physical world. Topics include the use of simulation, sensors, and software infrastructure critical to scaling these technologies. The conversation aims to provide a realistic and forward-looking perspective on how AI-driven robotics and self-driving platforms are evolving and the implications for industry, labor, and infrastructure.
robotics, autonomous-vehicles, AI-hardware, humanoid-robots, sensors, simulation-technology, intelligent-machines

Humanoids, AVs, and what's next in AI hardware at TechCrunch Disrupt 2025
TechCrunch Disrupt 2025, taking place from October 27 to 29 at Moscone West in San Francisco, will gather over 10,000 startup and venture capital leaders to explore cutting-edge technology and future trends. A highlight of the event is a session focused on the future of AI hardware, particularly in robotics and autonomous systems. This session will feature live demonstrations and discussions on the advancements and challenges in developing humanoid robots and autonomous vehicles, emphasizing the integration of AI with real-world physics through simulation, sensors, and software infrastructure. Key speakers include Raquel Urtasun, founder and CEO of Waabi, and Jeff Cardenas, co-founder and CEO of Apptronik, who will share insights into the breakthroughs and bottlenecks in scaling intelligent machines safely and effectively. The discussion aims to provide a realistic and forward-looking perspective on how AI-driven robotics and autonomous platforms are evolving, highlighting their potential impact on industry, labor, and infrastructure. This session underscores the unique constraints and opportunities of building AI systems that must operate in the physical world.
robot, autonomous-vehicles, AI-hardware, robotics, humanoid-robots, sensors, autonomous-systems

China unveils 'world's first' brain-like AI, 100x faster on local tech
Researchers at the Chinese Academy of Sciences' Institute of Automation in Beijing have developed SpikingBrain 1.0, a "brain-like" large language model (LLM) that operates up to 100 times faster than conventional AI models while using significantly less training data—less than 2% of what typical models require. Unlike mainstream Transformer-based LLMs, which face efficiency bottlenecks due to quadratic scaling of computation with sequence length, SpikingBrain 1.0 employs "spiking computation," mimicking biological neurons by firing signals only when triggered. This event-driven approach reduces energy consumption and accelerates processing, enabling the model to handle extremely long sequences of data efficiently. The team tested two versions of SpikingBrain 1.0, with 7 billion and 76 billion parameters respectively, trained on roughly 150 billion tokens—a relatively small dataset for models of this size. In benchmarks, the smaller model processed a 4-million-token prompt over 100 times faster than standard systems.
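The quadratic bottleneck is easy to make concrete: standard attention touches every pair of token positions, while an event-driven model's cost grows roughly linearly with sequence length. A toy operation count at the article's 4-million-token prompt, with made-up constant factors (only the scaling is the point):

```python
# Toy comparison of pairwise (quadratic) attention cost vs. a linear,
# event-driven cost at a 4M-token prompt. Constant factors are invented;
# only the scaling with sequence length matters here.

n = 4_000_000  # tokens in the prompt

quadratic_ops = n * n    # one attention score per token pair
linear_ops = n * 1_000   # assumed fixed work per token/event

print(f"quadratic: {quadratic_ops:.1e} ops")             # 1.6e13
print(f"linear:    {linear_ops:.1e} ops")                # 4.0e9
print(f"ratio:     {quadratic_ops / linear_ops:,.0f}x")  # 4,000x under these assumptions
```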
energy, artificial-intelligence, brain-like-AI, spiking-computation, MetaX-chips, energy-efficiency, AI-hardware

NeoLogic wants to build more energy-efficient CPUs for AI data centers
NeoLogic, an Israel-based fabless semiconductor startup founded in 2021 by CEO Messica and CTO Leshem, aims to develop more energy-efficient server CPUs tailored for AI data centers. Despite skepticism from industry experts who believed innovation in logic synthesis and circuit design was no longer possible, NeoLogic is pursuing a novel approach that simplifies logic processing with fewer transistors and logic gates. This design strategy is intended to enable faster processing speeds while significantly reducing power consumption. The founders bring extensive semiconductor experience, with backgrounds at Intel, Synopsys, and in circuit manufacturing. The company is collaborating with two unnamed hyperscalers on CPU design and plans to produce a single-core test chip by the end of the year, targeting deployment in data centers by 2027. NeoLogic recently secured $10 million in a Series A funding round led by KOMPAS VC, with participation from M Ventures, Maniv Mobility, and lool Ventures. These funds will support engineering expansion and ongoing CPU development. Given the increasing energy demands of AI data centers, the market for more efficient server CPUs is only expected to grow.
energy, semiconductors, CPUs, data-centers, AI-hardware, energy-efficiency, chip-design

How a once-tiny research lab helped Nvidia become a $4 trillion company
The article chronicles the evolution of Nvidia's research lab from a small group of about a dozen people in 2009, primarily focused on ray tracing, into a robust team of over 400 researchers that has been instrumental in transforming Nvidia from a video game GPU startup into a $4 trillion company driving the AI revolution. Bill Dally, who joined the lab after being persuaded by Nvidia leadership, expanded the lab's focus beyond graphics to include circuit design and VLSI chip integration. Early on, the lab recognized the potential of AI and began developing specialized GPUs and software for AI applications well before the current surge in AI demand, positioning Nvidia as a leader in AI hardware. Currently, Nvidia's research efforts are pivoting toward physical AI and robotics, aiming to develop the core technologies that will power future robots. This shift is exemplified by the work of Sanja Fidler, who joined Nvidia in 2018 to lead the Omniverse research lab in Toronto, focusing on simulation models for robotics and physical AI.
robot, artificial-intelligence, Nvidia, GPUs, robotics-development, AI-hardware, technology-research

Instead of selling to Meta, AI chip startup FuriosaAI signed a huge customer
South Korean AI chip startup FuriosaAI recently announced a partnership to supply its AI chip, RNGD, to enterprises using LG AI Research's EXAONE platform, a next-generation hybrid AI model optimized for large language models (LLMs). This collaboration targets multiple sectors including electronics, finance, telecommunications, and biotechnology. The deal follows FuriosaAI's decision to reject Meta's $800 million acquisition offer three months prior, citing disagreements over post-acquisition strategy and organizational structure rather than price. FuriosaAI's CEO June Paik emphasized the company's commitment to remaining independent and advancing sustainable AI computing. The partnership with LG AI Research is significant as it represents a rare endorsement of a competitor to Nvidia by a major enterprise. FuriosaAI's RNGD chip demonstrated 2.25 times better inference performance and greater energy efficiency than competing GPUs when running LG's EXAONE models. Unlike general-purpose GPUs, FuriosaAI's hardware is designed specifically for AI computing, lowering total cost of ownership while sustaining high inference performance.
AI-chips, FuriosaAI, LG-AI-Research, energy-efficiency, AI-computing, semiconductor-materials, AI-hardware

Humanoids, AVs, and what's next in AI hardware at TechCrunch Disrupt 2025
TechCrunch Disrupt 2025, taking place from October 27 to 29 at Moscone West in San Francisco, will gather over 10,000 startup and venture capital leaders to explore cutting-edge technology and future trends. A highlight of the event is a session on AI hardware featuring Raquel Urtasun, founder and CEO of Waabi, and Jeff Cardenas, co-founder and CEO of Apptronik. These industry pioneers will discuss the evolving landscape of AI hardware, emphasizing its critical role in enabling advanced applications in humanoid robotics and autonomous vehicles. The session promises live demonstrations and in-depth technical insights into how AI hardware facilitates the transition from simulation and conceptual models to real-world deployment of embodied intelligence. Jeff Cardenas leads Apptronik in creating practical, human-centered humanoid robots through strategic partnerships with companies like Google DeepMind, NVIDIA, and Mercedes-Benz, aiming to make robotics commercially viable and safe for human collaboration. Meanwhile, Raquel Urtasun is advancing autonomous vehicle technology at Waabi, applying an AI-first approach to self-driving systems.
robotics, humanoid-robots, autonomous-vehicles, AI-hardware, simulation-technology, embodied-intelligence, autonomous-systems

New memristor-based system from China boosts AI data sorting efficiency
Chinese researchers from Peking University and the Chinese Institute for Brain Research have developed a novel memristor-based hardware system that significantly enhances data sorting efficiency for AI and scientific computing applications. By integrating memristors—components capable of both memory and processing functions—with an advanced iterative search-based sorting algorithm, the system achieves a 7.7-fold increase in throughput and improves energy efficiency by over 160 times compared to conventional sorting methods. Additionally, it boosts area efficiency by more than 32 times, marking a major advancement toward combining storage and computation in a single platform. This innovation addresses the longstanding Von Neumann bottleneck, in which traditional computing architectures separate memory and processing units, causing delays in data transfer and limiting performance. Unlike typical resistors, memristors retain a memory of the electrical charge that has flowed through them, enabling them to perform computations directly within memory. The researchers' approach eliminates the comparison operations common in traditional sorting algorithms by using memristors to iteratively identify minimum or maximum values, thereby reducing both time and energy costs.
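The reported scheme replaces pairwise comparisons with repeated "read out the extremum" queries against the memory array. A software analogue of that idea is a threshold sweep over quantized values, shown below; in software this degenerates into counting sort, while the hardware's advantage is that the memristor array answers each query in one parallel, in-memory read.

```python
# Conceptual analogue of comparison-free, search-based sorting: each
# pass asks the array "which values sit at this level?", mimicking a
# memristor array that exposes its minimum in one parallel read.
# Not a model of the actual circuit.

def search_based_sort(values, levels=256):
    """Sort non-negative integers below `levels` without pairwise
    element-to-element comparisons: sweep a threshold upward and
    extract every match at each level."""
    remaining = list(values)
    ordered = []
    for level in range(levels):                            # threshold sweep
        ordered.extend(v for v in remaining if v == level)  # parallel 'read'
        remaining = [v for v in remaining if v != level]
        if not remaining:
            break
    return ordered

print(search_based_sort([7, 3, 3, 200, 0, 42]))  # [0, 3, 3, 7, 42, 200]
```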
memristor, energy-efficiency, AI-hardware, data-sorting, scientific-computing, memory-technology, computing-innovation

Reversible computing can help reclaim your chip's wasted energy
The article discusses the significant energy inefficiency of modern AI hardware, where nearly all electrical energy consumed by processors is lost as heat due to fundamental limitations of conventional CMOS transistor technology. This inefficiency is especially critical as generative AI models like ChatGPT demand substantially more power per query than traditional searches, contributing to data centers potentially consuming up to 12% of US electricity by 2030. The root cause lies in abrupt transistor switching in CMOS chips, which dissipates energy as heat and imposes costly cooling requirements and scalability challenges. Vaire Computing, a startup based in the US and UK, proposes a solution through reversible computing using adiabatic switching. This approach gradually transfers electrical charge during transistor switching, significantly reducing energy loss by preserving and recycling information rather than erasing it, thereby avoiding the heat cost that Landauer's principle ties to information deletion. Vaire's prototypes currently reclaim about 50% of wasted computational energy, with expectations for even greater efficiency improvements. This innovation could mark a significant step toward fundamentally more energy-efficient computing.
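Landauer's principle sets the floor being discussed: erasing one bit must dissipate at least kT·ln 2. The gap between that floor and realistic switching energies is enormous, which is why recycling charge instead of erasing state has so much room to pay off. The arithmetic (the CMOS figure is an order-of-magnitude assumption):

```python
# Landauer limit vs. a typical real switching energy. The femtojoule
# CMOS figure is an order-of-magnitude assumption for illustration.

import math

K_B = 1.380649e-23                   # J/K, Boltzmann constant
T = 300                              # K, room temperature

landauer_j = K_B * T * math.log(2)   # minimum energy to erase one bit
cmos_switch_j = 1e-15                # J, assumed conventional switching energy

print(f"Landauer limit: {landauer_j:.2e} J/bit")              # ~2.87e-21 J
print(f"CMOS switch:    {cmos_switch_j:.0e} J")
print(f"headroom:       {cmos_switch_j / landauer_j:,.0f}x")  # ~350,000x
```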
energy, semiconductor, reversible-computing, chip-efficiency, AI-hardware, adiabatic-switching, data-centers

Court filings reveal OpenAI and io's early work on an AI device
Recent court filings from a trademark lawsuit between OpenAI, Jony Ive's startup io, and Google-backed hardware company iyO have revealed new insights into OpenAI and io's early efforts to develop a mass-market AI hardware device. The filings show that over the past year, OpenAI executives and former Apple leaders at io have extensively researched in-ear hardware, purchasing over 30 headphone sets to study existing products. Despite this focus, the first device from OpenAI and io is reportedly not an in-ear or wearable device, but its exact form factor remains undisclosed. Co-founder Tang Tan stated that the prototype mentioned by OpenAI CEO Sam Altman is still in early development and at least a year away from market release. Altman has described the device as a "third device" complementing smartphones and laptops, capable of being pocket-sized or desk-based and fully aware of the user's surroundings. The filings also reveal interactions between OpenAI/io and iyO leadership, including a May 1 meeting.
AI-hardware, wearable-technology, OpenAI, IoT-devices, smart-devices, AI-innovation, consumer-electronics

Uptime Industries wants to boost localized AI usage with an 'AI-in-a-box' called Lemony AI
Uptime Industries has developed Lemony AI, a compact “AI-in-a-box” device designed to run large language models (LLMs), AI agents, and workflows locally on-premise. About the size of a sandwich and consuming only 65 watts of power, each Lemony node can support LLMs with up to 75 billion parameters, hosting both open-source and adapted closed models. Multiple devices can be stacked to form clusters, allowing different models to run simultaneously. The company has partnered with IBM and JetBrains to facilitate customer access to various AI models, including IBM’s proprietary ones. The concept originated from a side project by Uptime’s co-founders, who explored distributing language models on small devices like Raspberry Pis. Recognizing the potential for localized AI to enhance adoption—especially among enterprises wary of cloud-based solutions—they focused on creating a small, privacy-centric device that teams could deploy without extensive organizational approval. This approach appeals particularly to regulated sectors such as finance, healthcare, and law, where data privacy is critical since all data and models remain within the device. Uptime has raised $2 million in seed funding to advance development, plans to extend its Lemony OS software to other hardware platforms, and aims to evolve from single-user to team-based software functionality. Lemony AI is offered at $499 per month for up to five users.
energy, AI-hardware, edge-computing, on-premise-AI, low-power-devices, AI-clusters, data-privacy

World's fastest quantum switch built by US team for ultra-fast AI
materials, quantum-computing, graphene, ultrafast-computing, AI-hardware, transistors, laser-technology