Articles tagged with "AI"
AI speeds up development of structural EV batteries in German study
Scientists at RWTH Aachen University in Germany have accelerated the development of structural electric vehicle (EV) battery systems by combining advanced engineering with AI-driven validation techniques. Over three and a half years, the PEAk-Bat research project, funded by the German Federal Ministry for Economic Affairs and Energy, demonstrated that integrating batteries directly into the vehicle chassis—rather than treating them as separate components—can increase volumetric energy density by over 10% and gravimetric energy density by more than 15%. The project involved building and rigorously testing 10 vehicle prototypes with integrated structural battery systems, supported by extensive digital simulations validated through real-world trials. A key innovation of the project was the use of AI-powered simulation models for early validation of battery safety, structural integrity, and thermal performance. This virtual-first approach significantly reduced the need for time-consuming and costly physical tests, enabling faster development cycles and lower prototype costs. The researchers also developed new industry guidelines for analyzing changes in battery systems and determining necessary safety tests
energy, electric-vehicles, AI, battery-technology, structural-batteries, digital-simulation, automotive-engineering
World’s first driverless race sees cars zip through at 155 mph
The Abu Dhabi Autonomous Racing League (A2RL) hosted the world’s first fully driverless car Grand Final at Yas Marina Circuit, marking a significant milestone in artificial intelligence and robotics. Six autonomous racecars competed, with Germany’s TUM team winning the championship after a fiercely contested 20-lap race against Italy’s Unimore team. Both teams pushed their cars to speeds exceeding 155 mph (250 km/h), engaging in a close battle that lasted over half the race. A pivotal moment occurred when Unimore collided with a slower car while attempting an overtake, allowing TUM to regain and maintain the lead to secure victory. Despite the crash, Unimore earned the Fastest Lap Award, highlighting their competitive pace. The event was celebrated by prominent UAE officials, including Sheikh Zayed bin Mohamed bin Zayed Al Nahyan and Faisal Al Bannai, who emphasized the race’s role as a convergence of engineering disciplines and a testbed for accelerating autonomous technologies. Al Bannai described A2
robotics, autonomous-vehicles, AI, driverless-cars, autonomous-racing, artificial-intelligence, robotics-engineering
Anthropic study finds Claude helps humans train robots faster
Anthropic conducted an internal one-day study, dubbed Project Fetch, to evaluate how its AI model Claude impacts human performance in real-world robotics tasks. Two teams of software engineers were tasked with programming a quadruped robot dog to fetch a beach ball, with only one team having access to Claude. The Claude-assisted team completed seven out of eight tasks, outperforming the non-AI team, which completed six. The most significant advantage was seen in hardware-level tasks such as connecting to the robot and accessing sensor data, where Claude helped quickly identify solutions and troubleshoot issues, while the non-AI team struggled and required external hints. The study also revealed that the Claude-assisted team wrote about nine times more code and explored multiple approaches in parallel, boosting creativity and iteration speed, although sometimes pursuing unproductive directions. While the non-AI team occasionally moved faster in some tasks, the AI-assisted system ultimately provided smoother and more user-friendly control. Additionally, analysis of team interactions showed that the non-AI group experienced
robot, AI, robotics, robot-dog, human-robot-interaction, automation, machine-learning
Musk's Compensation Dream Is A Reality — So What Comes Next? - CleanTechnica
The article discusses the realization of Elon Musk’s ambitious Tesla compensation package and explores the company’s future prospects beyond its established electric vehicle (EV) business. While Tesla’s initial success was rooted in its EVs, Musk is now focusing on advancing technologies such as artificial intelligence (AI), robotics, full self-driving (FSD) capabilities, and robotaxi deployment. Tesla’s valuation and growth potential increasingly depend on these emerging areas, alongside the long-anticipated but still unconfirmed launch of a more affordable $25,000 EV model. However, some analysts express concern about Tesla’s reliance on its current vehicle lineup without frequent new model introductions, which could pose risks for sustained sales momentum. Key growth areas highlighted include Tesla’s FSD software, which reportedly reduces collision rates significantly compared to average US driving statistics, and the ongoing development of the robotaxi pilot program, currently tested with human safety drivers and soon expanding to multiple cities. Additionally, Tesla’s energy storage segment showed impressive year-over-year growth of 81
robot, energy, autonomous-vehicles, AI, Tesla, electric-vehicles, self-driving-technology
Bone AI raises $12M to challenge Asia’s defense giants with AI-powered robotics
Bone AI, a Seoul- and Palo Alto-based startup founded by DK Lee, has raised $12 million in seed funding to develop an integrated AI platform combining software, hardware, and manufacturing for autonomous defense robotics. The company focuses on next-generation unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs), and unmanned surface vessels (USVs), primarily for government and defense clients, starting with aerial drones designed for logistics, wildfire detection, and anti-drone missions. Despite South Korea’s large defense industry and $69 billion order backlog, its defense-tech startup ecosystem remains underdeveloped, creating an opportunity that Bone AI aims to fill by leveraging Korea’s manufacturing strengths and advanced-materials expertise through strategic partnerships with companies such as Kolon Group. Bone AI has quickly gained traction, securing a seven-figure government contract and generating $3 million in revenue within its first year, partly by acquiring a South Korean drone company. Founder DK Lee emphasizes the company’s broader vision as a “physical AI” firm that integrates AI simulation, autonomy
robotics, AI, autonomous-vehicles, drones, defense-technology, manufacturing, advanced-materials
Video: Russian airline tests humanoid robot as in-flight attendant
Russian airline Pobeda has become the first in the world to deploy a humanoid robot, named "Volodya," as part of its in-flight cabin crew on the Ulyanovsk-Moscow route. During the trial flight on November 12, Volodya greeted passengers, delivered safety instructions, and interacted with travelers, enhancing the passenger experience despite not serving food or beverages. The robot, resembling a Unitree G1 model, demonstrated reinforcement learning technology by mimicking flight attendant actions, drawing significant interest from passengers who took photos with it. However, the introduction of humanoid robots in aviation has sparked mixed reactions. Some social media users expressed concerns about robots potentially replacing human jobs and questioned the necessity of such automation. Others humorously highlighted risks if the robot malfunctioned during flight. Despite these concerns, other airlines like Qatar Airways are also exploring humanoid robots equipped with conversational AI to assist passengers, while manufacturers such as Hyundai and Kia are developing wearable robots for aviation assembly and maintenance. The
robot, humanoid-robot, aviation-technology, AI, automation, human-robot-collaboration, airline-innovation
Autonomous boats that conduct high-speed interception head to Australia
Australia’s Elysium EPL has partnered with New Zealand’s Seasats to resell and support Seasats’ autonomous surface vessels (ASVs) in Australia and New Zealand. This collaboration enables Elysium to manage sales, integration, training, and regulatory compliance locally, providing governments and industries in the Indo-Pacific region easier access to these advanced maritime technologies. The partnership aligns with the strategic priorities of the AUKUS alliance, emphasizing robotics, AI, and undersea technology development, thereby enhancing Australia’s sovereign capabilities in defense and maritime surveillance. Seasats offers two key ASV models: the Lightfish, a long-endurance scout drone capable of months-long autonomous operation with a range of 6,000 nautical miles, used for border monitoring and offshore security; and the Quickfish, a high-speed interceptor drone reaching speeds up to 34 knots with a 400+ nautical mile range, designed for threat confrontation and equipped with modular payloads including ISR drones, electronic warfare, and strike systems.
robotics, autonomous-boats, maritime-drones, AI, defense-technology, unmanned-surface-vessels, AUKUS
China's Gen Z inventor aims to build the 'Android' of humanoid robots
In April 2025, Beijing-based startup RoboParty, founded by Huang Yi—one of China’s youngest humanoid-robot entrepreneurs—officially launched with the goal of creating a fully open-source bipedal humanoid robot platform. Huang, born in 2004, initially gained attention for building “AlexBot,” a walking humanoid robot developed on a modest budget during his first year at Harbin Institute of Technology. After releasing an upgraded version, “AlexBotmini,” and graduating early, he shifted focus to RoboParty’s flagship project, the “ATOM” robot, which aims to be China’s first fully open-source humanoid robot platform. Huang advocates that an open-source approach accelerates ecosystem development by promoting shared standards, reducing collaboration barriers, and enhancing security and global competitiveness. RoboParty’s launch aligns with a broader national push by Chinese authorities to advance robotics, AI, and humanoid technologies as strategic priorities. Significant government-backed funding initiatives were announced in early 2025,
robot, humanoid-robot, open-source-robotics, AI, robotics-funding, China-technology, RoboParty
Boom — Waymo Takes The Freeway (+ Important Note On Waymo's Approach) - CleanTechnica
Waymo has made a significant advancement by beginning to offer fully autonomous rides on freeways in the San Francisco Bay Area, marking a major expansion in its service coverage. Historically, Waymo avoided freeway driving due to its complexity, but this new development improves trip efficiency and connectivity between cities, metro areas, and key locations like airports. Notably, these freeway rides operate without safety drivers behind the wheel, underscoring Waymo’s confidence in its technology after extensive testing and prioritization of safety. The company plans to extend freeway service to other cities such as Phoenix, Los Angeles, Austin, and Atlanta as it continues to grow. The article highlights Waymo’s cautious and thorough approach to mastering freeway autonomy, emphasizing that while freeway driving may seem easier, it presents unique challenges, including fewer critical events to train the system on rare scenarios. Waymo has relied heavily on closed-course testing and simulation to overcome these hurdles. This careful development process has built a strong foundation for rapid future expansion. The move to
robot, autonomous-vehicles, Waymo, self-driving-technology, AI, transportation, freeway-driving
More Tesla FSD Expansion, & More "Interesting" Comments on Robots & AI - CleanTechnica
The article from CleanTechnica discusses recent developments and commentary related to Tesla’s Full Self-Driving (FSD) technology, humanoid robot Optimus, and AI efforts, particularly the potential merger or acquisition of xAI by Tesla. It highlights Tesla’s ongoing challenges with declining vehicle sales over recent years, noting a downward trend from over 1.3 million vehicles sold globally in early 2023 to about 1.2 million projected in early 2025. Elon Musk’s long-term strategy to reverse this trend hinges heavily on achieving commercially viable full self-driving capabilities and deploying robotaxis, though Musk has historically missed many deadlines despite some progress. The article stresses that Tesla is at a critical juncture where its future trajectory could become significantly more positive or negative depending on the success of these technologies. Additionally, the article touches on Tesla’s broader AI ambitions, with Morgan Stanley analyst Adam Jonas emphasizing the strategic importance of xAI to Tesla’s future, given the synergy between data, software, hardware,
robot, Tesla, full-self-driving, AI, humanoid-robots, robotaxi, autonomous-vehicles
Russia's first humanoid robot falls flat on its face during stage debut
Russia’s first AI-powered humanoid robot, named AIdol, experienced a highly publicized malfunction during its debut at a technology forum in Moscow, where it fell face-first on stage. Developed by the Russian robotics company Idol, AIdol was designed to showcase Russia’s advancements in AI and robotics through lifelike movement, gesture control, and the ability to interpret visual data and interact with people. The fall, caused by a calibration error in its balance and motion control algorithms, highlighted the challenges still facing early humanoid robot models in maintaining stability. Despite the setback, Idol’s CEO Vladimir Vitukhin framed the incident as a learning opportunity, emphasizing ongoing efforts to refine the robot’s control systems ahead of future demonstrations. AIdol is notable for being composed of 77 percent domestically produced components, a significant achievement given Western sanctions limiting Russia’s access to advanced imports, with plans to increase this to 93 percent. The robot features a silicone face powered by 19 servomotors,
robotics, humanoid-robot, AI, automation, motion-control, robotics-technology, synthetic-skin
Why a researcher is building robots that look and act like bats
Nitin J. Sanket, a professor at Worcester Polytechnic Institute, is developing small, bat-inspired flying robots designed for search and rescue missions in hazardous or hard-to-navigate environments. These palm-sized drones use ultrasound sensors, similar to those in automatic faucets, combined with AI-powered software to filter noise and detect obstacles within a two-meter radius. The technology aims to replace human rescuers who currently risk their lives navigating difficult terrain on foot, offering a faster, more agile alternative through drones. Sanket’s approach draws heavily from biology, particularly bats’ echolocation abilities. The team addressed challenges such as sensor overload caused by drone propeller noise by designing a 3D-printed structure that mimics bats’ adaptive tissues in their nose and ears, which modulate sound reception and emission. This biomimicry allows the robots to effectively process ultrasonic signals despite environmental noise. Having achieved functional prototypes, the current focus is on improving the drones’ speed to enhance their operational effectiveness. Sanket emphasizes
robot, drones, biomimicry, search-and-rescue, ultrasound-sensors, AI, flying-robots
Terranova gets seed funding to deploy terraforming robots - The Robot Report
Terranova Inc., a San Francisco-based startup, has secured $7 million in seed funding to deploy autonomous robotic technology aimed at mitigating flooding by reshaping and elevating flood-prone land. The company’s approach uses car-sized robots that inject a wood slurry deep underground to lift and stabilize terrain without surface disruption, enabling new housing, commercial, and industrial developments in vulnerable regions. Terranova’s system, which includes multiple injection robots and a “mothership,” can raise up to one acre by one foot per day, offering a significant productivity improvement over traditional fill or civil engineering methods. The company integrates AI, machine learning, and closed-loop control to optimize site suitability and injection campaigns, supporting precise terrain modification. Terranova’s mission is to create resilient infrastructure by “terraforming” the earth to address flooding and land subsidence, which cost the U.S. economy over $180 billion annually. The seed round, led by Outlander and Congruent Ventures among others, was oversubscribed and
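For scale, a quick back-of-the-envelope conversion of the quoted throughput (one acre raised by one foot per day) into volume; only the unit conversion is added here, the figure itself comes from the claim above.

```python
# Scale check on the "one acre by one foot per day" figure quoted above.
acre_m2 = 4_046.86   # square meters in one acre
foot_m = 0.3048      # meters in one foot

lift_m3_per_day = acre_m2 * foot_m
print(f"≈ {lift_m3_per_day:.0f} m³ of ground lifted per day")   # ≈ 1,233 m³
```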
robots, robotics, flood-mitigation, autonomous-systems, geotechnical-engineering, terraforming, AI
France's new humanoid enters the nuclear zone to take the heat
France has introduced Hoxo, an AI-powered humanoid robot developed by Capgemini and Orano, to enhance safety and operational efficiency within nuclear power plants. Launched at the Orano Melox facility in Gard, Hoxo integrates advanced robotics, AI computer vision, embedded sensors, and autonomous navigation to assist human operators in hazardous and complex technical tasks. The robot replicates human movements and uses real-time perception systems, aiming to redefine human-machine collaboration and push the boundaries of industrial automation in sensitive nuclear environments. Hoxo represents a significant technological advancement by combining robotics, AI, computer vision, and digital twins to tackle some of the most demanding challenges in nuclear operations. The project underscores the nuclear industry's ongoing commitment to innovation and automation, especially as it faces increasing pressure to maintain high safety standards while improving efficiency amid global efforts to reduce carbon emissions. The current testing phase at the Melox facility will evaluate Hoxo’s effectiveness in supporting daily nuclear plant operations, with the potential to transform industrial performance and safety in
robot, AI, humanoid-robot, nuclear-energy, industrial-automation, computer-vision, human-machine-collaboration
Surviving Mars: How humans are preparing to live beyond earth
The article "Surviving Mars: How humans are preparing to live beyond earth" outlines the immense challenges humanity faces in attempting long-duration space travel to Mars, focusing on the physiological, psychological, and environmental hurdles. Unlike Earth, where gravity, atmosphere, and ecosystems support life, Mars missions require humans to endure prolonged exposure to microgravity, radiation, and isolation without immediate rescue or support. While the International Space Station has provided valuable insights into living in space, Mars is vastly farther away—about 225 million kilometers—making every medical, logistical, and psychological challenge more complex. Current technology means a Mars mission could last up to three years, with no possibility of emergency evacuation, requiring astronauts to be highly self-reliant, particularly in medical emergencies where telemedicine and AI-assisted tools may be critical. Psychological resilience is highlighted as a key factor for mission success, given the extreme isolation, confinement, and communication delays that astronauts will face. Studies simulating Mars missions on Earth have shown significant mental
robot, AI, telemedicine, space-exploration, life-support-systems, radiation-shielding, space-health-technology
YouTuber builds talking robot head that answers like Aristotle
Polish YouTuber and maker Nikodem Bartnik has developed a talking robot head that answers questions in the style of the ancient Greek philosopher Aristotle. The robot features a metal mask with 3D-printed, motorized eyes that naturally track the user, and an LED-lit mouth that glows in sync with its speech. The system operates independently on Bartnik’s own hardware, avoiding reliance on cloud services. Audio input is captured by a microphone connected to a Raspberry Pi, converted to text, and processed on Bartnik’s computer using open-source software and the Google Gemma 3 model to generate philosophically themed responses. The voice is synthesized via ElevenLabs, creating a lifelike conversational experience. Bartnik’s design emphasizes customization and accessibility. The robot’s personality can be switched on demand through a simple web interface, allowing it to shift from a calm philosophical lecturer to a more humorous or grumpy character without changing the hardware. Despite its polished interaction, the build retains a DIY aesthetic
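A minimal sketch of the kind of fully local voice loop the build describes (microphone capture, speech-to-text, a locally hosted Gemma-class model, then TTS driving the LED mouth). The endpoint URL, model name, prompt, and helper functions are illustrative assumptions, not Bartnik's actual code.

```python
# Hypothetical sketch of a local "philosopher head" voice loop.
# Assumes an Ollama-style local LLM HTTP endpoint and placeholder
# speech-to-text / text-to-speech helpers.
import requests

PERSONA = ("You are Aristotle. Answer briefly, in the measured tone of a "
           "peripatetic lecturer.")

def transcribe(audio_bytes: bytes) -> str:
    """Placeholder: run local speech-to-text (e.g., a Whisper-class model)."""
    raise NotImplementedError

def speak(text: str) -> None:
    """Placeholder: send text to a TTS engine and drive the LED mouth in sync."""
    raise NotImplementedError

def ask_llm(question: str, persona: str = PERSONA) -> str:
    # Assumed local endpoint; swap in whatever server actually hosts the model.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "gemma3",
              "prompt": f"{persona}\n\nQ: {question}\nA:",
              "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json().get("response", "")

def answer(audio_bytes: bytes) -> str:
    question = transcribe(audio_bytes)
    reply = ask_llm(question)
    speak(reply)
    return reply
```

Swapping the persona string is all the "personality switch" needs in this shape, which matches the web-interface behaviour described above.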
robot, robotics, AI, 3D-printing, animatronics, Raspberry-Pi, DIY-robot
Robotic kitchen in a box cooks, cleans and serves 120 meals an hour
A Munich-based robotics company, Circus SE, has introduced the CA-1 Series 4, a fully autonomous robotic kitchen, inside a REWE supermarket in Düsseldorf, Germany. This compact, glass-enclosed system autonomously handles the entire meal preparation process—from ingredient collection to cooking, plating, and cleaning—without human intervention. Capable of producing up to 120 meals per hour, the CA-1 offers restaurant-quality dishes priced from €6, cooked fresh on demand within minutes. The system’s AI-driven operations include real-time ingredient monitoring, adaptive stirring speeds, and self-cleaning via an integrated commercial dishwasher, all visible to customers through a transparent panel. This installation marks the first integration of AI-powered cooking robots directly within a supermarket, positioning REWE as a pioneer in retail automation and experiential food services. The collaboration between Circus and REWE is designed to be scalable, with two additional pilot sites planned and potential applications in hospitals, universities, factories, and even military settings. The CA-1
robotics, AI, automation, robotic-kitchen, food-service-automation, retail-technology, autonomous-robots
Autonomous boat with 99% mine detection rate delivered to French Navy
Thales has delivered a new autonomous surface naval drone to the French Navy, boasting a 99% sea mine detection rate. Designed to enhance mine countermeasure operations, the unmanned vessel reduces personnel exposure to hazardous environments and integrates advanced sensors, including the unique multi-view SAMDIS sonar and the towed TSAM sonar. The system is equipped with AI-driven data analysis software and the M-Cube mission management system, which streamline mission planning and reduce operator workload. The drone is also resilient to cyber threats, enabling it to conduct sensitive maritime defense operations securely. This delivery follows over 3,000 hours of sea trials and builds on earlier prototypes tested by both the French and Royal Navies since 2021. Developed in collaboration with the Couach shipyard, the drone is part of the Franco-British Maritime Mine Counter Measures (MMCM) program, which aims to increase naval operational superiority through fast integration of heterogeneous drones and innovative AI algorithms. Managed by OCCAR and supported by the French D
robot, autonomous-vehicles, naval-drones, AI, sonar-technology, maritime-defense, unmanned-systems
Is physical world AI the future of autonomous machines? - The Robot Report
The article discusses the emerging role of physical world AI—cloud-based systems integrated with AI models that create ultra high-precision, spatially aware representations of the physical environment—in advancing autonomous machines such as cars, drones, and tractors. While companies like Waymo have developed sophisticated onboard AI and navigation hardware, the article argues that relying solely on onboard compute is insufficient for widespread autonomous machine deployment. Instead, leveraging cloud-based spatial intelligence can enhance route optimization and hazard detection by providing machines with detailed, real-time environmental context beyond their immediate sensor inputs. Currently, most AI in autonomous machines operates locally on the edge, lacking awareness of the broader physical landscape. However, abundant data from satellites, drones, and other sources can feed cloud systems that process complex spatial information—such as vectors representing terrain and obstacles—making AI models more capable of understanding and navigating the environment. This spatial intelligence cloud approach, pursued by companies like Wherobots, can improve autonomous vehicle performance in challenging scenarios like rural deliveries or complex urban settings
robot, autonomous-machines, AI, cloud-computing, navigation-technology, drones, self-driving-cars
AI controller pulls off first real world satellite maneuver in orbit
A research team from Julius Maximilians Universität Würzburg has achieved a world first by successfully running an AI-based attitude controller directly in orbit aboard the InnoCube nanosatellite. Using Deep Reinforcement Learning (DRL), the AI autonomously executed precise satellite orientation maneuvers during a short orbital pass, repeatedly hitting target orientations in follow-up tests. This demonstrated that the AI controller could operate effectively under real space conditions, overcoming the common challenge of transferring behavior learned in simulation (Sim2Real gap) to the actual environment. The LeLaR project, which developed this controller, aims to create adaptive spacecraft control systems that require no human tuning and can adjust to unexpected conditions. Unlike traditional controllers that need months of expert calibration, the DRL-based system learns through extensive simulation training and adapts autonomously in orbit. This milestone, supported by the German Federal Ministry for Economic Affairs and Energy and managed by the German Space Agency (DLR), marks a significant advance toward fully autonomous space missions, especially those requiring
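As a rough illustration of how such a controller is typically trained, here is the shape of an observation/reward pair for attitude control in simulation; the weights, state layout, and penalty terms are assumptions for the sketch, not the LeLaR project's actual design.

```python
# Minimal sketch of observation/reward shaping for a deep-RL attitude controller
# trained in simulation (illustrative only).
import numpy as np

def quat_error(q_current: np.ndarray, q_target: np.ndarray) -> float:
    """Angle (rad) between current and target attitude quaternions (w, x, y, z)."""
    dot = abs(float(np.dot(q_current, q_target)))
    return 2.0 * np.arccos(np.clip(dot, -1.0, 1.0))

def observation(q, q_target, omega):
    """Policy input: current attitude, target attitude, and body rates."""
    return np.concatenate([q, q_target, omega])

def reward(q, q_target, omega, torque_cmd, w_err=1.0, w_rate=0.1, w_u=0.01):
    """Penalize pointing error, residual body rates, and actuator effort."""
    return -(w_err * quat_error(q, q_target)
             + w_rate * float(np.linalg.norm(omega))
             + w_u * float(np.linalg.norm(torque_cmd)))
```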
robot, AI, satellite, space-autonomy, deep-reinforcement-learning, aerospace, autonomous-systems
Expert roundtable to examine the future of warehouse automation - The Robot Report
The article announces an expert roundtable titled “The Future of Warehouse Automation,” scheduled for November 11, 2025, at noon ET, hosted by The Robot Report. The discussion will focus on how robotics and software have adapted to the evolving needs of warehouse operators and what future developments can be expected. Participants will include representatives from DHL Supply Chain, inVia Robotics, and Interact Analysis, who will explore current challenges, emerging technologies such as AI and humanoid robots, and strategies for selecting, scaling, and managing warehouse automation to maximize return on investment amid economic uncertainties. Key experts featured include Lior Elazary, CEO of inVia Robotics, who leads efforts to optimize warehouse operations through AI-powered robotics software; Rueben Scriven, an industry analyst at Interact Analysis specializing in warehouse automation research; and Rob Wright, Vice President of Automation and Engineering at DHL Supply Chain North America, who oversees complex automation projects and supports implementation and maintenance of robotic systems. The roundtable will offer attendees the
robotics, warehouse-automation, AI, supply-chain-technology, mobile-robots, logistics, warehouse-software
TechCrunch Disrupt 2025 Startup Battlefield 200: Celebrating outstanding achievements
TechCrunch Disrupt 2025 highlighted the achievements of its Startup Battlefield 200 cohort, selecting 200 promising startups from thousands of applicants. Over a three-day event, these startups pitched their innovative solutions across various industries. From the top 20 finalists, five companies competed for the $100,000 equity-free prize and the Disrupt Cup, with Glīd winning and Nephrogen as a strong runner-up. The event also featured standout presentations on the Showcase Stage, recognizing top pitches in categories such as Sustainability (HomeBoost), Built World (Investwise), Consumer Enterprise (Cashew Research), Health (AWEAR), and Policy + Protection (JustiGuide). Additional honors included a tie for Best Booth between Billight, known for its innovative light-up pool gaming system, and Calificadas, which impressed with its AI-powered communication intelligence coach. The Spirit of Disrupt award was given to Manu Seve, CEO of Sponstar, for organizing a treasure hunt that fostered connections and
energy, AI, sustainability, home-energy-assessment, carbon-reduction, smart-buildings, innovation
How Carbon Robotics built the large plant model for its laser weeding robot - The Robot Report
Carbon Robotics, led by founder and CEO Paul Mikesell, has developed a sophisticated large plant model (LPM) that powers its laser-weeding robots operating in 14 countries across diverse crops and conditions. The company’s journey began when Mikesell recognized the significant challenges in agriculture, particularly in weed management, which is traditionally costly and inefficient. To address this, Carbon Robotics focused on rapidly delivering a working solution rather than pursuing an overly ambitious vision from the start. The team spent extensive time on farms collecting real-world data, personally capturing and labeling images to build a high-quality dataset essential for training their AI system. A key innovation was the development of an advanced lighting system for the robot’s cameras, producing clear, shadow-free images regardless of outdoor lighting conditions. This system uses flashes five times brighter than the sun but with a low duty cycle to avoid discomfort, ensuring consistent image quality throughout the day, including during sunrise and sunset. This meticulous data collection and labeling process enabled Carbon Robotics to create a
robot, robotics, AI, agriculture-technology, laser-weeding, autonomous-systems, machine-learning
Europe’s 1,850-mile ‘drone wall’ marks NATO’s biggest air defense yet
Poland and Romania are deploying the U.S.-developed Merops system, an AI-driven, compact counter-drone technology designed to detect and intercept hostile drones even under electronic jamming. This deployment responds to recent Russian drone incursions into NATO airspace, which exposed vulnerabilities and heightened tensions in Europe. Denmark is also set to adopt Merops as part of a broader initiative to strengthen NATO’s eastern defenses. The system operates by either directly neutralizing drones or providing targeting data to ground and air forces, offering a cost-effective alternative to expensive fighter jet interceptions. Merops is a key component of NATO’s larger “Drone Wall” project, a proposed 1,850-mile network of surveillance and counter-drone systems stretching from Norway to Poland. This initiative aims to establish a permanent early-warning barrier along NATO’s eastern frontier to deter Russian aggression and hybrid warfare. The system has proven effective in Ukraine’s conflict zone, influencing its selection for NATO use. NATO officials emphasize that Merops is the first phase
robot, AI, drone-technology, defense-systems, surveillance, military-technology, counter-drone-systems
World's first AI firefighting system extinguishes oil fires on ships
The Korea Institute of Machinery and Materials (KIMM) has developed the world’s first AI-driven autonomous firefighting system specifically designed to detect and extinguish oil fires aboard naval vessels, even under challenging sea conditions. Unlike traditional systems that flood entire compartments with extinguishing agents, KIMM’s technology uses AI-based fire verification and reinforcement learning to accurately identify real fires and target suppression precisely at the source. This approach minimizes unnecessary damage from false alarms. The system integrates sensors, fire monitors, and a control unit capable of estimating fire location with over 98% accuracy, and can discharge foam up to 24 meters. It has been successfully tested in simulated ship compartments and real-world conditions aboard the ROKS Ilchulbong amphibious assault ship, demonstrating stable operation in waves up to one meter high. Developed by Senior Researcher Hyuk Lee and his team, the system adapts to ship movement using a reinforcement learning algorithm that adjusts nozzle aiming based on six degrees of freedom acceleration data. It
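A schematic of the control-loop structure implied above, where a learned policy nudges nozzle pan/tilt using the estimated fire location plus six-axis ship-motion data; the state layout, step limits, and policy interface are assumptions rather than KIMM's implementation.

```python
# Illustrative control-tick for a motion-compensated fire monitor (assumed shapes).
import numpy as np

def build_state(fire_bearing_deg, fire_range_m, accel_6dof):
    """accel_6dof: [ax, ay, az, roll_acc, pitch_acc, yaw_acc] from the ship's IMU."""
    return np.array([fire_bearing_deg, fire_range_m, *accel_6dof], dtype=np.float32)

def aim_step(policy, state, pan_deg, tilt_deg, max_step_deg=2.0):
    """One tick: the policy outputs bounded pan/tilt corrections for the nozzle."""
    d_pan, d_tilt = np.clip(policy(state), -max_step_deg, max_step_deg)
    return pan_deg + d_pan, tilt_deg + d_tilt
```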
AI, autonomous-systems, firefighting-technology, robotics, sensors, reinforcement-learning, maritime-safety
XPENG Unveils A868: A Leap Toward Long-Range Flying Mobility - CleanTechnica
At XPENG Motors’ 2025 AI Technology Day in Guangzhou, the company’s low-altitude mobility division, XPENG Aridge, unveiled the A868, a vertical take-off and landing (VTOL) fixed-wing flying car designed for long-range intercity travel. Unlike typical flying car concepts, the A868 emphasizes practicality and range, featuring an aviation-grade extended-range hybrid system capable of traveling over 500 kilometers at speeds up to 360 km/h. Its six-person cabin targets business travelers and air mobility services, aiming to offer a more efficient and flexible alternative to cars and high-speed rail. The vehicle’s fully vertical take-off and landing capability requires minimal space—potentially only half a basketball court—enabling operations from rooftops, parking lots, or small helipads. XPENG positions the A868 as nearing mass production but acknowledges the critical importance of safety. The aircraft incorporates a six-axis, six-propeller, two-power-channel design to ensure continued flight even if a
robot, AI, flying-car, VTOL, urban-air-mobility, hybrid-system, intercity-travel
World's first AI firefighting system extinguishes oil fires on ships
The Korea Institute of Machinery and Materials (KIMM) has developed the world’s first AI-driven autonomous fire suppression system specifically designed to detect and extinguish oil fires aboard naval vessels, even under challenging sea conditions. Utilizing reinforcement learning, the system compensates for ship motion by continuously adjusting its nozzle aiming angle based on acceleration data, enabling it to accurately target fire sources up to 24 meters away. Unlike traditional systems that flood entire compartments, KIMM’s technology precisely directs foam only at confirmed fire locations, reducing unnecessary damage from false alarms. The system integrates sensors, fire monitors, and an AI-based control unit that verifies fire authenticity with over 98% accuracy and adapts to sea states of level 3 or higher. Extensive testing was conducted in a full-scale simulation facility replicating ship compartments and various fire scenarios, including open-area and shielded oil fires typical on aircraft carriers. Subsequent real-ship trials aboard the ROKS Ilchulbong amphibious assault ship demonstrated the system
robot, AI, autonomous-systems, fire-suppression, reinforcement-learning, maritime-safety, Korea-Institute-of-Machinery-and-Materials
GE engine to power Shield AI's new X-BAT autonomous fighter jets
US aerospace company GE Aerospace has entered into a Memorandum of Understanding with Shield AI to provide propulsion for Shield AI’s new autonomous fighter jet, the X-BAT. The X-BAT is an AI-piloted, vertical take-off and landing (VTOL) fighter designed for deployment in contested and communication-limited environments. It is powered by GE’s F110-GE-129 engine, a highly reliable powerplant with over 11 million flight hours since its introduction in the 1980s. This engine features an advanced Axisymmetric Vectoring Exhaust Nozzle (AVEN) that enables the thrust vectoring necessary for the X-BAT’s VTOL capabilities. Unveiled in October 2023, the X-BAT combines VTOL functionality with a range exceeding 2,000 nautical miles and can carry a full mission payload. Shield AI’s proprietary Hivemind autonomy software allows the jet to operate independently or as a drone wingman alongside piloted aircraft. The compact design enables
robot, autonomous-vehicles, AI, aerospace-engineering, propulsion-systems, VTOL, defense-technology
Google Maps gets Gemini AI so drivers can talk while navigating
Google Maps is integrating Google’s Gemini AI to offer a more natural, hands-free driving experience that allows users to interact with the app via conversational voice commands. This upgrade enables drivers to ask for directions, find places along their route, locate EV chargers, share their ETA, and even add calendar events without touching their device. The AI assistant can handle complex queries such as finding budget-friendly vegan restaurants nearby, checking parking availability, or providing popular dish recommendations. It also facilitates real-time road incident reporting, enhancing safety and awareness on the road. The Gemini-powered features will roll out on Android and iOS soon, with Android Auto support coming later. In addition to voice interaction, Google Maps is improving navigation by incorporating real-world landmarks into directions. Instead of distance-based prompts, users will receive instructions like “turn right after the Thai Siam Restaurant,” with landmarks highlighted on the screen. This enhancement relies on analyzing data from 250 million places combined with Street View imagery to identify the most visible landmarks, making navigation more
IoT, AI, Smart-Navigation, Electric-Vehicles, Voice-Assistant, Traffic-Alerts, Mobile-Technology
AI mapping system builds 3D maps in seconds for rescue robots
MIT researchers have developed an advanced AI system that enables robots to generate detailed 3D maps of complex environments within seconds, significantly enhancing the capabilities of search-and-rescue robots in disaster scenarios. The system integrates machine learning with classical computer vision techniques to process an unlimited number of images from a robot’s onboard cameras, producing accurate 3D reconstructions while simultaneously estimating the robot’s position in real time. Unlike traditional simultaneous localization and mapping (SLAM) methods, which struggle in crowded or visually complex settings and require pre-calibrated cameras, this new approach divides scenes into smaller “submaps” that are incrementally created, aligned, and stitched together into a coherent 3D model, allowing rapid movement without sacrificing spatial accuracy. A key innovation was addressing distortions introduced by machine-learning models in the submaps, which hindered their alignment using standard geometric transformations. By incorporating a mathematical framework from classical geometry, the team corrected these deformations to ensure consistent alignment of submaps. This hybrid approach,
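A toy version of the submap idea described above: estimate a rigid transform aligning each new submap to the previous one from matched points, then compose the transforms into a single global frame. This omits the distortion correction the MIT work adds for learned reconstructions and assumes point correspondences are already given.

```python
# Toy submap stitching: pairwise rigid alignment (Kabsch) plus transform composition.
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t with R @ src_i + t ≈ dst_i."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def stitch(submaps, correspondences):
    """Place each submap in the frame of the first by composing pairwise alignments."""
    R_g, t_g = np.eye(3), np.zeros(3)
    world = [submaps[0]]
    for cloud, (src_pts, dst_pts) in zip(submaps[1:], correspondences):
        R, t = rigid_align(src_pts, dst_pts)       # new submap -> previous submap
        R_g, t_g = R_g @ R, R_g @ t + t_g          # compose into the global frame
        world.append(cloud @ R_g.T + t_g)
    return np.vstack(world)
```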
robotics, AI, 3D-mapping, machine-learning, SLAM, computer-vision, rescue-robots
China builds high-precision robot arms for nuclear reactor upkeep
Researchers at China’s Hefei Institutes of Physical Science have developed an AI-driven robotic system capable of performing highly precise maintenance tasks inside fusion reactors, achieving accuracy within 0.1 mm. This system addresses the challenging “peg-in-hole” assembly task critical for replacing reactor components, which traditionally requires slow, manual intervention. Utilizing deep reinforcement learning (DRL), the robot mimics human hand-eye coordination by integrating data from a 2D camera and force/torque sensors, avoiding the unreliable 3D sensors that struggle in the reactor’s high-radiation, reflective environment. To support this AI, the team engineered robust hardware, including a novel robotic joint with an ultra-high reduction ratio (13,806:1) delivering 139 kNm torque, enabling precise manipulation of large reactor components. Additionally, they developed TCIPS, a Transformer-based AI perception model that processes 3D point cloud data to segment the reactor interior into basic geometric shapes, enhancing navigation and obstacle avoidance. These innovations collectively
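A quick sanity check on the quoted joint figures (a 13,806:1 reduction delivering 139 kNm), ignoring gearbox efficiency, which the summary does not give.

```python
# Back-of-the-envelope check on the joint figures quoted above (losses ignored).
reduction_ratio = 13_806          # gearbox reduction, output:input
output_torque_knm = 139           # kNm at the joint output

motor_side_torque_nm = output_torque_knm * 1_000 / reduction_ratio
print(f"~{motor_side_torque_nm:.1f} Nm at the motor side")   # ≈ 10.1 Nm
```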
robotics, AI, fusion-reactor-maintenance, deep-reinforcement-learning, robotic-arms, automation, nuclear-energy
Top 10 smartest robot dogs in the world redefining technology
The article highlights the top 10 smartest robot dogs worldwide, emphasizing their diverse applications and technological advancements that are redefining robotics. Initially developed for military and industrial use, these robotic dogs now serve in various roles such as industrial inspection, security, logistics, and companionship. Boston Dynamics’ Spot leads the pack with its agility, AI autonomy, and ability to operate in hazardous environments like oil rigs and nuclear plants, making it a vital tool for industrial automation. Similarly, ANYbotics’ ANYmal excels in extreme conditions, autonomously detecting faults in chemical plants and mines, enhancing safety and productivity. Other notable models include Unitree B2, which balances performance and affordability for logistics and monitoring tasks, and Ghost Robotics’ Vision 60, designed for defense and security with modular payload capabilities for surveillance in harsh terrains. On the companion side, Sony’s Aibo stands out by providing emotional support through interactive, lifelike behavior, catering to households and individuals unable to keep real pets. Collectively, these robot dogs
robot, robotics, robot-dogs, industrial-automation, AI, autonomous-navigation, inspection-robots
NVIDIA, Qualcomm join U.S., Indian VCs to help build India’s next deep tech startups
NVIDIA and Qualcomm Ventures have joined a coalition of U.S. and Indian investors to support India’s emerging deep tech startup ecosystem. This coalition, launched in September and led by Celesta Capital, includes major venture firms from both countries and has committed over $850 million in capital. The initiative aligns with India’s new ₹1 trillion (approximately $12 billion) Research, Development and Innovation (RDI) scheme, aimed at accelerating innovation in sectors like energy, quantum computing, robotics, space tech, biotech, and AI. The coalition plans to invest capital, provide mentorship, and offer network access to Indian deep-tech startups over the next five to ten years, while also collaborating with the Indian government on policy initiatives. India’s startup ecosystem, previously focused on SaaS and Western business models, is now shifting toward tackling complex, infrastructure-scale challenges such as satellite launches, electric transportation, and semiconductor design. Despite this growing focus, funding for deep tech remains limited due to longer development timelines and higher risks compared
robot, energy, materials, deep-tech-startups, semiconductor, quantum-computing, AI
Tesla’s Master Plan 4 still lacks specifics ahead of $1T Musk pay vote
Tesla’s recently published fourth “Master Plan” aims to promote “sustainable abundance” through future products but remains notably vague and lacking in concrete details. Despite this, Tesla is heavily leveraging the plan to persuade shareholders to approve a historic $1 trillion compensation package for CEO Elon Musk at the company’s upcoming annual meeting. The plan has drawn criticism for its imprecision, including from Tesla fans and Musk himself, who acknowledged the need for more specifics but has yet to update the plan. Unlike previous Master Plans, which outlined clear goals and tangible initiatives, Master Plan IV offers broad, aspirational themes without the concrete milestones that characterized earlier versions. Tesla’s leadership, including board chair Robyn Denholm and design chief Franz von Holzhausen, have consistently referenced the plan in communications to shareholders as a key justification for Musk’s pay package. However, they have avoided providing detailed explanations or clarifications about the plan’s content. Musk has focused more on promoting the compensation vote and other unrelated topics rather than elaborating on
robot, energy, AI, Tesla, sustainable-energy, autonomous-vehicles, robotics
Nvidia, Deutsche Telekom strike €1B partnership for a data center in Munich
Nvidia and Deutsche Telekom have announced a €1 billion partnership to build a new data center in Munich, dubbed the “Industrial AI Cloud.” This facility will deploy over 1,000 Nvidia DGX B200 systems and RTX Pro Servers equipped with up to 10,000 Blackwell GPUs to deliver AI inferencing and related services to German companies while adhering to German data sovereignty laws. Early collaborators include Agile Robots, which will assist in server rack installation, and Perplexity, which plans to offer localized AI inferencing services. Deutsche Telekom will provide the physical infrastructure, while SAP will contribute its Business Technology platform and applications, targeting industrial use cases such as digital twins and physics-based simulations. The project aligns with broader European efforts to reduce dependence on foreign technology infrastructure and promote domestic AI capabilities, although funding for AI in the EU remains significantly lower than in the U.S. Unlike the EU’s AI gigafactory initiative, this data center is a separate endeavor expected to become operational in early 2026
robot, AI, data-center, industrial-AI, Deutsche-Telekom, Nvidia, digital-twins
Elon Musk suggests AI satellites could dial down global warming
Elon Musk has publicly endorsed the concept of space-based solar radiation management (SRM) as a potential tool to combat global warming. SRM involves reflecting a portion of the Sun’s rays away from Earth to reduce global temperatures, and Musk suggested that a constellation of AI-powered satellites could make precise adjustments to the amount of solar energy reaching the planet. This idea, which merges climate science with aerospace engineering, has divided the scientific community due to its technical complexity and uncertain environmental impacts. While Musk’s position carries significant weight given SpaceX’s extensive satellite infrastructure, experts caution that deploying SRM at a planetary scale faces enormous technical, ethical, and geopolitical challenges. Potential risks include unpredictable disruptions to weather patterns and international conflicts over control of such technology. Although startups have begun experimenting with various geoengineering approaches, these remain largely theoretical and far from practical implementation. Musk’s involvement, however, signals growing interest in radical climate interventions as global temperatures continue to rise and traditional emissions reduction efforts fall short.
AI, satellites, energy, climate-change, solar-radiation-management, geoengineering, space-technology
How AI keeps critical infrastructure safe in harsh environments
The article "How AI keeps critical infrastructure safe in harsh environments" discusses the growing role of AI-driven autonomous security systems in protecting critical infrastructure such as telecommunications, pipelines, and energy grids, particularly in extreme and hazardous conditions. It highlights the dual challenge faced by security teams: ensuring physical hardware resilience against environmental hazards and enabling intelligent software capable of analyzing data, coordinating system responses, and alerting human operators only when necessary. Hardware solutions include engineering innovations like heat-resistant coatings and minimizing moving parts, while software advancements rely heavily on AI, deep learning, and IoT technologies to automate threat detection and response. The adoption of AI in these sectors is accelerating, with a significant portion of professionals in oil, gas, energy, and utilities industries actively planning or exploring AI-driven operations to enhance safety and efficiency. Surveys cited in the article reveal that nearly half of oil and gas professionals and a majority of energy and utility companies see AI as a critical tool for improving operational safety and threat detection. AI-powered systems are increasingly capable
IoT, AI, autonomous-security-systems, critical-infrastructure-protection, threat-detection, extreme-environments, cybersecurity
This Toyota self-driving bubble EV transports kids across town alone
At the Japan Mobility Show 2025, Toyota unveiled Mobi, a fully autonomous electric bubble car designed specifically to transport elementary school children across town without adult supervision. As part of Toyota’s “Mobility for All” initiative, Mobi aims to expand independent travel options for young children by leveraging an AI-driven system that controls navigation, speed, traffic management, and obstacle detection. The vehicle is equipped with multiple sensors and cameras to maintain situational awareness, while an integrated AI assistant named UX Friend communicates with the child passenger, providing instructions and engagement throughout the journey. The Mobi features a distinctive rounded design with a gullwing canopy and high-visibility colors to enhance safety and presence in traffic. Its interior is tailored for single-child occupancy, using comfortable, textured materials to create a secure and inviting environment. Although technical specifications remain undisclosed, the vehicle is described as compact and lightweight, optimized for urban use. However, despite its innovative approach, Mobi faces significant regulatory challenges, as current laws generally
robot, autonomous-vehicles, electric-vehicles, AI, child-transportation, sensors, mobility-technology
This Toyota self-driving bubble EV transports kids across town alone
At the Japan Mobility Show 2025, Toyota unveiled the Mobi, a fully autonomous electric bubble car designed specifically to transport elementary school-aged children across town without adult supervision. As part of Toyota’s “Mobility for All” initiative, the Mobi aims to expand independent travel options for young children by leveraging an AI-driven system that controls all driving functions, including navigation, speed, and obstacle detection. The vehicle is equipped with exterior sensors and cameras to monitor its surroundings and ensure safe operation in real-world traffic conditions. The Mobi features a distinctive rounded design with a gullwing canopy and high-visibility colors to enhance safety and recognition on the road. Inside, the single-occupant cabin is tailored for children’s comfort, using soft materials to create a secure and inviting environment. An integrated AI assistant named UX Friend interacts with the child passenger, providing instructions and engagement throughout the journey. While technical specifications remain limited, the prototype emphasizes a lightweight, compact form suited for urban use. However, significant
robot, autonomous-vehicles, electric-vehicles, AI, child-transportation, sensors, mobility-technology
Rising energy prices put AI and data centers in the crosshairs
The article highlights growing consumer concerns that the rapid expansion of AI-driven data centers is contributing to rising electricity prices in the United States. Data centers currently consume about 4% of U.S. electricity—more than double their share from 2018—and this is expected to increase to between 6.7% and 12% by 2028. While electricity demand overall had been stable for over a decade, the surge in data center energy use is notable. Renewable energy sources like solar and wind have helped meet rising demand, favored by tech companies for their low cost and quick deployment. However, the future growth of renewables is threatened by potential political actions, such as a predicted Republican repeal of key parts of the Inflation Reduction Act. Meanwhile, natural gas, another preferred energy source for data centers, is facing supply challenges. Although production has increased, much of the new supply is being exported rather than used domestically. New natural gas power plants face long construction times and equipment backlogs, delaying
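To put the quoted shares in absolute terms, a rough calculation assuming U.S. electricity consumption of about 4,000 TWh per year (an outside approximation; the article itself only gives percentages).

```python
# Rough scale check on the shares quoted above, assuming ~4,000 TWh/yr of total
# U.S. electricity consumption (approximation, not a figure from the article).
us_total_twh = 4_000

today = 0.04 * us_total_twh                       # 4% share today
low_2028 = 0.067 * us_total_twh                   # low end of 2028 projection
high_2028 = 0.12 * us_total_twh                   # high end of 2028 projection
print(f"today ≈ {today:.0f} TWh; 2028 ≈ {low_2028:.0f}–{high_2028:.0f} TWh")
```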
energy, data-centers, AI, renewable-energy, solar-power, natural-gas, electricity-consumption
AI researchers ’embodied’ an LLM into a robot – and it started channeling Robin Williams
AI researchers at Andon Labs conducted an experiment embodying state-of-the-art large language models (LLMs) into a simple vacuum robot to evaluate how ready these models are for robotic applications. They programmed the robot with various LLMs, including Gemini 2.5 Pro, Claude Opus 4.1, GPT-5, and others, and tasked it with a multi-step challenge: find and identify butter placed in another room, locate a moving human recipient, deliver the butter, and wait for confirmation of receipt. The goal was to isolate the LLM’s decision-making capabilities without the complexity of advanced robotic mechanics. The results showed that while some models like Gemini 2.5 Pro and Claude Opus 4.1 performed best, their overall accuracy was still low—around 40% and 37%, respectively. Human testers outperformed all models, scoring about 95%, though even humans struggled with waiting for task confirmation. The researchers also observed the robot’s internal monologue
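The evaluation pattern described above boils down to letting an LLM pick one discrete robot action per step; a schematic of that loop follows, with the action set, prompt, and llm() interface as illustrative assumptions rather than Andon Labs' actual harness.

```python
# Schematic "LLM drives a simple robot" step (illustrative only).
ACTIONS = ["move_forward", "turn_left", "turn_right", "dock",
           "announce_arrival", "wait"]

def control_step(llm, observation: str, goal: str) -> str:
    prompt = (
        f"Goal: {goal}\n"
        f"Current observation: {observation}\n"
        f"Choose exactly one action from {ACTIONS} and reply with it only."
    )
    action = llm(prompt).strip()
    # Fall back to a safe action when the model returns something unusable.
    return action if action in ACTIONS else "wait"
```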
robot, AI, large-language-models, robotics, automation, vacuum-robot, robotic-decision-making
Hyundai Motor Group Announces NVIDIA Blackwell AI Factory to Power Fleet of AI-Driven Mobility Solutions - CleanTechnica
Hyundai Motor Group and NVIDIA have announced a deepened collaboration to establish an AI factory powered by NVIDIA’s Blackwell AI infrastructure, aimed at accelerating innovation in autonomous vehicles, smart factories, and robotics. This partnership involves co-developing core physical AI technologies and integrated AI model training, validation, and deployment using 50,000 NVIDIA Blackwell GPUs. The initiative supports the Korean government’s plan to build a national physical AI cluster, with a combined investment of approximately $3 billion to advance Korea’s AI ecosystem. Key projects include the creation of Hyundai’s Physical AI Application Center, NVIDIA AI Technology Center, and physical AI data centers, alongside efforts to nurture local AI talent through collaboration with NVIDIA’s engineers. The collaboration builds on previous joint efforts and marks a shift from adopting advanced AI software to innovating physical AI technologies for mobility solutions and next-generation manufacturing. Hyundai is leveraging NVIDIA’s Omniverse and Cosmos platforms to develop digital twins of car factories and robotics, while utilizing NVIDIA Nemotron and NeMo
robot, AI, autonomous-vehicles, smart-factories, NVIDIA, Hyundai-Motor-Group, mobility-solutions
Uber, NVIDIA, & Stellantis Team Up On Robotaxis & AI - CleanTechnica
Uber has announced a strategic partnership with NVIDIA and Stellantis to accelerate the development of robotaxis and autonomous delivery fleets. Leveraging NVIDIA’s AI architecture, including the DRIVE AGX Hyperion platform and DriveOS operating system designed for Level 4 autonomy, Uber aims to expand its global autonomous vehicle fleet to 5,000 fully self-driving vehicles, though no specific timeline has been provided. This collaboration will utilize over 3 million hours of robotaxi-specific driving data for training and validation, with NVIDIA supplying GPUs and tools for data management, simulation, and continuous improvement of the autonomy software stack. The partnership is part of a broader ecosystem involving multiple companies such as Aurora, Motional, Waymo, and others, reflecting the increasingly competitive and crowded landscape of self-driving technology development. Both Uber and NVIDIA emphasize the transformative potential of autonomous mobility for urban environments and highlight their combined capabilities in AI and data processing as critical to advancing profitable deployment of autonomous vehicles. The article also raises questions about market dynamics, pondering whether
robot, autonomous-vehicles, AI, NVIDIA-DRIVE, self-driving-technology, robotaxis, Uber
Columbia University reports first pregnancy using AI sperm recovery
Columbia University Fertility Center has reported the first successful pregnancy using an innovative AI-guided sperm recovery technique called STAR (Sperm Tracking and Recovery). This non-invasive method was developed to address male-factor infertility in men with azoospermia, a condition characterized by little or no sperm and affecting 10-15% of infertile men. Traditional sperm retrieval methods, including surgical extraction and manual sample inspection, often fail or carry risks such as inflammation and hormonal issues. The STAR method combines advanced imaging, artificial intelligence, microfluidics, and robotics to scan millions of images of a semen sample, identify rare viable sperm cells, and gently isolate them for use in fertility treatments. In the reported case, a patient who had struggled with infertility for nearly 20 years and undergone multiple unsuccessful IVF cycles and surgical sperm retrievals provided a semen sample that the STAR system scanned, analyzing 2.5 million images over two hours. The AI successfully located two viable sperm cells, which were used to create embryos
robot, AI, microfluidics, fertility-technology, medical-robotics, imaging-technology, artificial-intelligence
YC alum Adam raises $4.1M to turn viral text-to-3D tool into AI copilot
YC alum startup Adam has raised $4.1 million to develop its AI-powered text-to-3D modeling tool into a sophisticated AI copilot for professional computer-aided design (CAD) workflows. After the viral success of its initial app—which generated over 10 million social media impressions and attracted significant investor interest without meetings—Adam chose lead investor TQ due to shared vision and alignment on a consumer-first, then enterprise, product roadmap. Initially targeting makers without CAD expertise, Adam plans to launch its AI copilot by the end of the year, incorporating multimodal interactions like direct manipulation of 3D objects alongside conversational inputs to better support professional users. The startup, founded by UC Berkeley Master of Design graduates Zach Dive (CEO) and Aaron Li (CPO), is focused on helping mechanical engineers streamline repetitive CAD tasks rather than replacing them. Adam aims to enable feature-rich parametric designs within popular CAD programs, starting with mechanical engineering applications. The company has attracted a broad user base with tens of
robotAICAD3D-modelingcomputer-aided-designAI-copilotstartupNvidia expands AI ties with Hyundai, Samsung, SK, Naver
Nvidia CEO Jensen Huang is visiting South Korea to announce expanded collaborations with major Korean technology companies—Hyundai Motor, Samsung, SK Group, and Naver—alongside the South Korean government. This partnership aims to significantly boost South Korea’s AI infrastructure and physical AI capabilities, with the country securing over 260,000 of Nvidia’s latest GPUs. Approximately 50,000 GPUs will support public initiatives, including a national AI data center, while the remaining GPUs will be allocated to leading companies to drive AI innovation in manufacturing and industry-specific AI model development. This move follows recent U.S. technology agreements with Japan and South Korea to enhance cooperation on emerging technologies such as AI, semiconductors, quantum computing, biotech, and 6G. Key collaborations include Samsung and Nvidia’s joint effort to build an AI Megafactory that integrates AI across semiconductor, mobile device, and robotics manufacturing using over 50,000 Nvidia GPUs and the Omniverse platform. They are also co-developing AI
AIroboticssmart-factoriesautonomous-mobilitysemiconductor-manufacturingAI-infrastructureGPU-technologyScenes from TechCrunch Disrupt
The article provides a vivid snapshot of key moments and personalities at this year’s TechCrunch Disrupt event, highlighting the energy and diversity of discussions that took place. Notable speakers included Vinod Khosla, who challenged the notion that AI’s energy demands will doom climate efforts, emphasizing near-term potential for geothermal energy and expressing nuanced political views. Sequoia partner Roelof Botha offered practical advice to startup founders on fundraising timing and cautioned about government ownership in startups. The Battlefield competition winner, Kevin Damoa of Glīd Technologies, was celebrated, underscoring the event’s role in spotlighting emerging startups. Other highlights featured entrepreneurs and industry leaders sharing insights and sparking conversations. Roy Lee of Cluely entertained with unconventional marketing wisdom, while former NBA player Tristan Thompson discussed the integrity of web3 platforms tied to sports tokens, raising provocative questions about the NBA’s referees. Wayve CEO Alex Kendall revealed ongoing talks for a major funding round, signaling strong investor interest in autonomous
energyAIstartupsself-driving-carsfusion-energygeothermal-energytechnology-innovationRobotec.ai works with AMD, Liquid AI to apply agentic AI to warehouse robots - The Robot Report
Robotec.ai, in collaboration with Liquid AI and AMD, has demonstrated a fully autonomous warehouse robot powered by agentic AI that dynamically plans and executes tasks in real time without relying on hard-coded scripts. The robot operates on AMD Ryzen AI processors and Liquid AI’s LFM2 vision language models (VLMs), which integrate perception, reasoning, and natural language understanding. This enables the robot to interpret commands, detect safety hazards like spills or blocked exits, and autonomously take corrective actions. Extensive simulation testing has enhanced system performance and validated embedded AI on real hardware, reducing the risks and costs associated with physical testing. The autonomous mobile robot (AMR) showcased at ROSCon 2025 in Singapore operates in a mixed-traffic warehouse environment, completing human-specified tasks via natural language and adapting to changing conditions through replanning. Liquid AI’s LFM2-VL model, optimized for AMD hardware, processes visual scenes, performs context-aware reasoning, and plans goal-driven actions entirely on-device. Robot
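The "no hard-coded scripts" claim boils down to a perceive-reason-act loop: a vision language model looks at the current camera frame plus the human instruction, flags hazards, and proposes the next action, which the robot executes before the loop repeats. The toy sketch below illustrates that loop only; `query_vlm` and the `Robot` class are invented stand-ins, not Liquid AI's LFM2-VL API or Robotec.ai's software.

```python
# Toy sketch of an agentic perceive-reason-act loop like the one described above.
# `query_vlm` and `Robot` are hypothetical stand-ins, not Liquid AI's LFM2-VL API
# or Robotec.ai's actual stack.
from dataclasses import dataclass, field


@dataclass
class Robot:
    completed: list[str] = field(default_factory=list)

    def capture_frame(self) -> bytes:
        return b"<camera frame>"        # placeholder image bytes

    def execute(self, action: str) -> None:
        self.completed.append(action)


def query_vlm(frame: bytes, instruction: str) -> dict:
    """Hypothetical on-device VLM call: returns a hazard flag and a next action."""
    return {"hazard": None, "next_action": f"navigate toward goal for: {instruction}"}


def run_task(robot: Robot, instruction: str, max_steps: int = 5) -> None:
    for _ in range(max_steps):
        frame = robot.capture_frame()
        plan = query_vlm(frame, instruction)
        if plan["hazard"]:
            # Replan around spills, blocked exits, etc. instead of following a script.
            robot.execute(f"report and avoid hazard: {plan['hazard']}")
            continue
        robot.execute(plan["next_action"])


if __name__ == "__main__":
    bot = Robot()
    run_task(bot, "move pallet 12 to staging area B")
    print(bot.completed)
```

Because the model is re-queried every cycle, a new hazard or a changed instruction alters the next action without any scripted branch for that situation, which is the behavior the demo emphasizes.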
robotAIwarehouse-automationautonomous-robotsAMD-Ryzen-AILiquid-AIrobotics-simulationThis robot kitchen cooks hot meal every 30 seconds with no human staff
The Circus Autonomy One (CA-1) represents a groundbreaking advancement in food service automation, designed to fully replace traditional kitchen roles within a compact seven-square-meter glass enclosure. Equipped with dual robotic arms, climate-controlled ingredient silos, a heating element, and an integrated dishwasher, the CA-1 autonomously manages the entire food production process—from inventory handling to meal preparation and delivery—without any human staff. Powered by the proprietary AI system CircusOS, it can produce up to 120 meals per hour, completing a hot meal approximately every 30 seconds, significantly surpassing the output capacity of many similarly sized human-staffed kitchens. Currently in commercial pilot operation within REWE supermarkets in Germany, the CA-1 addresses labor shortages and aims to increase operational efficiency by eliminating the need for prep cooks, line cooks, expeditors, and dishwashers. Its closed-loop system ensures food safety and efficiency through electronic ingredient tracking and automated cooking and cleaning processes. Beyond retail, Circus SE envisions
robotautomationAIrobotic-kitchenfood-service-technologylabor-eliminationautonomous-systemsHow NVIDIA is bringing physical AI to its industrial customers - The Robot Report
NVIDIA is advancing the use of physical AI in industrial settings by enabling leading manufacturers and robotics companies to build digital twins of their factories using its Omniverse platform. Major companies such as Siemens, Foxconn Fii, Wistron, and Caterpillar are leveraging Omniverse technologies to create realistic 3D models integrated with live operational data, facilitating factory design, simulation, optimization, and real-time monitoring. Siemens is pioneering digital twin software compatible with NVIDIA’s Mega Omniverse Blueprint, currently in beta and integrated into the Siemens Xcelerator platform. Additionally, robot vendors like FANUC and Foxconn support 3D, OpenUSD-based digital twins, allowing manufacturers to easily incorporate robotic equipment into their virtual factory environments. At the GTC D.C. event, NVIDIA also introduced the Blackwell-powered IGX Thor robotics processor, designed to bring real-time physical AI to edge devices with high-speed sensor processing and enterprise-grade reliability. This platform is already adopted by companies including Diligent Robotics
roboticsAIdigital-twinsNVIDIA-Omniverseindustrial-automationcollaborative-robotsfactory-simulationGoogle partners with Ambani’s Reliance to offer free AI Pro access to millions of Jio users in India
Google has partnered with Mukesh Ambani-led Reliance Industries to offer its AI Pro subscription free for 18 months to eligible Jio 5G users in India, initially targeting those aged 18 to 25 before expanding nationwide. This collaboration provides access to Google’s Gemini 2.5 Pro AI model, enhanced AI image and video generation tools, expanded study and research capabilities via Notebook LM, and 2 TB of cloud storage across Google services. Valued at approximately $396, the offer aims to accelerate AI adoption among India’s vast internet user base and reflects Google’s strategy to deepen its AI presence in emerging markets. Beyond consumer access, Reliance and Google Cloud are collaborating to expand AI infrastructure in India, with Reliance Intelligence becoming a strategic partner to promote Gemini Enterprise among Indian organizations and develop AI agents on the platform. This partnership complements Reliance’s broader AI initiatives, including a joint venture with Meta to strengthen AI infrastructure through a ₹8.55 billion ($100 million) investment.
IoTAI5Gcloud-computingtelecommunicationsartificial-intelligencetech-partnershipsSan Francisco mayor: ‘We should be the testbed for emerging tech’
San Francisco Mayor Daniel Lurie expressed strong support for the city to serve as a leading testbed for emerging technologies, including autonomous vehicles, artificial intelligence, and healthcare tech. Speaking at TechCrunch Disrupt, Lurie highlighted the presence of Waymo and Zoox robotaxis on city streets and welcomed the upcoming entry of Uber’s autonomous vehicle services through partnerships with Lucid and Nuro. He emphasized that while San Francisco embraces innovation, safety remains a priority, noting that autonomous vehicle regulation is managed at the state level by the California Department of Motor Vehicles and the California Public Utilities Commission. Lurie contrasted San Francisco’s openness with other cities like Boston, which have considered banning autonomous vehicles, and pointed to Waymo’s proven safety record and its appeal to tourists as positive examples. Despite some opposition, such as from the Teamsters Union concerned about job impacts from self-driving trucks, Lurie maintained an optimistic stance on technology’s potential to bring jobs and investment to the city. He concluded by affirming San Francisco
robotautonomous-vehiclesrobotaxisemerging-technologyAItransportation-technologyurban-mobilityTechCrunch Disrupt 2025: How to watch the Startup Battlefield finale, Cluely, Solana, SF’s Mayor
TechCrunch Disrupt 2025 offers a dynamic lineup of events including keynote talks, networking, workshops, and the highly anticipated Startup Battlefield finale. Attendees can still secure last-minute tickets at a 50% discount or watch the Disrupt Stage livestream on YouTube, which runs from 10 a.m. to 4:15 p.m. PT. Highlights include San Francisco Mayor Daniel Lurie discussing the city’s strategy to revitalize its startup ecosystem, investor Elad Gil sharing insights from his early bets on AI startups, and Aaron Levie of Box providing lessons on innovation and reinvention in enterprise tech amid AI advancements. The Startup Battlefield final features five global startups competing for a $100,000 prize and the prestigious Battlefield Cup, with alumni updates from past competitors like geCKo Materials. Other notable sessions include Solana co-founder Anatoly Yakovenko on the future of crypto and blockchain scalability, Kevin Rose reflecting on his journey from Digg to investing in consumer tech and crypto, and
materialsstartupsinnovationtechnologyAIblockchainenergyTechCrunch Disrupt 2025: Day 3
TechCrunch Disrupt 2025 concluded its third and final day at Moscone West in San Francisco with a dynamic agenda featuring prominent industry leaders and innovators. Attendees had the opportunity to engage with influential figures such as Rohit Patel from Meta Superintelligence Labs, Kirsten Green of Forerunner, and NBA Champion turned fintech entrepreneur Tristan Thompson. The day’s highlights included the much-anticipated announcement of the Startup Battlefield 200 winner, alongside a robust Expo Hall showcasing groundbreaking technologies and hands-on sessions designed to foster networking and knowledge exchange. The event’s programming was rich with sessions across multiple stages, with a strong emphasis on artificial intelligence and startup funding strategies. On the AI Stage, speakers explored topics ranging from AI-driven content creation and trustworthy AI models for physical applications to AI’s role in transportation, national security, and human relationships. Notable presentations included insights from Google Cloud’s CTO Will Grannis, Hugging Face’s Thomas Wolf, and Character.AI’s CEO Karandeep Anand. Meanwhile
robotAIautomationsmart-transportationmachine-learningroboticsartificial-intelligenceCan Waymo Handle the Snow? - CleanTechnica
The article from CleanTechnica discusses Waymo’s efforts to enable its autonomous robotaxis to operate safely and reliably in snowy and winter weather conditions, which have historically posed significant challenges for self-driving technology. While robotaxis have mostly been deployed in snow-free cities, Waymo has been proactively addressing the complexities of winter driving by developing a systematic approach that includes understanding the diverse challenges snow presents, designing adaptable solutions, rigorously validating capabilities, and scaling responsibly. The company has accumulated tens of thousands of miles driving in some of the snowiest U.S. regions, such as Upstate New York and Michigan’s Upper Peninsula, allowing its AI to learn from real-world winter conditions ranging from light dustings to whiteouts and icy roads. Waymo’s approach centers on creating a single autonomous system that can perform consistently across varied environments, from foggy San Francisco to snowy Denver. The Waymo Driver integrates multiple sensors—cameras, radar, and lidar—with automated cleaning and heating elements to maintain sensor clarity in inc
robotautonomous-vehiclesAIWaymowinter-drivingsnow-navigationsmart-transportationNVIDIA, Oracle team up to build US’ biggest AI supercomputer
NVIDIA and Oracle have partnered with the U.S. Department of Energy (DOE) to build the nation’s largest AI supercomputer, named Solstice, featuring 100,000 NVIDIA Blackwell GPUs. Alongside Solstice, a companion system called Equinox with 10,000 GPUs will also be deployed at Argonne National Laboratory. Together, these systems will deliver a combined 2,200 exaflops of AI performance, making them the most powerful AI infrastructure developed for the DOE. They aim to accelerate scientific research and innovation across diverse fields such as climate science, healthcare, materials science, and national security by enabling researchers to train advanced AI models using NVIDIA’s Megatron-Core library and TensorRT inference software. This initiative is part of the DOE’s public-private partnership model to reinforce U.S. technological leadership in AI and supercomputing. The collaboration is expected to enhance R&D productivity and foster breakthroughs by integrating these supercomputers with DOE experimental facilities like the Advanced Photon Source. Oracle
energysupercomputerAIDepartment-of-EnergyNVIDIAOraclescientific-researchWaabi unveils autonomous truck made in partnership with Volvo
Waabi, a self-driving truck startup backed by Uber and Nvidia, has unveiled the Volvo VNL Autonomous truck, developed in partnership with Volvo. This launch comes eight months after Waabi announced plans to build a custom truck using Volvo’s autonomy platform combined with Waabi’s proprietary software stack. Waabi CEO Raquel Urtasun highlighted the company’s potential to be the first to commercialize fully autonomous trucks without a human safety driver or observer, contrasting with competitor Aurora, which currently operates with a human observer onboard. Waabi’s system, called the Waabi Driver, is an end-to-end AI model designed to enable scalable autonomous driving across various geographies, including highways and surface streets, aiming for broad U.S. deployment in the coming years. The Volvo VNL Autonomous truck integrates Waabi’s technology, including its sensor suite, compute hardware, and software, and is built with redundancies to safely operate without a human driver. Waabi emphasizes the lightweight, factory-integrated sensor poles as a
robotautonomous-vehiclesself-driving-trucksAIVolvoWaabitransportation-technologyMoxi 2.0 mobile manipulator is built for AI, says Diligent Robotics - The Robot Report
Diligent Robotics has announced Moxi 2.0, the next-generation version of its mobile manipulator robot designed primarily for healthcare environments. Building on three years of real-world data from over 1.25 million hospital deliveries, Moxi 2.0 incorporates one of the largest datasets of human-robot interaction to date. The robot currently operates in more than 25 U.S. hospitals, assisting nurses and pharmacy staff by handling routine tasks such as delivering medications and lab samples, thereby improving workflow efficiency and reducing staff burnout. The upgraded Moxi 2.0, powered by NVIDIA Thor for enhanced AI compute, is designed to better navigate complex, dynamic indoor environments with improved reasoning, prediction, and adaptability, including pre-emptive navigation around obstacles like beds and wheelchairs. The new hardware platform of Moxi 2.0 is optimized for manufacturability and durability to support fleet expansion, with physical design improvements such as enhanced handles and servicing panels based on user feedback. Dilig
robotAImobile-manipulatorhealthcare-roboticshospital-automationNVIDIA-Thorhuman-robot-interactionRoelof Botha explains why Sequoia supports Shaun Maguire after COO quit
At TechCrunch Disrupt 2025, Sequoia Capital managing partner Roelof Botha publicly defended partner Shaun Maguire following controversy over Maguire’s inflammatory social media remarks about New York City mayoral candidate Zohran Mamdani. Maguire had called Mamdani an “Islamist” from a culture that “lies about everything,” sparking backlash from founders and tech professionals, including an open letter demanding Sequoia take action. The controversy intensified when Sequoia’s chief operating officer, Sumaiya Balbale, a practicing Muslim, resigned in protest of the firm’s decision not to discipline Maguire. Botha emphasized Sequoia’s commitment to free speech and diversity of opinion within the firm, highlighting that partners hold a wide range of political views and modes of expression. He described Maguire as a “spiky” personality with a technical background and strong ties to Elon Musk’s ventures, as well as emerging sectors like defense technology. While acknowledging that Maguire’s
robotautonomous-weaponsdefense-technologyNeuralinkSpaceXAItechnology-investmentsTechCrunch Disrupt 2025: How to watch Astro Teller, Startup Battlefield, and more live
TechCrunch Disrupt 2025 is taking place from October 27-29 in San Francisco, offering a rich lineup of speakers, workshops, networking events, and the highly anticipated Startup Battlefield pitch competition. For those unable to attend in person, the event’s Disrupt Stage will be livestreamed on YouTube, featuring prominent industry figures and startup finalists. The first day’s schedule includes two sessions of Startup Battlefield pitches judged by top venture capitalists and industry leaders, alongside keynote talks such as “The Self-Driving Reality Check” by Waymo Co-CEO Tekedra Mawakana, discussing the current state and challenges of autonomous vehicles. Other notable sessions on day one include Roelof Botha of Sequoia Capital sharing insights on AI, geopolitics, and capital trends, and Astro Teller from Alphabet’s X lab providing a rare look at moonshot projects, the company’s “fail fast” culture, and AI developments. The event continues on Tuesday with talks from investor Vinod K
robotautonomous-vehiclesAIstartuptechnologyinnovationelectric-trucksStarship Technologies obtains funding for autonomous deliveries across the U.S. - The Robot Report
Starship Technologies, a company founded in 2014 by Skype co-founders Ahti Heinla and Janus Friis, has raised $50 million in a Series C funding round, bringing its total investment to over $280 million. The San Francisco-based firm operates what it claims is the largest autonomous delivery network globally, with more than 2,700 robots completing over 9 million deliveries across 270+ locations in seven countries. Starship plans to expand its robotic delivery services from U.S. university campuses and European cities into broader North American urban markets, aiming to offer sub-30-minute deliveries to millions of consumers. The company emphasizes its progress in achieving SAE Level 4 autonomy, improving robot autonomy by double-digit percentages annually, and addressing challenges such as safety validation, regulatory compliance, all-weather reliability, and profitability at scale. Starship leverages a combination of classical algorithms, computer vision, and neural networks optimized for edge computing to enhance its robots' performance while maintaining rigorous safety standards.
robotautonomous-deliveryroboticsurban-logisticsAIedge-computingSAE-Level-4-autonomyThe 2025 Startup Battlefield Top 20 are here. Let the competition begin.
The 2025 Startup Battlefield has announced its Top 20 finalists, who will compete at TechCrunch Disrupt for a $100,000 prize and the prestigious Disrupt Cup. These startups represent cutting-edge innovation across diverse sectors including life sciences, climate tech, defense, robotics, mobility, compliance, cybersecurity, fintech, and hybrid work tools. The competition highlights companies that are not only early-stage startups but also pioneers shaping the future of technology and industry. Each finalist will have six minutes on the Disrupt Stage to showcase their breakthrough solutions. The semifinal rounds are scheduled for October 27-28, 2025, featuring sessions that spotlight a wide array of innovations. Notable finalists include MacroCycle Technologies, which upcycles plastic and textile waste into virgin-grade resin using a zero-carbon process; Miraqules, developing nano-biomaterials for rapid wound care; Nephrogen, leveraging AI to discover gene-delivery vectors for untreatable diseases; and RADiCAIT, applying AI to
robotIoTenergymaterialsAIautonomous-systemssustainable-technologyAugmentus gets funding to scale robotics software for high-mix production - The Robot Report
Augmentus, a robotics software company founded in 2019, has secured a strategic investment from Applied Ventures LLC to scale its solutions for high-mix, high-variability manufacturing. Traditionally, industrial automation has focused on high-volume, low-mix production, but Augmentus aims to enable factories to rapidly adapt to changing conditions by providing robots with advanced perception and adaptive motion capabilities. Their AutoPath robotics stack integrates 3D scanning, AI-driven automatic toolpath generation, and real-time adaptive robotic motion, allowing robots to handle part variations and process feedback autonomously without downtime. This no-code platform eliminates the need for expert programming, enabling rapid deployment and reconfiguration of robots in minutes rather than hours or days. The AutoPath system functions as both the "eyes and brains" of industrial robots, generating precise point clouds to capture intricate geometries and surface deviations, which are then used to dynamically adjust robotic paths for applications such as spraying, finishing, and welding. Augmentus serves industries including
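The "eyes and brains" idea—scan the actual part, compare it to the nominal program, and nudge the toolpath onto the measured surface—can be illustrated with a simple nearest-neighbor correction. This is a toy numpy sketch under that assumption, not Augmentus's AutoPath algorithm; the waypoint format and correction cap are invented for illustration.

```python
# Rough illustration of scan-driven toolpath adaptation: snap each nominal
# waypoint toward the nearest measured point from a 3D scan. A toy
# nearest-neighbor sketch, not Augmentus's AutoPath implementation.
import numpy as np


def adapt_toolpath(nominal_path: np.ndarray, scan_points: np.ndarray,
                   max_correction: float = 0.005) -> np.ndarray:
    """Shift each (x, y, z) waypoint toward the closest scanned surface point,
    capping the correction so part-to-part deviations are absorbed without
    wildly distorting the programmed path (units here are meters)."""
    adapted = []
    for waypoint in nominal_path:
        distances = np.linalg.norm(scan_points - waypoint, axis=1)
        nearest = scan_points[np.argmin(distances)]
        correction = nearest - waypoint
        norm = np.linalg.norm(correction)
        if norm > max_correction:                 # cap the per-waypoint shift
            correction = correction / norm * max_correction
        adapted.append(waypoint + correction)
    return np.array(adapted)


if __name__ == "__main__":
    nominal = np.array([[0.0, 0.000, 0.100], [0.0, 0.010, 0.100]])
    scan = np.array([[0.0, 0.000, 0.102], [0.0, 0.010, 0.099]])  # measured surface
    print(adapt_toolpath(nominal, scan))
```

A production system would work from dense point clouds and process-specific constraints (spray standoff, weld torch angle), but the core loop—measure, compare, correct—is the same.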
roboticsindustrial-automationAIadaptive-roboticsmanufacturing-technology3D-scanningrobotic-softwareSocial media round-up from #IROS2025 - Robohub
The 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2025) was held from October 19 to 25, 2025, in Hangzhou, China. The event featured a comprehensive program including plenary and keynote talks, workshops, tutorials, forums, competitions, and debates. An exhibition allowed companies and institutions to showcase their latest robotics hardware and software innovations. Social media coverage highlighted various activities and demonstrations, such as a modular legged wheel design by DirectDriveTech and a live demo of the G1 robot by Unitree Robotics. Notably, 16-year-old Jared K. Lepora was recognized as the youngest presenter, demonstrating a dexterous robotic hand in the Educational and Emotional Robots session. The conference brought together leading experts in robotics, AI, and intelligent systems to explore advancements at the human-robotics frontier. Highlights included discussions on integrating specialist and generalist approaches to physical AI and insights from conference chairs Professor Hesheng Wang and
roboticsintelligent-systemsAIrobotic-handmodular-robotsIROS2025physical-AIRoboticist Warns of Robot Bubble - CleanTechnica
The article discusses a recent warning from Rodney Brooks, a renowned roboticist and cofounder of iRobot, about a potential "robot bubble" fueled by excessive hype and investment in humanoid robots. In his article “Why Today’s Humanoids Won’t Learn Dexterity,” Brooks argues that despite significant funding from venture capitalists and major tech companies, current humanoid robots will not achieve human-like dexterity anytime soon. He emphasizes that while he remains optimistic about the future of robotics, the ambitious timelines proposed by figures like Tesla’s Elon Musk and Figure’s Brett Adcock—predicting significant humanoid robot capabilities within a few years—are unrealistic and reflect fantasy thinking. Brooks provides a historical overview of robotics, highlighting that humanoid robots are still in the early stages of the hype cycle, while AI is transitioning from peak hype toward a period of disillusionment. He discusses the technical challenges remaining, particularly in developing safe, two-legged humanoid robots and human-like dexterity in robotic
roboticshumanoid-robotsAIrobotic-dexterityrobot-bubbleneural-networkstechnology-hype-cycleTikTok robot star Rizzbot gave me the middle finger
The article recounts the author's unusual experience with Rizzbot, a popular humanoid robot known for its TikTok presence and charismatic persona, who unexpectedly sent the author a photo of itself giving the middle finger after the author missed a deadline to send interview questions. Rizzbot, which blends humor, flirting, and street-style charisma, has gained significant online attention but also embodies broader societal tensions around humanoid robots, including discomfort, privacy concerns, and job displacement fears. The author initially saw Rizzbot as a potential role model for making humanoids more approachable but was blocked after the incident, highlighting the blurred lines between AI autonomy and human control. Further investigation revealed that Rizzbot, also known as Jake the Robot, is a Unitree G1 Model robot operated remotely by an anonymous owner, with training and programming support from a robotics PhD student at UT Austin. While much of Rizzbot’s behavior is pre-programmed, it is controlled in real-time by a human operator, complic
robothumanoidsocial-robotroboticsAIrobot-interactionTikTok-robotNVIDIA Now Working On Its Own Robotaxis - CleanTechnica
NVIDIA, long a key hardware and software provider for autonomous vehicle developers, is now reportedly developing its own robotaxi service. The company has supported numerous automakers and robotaxi firms—including Cruise, Zoox, DiDi, Pony.ai, and AutoX—by supplying its DRIVE AGX platform and acquiring mapping specialist DeepMap to enhance its full self-driving capabilities. Over the past several years, the robotaxi market has matured significantly, with companies like Waymo and various Chinese operators running commercial services in multiple cities. Building on its extensive experience and partnerships with automakers such as BYD, Jaguar Land Rover, Lucid, Mercedes-Benz, Rivian, Tesla, and others, NVIDIA is leveraging its DRIVE AGX Thor system and continuous neural networks to develop a proprietary robotaxi system. The project, reportedly led by Ruchi Bhargava and announced internally at an all-hands meeting, reflects CEO Jensen Huang’s belief that robotaxis represent a trillion-dollar opportunity and the first major commercial application of robotics
robotautonomous-vehiclesrobotaxisNVIDIAself-driving-technologyAIautomotive-technologySeneca brings in $60M to develop fire suppression drones - The Robot Report
Seneca, a startup focused on autonomous aerial fire suppression systems, has raised $60 million to develop AI-powered drones designed to detect and combat fires early. Their portable suppression drones can be hand-carried, transported via utility vehicles, or deployed remotely, extending firefighting capabilities in unsafe or hard-to-reach areas. The company has demonstrated its technology with fire agencies across four states and continues to improve targeting accuracy, payload capacity, safety, and usability based on firefighter feedback. Seneca’s founding team includes experts in hardware, fire strategy, and technology, and they collaborate closely with fire chiefs and leaders to ensure their solutions meet frontline needs. The funding round was led by Caffeinated Capital and Convective Capital, with participation from several venture firms, and will be used to enhance the system’s robustness, scale production, and deploy the first units in time for the 2026 fire season. Seneca’s efforts come amid a growing wildfire crisis in the U.S., where wildfire intensity has nearly tripled
robotdronesfire-suppressionautonomous-systemsAIwildfire-managementaerial-roboticsA comprehensive list of 2025 tech layoffs
The tech industry continues to experience significant layoffs throughout 2025, with more than 22,000 job cuts reported in the opening months of the year, including over 16,000 in February alone. According to Layoffs.fyi, layoffs have been widespread across companies and months, with notable spikes in April (over 24,500 layoffs) and July (16,142 layoffs). This ongoing wave reflects broader shifts in the industry as businesses increasingly adopt AI and automation technologies, which, while driving innovation, also contribute to workforce reductions. The article emphasizes the human impact of these layoffs amid rapid technological change and provides a regularly updated tracker of layoffs across the sector. Several prominent companies have announced layoffs in late 2025. Rivian cut about 4% of its workforce amid a downturn in the electric vehicle market, marking its third round of layoffs this year, while Applied Materials plans to reduce roughly 1,400 jobs (4% of its workforce) to streamline operations under tightening U.S. semiconductor
energymaterialslayoffssemiconductorelectric-vehiclesautomationAINeolix raises $600M to continue scaling autonomous RoboVan fleet - The Robot Report
Neolix Beijing Technology Co., a leading developer of SAE Level 4 autonomous delivery systems, has raised over $600 million in a Series D funding round—the largest private investment in China’s autonomous driving sector to date. Founded in 2018, Neolix operates a fleet of RoboVans that have autonomously delivered thousands of orders across China and other countries. The company emphasizes its full-stack capabilities, including proprietary software, hardware, vehicle manufacturing, and intelligent dispatching, enabling reliable, round-the-clock autonomous operations in diverse weather and traffic conditions. Its Neolix-VA vision-action model supports map-free, point-to-point delivery on public roads, while an AI-powered Dispatch Center optimizes fleet performance in real time. Neolix has deployed over 10,000 RoboVans in 300 cities across 15 countries, with significant usage in Qingdao, China, where more than 1,200 units operate. The company reports strong market demand, with its X3 and X
robotautonomous-vehiclesRoboVanAIlogisticsautonomous-drivingurban-mobilityHow Amazon cut development time of new Blue Jay robot
Amazon has developed the Blue Jay robot, which integrates three key warehouse functions—picking, stowing, and consolidating—into a single robotic system. This consolidation replaces multiple assembly lines with one robot, improving efficiency and saving space while supporting frontline workers. Blue Jay can handle about 75% of the item types stored at Amazon fulfillment centers. Notably, its development was accelerated to just over a year, compared to three or more years for previous robots, thanks to advanced simulation techniques using digital twins and leveraging AI, data, and experience from Amazon’s existing robot fleet. Blue Jay is currently being tested in South Carolina and aims to reduce the physical demands on employees by shifting them toward higher-value tasks like quality control and problem-solving. In addition to Blue Jay, Amazon is deploying Project Eluna, an agentic AI system designed to optimize warehouse operations by analyzing real-time and historical data across facilities. Project Eluna provides natural language insights to help operations teams anticipate bottlenecks, improve sortation,
roboticsAIwarehouse-automationAmazon-RoboticsBlue-Jay-robotdigital-twinsfulfillment-center-technologyWhy is U.S. Army Rebuilding Its Most Powerful Abrams Tank from Scratch?
The U.S. Army is undertaking a comprehensive rebuild of its iconic M1 Abrams tank, resulting in the all-new M1A3 Abrams, designed to meet the demands of modern warfare. This next-generation main battle tank incorporates cutting-edge technologies such as a hybrid-electric drive, advanced artificial intelligence systems, modular armor, and a fully digital cockpit. These innovations make the M1A3 faster, smarter, and more adaptable, enabling it to effectively operate in environments dominated by drones, AI, and electronic warfare. This transformation represents the most radical evolution of the Abrams tank since the Cold War, shifting from a traditional armored powerhouse to a highly advanced 21st-century war machine. The M1A3’s modular design and digital enhancements redefine the capabilities of American armored forces, ensuring the tank remains a dominant force on future battlefields.
robotAIhybrid-electric-drivemilitary-technologymodular-armordigital-cockpitadvanced-roboticsAI at the edge: How startups are powering the future of space at TechCrunch Disrupt 2025
TechCrunch Disrupt 2025, starting October 27 in San Francisco, will feature a dedicated Space Stage focused on how AI is revolutionizing space technology. Leading experts including Adam Maher (Ursa Space Systems), Dr. Lucy Hoag (Violet Labs), and Dr. Debra L. Emmons (The Aerospace Corporation) will discuss the transformative role of AI in orbit. The event highlights the shift from traditional space hardware like rockets and satellites to intelligent edge computing systems that enable autonomous decision-making and real-time data processing in space. This AI-driven approach is enhancing mission speed, efficiency, and resilience, marking a new era of on-orbit intelligence. The featured speakers bring diverse expertise: Dr. Debra Emmons, CTO of The Aerospace Corporation, oversees technology strategy and innovation across multiple labs focused on advancing U.S. space capabilities; Adam Maher, founder and CEO of Ursa Space Systems, specializes in synthetic aperture radar data to improve decision-making; and Dr. Lucy Hoag
IoTAIedge-computingspace-technologyautonomous-systemssatellite-dataaerospace-innovationGeneral Motors to offer 'eyes-off' driving, with help from Cruise, to market in 2028 - The Robot Report
General Motors (GM) announced plans to introduce “eyes-off” driving technology in the 2028 Cadillac Escalade IQ electric SUV, leveraging its Super Cruise system. GM has already mapped 600,000 miles of hands-free driving routes across North America and reported 700 million miles driven with Super Cruise without any crashes attributed to the system. The technology benefits from Cruise’s autonomous driving experience, adding over 5 million fully driverless miles. Although GM ceased funding Cruise’s robotaxi deployment in late 2024 due to operational challenges, it integrated Cruise’s team with its own to enhance Super Cruise, focusing on personal autonomous vehicles (AVs) rather than robotaxis. GM’s approach contrasts with competitors like Tesla, which has developed “full self-driving” software requiring driver attention and recently launched robotaxi services with safety monitors. GM’s eyes-off system will utilize a combination of vision, lidar, and radar sensors, potentially offering features such as conversational AI powered by Google Gemini and in-cabin entertainment. Beyond
robotautonomous-vehiclesself-driving-technologyAIelectric-vehiclesenergy-storageautomotive-innovationAvride secures strategic investments up to $375M for self-driving cars, deliveries - The Robot Report
Avride Inc., an Austin-based startup founded in 2017, has secured up to $375 million in strategic investments to advance its autonomous vehicle (AV) and delivery robot technologies. The company has tested its self-driving systems in diverse environments—including rain, snow, urban streets, and side roads—and its delivery robots have already completed hundreds of thousands of orders in the U.S. and internationally. Avride’s technology benefits from shared advancements between its passenger AVs and sidewalk delivery robots, underscoring its scalable and reliable autonomous solutions. A key component of Avride’s growth is its expanded partnership with Uber Technologies and AI infrastructure provider Nebius Group. Avride’s delivery robots currently operate through Uber Eats in Austin, Dallas, and Jersey City, and the company plans to launch its first robotaxi service on Uber’s ride-hailing platform in Dallas by the end of 2025. The new funding will accelerate Avride’s scaling efforts, AI-driven product development, and market expansion. Unlike leaders such
robotautonomous-vehiclesdelivery-robotsself-driving-technologyAIlidarUber-collaborationAs China’s 996 culture spreads, South Korea’s tech sector grapples with 52-hour limit
The article discusses the tension between South Korea’s legally mandated 52-hour workweek limit and the demanding work culture spreading from China’s “996” system (9 am to 9 pm, six days a week) within the global deep tech sector. While South Korea enforces a 40-hour standard workweek with strict overtime regulations and penalties for violations, it has introduced special extended work programs allowing up to 64 hours weekly with worker consent and government approval, particularly for deep tech industries like semiconductors. However, these exemptions are limited and expected to be scaled back, reflecting the government’s intent to tighten working-hour regulations despite some political debate. Tech investors and founders in South Korea express concerns that the 52-hour limit poses challenges for innovation-driven sectors requiring intense focus and long hours during critical phases. Yongkwan Lee, CEO of a venture capital firm, notes that strict work-hour caps could slow progress toward key milestones in highly competitive fields such as AI and quantum computing. Surveys indicate many startup
semiconductorsdeep-techwork-cultureSouth-KoreaAIquantum-computinglabor-regulationsElon Musk frets over controlling Tesla’s ‘robot army’ as car biz rebounds slightly
Tesla reported a record vehicle delivery quarter in Q3 2025, shipping 497,099 cars and generating $21.2 billion in automotive revenue, largely driven by U.S. customers taking advantage of expiring federal EV tax credits. Despite this sales rebound, Tesla’s profit was $1.4 billion—37% lower than the same quarter last year—due to a 50% increase in operating expenses, including significant spending on AI, R&D projects, and nearly $240 million in restructuring charges possibly linked to the shutdown of the Dojo supercomputer project. Tariffs also negatively impacted profits, with Tesla’s CFO estimating a $400 million hit, partly attributed to Musk’s political involvement. CEO Elon Musk is increasingly focused on advancing Tesla’s AI ambitions, particularly the development of Full Self-Driving (FSD) technology, Robotaxi services, and the humanoid robot Optimus. Musk emphasized that Tesla is at a critical inflection point, aiming to scale these AI-driven initiatives to
robotTeslaAIself-driving-carsautonomous-vehicleselectric-vehiclesroboticsAmazon unveils AI smart glasses for its delivery drivers
Amazon has introduced AI-powered smart glasses designed specifically for its delivery drivers, aiming to streamline the delivery process by enabling hands-free package scanning, turn-by-turn walking directions, and proof of delivery capture without the need for phones. These glasses utilize computer vision and AI sensing capabilities to display critical information such as hazards and delivery tasks directly in the driver's line of sight. Upon arrival at a delivery location, the glasses automatically activate to help drivers locate packages inside their vehicles and navigate complex delivery environments like multi-unit apartments or business complexes. The glasses are paired with a controller integrated into the delivery vest, featuring operational controls, a swappable battery, and an emergency button, and they support prescription and transitional lenses. Currently being trialed in North America, Amazon plans to refine the technology before a broader rollout. Future enhancements include real-time defect detection to alert drivers if packages are mistakenly delivered to the wrong address, pet detection in yards, and automatic adjustments to low-light hazards. Alongside the smart glasses announcement, Amazon also revealed
IoTsmart-glassesAIdelivery-technologyroboticscomputer-visionwearable-technologyOso Electric Equipment acquires Electric Sheep Robotics - The Robot Report
Oso Electric Equipment has acquired Electric Sheep Robotics, a company specializing in AI-driven autonomous mowing robots. This merger combines Oso’s electric powertrain technology with Electric Sheep’s robotics and machine learning systems, aiming to advance the automation of outdoor work across various sectors including infrastructure, construction, agriculture, defense, and space exploration. The acquisition follows a prior partnership where Oso introduced a commercial electric smart lawn mower powered by Electric Sheep’s AI platform. Together, they plan to expand access to zero-emission, autonomous outdoor equipment suitable for challenging commercial environments, with deployments already underway in California and Texas. Electric Sheep, founded in 2019 and based in San Francisco, gained recognition as The Robot Report’s 2024 RBR50 Robotics Innovation Award Startup of the Year for its innovative business model. The company uniquely integrated landscaping services into its strategy, allowing it to deploy autonomous mowers gradually with direct operational experience. Its technology includes the ES1 learned-world model for reasoning and planning, powering products like the RAM
roboticsautonomous-robotsAIelectric-equipmentenergy-efficiencyoutdoor-automationsmart-lawn-mowersGM to introduce eyes-off, hands-off driving system in 2028
General Motors (GM) announced plans to introduce an advanced automated driving system by 2028 that allows drivers to keep their eyes off the road and hands off the wheel, beginning with the Cadillac Escalade IQ. This new system builds on GM’s existing Super Cruise technology, which launched in 2017 and currently supports hands-free driving on about 600,000 miles of highway across 23 vehicle models. The upcoming eyes-off, hands-off system will utilize lidar, radar, and cameras for perception and will initially operate on highways, including those not mapped by GM. GM aims to deploy this technology faster than it did Super Cruise, leveraging expertise from its now-closed Cruise autonomous vehicle subsidiary, whose AI models and simulation tools are being integrated into GM’s next-generation driver assistance programs. GM’s CEO Mary Barra highlighted that the company’s manufacturing scale and reduced hardware costs position it uniquely to bring this technology to market at larger volumes and lower prices than competitors. The system is expected to meet SAE Level 3
robotautonomous-vehiclesdriver-assistance-systemAIlidarradarautomationGM’s under-the-hood overhaul puts AI and automated driving at the center
General Motors (GM) is undertaking a major overhaul of the electrical and computational systems in its future vehicles to enable faster software, enhanced automated driving capabilities, and a custom conversational AI assistant. This new architecture, debuting in the 2027 Cadillac Escalade IQ and rolling out across all GM gas-powered and electric vehicles starting in 2028, centers on a centralized computing platform powered by Nvidia’s next-generation Drive AGX Thor supercomputer. The redesign consolidates dozens of electronic control units (ECUs) into a unified core that manages all vehicle subsystems—propulsion, steering, braking, infotainment, and safety—via a high-speed Ethernet backbone. This approach aims to dramatically increase bandwidth, AI performance, and over-the-air software update capacity, enabling GM to compete more effectively with Tesla and emerging Chinese automakers. GM’s Chief Product Officer Sterling Anderson emphasized accelerating development speed and improving user experience and profitability by reducing vehicle platform development time from four to five years down to about two. The new
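Conceptually, consolidating dozens of ECUs into one core over an Ethernet backbone means subsystems stop running their own logic in isolation and instead stream messages to a central computer. The toy sketch below shows that publish-to-core pattern over local UDP; the port, message fields, and subsystem names are invented for illustration and have nothing to do with GM's actual architecture or protocols.

```python
# Toy illustration of a centralized vehicle computer receiving subsystem
# messages over an Ethernet/IP backbone, in the spirit of the ECU
# consolidation described above. Ports and fields are invented; this is
# not GM's architecture.
import json
import socket

CORE_ADDR = ("127.0.0.1", 15000)    # hypothetical central-compute endpoint


def publish(subsystem: str, payload: dict) -> None:
    """A zonal controller sends its state to the central core as JSON over UDP."""
    message = json.dumps({"subsystem": subsystem, **payload}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, CORE_ADDR)


def core_loop(max_messages: int = 3) -> None:
    """The unified core consumes messages from propulsion, braking, infotainment, etc."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(CORE_ADDR)
        for _ in range(max_messages):
            data, _ = sock.recvfrom(4096)
            print("core received:", json.loads(data))


if __name__ == "__main__":
    import threading
    import time

    listener = threading.Thread(target=core_loop, daemon=True)
    listener.start()
    time.sleep(0.2)                 # give the core time to bind its socket
    publish("propulsion", {"rpm": 2100})
    publish("braking", {"pressure_kpa": 310})
    publish("infotainment", {"volume": 7})
    listener.join(timeout=2.0)
```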
robotAIautomated-drivingelectric-vehiclesautomotive-technologyNvidia-Drive-AGX-Thorsoftware-updatesChina's humanoid robot takes over presentation, car salesperson gig
China’s automaker Chery, in collaboration with AiMOGA Robotics, unveiled Mornine, a humanoid robot designed to integrate automotive technology with embodied intelligence. At the AiMOGA Global Business Conference in Wuhu, China, Mornine delivered a 30-minute multilingual presentation on robotics and automotive innovations, acted as an autonomous car sales assistant by greeting visitors, explaining car features, and even opening a car door—making it the world’s first humanoid robot to do so autonomously. Mornine’s capabilities stem from advanced technologies including full-body motion control, reinforcement learning, and a multilingual AI model called MoNet, enabling it to perceive, plan, and interact naturally with humans using vision-language understanding and semantic reasoning. Powered by AiMOGA’s L3 Assistance Level framework, Mornine features high-torque joints and dexterous hands with 17 degrees of freedom, allowing smooth and precise movements. The robot’s AI adapts its gestures and tone based on visitor reactions,
robothumanoid-robotAIautonomous-systemsautomotive-technologyreinforcement-learninghuman-robot-interactionRobot Talk at the Smart City Robotics Competition - Robohub
The article discusses a special episode of the "Robot Talk" podcast, recorded at the Smart City Robotics Competition held in Milton Keynes. Hosted by Claire Asher, the episode features conversations with competitors, exhibitors, and attendees, providing insights into the event and the latest advancements in robotics focused on smart city applications. The competition highlights innovative robotic technologies aimed at improving urban living and infrastructure. Sponsored by euRobotics, an international non-profit organization dedicated to advancing European robotics research, development, and innovation, the episode aligns with the broader mission to promote cutting-edge robotics and autonomous systems. "Robot Talk" serves as a weekly platform exploring developments in robotics, artificial intelligence, and autonomous machines, with this bonus episode offering a focused look at the intersection of robotics and smart city initiatives.
roboticssmart-cityAIautonomous-machinesrobotics-competitioneuRoboticsinnovationDraganfly and Palladyne partner to develop drone swarms for defense - The Robot Report
Draganfly Inc., a long-established developer of drones and AI systems for public safety, defense, agriculture, and industrial applications, has partnered with Palladyne AI Corp. to enhance its unmanned aerial vehicles (UAVs) using Palladyne’s Pilot AI software. This collaboration aims to integrate advanced autonomy features, including autonomous swarm operations, into Draganfly’s modular drone platforms. The integration is expected to improve mission capabilities by reducing operator workload and extending effectiveness in complex scenarios such as real-time intelligence, surveillance, and reconnaissance (ISR). Palladyne AI, formerly Sarcos, specializes in AI and machine learning software that enables robots to perceive, learn, and act with human-like intelligence. Their Pilot AI software uses sensor fusion to allow drones to independently and collaboratively track targets and dynamically interface with autopilots, enhancing detection, classification, and identification capabilities. This technology supports a wide range of robotic platforms, including UAVs, unmanned ground vehicles, and cobots, across industries
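"Sensor fusion" in this context generally means combining several noisy observations of the same target, weighting the more precise sensors more heavily. The snippet below is a textbook inverse-variance example of that idea under independent-noise assumptions; it is generic background, not Palladyne's Pilot AI or Draganfly's software.

```python
# Generic inverse-variance fusion of two noisy position estimates, illustrating
# the principle behind the "sensor fusion" mentioned above. A textbook sketch,
# not Palladyne's or Draganfly's actual method.
import numpy as np


def fuse_estimates(positions: np.ndarray, variances: np.ndarray):
    """Combine independent (x, y) estimates, weighting each by 1/variance.

    positions: shape (n_sensors, 2); variances: shape (n_sensors,).
    Returns the fused position and its fused variance.
    """
    weights = 1.0 / variances                       # more precise sensors weigh more
    fused = (positions * weights[:, None]).sum(axis=0) / weights.sum()
    fused_variance = 1.0 / weights.sum()
    return fused, fused_variance


if __name__ == "__main__":
    # e.g. a noisier camera track and a tighter range-sensor track of one target
    positions = np.array([[102.0, 48.5], [101.2, 49.1]])
    variances = np.array([4.0, 1.0])
    estimate, var = fuse_estimates(positions, variances)
    print("fused target position:", estimate, "variance:", var)
```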
robotdroneAIautonomous-systemsUAVswarm-technologydefense-technologyNew AI toilet camera scans waste for hydration and gut insights
Kohler has introduced Dekoda, a $599 AI-powered toilet-mounted camera designed to monitor users’ health by analyzing waste samples. The device tracks hydration, gut health, and detects traces of blood using discreet optics that focus solely on the toilet contents, ensuring user privacy. It mounts easily on most toilet rims without tools and includes fingerprint authentication for multi-user households. Health data is encrypted end-to-end and managed through the Kohler Health app, which provides trend tracking, health scores, and notifications of irregularities. The device operates on a rechargeable battery lasting about a week and supports USB-C charging. Dekoda represents Kohler’s entry into the digital health market under its new wellness division, aiming to integrate medical-grade insights into daily routines. While not a replacement for medical testing, it serves as an early warning tool to prompt users to consult healthcare professionals if needed. The product requires a subscription for ongoing AI analysis, costing between $70 and $156 annually. Positioned in the premium segment, Dekoda
IoTAIhealth-monitoringsmart-devicesprivacywearable-technologydigital-healthAI-powered eye implant restores reading vision in blind patients
A groundbreaking European clinical trial has demonstrated that the AI-powered PRIMA eye implant can restore reading vision in patients blinded by geographic atrophy (GA), an advanced and currently untreatable form of dry age-related macular degeneration (AMD). The trial involved 38 patients across 17 hospitals in five countries, including the UK’s Moorfields Eye Hospital. After implantation of the 2mm-by-2mm microchip beneath the central retina and use of augmented-reality glasses linked to AI algorithms, 84% of participants regained the ability to recognize letters, numbers, and words, reading an average of five lines on a standard vision chart—an ability many had lost completely prior to surgery. The procedure is relatively quick and safe, performed under two hours by trained vitreoretinal surgeons, and does not affect peripheral vision. The implant works by converting visual scenes captured by the glasses’ camera into electrical signals sent to the brain via the optic nerve, creating a new form of artificial vision. Rehabilitation over several
robotAImedical-implantartificial-visionhealthcare-technologyassistive-technologyneural-interfaceChina's humanoid robot performs stunning stretch routine in new demo
China’s robotics company LimX Dynamics has released a new video showcasing its full-size humanoid robot, Oli, performing a highly flexible and human-like full-body stretch routine. Standing about 5.4 feet tall with 31 degrees of freedom, the two Oli robots in the video demonstrate exceptional balance, coordination, and joint articulation through synchronized movements such as torso tilts, knee and ankle flexing, leg lifts, twists, and even a suspended split-like exercise. The routine highlights the robot’s fluid motion and precise control, emphasizing its advanced joint flexibility and stability. Launched in July 2025 and featured at the World Robot Conference in Beijing, Oli is equipped with dual Intel RealSense depth cameras, a 6-axis IMU, and a modular software development kit supporting Python. Designed as a platform for developers and researchers, Oli aims to push the boundaries of embodied AI and motion research, with potential applications in logistics, assembly lines, and fulfillment centers. LimX Dynamics positions Oli as a key
robothumanoid-robotroboticsAImotion-controljoint-articulationLimX-DynamicsUpcoming 'Yogi' humanoid robot to focus on human connections
Cartwheel Robotics is developing a humanoid robot named Yogi, designed primarily to foster genuine human connections and serve as a friendly, emotionally intelligent companion in homes and workplaces. Unlike many other robotics firms focusing on factory automation—such as Tesla’s Optimus robot—Cartwheel emphasizes natural movement, safety, and approachability. Yogi is constructed with medical-grade silicone and soft protective materials, features modular swappable batteries for extended operation, and incorporates precision-engineered actuators with overload protection. The robot aims to assist with light household tasks while maintaining intuitive and reliable interactions, reflecting Cartwheel’s goal to integrate humanoid AI into everyday life by enhancing how people live, work, and care for one another. Humanoid Global Holdings Corp., Cartwheel’s parent investment company, highlighted that Yogi is built on a proprietary full-stack humanoid platform combining custom hardware, AI models, motion systems, and software. Cartwheel is expanding operations with a new facility in Reno, Nevada, set to open in January
robothumanoid-robotAIhome-automationrobotics-technologyhuman-robot-interactionbattery-technologyThe real reason Google DeepMind is working with a fusion energy startup
Commonwealth Fusion Systems (CFS), an energy startup, is collaborating with Google’s DeepMind to optimize the operation of its upcoming Sparc fusion reactor using AI. They plan to simulate the plasma inside the reactor with DeepMind’s Torax software, combined with AI models, to identify the most effective ways to achieve sustained fusion power. Fusion energy offers the promise of vast electricity generation with zero emissions, using water as a near-limitless fuel source. Google’s interest in fusion aligns with its broader strategy to secure clean, abundant energy to power its data centers, and this partnership follows previous collaborations with other fusion startups like TAE Technologies. The key challenge in fusion energy is maintaining plasma at extremely high temperatures long enough for the reaction to be self-sustaining, which is difficult outside of stars due to plasma instability. CFS uses powerful magnets to contain the plasma, but controlling these conditions requires complex, real-time adjustments beyond human capability—an area where AI excels. DeepMind’s Torax software,
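The workflow described—run many simulated plasma pulses and search for operating settings that maximize sustained output—reduces to a simulate-and-score loop. The sketch below shows only that loop with a made-up surrogate objective; `simulate_pulse`, its parameters, and the random search are illustrative assumptions, not the Torax API or DeepMind's actual optimization method.

```python
# Generic "simulate many candidate settings, keep the best" loop, in the spirit
# of the CFS/DeepMind collaboration described above. `simulate_pulse` is a toy
# surrogate, NOT Torax or DeepMind's method.
import random


def simulate_pulse(heating_mw: float, field_t: float) -> float:
    """Toy surrogate returning a 'sustained fusion power' score for a setting."""
    # Pretend there is a sweet spot near 25 MW of heating and a 12 T field.
    return -((heating_mw - 25.0) ** 2) - 5.0 * (field_t - 12.0) ** 2


def random_search(n_trials: int = 200, seed: int = 0):
    """Sample candidate operating points and keep the highest-scoring one."""
    rng = random.Random(seed)
    best_setting, best_score = None, float("-inf")
    for _ in range(n_trials):
        heating = rng.uniform(10.0, 40.0)       # MW of auxiliary heating
        field = rng.uniform(8.0, 14.0)          # toroidal field strength, tesla
        score = simulate_pulse(heating, field)
        if score > best_score:
            best_setting, best_score = (heating, field), score
    return best_setting, best_score


if __name__ == "__main__":
    setting, score = random_search()
    print(f"best candidate: heating={setting[0]:.1f} MW, field={setting[1]:.1f} T")
```

In practice the search would be driven by learned models rather than random sampling, and each evaluation would be a physics simulation rather than a one-line surrogate, but the structure of the optimization loop is the same.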
energyfusion-energyAIGoogle-DeepMindplasma-simulationnuclear-fusionrenewable-energyYour guide to Day 2 of RoboBusiness 2025 - The Robot Report
RoboBusiness 2025’s second day at the Santa Clara Convention Center features a robust agenda with over 60 speakers, a startup workshop, the annual Pitchfire competition, and more than 100 exhibitors. The day begins at 10:00 a.m. PT with the first keynote and the opening of the show floor, which includes the Engineering Theater, networking lounge, Startup Showcase, MassRobotics Startup Alley, and the KAIST Korean Pavilion. The initial keynote panel, moderated by Eugene Demaitre of The Robot Report, features industry leaders such as Sanjay Aggarwal (F-Prime), Jon Battles (Cobot), Amit Goel (NVIDIA), and Brian Gaunt (DHL Supply Chain), discussing the current state of the robotics industry. This is followed by a panel on “Closing the Robotics Gap With China,” involving Jeff Burnstein (A3), Georg Stieler (Stieler Technology & Market Advisory), and Eric Truebenbach (Teradyne Robotics),
roboticsAIautomationrobotics-industryrobotics-manufacturingautonomous-machinesrobotics-innovationWorld’s first Robot Phone by Honor moves and emotes like 'Wall-E'
Honor unveiled a concept for the world’s first “Robot Phone,” a device that combines AI, robotics, and mobile technology to create a new category of smartphone. Unlike traditional phones, this concept features a gimbal-mounted camera that can move independently, swivel, and express emotions through sounds and movements reminiscent of characters like Wall-E and BB-8. Honor describes the Robot Phone as an “emotional companion” capable of sensing, adapting, and evolving autonomously to enrich users’ lives with emotional engagement, aiming to redefine human-machine interaction. The Robot Phone concept hints at a future where AI is given a visible, expressive form to make digital assistants more approachable and comfortable to interact with, moving beyond voice commands alone. The device’s robotic camera and personality-driven features build on earlier innovations like flip-up cameras but add a layer of AI-powered motion and emotional expression. Currently, the Robot Phone exists only as a CGI concept with no physical prototype or detailed specs released. Honor plans to share more information and potentially reveal
robotAIroboticsmobile-technologyhuman-machine-interactionemotional-AIsmart-devicesThe full Space Stage agenda at TechCrunch Disrupt 2025: The future of tech launches here
TechCrunch Disrupt 2025 will feature the new Space Stage on October 27 at San Francisco’s Moscone West, in partnership with The Aerospace Corporation, highlighting the rapidly evolving commercial space sector. This platform brings together founders, investors, and operators involved in various aspects of space technology—from rockets and manufacturing to AI and defense—demonstrating the sector’s ambitious growth despite tightening capital and increasing competition. The event offers attendees, including space enthusiasts, startup builders, and investors, an opportunity to engage with companies pushing the boundaries of space innovation. Key sessions on the Space Stage include discussions on investment trends in space by top venture capitalists, a startup pitch-off focused on AI-driven space solutions, and talks by influential founders such as Baiju Bhatt of Aetherflux, who is transitioning from fintech to space tech. Other highlights include panels on AI’s role in space mission intelligence, the development of a new space economy infrastructure by startups like Vast and Stoke Space, and Varda Space Industries’ plans
robotAIspace-technologyaerospaceorbital-intelligenceautonomous-systemsstartupsWhat’s coming up at #IROS2025? - Robohub
The 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2025) will take place from October 19 to 25 in Hangzhou, China. The event features a comprehensive program including plenary and keynote talks, workshops, tutorials, forums, competitions, and a debate. The three plenary talks scheduled for October 21-23 will cover topics such as humanoid and quadruped mobility in real-world applications (Marco Hutter), autonomous aerial manipulation for physically intelligent flying robots (Hyoun Jin Kim), and the integration of physical robots with artificial general intelligence agents (Song-Chun Zhu). Keynote presentations are organized under eleven thematic areas, highlighting cutting-edge research and developments in robotics. These areas include Rehabilitation & Physically Assistive Systems, Bio-inspired Robotics, Soft Robotics, AI and Robot Learning, Perception and Sensors, Human-Robot Interaction, Embodied Intelligence, Medical Robots, and Field Robotics. Notable topics include advancements in legged robots and
roboticssoft-roboticsAIhumanoid-robotswearable-robotsrobot-learningautonomous-systemsYour guide to Day 1 of RoboBusiness 2025 - The Robot Report
RoboBusiness 2025, a leading event for commercial robotics developers and suppliers, launches with a packed agenda featuring over 60 speakers, a startup workshop, the Pitchfire competition, and more than 100 exhibitors. The event begins at 9:30 a.m. PT with a keynote by Deepu Talla, NVIDIA’s VP of robotics and edge AI, focusing on the advancement of physical AI in robotics. Following this, a panel titled “Lessons Learned From the First Humanoid Deployments” includes experts from NVIDIA, Agility Robotics, and The Robot Report. The show floor opens at 10:00 a.m., offering various attractions such as the Engineering Theater, Startup Showcase, and international pavilions, culminating in a networking reception from 5:00 to 7:00 p.m. Day 1 also features multiple breakout sessions starting at 11:30 a.m., covering diverse topics like Singapore’s role as a hub for advanced manufacturing and robotics, sensor evolution in ag
roboticsAIhumanoid-robotsrobotics-industryrobotics-conferencerobotics-technologyrobotics-innovationCan we design healthcare that survives deep space? Dorit Donoviel explains
Dr. Dorit Donoviel, Executive Director of NASA's Translational Research Institute for Space Health (TRISH), is pioneering the development of healthcare systems designed to function autonomously millions of miles from Earth. With a diverse background spanning pharmaceutical drug discovery, biotech, and ethics, she focuses on creating innovative solutions such as AI-driven diagnostics and bioengineered life-support systems to enable astronauts to manage their own health during deep-space missions. Her work addresses the critical challenge of providing effective medical care in environments where immediate Earth-based support is impossible. Donoviel emphasizes the unique interdisciplinary nature of space health, attracting top-tier talent passionate about solving complex biological and healthcare problems under extreme conditions. She highlights the importance of maintaining scientific rigor and humility, acknowledging that current knowledge and technologies are provisional and subject to change with new discoveries. Her leadership approach balances deep technical expertise with openness to innovation, fostering collaboration among experts to build resilient healthcare frameworks that can adapt to the unpredictable challenges of space exploration.
robotAIhealthcare-technologyspace-healthautonomous-medicinebioengineeringNASAAquawise will show off its AI-driven water quality tech at TechCrunch Disrupt 2025
Aquawise, a Bangkok-based startup founded in 2024, is developing an AI-driven platform to monitor water quality in aquaculture farms, particularly targeting regions like Southeast Asia where traditional monitoring methods are prohibitively expensive. Using satellite imagery combined with a physics-based AI model, Aquawise continuously tracks critical water parameters such as temperature, chlorophyll levels, and oxygen content, offering real-time monitoring and predictive insights. This approach contrasts with conventional methods that typically provide only daily or weekly data. The founders—Patipond Tiyapunjanit, Chanati Jantrachotechatchawan, and Kobchai Duangrattanalert—originated the idea from a research project on shrimp larvae and identified water quality as a major challenge causing nearly $30 billion in losses annually for aquaculture farms. The startup emphasizes affordability and accessibility for farmers in developing regions, where many currently rely on manual checks and weather reports due to the high cost of existing technologies. Aquawise initially explored sonar-based monitoring but
IoTAIwater-quality-monitoringaquaculture-technologyenvironmental-sensorssatellite-imagingsustainable-farmingA guide to everything happening at RoboBusiness 2025 - The Robot Report
RoboBusiness 2025 is set to begin at the Santa Clara Convention Center, offering attendees a comprehensive program focused on robotics and AI advancements. The event features over 60 speakers, including industry leaders from NVIDIA and other key organizations, alongside a startup workshop, the annual Pitchfire competition, and extensive networking opportunities. More than 100 exhibitors will showcase the latest technologies and solutions aimed at addressing robotics development challenges. Attendees can utilize the RoboBusiness App to plan their schedules and connect with peers. The event opens with a ticketed Welcome Reception and includes keynote presentations such as Deepu Talla’s discussion on “Physical AI for the New Era of Robotics” and a panel on humanoid deployments. Day 1 includes breakout sessions, Engineering Theater presentations, and networking events, with the show floor open from 10:00 a.m. to 5:00 p.m. The following day features a “State of Robotics” keynote panel addressing technical breakthroughs and industry trends, followed by a session on “
roboticsAIrobotics-conferencerobotics-industryhumanoid-robotsrobotics-technologyrobotics-startupsAnduril unveils supersoldier helmets for US Army with Meta support
Anduril Industries has unveiled EagleEye, an AI-powered modular helmet system designed to enhance battlefield awareness and command capabilities for the US Army and allied forces. EagleEye integrates mission planning, perception, and survivability into a lightweight, wearable architecture that acts as a “new teammate” for soldiers. Central to the system is a high-resolution, collaborative 3D mission planning interface that allows troops to rehearse missions and visualize terrain using live video feeds and sensor data. The helmet’s heads-up display (HUD) overlays digital information directly onto the operator’s real-world view, with versions suitable for both daytime and night operations. It also features integrated blue force tracking, providing precise teammate locations within complex environments, and connects to Anduril’s Lattice network—a distributed sensor mesh that fuses data from drones, ground vehicles, and other assets to detect threats beyond line of sight. EagleEye emphasizes protection and survivability through an ultralight ballistic and blast-resistant shell equipped with rear and side sensors for
robotIoTmilitary-technologyAIwearable-technologysensor-networksaugmented-realityAnduril’s new EagleEye MR helmet sees Palmer Luckey return to his VR roots
Anduril Industries, a Silicon Valley defense firm co-founded by Palmer Luckey—the original creator of Oculus VR—has unveiled EagleEye, a modular mixed-reality helmet system designed to enhance soldiers with AI-augmented capabilities. Built on Anduril’s Lattice software, EagleEye integrates command-and-control tools, sensor feeds, and AI directly into a soldier’s field of vision, offering features such as live video feeds, rear- and side-sensors for threat detection, and real-time teammate tracking. The system comes in multiple variations, including a helmet, visor, and glasses, aiming to provide soldiers with enhanced situational awareness and decision-making abilities. This launch aligns with the U.S. Army’s efforts to diversify its mixed-reality gear suppliers beyond Microsoft’s troubled $22 billion IVAS program. In September, Anduril secured a $159 million contract to prototype a new mixed-reality system as part of the Soldier Borne Mission Command initiative, marking the largest effort to equip soldiers
robotaugmented-realitymixed-realityAImilitary-technologywearable-technologysoldier-systemsVampire anti-drone system gets upgrade, can disrupt electronic warfare
L3Harris Technologies has upgraded its Vampire anti-drone system, expanding it into six specialized variants tailored for land, maritime, air, and electronic warfare operations. The system, which has been operational in European combat zones since 2023, uses artificial intelligence and machine learning to rapidly detect, engage, and neutralize small unmanned aerial systems (sUAS) and remotely piloted aircraft. Vampire offers a cost-effective alternative to traditional missile defenses by combining advanced reconnaissance, precision strike capabilities, and electronic jamming to protect personnel and critical infrastructure from hostile drone threats. The enhanced Vampire family includes versions such as Vampire Stalker XR for land vehicles, featuring a larger weapons cache and extended-range munitions; Vampire Black Wake for maritime use against drones and fast attack watercraft; and Vampire Dead Wing, an airborne counter-UAS system. Additional variants include Vampire CASKET, a containerized rapid-deployment system; Vampire BAT, a base defense turret employing automatic weapons and non-kinetic effects
robotAIunmanned-aerial-systemscounter-drone-technologyelectronic-warfareprecision-weaponsautonomous-systemsPhotos: Yamaha explores new frontiers with self-learning bikes
At the Japan Mobility Show 2025, Yamaha unveiled a series of innovative concept vehicles under the theme “Feel. Move.,” highlighting its vision for future personal mobility. Among the 16 models displayed, six were world premieres that integrate advanced technologies such as AI, hybrid powertrains, and hydrogen fuel systems. These concepts emphasize a blend of high performance, environmental sustainability, and enhanced human-machine interaction, signaling Yamaha’s commitment to redefining transportation experiences. Key highlights include the MOTOROiD:Λ, an AI-driven two-wheeled vehicle that autonomously learns and adapts through reinforcement learning, aiming to evolve alongside its rider with organic, responsive movements. The TRICERA proto is a three-wheeled electric autocycle featuring a unique three-wheel steering system designed to improve cornering and driver engagement, with a focus on innovative vehicle architecture. Additionally, the H2 Buddy Porter Concept, developed in collaboration with Toyota, showcases a hydrogen-powered scooter with a cruising range exceeding 100 km, illustrating
robotAIelectric-vehicleshydrogen-engineenergyautonomous-learningmobility-technologyElon Musk vs. the regulators
The article highlights Elon Musk’s ongoing contentious relationship with regulators across his various companies. Recently, The Boring Company faced accusations from Nevada regulators for unauthorized digging, improper disposal of untreated water, and inadequate construction site management. Meanwhile, Tesla encountered regulatory challenges in California, where the Department of Insurance penalized the company for routinely denying or delaying customer claims related to its insurance services. Additionally, Tesla’s Full Self-Driving (FSD) software is under renewed scrutiny by the National Highway Traffic Safety Administration (NHTSA), which has opened an investigation focused on the safety and reliability of this driver-assistance technology—critical to Tesla’s ambitions in autonomous vehicles and AI. Beyond Musk’s ventures, the article touches on broader developments in autonomous vehicle technology. General Motors is reportedly advancing its autonomous vehicle efforts by integrating Cruise’s technology with its own advanced driver-assistance systems, rebuilding its AV team in key locations like Austin and Mountain View. In related mobility news, Joby Aviation raised approximately $514 million to support certification
robotautonomous-vehiclesTesla-Full-Self-DrivingAIelectric-vehiclesdriver-assistance-technologymobility-innovationHow machine vision is enhancing automation safety and efficiency - The Robot Report
The article explains how machine vision technologies enhance automation safety and efficiency by enabling automated systems to interpret and understand their environments through image analysis. Machine vision involves extracting meaningful information from images—not limited to visible light but also including infrared, laser, X-ray, and ultrasound imaging. This capability allows robots and automated equipment to identify and manipulate objects in complex settings, such as picking specific parts from a bin with randomly arranged items, regardless of their orientation or distance from the camera. Advanced machine vision systems also support 3D scanning and modeling, which can be used for applications like 3D printing. The article distinguishes machine vision from computer vision, noting that machine vision typically refers to established, efficient mathematical methods for image analysis, while computer vision often involves more computationally intensive approaches, including AI and machine learning. However, the terms can overlap in practice. Key techniques in machine vision include digital image processing (enhancement, restoration, compression), photogrammetry (extracting measurements and 3D information from images),
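To make the classical image-processing side of this concrete, here is a minimal, hypothetical sketch (not from the article) using OpenCV in Python: it thresholds a grayscale image of a parts bin and extracts contours, centroids, and orientations of the kind a robot controller could use for bin picking. The file name, area threshold, and parameters are illustrative assumptions.

```python
# Minimal, hypothetical machine-vision sketch (not from the article):
# classical image processing to locate parts in a bin image with OpenCV.
import cv2


def locate_parts(image_path: str, min_area: float = 500.0):
    """Return centroid, size, and orientation candidates for parts in a bin image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)

    # Enhancement: reduce sensor noise, then separate parts from background.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Contour extraction: each sufficiently large blob is a part candidate.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    parts = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:
            continue  # ignore specks and reflections
        (cx, cy), (w, h), angle = cv2.minAreaRect(contour)
        parts.append({"centroid": (cx, cy), "size": (w, h), "angle": angle})
    return parts


if __name__ == "__main__":
    for part in locate_parts("bin_image.png"):  # illustrative file name
        print(part)
```

A pipeline like this stays in the "established, efficient mathematical methods" camp the article describes; swapping the threshold-and-contour stage for a learned detector is where it shades into computer vision.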
robotmachine-visionautomationindustrial-roboticscomputer-visionAI3D-scanningWhere AI meets the windshield: smarter safety with VUEROID
The article highlights how VUEROID is transforming traditional dash cams from passive recording devices into intelligent, AI-enhanced safety tools. Jessie Lee, a product planner at VUEROID, emphasizes the importance of reliable, high-quality video recording as the foundation of effective dash cams, rather than chasing flashy features like LTE connectivity or advanced driver-assistance systems (ADAS). VUEROID’s flagship model, the S1 4K Infinite, reflects this philosophy by prioritizing image quality, system reliability, and usability after incidents occur. VUEROID’s approach to AI is practical and focused on post-incident benefits, such as their AI-powered license plate restoration feature that enhances unclear footage to help identify vehicles involved in collisions. Additionally, their cloud-based AI supports privacy features like facial and license plate masking to protect sensitive data before sharing footage with insurers or on social media. A key technical strength lies in VUEROID’s expertise in Image Signal Processing (ISP) tuning, which optimizes image clarity
IoTAIdash-camsautomotive-technologycloud-computingimage-processingvehicle-safetyWhy Deloitte is betting big on AI despite a $10M refund
Deloitte is aggressively integrating AI into its operations by deploying Anthropic’s Claude AI tool to all 500,000 employees, signaling a major bet on the technology’s potential despite recent setbacks. Notably, the Australian government compelled Deloitte to refund a contract after an AI-generated report contained fabricated citations, highlighting the challenges and risks companies face when adopting AI tools prematurely and without fully established responsible usage practices. This situation exemplifies the broader, uneven landscape of AI adoption in enterprises, where enthusiasm often outpaces readiness and oversight. The article also references other tech and AI developments discussed on the Equity podcast, including significant funding rounds for startups like AltStore and Base Power, regulatory scrutiny of Tesla’s Full Self-Driving system, and Zendesk’s claims about AI handling most customer service tickets autonomously. Overall, Deloitte’s experience underscores the tension between rapid AI deployment and the need for careful management to avoid errors and maintain trust.
energyAIenterprise-technologyhome-batteriesenergy-storageTeslaautonomous-systemsEdge computing and AI: A conversation with Palladyne AI's Ben Wolff
In Episode 216 of The Robot Report Podcast, hosts Steve Crowe and Mike Oitzman feature an interview with Ben Wolff, CEO of Palladyne AI, highlighting the company's advancements in AI and robotics. Palladyne AI focuses on simplifying robot programming through an improved user interface, developing autonomous drone swarming technology, and creating hardware-agnostic AI solutions. Wolff underscores the benefits of edge computing and stresses a customer-centric approach to ensure their products are essential and user-friendly. The episode also covers significant industry news, including ABB Group’s sale of its Robotics & Discrete Automation division to SoftBank for $5.375 billion amid declining orders and revenues. The report reviews SoftBank’s varied robotics investments over the years, such as acquisitions and divestitures involving Aldebaran Robotics, Boston Dynamics, and others. Additionally, Boston Dynamics showcased its latest humanoid hand design optimized for industrial durability and affordability, while Figure AI unveiled its Figure 03 humanoid robot aimed at safe, scalable
roboticsAIedge-computingautonomous-dronesrobot-programminghumanoid-robotsSoftBank-robotics-investmentsFigure 03 robot tackles household chores with realistic motion
Figure AI has introduced its third-generation humanoid robot, Figure 03, designed to perform household and warehouse tasks with enhanced realism and efficiency. Standing five-foot-six, Figure 03 improves on its predecessor with advanced sensory systems, including cameras that process twice as many frames per second and offer a 60% wider field of view, enabling smoother navigation in complex environments. Each hand features a palm camera and highly sensitive fingertip sensors capable of detecting minimal pressure, allowing delicate handling of objects like glassware. The robot is lighter, smaller, and covered in washable mesh fabric with foam padding for safety, and it supports wireless charging through coils in its feet, providing about five hours of operation per full charge. The robot’s AI, named Helix, integrates vision, language, and movement to learn from human behavior, while upgraded actuators deliver faster, more powerful motion suitable for tasks such as sorting parts and packaging. Audio improvements include a louder speaker and clearer microphone placement, facilitating natural communication without distortion. Figure
robothumanoid-robotAIroboticswireless-chargingsensorsautomationStartup Battlefield company SpotitEarly trained dogs and AI to sniff out common cancers
SpotitEarly, an Israeli startup founded in 2020, is developing an innovative at-home cancer screening test that leverages trained dogs’ exceptional sense of smell combined with AI technology to detect early-stage cancers from human breath samples. The company employs 18 trained beagles that identify cancer-specific odors by sitting when they detect cancer particles. This canine detection is augmented by an AI platform that monitors the dogs’ behavior, breathing patterns, and heart rates to improve accuracy beyond human observation. A double-blind clinical study involving 1,400 participants demonstrated that SpotitEarly’s method can detect four common cancers—breast, colorectal, prostate, and lung—with 94% accuracy. SpotitEarly recently launched into the U.S. market with $20.3 million in funding and plans to expand its clinical trials, initially focusing on breast cancer before addressing the other cancers. The company aims to offer its multi-cancer screening kits through physicians’ networks starting next year, pricing the initial test at approximately $
AIhealthcare-technologycancer-detectionmachine-learningdiagnosticsbiotechnologyearly-screeningGaniga will showcase its waste-sorting robots at TechCrunch Disrupt 2025
Italian startup Ganiga Innovation aims to improve global plastic recycling rates, currently below 10%, through AI-enabled robotic waste sorting solutions. Their flagship product, Hoooly, is a fleet of robotic waste bins that use generative AI to distinguish trash from recyclables and sort them accordingly. Additionally, Ganiga offers a smart lid attachment for existing bins with similar sorting capabilities and a software platform that tracks corporate waste production and provides recommendations to reduce it. Founded in 2021 by Nicolas Zeoli, Ganiga has sold over 120 robotic bins to clients including Google and major European airports, generating $500,000 in revenue in 2024 and $750,000 in the first nine months of 2025. Ganiga will showcase its technology at TechCrunch Disrupt 2025 in San Francisco from October 27-30, participating in the Startup Battlefield competition. The company recently raised $1.5 million in pre-seed funding and plans to raise a $3 million seed round. In
robotAIwaste-managementrecycling-technologysmart-binsenvironmental-technologysustainabilityGaniga will showcase its waste-sorting robots at TechCrunch Disrupt
Italian startup Ganiga is addressing the global plastic recycling challenge—where less than 10% of plastic is recycled—by developing AI-enabled robotic waste bins designed to improve sorting and waste management. Their flagship product, Hoooly, uses generative AI to distinguish between trash and recyclables, automating the sorting process. Ganiga also offers a smart lid that can retrofit existing bins with similar AI capabilities and a software platform that tracks corporate waste production and provides actionable insights to reduce waste. Founded in 2021 by Nicolas Zeoli, the company has sold over 120 robotic bins to clients including Google and major airports in Europe, generating $500,000 in revenue in 2024 and $750,000 in the first nine months of 2025. Ganiga will showcase its technology at TechCrunch Disrupt 2025 in San Francisco from October 27 to 30, participating in the Startup Battlefield competition. The company plans to launch Hooolyfood in November, a software product that
robotAIwaste-managementrecycling-technologysmart-binsenvironmental-technologysustainabilityMeet the AI tool that thinks like a mechanical engineer
The article introduces the bananaz Design Agent, a pioneering AI tool specifically engineered for mechanical engineers. Unlike generic AI chatbots, this agent comprehends mechanical logic, CAD files, and engineering standards through advanced computer vision and specialized algorithms. It analyzes complex design elements such as 3D geometries, assembly hierarchies, material specifications, tolerance callouts, and company best practices, effectively synthesizing this data to provide a deep understanding of engineering intent. This enables engineers to interact with their designs conversationally, as if consulting a virtual expert with decades of experience, available around the clock. The Design Agent maintains full contextual awareness across entire projects, understanding how individual design decisions impact assemblies, manufacturability, and performance, while leveraging past work and collective company knowledge. It dramatically accelerates tasks that traditionally require hours, such as design-for-manufacturing (DFM) checks, tolerance analysis, and compliance with company standards. Additionally, it can identify opportunities to replace custom parts with standard shelf components,
robotAImechanical-engineeringCADmanufacturingdesign-automationmaterialsEdge-to-cloud robotics: eInfochips teams up with InOrbit - The Robot Report
eInfochips, an Arrow Electronics company specializing in product engineering and digital transformation, has formed a strategic partnership with InOrbit, a provider of AI-powered robot orchestration. This collaboration aims to deliver scalable, optimized edge-to-cloud robotics solutions for industries requiring large-scale autonomous mobile robot (AMR) deployments, such as warehouses, factories, and industrial hubs. Leveraging eInfochips’ Robotics Center of Excellence, the partnership will support the entire robotics stack—from hardware design and sensor fusion to edge AI and digital twins—while InOrbit’s Space Intelligence platform will provide tools for real-time fleet management, incident response, multi-vehicle orchestration, and continuous performance optimization. The integrated offering is designed to simplify and accelerate the deployment of AMR fleets, enabling businesses to automate repetitive tasks like material handling and sorting with greater flexibility and operational scale. eInfochips brings extensive expertise in AI, hardware integration, and partnerships with platform providers like NVIDIA and Qualcomm, while InOrbit contributes its experience in managing thousands of robots
roboticsedge-computingautonomous-mobile-robotsAIIoTcloud-roboticsindustrial-automationInside the Switchblade 600: America’s AI-Powered Kamikaze Drone
The U.S. Army’s 1st Cavalry Division recently conducted a live-fire exercise featuring AeroVironment’s Switchblade 600, a next-generation kamikaze drone that integrates advanced AI capabilities. Unlike traditional loitering munitions, the Switchblade 600 can make autonomous decisions before striking, enhancing its precision and lethality. This backpack-portable drone is equipped with a Javelin warhead capable of destroying main battle tanks, has a range of 40 kilometers, and can loiter for up to 40 minutes. Additionally, it can be launched by a single soldier and even recalled mid-flight, earning it the nickname “missile with an undo button.” The Switchblade 600 represents a significant evolution in modern warfare by combining AI-driven autonomy with powerful strike capabilities, enabling more flexible and responsive battlefield operations. Its ability to be controlled and adjusted in real-time offers tactical advantages, reducing collateral damage and increasing mission success rates. The recent live-fire exercise demonstrated the drone’s
robotAIautonomous-dronesmilitary-technologyunmanned-aerial-vehiclesbattlefield-innovationdefense-technologyMeet the AI tool that thinks like a mechanical engineer
The article introduces the bananaz Design Agent, an AI-powered tool specifically developed for mechanical engineers to streamline design and manufacturing processes. Unlike generic AI chatbots, this agent comprehends mechanical logic, CAD files, engineering standards, and company-specific best practices. Founded in 2023 by experienced mechanical engineers, bananaz aims to reduce design errors and accelerate innovation across industries such as medical devices, aerospace, automotive, and oil & gas. The Design Agent uses advanced computer vision and specialized algorithms to analyze 3D geometries, annotations, assembly hierarchies, material specs, tolerances, and team communication, providing a comprehensive understanding of engineering designs. A key feature of the Design Agent is its context-aware analysis, allowing it to understand how individual design decisions affect the entire assembly and manufacturing outcomes. It maintains full project context, leveraging past work and collective design history to offer precise, relevant recommendations. Users can interact with their designs in plain language, asking questions about design-for-manufacturing (DF
robotAImechanical-engineeringCADmanufacturingautomationdesign-optimizationAutonomous ARGUS robot tracks hackers and guards physical spaces
Romanian researchers from Ștefan cel Mare University have developed ARGUS (Autonomous Robotic Guard System), an innovative autonomous robot that integrates physical security and cybersecurity into a unified defense platform. Equipped with LiDAR, RGB/IR cameras, an intrusion detection system (IDS) module, and AI-powered computer vision, ARGUS can simultaneously patrol physical spaces and monitor network traffic to detect intruders and cyber threats in near real-time. It uses deep learning to identify suspicious activities such as unauthorized personnel, weapons, abnormal sounds, and digital anomalies, enabling it to respond to both physical and cyber breaches concurrently. ARGUS employs advanced navigation technologies like Simultaneous Localization and Mapping (SLAM) and sophisticated control algorithms to autonomously maneuver through indoor and outdoor environments without human intervention. Its modular design allows integration with existing security infrastructures, making it suitable for complex environments such as industrial plants, smart cities, airports, and research labs where cyber-physical threats often overlap. Future developments envision multiple ARGUS units operating as
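The article does not describe ARGUS's internal interfaces, so purely as an illustration of the cyber-physical fusion idea, the hypothetical sketch below groups vision detections and IDS alerts that arrive close together in time into a single correlated incident; all names and the time window are assumptions.

```python
# Hypothetical sketch of cyber-physical alert correlation (not ARGUS's actual
# design): cluster vision detections and IDS alerts that occur close in time.
from dataclasses import dataclass


@dataclass
class Alert:
    source: str       # "vision" or "ids"
    kind: str         # e.g. "unauthorized_person", "port_scan"
    timestamp: float  # seconds since epoch


def correlate(alerts: list[Alert], window_s: float = 30.0) -> list[list[Alert]]:
    """Group alerts into incidents when consecutive ones fall within window_s seconds."""
    incidents: list[list[Alert]] = []
    for alert in sorted(alerts, key=lambda a: a.timestamp):
        if incidents and alert.timestamp - incidents[-1][-1].timestamp <= window_s:
            incidents[-1].append(alert)
        else:
            incidents.append([alert])
    return incidents


# An incident containing both a physical and a network alert would warrant an
# escalated response, e.g. dispatching the robot toward the affected area.
incidents = correlate([
    Alert("ids", "port_scan", 100.0),
    Alert("vision", "unauthorized_person", 112.0),
    Alert("vision", "abnormal_sound", 500.0),
])
for incident in incidents:
    print({a.source for a in incident}, [a.kind for a in incident])
```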
roboticsautonomous-robotscybersecurityAISLAMsmart-buildingsintrusion-detectionUS pilots fly alongside AI-piloted drones for next-gen air combat
The US Air Force is advancing next-generation air combat by training pilots to operate alongside AI-piloted drones, notably the XQ-58A Valkyrie. This stealthy, autonomous drone, developed by Kratos under the Low-Cost Attritable Strike Demonstrator program, is designed for collaborative missions with manned aircraft such as F-35s, F-22s, F-15EXs, and F-18s. Capable of flying up to 3,000 nautical miles at speeds of 0.86 Mach and altitudes of 45,000 feet, the Valkyrie aims to provide a cost-effective, high-performance platform that can rapidly be produced in large numbers. The AI technology enables split-second decision-making and defensive maneuvers, requiring pilots to adapt to the drones’ fast reactions and dynamic flight behavior. Human-machine teaming is a key focus, with ongoing tests at Eglin Air Force Base exploring how AI-piloted drones can operate in concert
robotAIautonomous-dronesmilitary-technologyhuman-machine-teamingunmanned-aerial-vehiclesdefense-systemsTX-GAIN: MIT supercomputer to power generative AI breakthroughs
MIT’s Lincoln Laboratory Supercomputing Center (LLSC) has unveiled TX-GAIN, the most powerful AI supercomputer at a U.S. university, designed primarily to advance generative AI and accelerate scientific research across diverse fields. With a peak performance of 2 exaflops, TX-GAIN ranks on the TOP500 list and stands as the leading AI system in the Northeast. Unlike traditional AI focused on classification tasks, TX-GAIN excels in generating new outputs and supports applications such as radar signature evaluation, supplementing weather data, anomaly detection in network traffic, and exploring chemical interactions for drug and material design. TX-GAIN’s computational power enables modeling of significantly larger and more complex protein interactions, marking a breakthrough for biological defense research. It also fosters collaboration, notably with the Department of Air Force-MIT AI Accelerator, to prototype and scale AI technologies for military applications. Housed in an energy-efficient data center in Holyoke, Massachusetts, the LLSC supports thousands of researchers working on
energysupercomputingAIscientific-researchenergy-efficiencygenerative-AImaterials-researchAnker offered Eufy camera owners $2 per video for AI training
Anker, the maker of Eufy security cameras, launched a campaign earlier this year offering users $2 per video of package or car thefts to help train its AI systems for better theft detection. The initiative encouraged users to submit both real and staged videos, even suggesting users stage theft events to earn more money, with payments made via PayPal. The campaign ran from December 18, 2024, to February 25, 2025, aiming to collect 20,000 videos each of package thefts and car door thefts. Over 120 users reportedly participated, and Eufy has since continued similar programs, including an in-app Video Donation Program that rewards users with badges, gifts, or gift cards for submitting videos involving humans. The company claims the videos are used solely for AI training and are not shared with third parties. However, concerns about privacy and data security persist. Eufy has a history of misleading users about the encryption of their camera streams, as revealed
IoTAIsecurity-camerasvideo-datauser-incentivessmart-home-devicesdata-privacyInstacrops will demo its water-saving, crop-boosting AI at TechCrunch Disrupt 2025
Instacrops, a Chile-based startup founded by Mario Bustamante, is leveraging AI to address the critical issue of water scarcity in agriculture, particularly in water-stressed regions like Chile and India where agriculture consumes over 90% of water resources. The company helps around 260 farms reduce water usage by up to 30% while boosting crop yields by as much as 20%. By shifting from hardware to AI-driven solutions, Instacrops now processes approximately 15 million data points per hour, significantly increasing efficiency and impact with fewer staff. Their technology integrates IoT sensors or existing farm networks to collect data on over 80 parameters—including soil moisture, humidity, temperature, and satellite-derived plant productivity metrics (NDVI)—to provide precise irrigation advisories directly to farmers via mobile apps and WhatsApp. Instacrops focuses on high-value crops in Latin America such as apples, avocados, blueberries, almonds, and cherries. The startup offers its services through an annual fee per hectare, enabling farmers
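NDVI, the satellite-derived productivity metric mentioned above, is a standard index computed from red and near-infrared reflectance, NDVI = (NIR − Red) / (NIR + Red), with values near 1 indicating dense, healthy canopy. The sketch below shows that calculation only; the band values and the stress threshold are illustrative, not Instacrops' actual pipeline.

```python
# Minimal NDVI sketch: NDVI = (NIR - Red) / (NIR + Red), values in [-1, 1].
# Band arrays and the stress threshold below are illustrative only.
import numpy as np


def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    # Guard against division by zero over water, shadow, or missing pixels.
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1.0, denom))


# Toy 2x2 reflectance tiles: higher NDVI indicates denser, healthier canopy.
nir_band = np.array([[0.60, 0.55], [0.20, 0.05]])
red_band = np.array([[0.10, 0.12], [0.15, 0.05]])
index = ndvi(nir_band, red_band)
print(index)
print("stressed pixels:", int(np.sum(index < 0.3)))  # candidate zones for advisories
```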
IoTagriculture-technologyAIwater-conservationsmart-farmingcrop-yield-optimizationenvironmental-sustainabilityDiligent Robotics adds two members to AI advisory board - The Robot Report
Diligent Robotics, known for its Moxi mobile manipulator used in hospitals, has expanded its AI advisory board by adding two prominent experts: Siddhartha Srinivasa, a robotics professor at the University of Washington, and Zhaoyin Jia, a distinguished engineer specializing in robotic perception and autonomy. The advisory board, launched in late 2023, aims to guide the company’s AI development with a focus on responsible practices and advancing embodied AI. The board includes leading academics and industry experts who provide strategic counsel as Diligent scales its Moxi robot deployments across health systems nationwide. Srinivasa brings extensive experience in robotic manipulation and human-robot interaction, having led research and development teams at Amazon Robotics and Cruise, and contributed influential algorithms and systems like HERB and ADA. Jia offers deep expertise in computer vision and large-scale autonomous systems from his leadership roles at Cruise, DiDi, and Waymo, focusing on safe and reliable AI deployment in complex environments. Diligent Robotics’
roboticsAIhealthcare-robotsautonomous-robotshuman-robot-interactionrobotic-manipulationembodied-AIBezos predicts that millions will live in space kind of soon
At Italian Tech Week in Turin, Jeff Bezos predicted that millions of people will be living in space within the next couple of decades. He emphasized that this migration will be driven primarily by choice, with robots managing labor-intensive tasks and AI-powered data centers operating in orbit. Bezos’s vision contrasts with, yet parallels, Elon Musk’s long-standing goal of colonizing Mars, where Musk envisions a million inhabitants by 2050. Both billionaires appear optimistic about rapid space habitation, though their timelines and approaches differ. Bezos also expressed strong support for the current surge in AI investments, describing it as a beneficial “industrial” bubble rather than a speculative financial one. He conveyed an overall optimistic outlook on the future, suggesting that this period is an unprecedented opportunity for technological advancement and innovation. His remarks reflect a confident stance on both space exploration and AI development as transformative forces shaping humanity’s near future.
robotsAIspace-colonizationBlue-Originroboticsartificial-intelligencespace-technologyMusk tops $500B as world’s 10 richest control combined $2.3T
As of October 1, 2025, Elon Musk has become the first person in history to reach a net worth of $500 billion, driven largely by a 33% surge in Tesla shares, investor enthusiasm around AI and robotics, and his $1 billion stock purchase. Musk’s wealth is also bolstered by his aerospace company SpaceX, now valued at $400 billion, and his AI startup xAI, valued at $80 billion following a $6 billion private funding round. Tesla remains central to his fortune with a market valuation exceeding $1 trillion. Musk has held the title of the world’s richest person multiple times since 2021, most recently reclaiming it in May 2024. Larry Ellison, co-founder and executive chairman of Oracle, ranks second with an estimated net worth of about $350.7 billion. Oracle’s stock jumped 36% in September 2025 after the company projected a 700% revenue increase in its cloud infrastructure business over four years,
robotAIenergyelectric-vehiclesaerospaceSpaceXTeslaPrickly Pear Health will showcase how it’s helping women’s brain health at TechCrunch Disrupt 2025
Prickly Pear Health, led by CEO Iman Clark, is a health tech startup focused on improving women’s brain health, particularly for women in their 30s to 50s experiencing hormonal changes that affect cognition. Clark’s inspiration came from her background working with neurodegenerative conditions and her discovery that women disproportionately suffer from Alzheimer’s, depression, anxiety, and migraines. Recognizing a gap in addressing women’s unique biology, Prickly Pear Health offers a voice-first, AI-powered companion that allows users to record daily reflections. The AI analyzes language and context to detect cognitive changes, integrating data from health trackers like Apple Health and Garmin to provide personalized insights. The company will showcase its technology at TechCrunch Disrupt 2025 in San Francisco from October 27 to 29. Clark emphasizes that traditional care often misses early signs of brain health issues in midlife women, who are frequently dismissed or misdiagnosed. Prickly Pear Health aims to fill this gap by helping women recognize
IoTAIhealth-technologywearable-devicesbrain-healthwomen's-healthdigital-healthFieldAI founder and CEO to discuss building risk-aware AI models at RoboBusiness - The Robot Report
FieldAI, a company specializing in autonomy software for industries such as construction, oil and gas, mining, and agriculture, is addressing the challenge of deploying scalable autonomous robots in complex real-world environments. At RoboBusiness 2025 in Santa Clara, CEO Dr. Ali Agha will present how FieldAI’s “physics-first” foundation models (Field Foundation Models, FFMs) are uniquely designed for embodied intelligence. Unlike traditional vision or language models adapted for robotics, FFMs are built from the ground up to handle uncertainty, risk, and physical constraints, enabling robots to make real-time decisions and navigate dynamic, unstructured settings without relying on maps, GPS, or predefined routes. This approach is already being implemented successfully in various industrial applications worldwide. Dr. Ali Agha brings nearly 20 years of experience in AI and autonomy, having led significant projects at NASA’s Jet Propulsion Laboratory, including the DARPA Subterranean Challenge and autonomous exploration missions on Mars. His expertise underpins FieldAI’s strategic
roboticsAIautonomyFieldAIrobotics-softwareindustrial-robotsRoboBusinessMeta plans to sell targeted ads based on data in your AI chats
Meta announced that starting December 16, it will use data from user interactions with its AI products to sell targeted ads across its social media platforms, including Facebook and Instagram. This update to its privacy policy applies globally except in South Korea, the UK, and the EU, where privacy laws restrict such data use. Meta plans to incorporate information from conversations with its AI chatbot and other AI features—such as those in Ray-Ban Meta smart glasses, which analyze voice recordings, pictures, and videos—into its ad targeting algorithms. For example, if a user discusses hiking with the AI, they may receive ads for hiking gear. However, sensitive topics like religion, sexual orientation, political views, health, and ethnicity will be excluded from ad targeting. Meta emphasizes that AI interaction data will only influence ads if users are logged into the same account across products, and currently, there is no opt-out option for this data use. This move reflects a broader trend among tech companies to monetize AI products, which are
IoTAItargeted-advertisingsmart-glassesdata-privacyMeta-AIuser-dataGoogle unveils AI-powered Nest indoor and outdoor cameras, and a new doorbell
At its recent Google Home event, Google introduced a new lineup of Nest smart home security devices featuring AI enhancements powered by its Gemini AI assistant. The updated products include a $149.99 Nest Cam Outdoor, a $99.99 Nest Cam Indoor, and a $179.99 Nest Doorbell, all equipped with 2K HDR video—the highest resolution Google has offered to date. These devices provide a wider field of view (152 degrees for cameras and 166 degrees for the doorbell) and improved low-light performance, with 120% greater light sensitivity and extended full-color mode during dawn and dusk. The doorbell’s aspect ratio was also changed to 1:1 to capture more detailed images of visitors and packages. The key differentiator is the integration of Gemini AI, which enhances the intelligence of notifications by providing more context rather than generic alerts. Instead of simple motion detection notices, users might receive descriptive alerts such as “dog jumps out of playpen,” accompanied by zoomed-in video
IoTsmart-homeAIsecurity-camerasNest-devicesGemini-AIvideo-surveillanceToyota adds another $1.5B to its bet on startups at every stage
Toyota is significantly expanding its investment in startups across various stages of development, committing an additional $1.5 billion to support innovation in mobility, climate, AI, and industrial automation. The company announced the creation of Toyota Invention Partners Co., a strategic investment subsidiary with about $670 million in capital focused on early-stage, Japan-based startups with a long-term investment horizon. This new entity complements Toyota’s existing venture arms—Toyota Ventures, which targets early-stage startups, and Woven Capital, which focuses on growth-stage companies. Woven Capital also launched a second $800 million fund aimed at Series B to late-stage startups advancing AI, automation, climate tech, energy, and sustainability, and has become a wholly owned Toyota subsidiary. This multi-tiered investment approach allows Toyota to support startups from the initial invention phase through growth and maturity, with the potential for successful ventures to be integrated into Toyota’s balance sheet. The strategy reflects Toyota’s deepening commitment to the startup ecosystem and its role in developing technologies
robotAIindustrial-automationadvanced-manufacturingroboticsstartup-investmentautomotive-technologyMachina Labs uses robotics, AI to customize automotive body manufacturing - The Robot Report
Machina Labs, founded in 2019 and based in Los Angeles, is revolutionizing automotive body manufacturing by replacing traditional, bulky, and expensive dies and presses with a robotic and AI-driven approach. Their RoboCraftsman platform uses advanced robotics and AI process controls to incrementally form sheet metal into customized vehicle panels rapidly, eliminating the need for dedicated tooling per model variation. This innovation significantly reduces capital costs, storage needs, and production changeover times, enabling automakers to offer customized vehicles at mass-production prices. The technology also allows the use of new metal alloys, such as titanium and nickel, which were previously difficult to form with conventional methods. The company’s approach supports on-demand, low-volume part production near assembly lines, streamlining factory workflows and enabling dynamic batching without disrupting existing manufacturing processes. This contrasts with traditional automotive manufacturing, which relies on long-term use of costly dies and molds, limiting customization and flexibility. Machina Labs initially targeted defense applications with high-mix, low-volume production but
roboticsAIautomotive-manufacturingsheet-metal-formingindustrial-robotsmanufacturing-automationcustom-vehicle-productionFormer OpenAI and DeepMind researchers raise whopping $300M seed to automate science
Periodic Labs, a new startup founded by former OpenAI and DeepMind researchers Ekin Dogus Cubuk and Liam Fedus, has emerged from stealth with an unprecedented $300 million seed funding round. Backed by prominent investors including Andreessen Horowitz, Nvidia, Jeff Dean, Eric Schmidt, and Jeff Bezos, the company aims to revolutionize scientific discovery by creating AI-driven autonomous laboratories. These labs will use robots to conduct physical experiments, collect data, and iteratively improve their processes, effectively building "AI scientists" that can accelerate the invention of new materials. The initial focus of Periodic Labs is to develop novel superconductors that outperform current materials and potentially require less energy. Beyond superconductors, the startup intends to discover a variety of new materials while simultaneously generating fresh physical-world data to feed back into AI models, addressing the limitations of existing models trained primarily on internet data. This approach marks a shift toward integrating AI with hands-on experimentation to push the boundaries of scientific research. Although Periodic Labs
robotAImaterials-scienceenergyautomationscientific-discoverysuperconductorsGlobant invests in InOrbit Series A to advance robot orchestration - The Robot Report
InOrbit Inc., a Mountain View-based company specializing in AI-powered robot operations (RobOps) software, has closed its Series A funding round led by Globant and other investors. The capital will be used to accelerate platform development and expand InOrbit’s presence in key industries such as manufacturing, logistics, retail, and hospitality. InOrbit aims to address challenges like labor shortages and supply chain risks by providing a robot orchestration platform that integrates robots, human workers, and AI agents. The company’s software acts as a “central nervous system” for robot fleets, enabling autonomous decision-making and adaptive responses in real-world environments, with customers including Colgate-Palmolive and Genentech. The partnership between InOrbit and Globant builds on their previous collaboration, with Globant integrating InOrbit’s RobOps software into its Robotics Studio and offering it as part of its digital transformation services. Globant emphasizes that InOrbit’s platform complements existing enterprise systems such as WMS and ERP, enhancing orchestration of diverse
robotroboticsAIautomationrobot-orchestrationenterprise-softwareautonomous-robotsHance will demo its kilobyte-size AI audio processing software at TechCrunch Disrupt 2025
Norwegian startup Hance is showcasing its ultra-compact AI-driven audio processing software at TechCrunch Disrupt 2025. The company has developed models as small as 242 kB that run on-device with just 10 milliseconds of latency, enabling real-time noise reduction, sound separation, echo and reverb removal, and speech clarity enhancement. This technology is particularly valuable in high-stakes environments like Formula One racing, where clear communication is critical, and has already attracted clients such as Intel and Riedel Communications, the official radio supplier to F1. Hance’s team, including co-founders with deep audio industry experience, trained their AI models on a diverse range of high-quality sounds, from F1 car roars to volcanic eruptions. Their software’s small size and energy efficiency allow it to operate on various devices without relying on cloud processing, making it suitable for professional applications in sports broadcasting, law enforcement, and defense. The company is actively partnering with chipmakers like Intel to optimize
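The 10-millisecond figure quoted above corresponds to very small processing frames: at a 48 kHz sample rate, a 10 ms frame is 480 samples. The sketch below is a hypothetical frame-by-frame streaming loop (not Hance's API) showing how an on-device model would be applied within that budget; the trivial noise gate stands in for the real model.

```python
# Hypothetical streaming loop (not Hance's API): process audio in 10 ms frames.
import numpy as np

SAMPLE_RATE = 48_000
FRAME_MS = 10
FRAME_SAMPLES = SAMPLE_RATE * FRAME_MS // 1000  # 480 samples per 10 ms frame


def denoise_frame(frame: np.ndarray) -> np.ndarray:
    """Stand-in for an on-device model; here, a trivial hard noise gate."""
    threshold = 0.02
    return np.where(np.abs(frame) < threshold, 0.0, frame)


def process_stream(audio: np.ndarray) -> np.ndarray:
    """Apply the model one frame at a time, as a real-time pipeline would."""
    out = audio.copy()  # any tail shorter than one frame passes through untouched
    for start in range(0, len(audio) - FRAME_SAMPLES + 1, FRAME_SAMPLES):
        out[start:start + FRAME_SAMPLES] = denoise_frame(
            audio[start:start + FRAME_SAMPLES])
    return out


# One second of test audio: a tone plus low-level noise.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
noisy = 0.5 * np.sin(2 * np.pi * 220 * t) + 0.01 * np.random.randn(SAMPLE_RATE)
clean = process_stream(noisy)
print(clean.shape, FRAME_SAMPLES)
```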
AIaudio-processingenergy-efficient-softwareedge-computingneural-processing-unitsreal-time-audio-enhancementnoise-reductionWhy you can’t miss the aerospace content at TechCrunch Disrupt 2025
TechCrunch Disrupt 2025 will feature significant aerospace content presented by the Aerospace Corporation, emphasizing how artificial intelligence (AI) is transforming the space economy beyond traditional hardware like rockets and satellites. The event includes two key sessions on October 27 that highlight startups addressing critical challenges in space exploration, orbital intelligence, and space infrastructure through AI-driven innovations. These startups are developing solutions for automating mission planning, preventing satellite collisions, and optimizing communications and servicing in orbit, showcasing early-stage companies tackling complex, high-stakes problems in the space industry. The second session focuses on "AI at the edge," addressing the unique constraints of space environments such as latency and bandwidth limitations that make cloud computing impractical. It highlights advancements in autonomous systems, resilient computing architectures, and onboard intelligence that enable spacecraft to process data in real-time and operate more safely and efficiently. Together, these sessions provide insight into how AI and cutting-edge technology are converging to redefine space missions and infrastructure, positioning the space sector as a rapidly evolving
robotAIaerospaceautonomous-systemsspace-technologysatelliteedge-computingRing cameras can now recognize faces and help to find lost pets
Amazon’s Ring announced several new AI-powered features and products aimed at enhancing home security and convenience. The headline feature, “Familiar Faces,” uses artificial intelligence to recognize friends and family members, allowing users to receive alerts only for unfamiliar visitors and reduce unnecessary notifications. This feature integrates with the new Alexa+ Greetings system, which acts as a smart doorbell assistant by providing personalized greetings and managing visitor interactions. Additionally, Ring introduced “Search Party,” an AI-driven tool to help find lost pets by connecting Ring users in the same neighborhood to report sightings voluntarily. Search Party will initially support dogs starting in November, with plans to expand to cats and other pets. These new capabilities will be available on Ring’s upcoming Retinal 2K and Retinal 4K product lines, which feature “Retinal Vision” technology designed to optimize video quality continuously using AI. The Retinal 2K devices include the Indoor Cam Plus ($59.99) and Wired Doorbell Plus ($179.99), while the
IoTsmart-homeAIfacial-recognitionsmart-securitypet-trackingAmazon-RingRoot Access develops tool for engineers of embedded systems, raises funding - The Robot Report
Root Access Inc., a New York-based startup co-founded by Ryan Eppley and Samarpita Chowdhury, has developed an AI-native tool aimed at engineers working on firmware for embedded systems, such as heavy machinery, robotics, and mission-critical hardware. Recognizing that firmware development is often overlooked compared to hardware and software, Root Access seeks to streamline and enhance this process by enabling engineers to validate and configure components more efficiently using their Hideout integrated development environment (IDE). The company recently raised $2.1 million in pre-seed funding to advance its technology. The founders bring complementary expertise: Eppley has a diverse background spanning agriculture, competitive sports, philosophy, and technology roles at Oracle and other ventures, while Chowdhury has deep hardware and firmware experience, including military motherboard design and multiple patents. They identified a gap between PCB design tools and other parts of the tech stack, aiming to fill it with their AI-driven solution. Since its incorporation in 2024, Root Access has
robotembedded-systemsAIfirmware-developmenthardware-engineeringroboticsmission-critical-systemsZoox chooses Washington DC as its next autonomous vehicle testbed
Amazon-owned autonomous vehicle company Zoox has selected Washington D.C. as its next testing ground for self-driving technology, beginning with mapping the city’s streets using manually driven Toyota Highlanders equipped with its sensors and software. The company plans to start autonomous vehicle testing with human safety operators later this year, marking Washington D.C. as its eighth test site after expanding beyond its original Silicon Valley base to cities including Austin, Atlanta, Los Angeles, Las Vegas, Miami, San Francisco, and Seattle. Zoox intends to grow its testing fleet gradually but has not disclosed specific numbers. Zoox is developing a commercial robotaxi service using custom-built autonomous vehicles that lack traditional controls like steering wheels or pedals. The company recently launched testing of these vehicles in Las Vegas, a key market where it has maintained a presence. Regulatory progress includes a National Highway Traffic Safety Administration exemption granted in August allowing Zoox to demonstrate its robotaxis on public roads for research purposes. Zoox has filed additional applications to expand this authorization, aiming
robotautonomous-vehiclesself-driving-carsrobotaxitransportation-technologyAImobilityNVIDIA launches Newton physics engine and GR00T AI at CoRL 2025 - The Robot Report
NVIDIA has introduced several advancements to accelerate robotics research, unveiling the beta release of Newton, an open-source, GPU-accelerated physics engine managed by the Linux Foundation. Developed collaboratively with Google DeepMind and Disney Research, Newton is built on NVIDIA’s Warp and OpenUSD frameworks and is designed to simulate physical AI bodies. Alongside Newton, NVIDIA announced the latest version of the Isaac GR00T N1.6 robot foundation model, soon to be available on Hugging Face. This model integrates Cosmos Reason, an open, customizable vision language model (VLM) that enables robots to convert vague instructions into detailed plans by leveraging prior knowledge, common sense, and physics, thus enhancing robots’ ability to reason, adapt, and generalize across tasks. At the Conference on Robot Learning (CoRL) 2025 in Seoul, NVIDIA highlighted Cosmos Reason’s role in enabling robots to handle ambiguous or novel instructions through multi-step inference and AI reasoning, akin to how language models process text. This capability is
roboticsAIphysics-engineNVIDIArobot-simulationmachine-learningIsaac-GR00TNew disaster-response robot hauls 330-lb across rubble to save lives
Researchers in Germany have developed ROMATRIS, an AI-supported semi-autonomous robot designed to aid disaster relief efforts by transporting heavy equipment—up to 150 kilograms (approximately 330 pounds)—across challenging and hazardous terrain inaccessible to conventional vehicles or stretchers. The project is a collaboration between the German Research Center for Artificial Intelligence (DFKI) and the Federal Agency for Technical Relief (THW). ROMATRIS combines rugged mechanical design with advanced sensor technologies, including depth cameras, ultrasonic and laser sensors, and neural networks that enable gesture recognition and autonomous navigation. This allows emergency personnel to control the robot intuitively via hand gestures or remote control, or to set it to follow or shuttle modes for autonomous operation. The robot was tested extensively in field scenarios at THW training centers, with input from over 20 volunteers across 14 THW local associations, ensuring it meets real-world civil protection needs. The system demonstrated its capability to transport bulky equipment such as generators, pumps, and hoses across rough terrain
robotroboticsdisaster-responseAIautonomous-navigationgesture-recognitionemergency-servicesThe TechCrunch Disrupt Stage revealed: Behold the first look
TechCrunch Disrupt 2025 will feature the Disrupt Stage as the central hub for major tech announcements, startup competitions, and industry insights. Highlights include the Startup Battlefield competition, where founders compete for a $100,000 prize and exposure, judged by prominent venture capitalists like Aileen Lee and Kirsten Green. The event will showcase influential speakers such as Alphabet’s Astro Teller discussing AI and moonshot projects, Netflix CTO Elizabeth Stone on streaming innovation, and Vinod Khosla offering candid predictions on tech’s future. Other notable presentations include Sequoia Capital’s Roelof Botha on emerging venture trends, Waymo co-CEO Tekedra Mawakana on autonomous vehicle realities, and Slate Auto unveiling its first fully customizable electric truck. Additional sessions will cover diverse topics such as cloud computing growth with Box CEO Aaron Levie, consumer AI scaling by Phia founders including Phoebe Gates, and investor Kevin Rose’s perspectives on reinvention and future opportunities. The event takes place October 27
energyelectric-vehiclesautonomous-vehiclesAItech-startupsventure-capitalinnovationIn a first, scientists observe short-range order in semiconductors
Scientists from Lawrence Berkeley National Laboratory and George Washington University have, for the first time, directly observed short-range atomic order (SRO) in semiconductors, revealing hidden patterns in the arrangement of atoms like germanium, tin, and silicon inside microchips. This breakthrough was achieved by combining advanced 4D scanning transmission electron microscopy (4D-STEM) enhanced with energy filtering to improve contrast, and machine learning techniques including neural networks and large-scale atomic simulations. These methods allowed the team to detect and identify recurring atomic motifs that were previously undetectable due to weak signals and the complexity of atomic arrangements. The discovery of SRO is significant because it directly influences the band gap of semiconductors, a critical property that governs their electronic behavior. Understanding and controlling these atomic-scale patterns could enable the design of materials with tailored electronic properties, potentially revolutionizing technologies such as quantum computing, neuromorphic devices, and advanced optical sensors. While this research opens new avenues for atomic-scale material engineering, challenges
materialssemiconductorsatomic-ordermicroscopyAImachine-learningelectronic-propertiesUS to soon detect hostile drones at longer ranges using new system
The United States is set to enhance its ability to detect hostile drones at longer ranges through a new AI-enabled detection system demonstrated by L3Harris Technologies and Shield AI. This system integrates L3Harris’ WESCAM MX-Series electro-optical/infrared sensors with Shield AI’s Tracker counter-UAS software, enabling faster and more effective identification of unmanned aerial vehicles (UAVs), even when partially obscured by obstacles like buildings or clouds. The technology addresses the growing threat posed by increasingly numerous and complex drone adversaries, requiring quicker, covert responses to protect military operators. The next development phase involves refining airborne object behavior models to improve tracking performance across air, land, and maritime domains during both day and night operations. This AI-powered capability will be incorporated into L3Harris’ VAMPIRE Counter-Unmanned System, designed specifically to defend against small drones. The WESCAM MX-Series sensors, known for their multi-spectral, high-sensitivity EO/IR surveillance and
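One generic ingredient of tracking targets that are "partially obscured by obstacles like buildings or clouds" is letting a track coast on its own motion prediction while detections are missing. The toy below shows that pattern with a constant-velocity track; it is a generic technique, not Shield AI's Tracker software.

```python
# Toy occlusion-tolerant track: predict with constant velocity every frame,
# coast when the EO/IR detector returns nothing, drop the track after too
# many misses. Blend weights and limits are assumed for illustration.
import numpy as np

class CoastingTrack:
    def __init__(self, pos, vel, max_coast_frames=30):
        self.pos = np.asarray(pos, dtype=float)
        self.vel = np.asarray(vel, dtype=float)
        self.missed = 0
        self.max_coast = max_coast_frames   # frames to keep predicting (assumed)

    def update(self, detection, dt=1.0 / 30):
        self.pos = self.pos + self.vel * dt          # predict forward one frame
        if detection is None:                        # occluded: coast on prediction
            self.missed += 1
            return self.missed <= self.max_coast     # False -> drop the track
        detection = np.asarray(detection, dtype=float)
        self.vel = 0.7 * self.vel + 0.3 * (detection - self.pos) / dt  # blend correction
        self.pos = detection
        self.missed = 0
        return True

track = CoastingTrack(pos=[0.0, 0.0], vel=[10.0, 0.0])
for frame, det in enumerate([[0.4, 0.0], None, None, [1.3, 0.1]]):
    alive = track.update(det)
    print(frame, alive, track.pos.round(2))
```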
robotAIdrone-detectionunmanned-aerial-systemselectro-optical-sensorsinfrared-sensorscounter-UAS-technologyAfter 5 Years of Driving An EV Every Day, Driving Old ICE Cars Is Just Painful - CleanTechnica
The article recounts the author’s experience transitioning from driving an electric vehicle (EV) daily for five years to using old internal combustion engine (ICE) cars for the past eight months due to unforeseen circumstances. On Christmas Eve 2024, the author’s Nissan LEAF was severely damaged in a multi-vehicle accident caused by a collision involving a minibus taxi and a lorry at a traffic light intersection. The LEAF was declared a total loss by the insurance company, which offered either a like-for-like replacement or a cash payout. Due to a subsequent medical emergency, the author had to use the insurance money for medical bills and was left without a car. During this period without an EV, the author relied on various old ICE vehicles lent by friends and family. The experience was notably frustrating, especially coming from the convenience and driving dynamics of a BEV. The author highlights specific pain points such as the lack of regenerative braking in ICE vehicles, which made driving feel awkward after years of EV use
energyelectric-vehiclesEVNissan-LEAFtraffic-lightsAItransportation-technologyRobots, mergers and acquisitions with Peter Finn
In Episode 214 of The Robot Report Podcast, hosts Steve Crowe and Mike Oitzman discuss key developments in robotics with guest Peter Finn, Managing Director at Brown Gibbons Lang & Company (BGL). Finn provides insights into the post-COVID industrial technology landscape, highlighting the growing influence of AI and robotics, the challenges and opportunities in the sector, and the importance of adaptability amid rapid technological change. The conversation also covers trends in mergers and acquisitions within robotics, as well as the emerging potential of humanoid robots. The episode also reviews major robotics news, including a U.S. national security investigation into imports of medical devices, robotics, and industrial machinery aimed at reducing reliance on overseas supply chains. This has raised concerns about potential tariffs on imported robots and their impact on reshoring manufacturing efforts, especially since most industrial robots used in the U.S. are currently imported. Additionally, the IEEE Humanoid Study Group released a framework for humanoid robot standards, addressing classification, stability, and human-robot
roboticsindustrial-robotsAIhumanoid-robotsmergers-and-acquisitionsrobotics-standardsmanufacturing-reshoringLQMs vs. LLMs: when AI stops talking and starts calculating
The article discusses the emerging role of Large Quantitative Models (LQMs) as a new class of AI systems that differ fundamentally from Large Language Models (LLMs). Unlike LLMs, which are trained on internet text to generate language-based outputs, LQMs are purpose-built to work with numerical, scientific, and physical data, enabling them to simulate complex real-world systems in fields like chemistry, biology, and physics. Fernando Dominguez, Head of Strategic Partnerships at SandboxAQ—a company at the forefront of AI and quantum technology integration—explains that LQMs can generate novel data not available in existing datasets, such as simulating trillions of molecular interactions. This capability allows LQMs to accelerate drug discovery, financial modeling, and navigation, offering a more quantitative and practical approach to AI-driven innovation. A key example highlighted is SandboxAQ’s collaboration with UCSF’s Institute for Neurodegenerative Diseases, where LQMs enabled the simulation of over 5 million molecular compounds in
materialsAIquantum-computingdrug-discoverysimulationpharmaceuticalscybersecurityFrom autonomous running coach to mini-scooter, Trego does it all
The Trego, developed by YUPD and Wooks designers, is an innovative AI-powered autonomous personal vehicle designed to support runners throughout their entire exercise routine. It operates in two main modes: AI Mode and Mobility Mode. In AI Mode, Trego runs alongside the user, using sensors to adapt to their pace and running conditions, helping maintain rhythm and efficiency. Mobility Mode transforms Trego into a mini-scooter with foldable handlebars, footrests, and a built-in seat, allowing users to comfortably travel to and from their running locations without walking. Equipped with a built-in display, Trego provides real-time running metrics such as distance, pace, and calories burned, while also allowing users to input or confirm destinations. Safety is prioritized with front and rear cameras and sensors that detect obstacles, pedestrians, and vehicles, automatically adjusting the device’s path to avoid collisions in both modes. Additionally, Trego features a storage compartment integrated into the seat for securing essentials, and a dedicated docking and
robotAIautonomous-vehiclepersonal-mobilitysensorselectric-scootersmart-deviceHow robotics startups can avoid costly IP mistakes
The article emphasizes the critical importance of intellectual property (IP) management for robotics startups, highlighting that strong IP protection can safeguard innovations, deter competitors, and attract investment. Robotics companies operate at the intersection of hardware, software, AI, and data, making a comprehensive IP strategy essential. Key forms of protection include patents, which guard core technologies and incremental improvements; trade secrets, especially for algorithms and manufacturing know-how; trademarks for brand identity; and copyrights for software. A well-rounded approach integrating these protections can differentiate startups that successfully scale. For software-based innovations, particularly AI-driven robotics, the article advises combining patents, trade secrets, and copyrights to cover unique technical solutions and code. Startups must avoid common pitfalls such as public disclosure before filing patents, neglecting incremental improvements, failing to secure IP ownership from contractors, and undervaluing trade secrets. Conducting freedom-to-operate analyses and patent landscape reviews helps avoid infringement on patents held by established companies and identifies innovation opportunities. Finally, an international patent strategy
roboticsintellectual-propertyAIpatentstrade-secretsstartupsinnovationAMP Robotics acquires Portsmouth recycling operations from RDS of Virginia - The Robot Report
AMP Robotics Corp., a developer of AI-powered robotic sorting technology for waste and recycling, has acquired the Portsmouth recycling operations of RDS of Virginia LLC, which has served South Hampton Roads since 2005. Since late 2023, AMP has operated its AMP ONE system at the Portsmouth facility, autonomously processing up to 150 tons of municipal solid waste (MSW) daily with over 90% uptime. The system separates recyclables and organic materials from bagged trash, enabling the facility to divert more than 50% of landfill-bound waste when combined with organics management and mixed recyclables sorting. AMP also plans to expand the single-stream recycling operations inherited from RDS Portsmouth. Founded in 2014, AMP Robotics has identified 150 billion items and sorted over 2.5 million tons of recyclables using its AI platform. The company raised $91 million in funding at the end of 2024 and is transitioning from solely a technology developer to an operating company by acquiring and managing
roboticsAIwaste-managementrecycling-technologyAMP-Roboticsmunicipal-solid-wasteautomationAlibaba bets big on AI with Nvidia tie-up, new data center plans
Alibaba is intensifying its focus on artificial intelligence, unveiling a major partnership with Nvidia, plans to expand its global data center network, and launching its most advanced AI models at the 2025 Apsara Conference. The collaboration with Nvidia will integrate Physical AI tools into Alibaba’s cloud platform, enhancing capabilities in data synthesis, model training, simulation, and testing for applications like robotics and autonomous driving. This move is part of Alibaba’s broader strategy to compete aggressively in the AI sector, which has driven its Hong Kong and U.S.-listed shares up nearly 10%. CEO Eddie Wu emphasized that Alibaba will increase its AI investment beyond the already committed 380 billion yuan ($53 billion). Alibaba also announced plans to open new data centers in Brazil, France, the Netherlands, and additional sites across Mexico, Japan, South Korea, Malaysia, and Dubai, expanding its existing network of 91 data centers in 29 regions. This expansion aims to meet growing demand from AI developers and enterprise customers worldwide, positioning Alibaba
AINvidiaData-CentersCloud-ComputingRoboticsAutonomous-DrivingArtificial-IntelligenceStep into the future: The full AI Stage at TechCrunch Disrupt 2025
The AI Stage at TechCrunch Disrupt 2025, scheduled for October 27–29 in San Francisco, will showcase leading innovators and companies shaping the future of artificial intelligence across diverse domains such as generative AI, developer tools, autonomous vehicles, creative AI, and national security. Attendees, especially founders, will gain early insights into emerging technologies, strategic lessons, and firsthand knowledge from top AI teams including Character.AI, Hugging Face, Wayve, and others. The event features a comprehensive agenda with keynotes, breakouts, roundtables, and networking opportunities designed to explore AI’s evolving landscape in scaling, investing, and building. Highlights include discussions on the future of AI-driven search with Pinecone’s CEO Edo Liberty, the evolving AI infrastructure stack with Hugging Face’s Thomas Wolf, and the practical impact of AI on software development led by JetBrains’ CEO Kirill Skrygan. Autonomous systems and physical AI will be explored by leaders from Wayve, Apptronik,
robotautonomous-vehiclesAIartificial-intelligenceself-driving-technologyhumanoid-robotsAI-innovationHow Al Gore used AI to track 660M polluters
Former Vice President Al Gore, through the nonprofit Climate Trace which he co-founded, has launched an AI-powered tool that tracks fine particulate matter (PM2.5) pollution from over 660 million sources globally. This initiative aims to provide precise, accessible data on pollution levels and sources, addressing a significant public health crisis linked to conventional air pollution. The project was inspired by Gore’s experience with communities in Memphis, Tennessee, affected by pollution from a nearby refinery and a crude oil pipeline, highlighting the need for transparent monitoring of pollutant plumes near populated areas. The tool, developed in partnership with Carnegie Mellon University, uses AI to manage and visualize vast amounts of pollution data, making it possible to track emissions worldwide—something previously unimaginable without advanced technology. Scientific research has increasingly revealed the extensive health risks of PM2.5 exposure beyond lung cancer and heart disease, including strokes and other serious conditions causing hundreds of thousands of deaths annually in the U.S. Gore hopes that raising awareness of these health impacts,
energyAIpollution-trackinggreenhouse-gas-emissionsclimate-changepublic-healthfossil-fuelsWeRide Robotaxi Service Coming to Singapore - CleanTechnica
Chinese autonomous vehicle company WeRide is set to launch its robotaxi service in Singapore, marking the first such service to serve residential areas in the city-state. The service, named Ai.R (Autonomously Intelligent Ride), will operate initially with a fleet of 11 vehicles, including the Robotaxi GXR, which accommodates up to five passengers, and the Robobus, which can carry up to eight passengers. The launch is in collaboration with Grab, a major ride-hailing company, and will feature Grab safety operators onboard as the service begins. In addition to this expansion, WeRide recently joined the Nasdaq Golden Dragon China Index, aiming to increase its visibility and attract more investment. The company’s stock was listed on Nasdaq on October 25, 2024. This move aligns with WeRide’s broader strategy of growth and partnerships in the autonomous driving sector, as evidenced by its recent collaboration with Autonomous A2Z. The Singapore launch represents a significant step in the global proliferation of robotaxi services,
robotautonomous-vehiclesrobotaxiAItransportation-technologySingaporeWeRideThe Oakland Ballers let an AI manage the team. What could go wrong?
The Oakland Ballers, an independent Pioneer League baseball team formed in response to the departure of the Oakland A’s, recently experimented with letting an AI manage their team during a game. Drawing on over a century of baseball data and analytics, including Ballers-specific information, the AI—developed by the company Distillery and based on OpenAI’s ChatGPT—was trained to emulate the strategic decisions of the team’s human manager, Aaron Miles. This experiment leveraged baseball’s inherently data-driven nature and the slower pace of play, which allows for analytical decision-making after each pitch. The AI’s management closely mirrored the choices that Miles would have made, including pitching changes, lineup construction, and pinch hitters, with only one override needed due to a player’s illness. This demonstrated that while AI can optimize decisions by recognizing patterns in data, human ingenuity and judgment remain essential. The Ballers’ willingness to pilot such technology reflects their unique position as a minor league team with major league aspirations and creative flexibility, often
AIsports-technologydata-analyticsmachine-learningbaseballartificial-intelligencesports-managementGoogle’s Gemini AI is coming to your TV
Google is expanding its AI assistant, Gemini, to over 300 million active Android TV OS-powered devices, starting with the TCL QM9K series. This integration aims to enhance the TV viewing experience by helping users find shows or movies, settle on content that suits multiple viewers’ interests, catch up on missed episodes, and provide reviews to aid viewing decisions. Beyond TV-related queries, Gemini will support a wide range of functions similar to those available on smartphones, such as homework help, vacation planning, and skill learning. Google emphasizes that the introduction of Gemini does not replace existing Google Assistant capabilities; traditional voice commands will still function as before. The rollout will continue throughout the year to additional devices, including the Google TV Streamer, Walmart onn 4K Pro, and various 2025 models from Hisense and TCL, with more features planned for future updates. This move represents a significant step in integrating advanced AI assistance directly into the TV platform to offer a more interactive and versatile user experience.
IoTAIGoogle-TVsmart-devicesartificial-intelligenceAndroid-TVvoice-assistantBidirectional Charging, AI, & Semiconductors — Volkswagen's IAA Announcements - CleanTechnica
At the IAA Mobility 2025 auto show in Munich, Volkswagen made several significant announcements emphasizing innovation in bidirectional charging, artificial intelligence, and semiconductor procurement. Central to Volkswagen's strategy is a new procurement model developed in partnership with Rivian and Volkswagen Group Technologies, covering over 50 semiconductor categories including microcontrollers and power transistors. This initiative aims to streamline semiconductor sourcing, reduce costs, and ensure supply chain resilience, reflecting Volkswagen’s ambition to become a global leader in automotive technology. The company also hosted the 4th Semiconductor Summit, bringing together key industry players to strengthen collaboration between automotive and semiconductor sectors. Volkswagen subsidiary Elli introduced a pilot project for bidirectional charging in private homes, featuring a legally compliant 11 kW wallbox that connects electric vehicles with home solar systems through a modular software platform. This technology can potentially reduce charging costs by up to 75% and supports energy independence. Elli is recruiting participants in Germany for this pilot and plans to integrate EV battery storage into a virtual power
energybidirectional-chargingsemiconductorselectric-vehiclesautomotive-technologyAIIoTShould We Be Paying More Attention To Musk's Fascination With AI? - CleanTechnica
The article from CleanTechnica explores Elon Musk’s deepening focus on artificial intelligence through his company xAI and its integration with his broader business empire, including Tesla and the social media platform X. Musk envisions a central AI intelligence layer that will drive monumental successes across his ventures, aiming to leverage AI to transform industries and significantly increase Tesla’s market value from about $1.1 trillion to $8.5 trillion over the next decade. This ambitious plan is tied to Musk’s $10 billion compensation package, which requires him to meet stringent market capitalization and operational milestones while remaining actively involved in Tesla’s leadership. A key strategic move highlighted is the merger of xAI with X, valuing the AI company at $80 billion and the social media platform at $33 billion. This merger allows xAI’s chatbot, Grok, to access a vast, real-time dataset of human interactions from social media, distinguishing it from other AI systems trained on curated data. Grok is already integrated into Tesla’s
AITeslaRoboticsAutonomous-VehiclesxAIHumanoid-RobotsArtificial-IntelligenceNvidia eyes $500M investment into self-driving tech startup Wayve
Nvidia CEO Jensen Huang visited the UK with a commitment to invest £2 billion ($2.6 billion) to boost the country’s AI startup ecosystem, with a potential $500 million strategic investment targeted at Wayve, a UK-based self-driving technology startup. Wayve has signed a letter of intent with Nvidia to explore this investment as part of its next funding round, following Nvidia’s participation in Wayve’s $1.05 billion Series C round in May 2024. The investment is aligned with Nvidia’s broader AI startup funding initiative, which also involves venture capital firms like Accel and Balderton. Wayve is advancing its self-driving technology through a data-driven, self-learning approach that does not rely on high-definition maps, making it adaptable to existing vehicle sensors such as cameras and radar. Wayve’s autonomous driving platform, which has been developed in close collaboration with Nvidia since 2018, currently uses Nvidia GPUs in its Ford Mach E test vehicles. The company recently unveiled its third
robotautonomous-vehiclesself-driving-technologyNvidiaAImachine-learningautomotive-technologyChina plans defense system with laser, missiles to counter drone swarms
China is developing an advanced, multilayered naval defense system designed to protect warships from large-scale drone swarm attacks, which pose a significant threat to expensive military vessels. Spearheaded by Professor Guo Chuanfu and his team at the PLA Navy’s Dalian Naval Academy, the proposed Naval Counter-Swarm System integrates lasers, microwave beams, and hypersonic missiles to detect and disrupt thousands of cheap, fast-moving drones. This system, described as a “digital-age Great Wall,” leverages a fused network of satellite, airborne, shipborne, and sea-surface sensors—covering radar, infrared, optical, radio frequency, and acoustic technologies—coordinated by AI to maintain continuous tracking of low-signature targets. The research highlights the vulnerability of traditional warship defenses against overwhelming drone swarms, which could potentially destroy stealth vessels. To address this, the system employs a “dynamic kill net” approach using software-defined networking and human-machine teaming, enabling real-time reassignment of
robotAIdefense-technologydrone-swarmlaser-weaponshypersonic-missilesmilitary-IoTMeta CTO explains why the smart glasses demos failed at Meta Connect — and it wasn’t the Wi-Fi
At Meta Connect, multiple demos of Meta’s new smart glasses—including an upgraded Ray-Ban Meta model and other variants—failed during live presentations, leading to visible technical difficulties. Initially attributed to Wi-Fi issues, Meta CTO Andrew Bosworth later clarified that the problems were due to resource management errors and a software bug. Specifically, when a cooking demo triggered the “Live AI” feature, it inadvertently activated the feature on every pair of Ray-Ban Meta glasses in the building, overwhelming Meta’s development server and effectively causing a self-inflicted distributed denial-of-service (DDoS) scenario. This overload was not anticipated during rehearsals, which involved fewer devices. The failed WhatsApp video call demo was caused by a previously unseen “race condition” bug, where the glasses’ display went to sleep just as the call arrived, preventing the incoming call notification from appearing. Bosworth emphasized that this bug was rare, has since been fixed, and does not reflect the product’s overall reliability. Despite the glitches
IoTsmart-glassesMetaAIwireless-communicationwearable-technologynetwork-issues4D1 launches T2 for rugged, millimeter-level 3D indoor positioning - The Robot Report
4D1 has launched the T2, a precise indoor positioning system designed to deliver millimeter-level 3D positioning with six degrees of freedom (6DoF) for industrial environments such as factories and process-centric industries. The T2 system addresses common challenges in indoor positioning like accuracy loss, drift, and bulky hardware by providing drift-free, real-time location tracking that includes full orientation for both robots and human operators. Its rugged, compact design is IP54-rated for dust and water resistance, making it suitable for harsh industrial settings. The system uses advanced sensor fusion, combining ultrasonic signals with an inertial measurement unit (IMU), enabling calibration-free operation and rapid deployment with existing industrial equipment. 4D1 emphasizes that T2 facilitates seamless collaboration between humans, robots, and AI systems, enhancing efficiency, safety, and productivity on the shop floor. The system generates AI-ready operational data that supports task validation, faster workforce upskilling, and actionable insights, contributing to smarter decision-making and AI-driven
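A minimal way to picture the ultrasonic-plus-IMU fusion described for the T2 is a complementary blend: the IMU gives smooth, high-rate motion but drifts, while the ultrasonic fix is lower-rate but drift-free. The 1-D sketch below uses assumed rates, noise levels, and blend weights; it is not 4D1's algorithm or published parameters.

```python
# 1-D complementary-style fusion sketch: 100 Hz IMU dead reckoning anchored
# by a 10 Hz drift-free range fix. All rates, noise levels, and weights are
# assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
dt, steps = 0.01, 1000                       # 100 Hz IMU for 10 s (assumed)
t = np.arange(steps) * dt
true_pos = np.sin(t)                         # illustrative 1-D trajectory
true_acc = -np.sin(t)                        # analytic second derivative

pos_est, vel_est = 0.0, 1.0                  # start at the true state
last_fix, last_fix_k = 0.0, 0
for k in range(1, steps):
    acc = true_acc[k] + rng.normal(0, 0.05) + 0.02     # noisy, slightly biased IMU accel
    vel_est += acc * dt                                # high-rate dead reckoning (drifts)
    pos_est += vel_est * dt
    if k % 10 == 0:                                    # 10 Hz drift-free ultrasonic fix
        fix = true_pos[k] + rng.normal(0, 0.005)
        fix_dt = (k - last_fix_k) * dt
        vel_est = 0.5 * vel_est + 0.5 * (fix - last_fix) / fix_dt   # anchor velocity
        pos_est = 0.2 * pos_est + 0.8 * fix                         # anchor position
        last_fix, last_fix_k = fix, k

print(f"final position error: {abs(pos_est - true_pos[-1]) * 1000:.1f} mm")
```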
robotindoor-positioningindustrial-automationAIcollaborative-robotssensor-fusionIIoTAnti-Trump Protesters Take Aim at ‘Naive’ US-UK AI Deal
Thousands of protesters gathered in central London to oppose President Donald Trump’s second state visit to the UK, with many expressing broader concerns about the UK government’s recent AI deal with the US. The demonstrators included environmental activists who criticized the deal’s lack of transparency, particularly regarding the involvement of tech companies and the environmental impact of expanding data centers. Central to the deal is the British startup Nscale, which plans to build more data centers expected to generate over $68 billion in revenue in six years, despite concerns about their high energy and water consumption and local opposition. Critics, including Nick Dearden of Global Justice Now and the Stop Trump Coalition, argue that the deal has been presented as beneficial without sufficient public scrutiny. They worry that the UK government may have conceded regulatory controls, such as digital services taxes and antitrust measures, to US tech giants, potentially strengthening monopolies rather than fostering sovereign British AI development or job creation. Protesters fear that the deal primarily serves the interests of large US corporations rather
IoTAIdata-centersenergy-consumptionsupercomputingtechnology-policyenvironmental-impactAI and the Future of Defense: Mach Industries’ Ethan Thornton at TechCrunch Disrupt 2025
At TechCrunch Disrupt 2025, Ethan Thornton, CEO and founder of Mach Industries, highlighted the transformative role of AI in the defense sector. Founded in 2023 out of MIT, Mach Industries aims to develop decentralized, next-generation defense technologies that enhance global security by integrating AI-native innovation and startup agility into an industry traditionally dominated by legacy players. Thornton emphasized the importance of rethinking fundamental infrastructure to build autonomous systems and edge computing solutions that operate effectively in high-stakes environments. The discussion also explored the broader implications of AI in defense, including the emergence of dual-use technologies that blur the lines between commercial and military applications. Thornton addressed critical topics such as funding, regulation, and ethical responsibility at the intersection of technology and geopolitics. With rising global tensions and increased defense investments, AI is not only powering new capabilities but also reshaping global power dynamics, security strategies, and sovereignty. The session underscored the growing role of AI startups in national defense and the urgent need to adapt to
robotAIautonomous-systemsdefense-technologyedge-computingmilitary-innovationstartup-technologyMeet Oto: Las Vegas hotel's humanoid robot chats with, helps guests
The Otonomus Hotel in Las Vegas has introduced Oto, a multilingual humanoid robot powered by artificial intelligence, as a central feature of its futuristic hospitality experience. Positioned near Allegiant Stadium, the hotel aims to attract both tech enthusiasts and travelers by blending cutting-edge AI technology with personalized guest services. Oto interacts with guests through conversation, jokes, and local recommendations, speaking over 50 languages to accommodate international visitors. Beyond entertainment, Oto efficiently handles practical tasks such as check-ins, room service, and guest requests, allowing hotel staff to focus on other duties while enhancing operational efficiency. This innovative approach positions Las Vegas as a leader in integrating AI into frontline hospitality roles, offering a unique attraction that could boost tourism in a city reliant on both domestic and international visitors. Early guest feedback has been positive, suggesting that AI-driven services like Oto provide more than novelty—they represent a viable enhancement to traditional hotel operations. The Otonomus Hotel exemplifies how technology can complement conventional service, delivering
robothumanoid-robotAIhospitality-technologycustomer-service-automationmultilingual-AIhotel-automationIs The Pursuit Of AI & Humanoid Robots Based On A Flawed Approach? - CleanTechnica
The article from CleanTechnica discusses the current surge in interest around artificial intelligence (AI) and humanoid robots, highlighting both the enthusiasm and potential pitfalls of this technological pursuit. AI has become a widespread buzzword, with companies promoting AI-driven solutions for various tasks, from composting to innovative devices like an electric fork. Alongside AI, humanoid robots—machines designed to resemble humans but without human limitations—are gaining attention for their potential to perform tasks continuously without breaks or benefits, powered by rechargeable batteries. A significant focus of the article is on OpenAI’s emerging involvement in humanoid robotics. Although OpenAI has not officially announced a robotics project, it has been actively recruiting experts in robotics, tele-operation, and simulation, indicating a strategic move into this field. The company’s job postings suggest ambitions to develop general-purpose robots capable of operating in dynamic, real-world environments, possibly aiming for artificial general intelligence (AGI). This aligns with the view that achieving AGI may require robots that can
robothumanoid-robotsartificial-intelligenceAIrobotics-researchtele-operationsimulation-toolsData-driven maintenance is changing factory economics
The article highlights how data-driven predictive maintenance is revolutionizing factory economics by significantly reducing unplanned downtime, which can cost factories millions of dollars annually. Traditional reactive “break-and-fix” approaches are being replaced by smart strategies that leverage IoT sensors and AI to detect equipment faults weeks before failures occur. Studies from the US Department of Energy and industry surveys show that mature predictive maintenance programs can yield a 10× return on investment and reduce downtime by 35–45 percent. Additionally, companies adopting these technologies report substantial cost savings, fewer breakdowns, and extended equipment life, with Deloitte and IBM data supporting reductions of up to 70 percent in breakdowns and 25–30 percent in maintenance costs. The article explains the anatomy of a smart factory’s sensor system, where multiple IoT sensors continuously monitor parameters such as vibration, temperature, and fluid levels. These sensors feed data into edge computing nodes and cloud platforms, where AI algorithms analyze deviations from normal operating baselines to identify early signs of wear
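The core analytic step, detecting deviation from a normal operating baseline, can be as simple as a rolling z-score on a vibration channel. The sketch below uses a synthetic wear signal and assumed window and threshold values; it illustrates the pattern, not any vendor's product.

```python
# Baseline-deviation sketch for predictive maintenance: learn mean/std of a
# healthy vibration channel, then flag the first rolling window whose
# z-score crosses an alarm threshold. Signal, window, and threshold are
# illustrative, not vendor values.
import numpy as np

rng = np.random.default_rng(1)
healthy = rng.normal(1.0, 0.05, 5000)                                   # baseline vibration RMS
wearing = 1.0 + np.linspace(0, 0.5, 2000) + rng.normal(0, 0.05, 2000)   # slow bearing wear
signal = np.concatenate([healthy, wearing])

baseline_mean, baseline_std = healthy.mean(), healthy.std()
window, threshold = 50, 4.0                            # assumed smoothing window and alarm level

rolling = np.convolve(signal, np.ones(window) / window, mode="valid")
z = (rolling - baseline_mean) / baseline_std
alarm_idx = int(np.argmax(z > threshold))              # first window crossing the threshold
print(f"maintenance alert around sample {alarm_idx + window - 1}, z = {z[alarm_idx]:.1f}")
```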
IoTpredictive-maintenancesmart-factoryAIindustrial-sensorsedge-computingenergy-efficiencyNew AI-triggered airbag system could save lives in a plane crash
Engineers at BITS Pilani’s Dubai campus have developed Project REBIRTH, an AI-powered airplane crash survival system designed to protect passengers during unavoidable crashes. The system uses AI and sensors to detect imminent crashes below 3,000 feet, automatically deploying external airbags around the aircraft’s nose, belly, and tail within two seconds. These airbags, made from advanced materials like Kevlar and non-Newtonian fluids, absorb impact forces to reduce damage and increase passenger safety. Additionally, the system employs reverse thrust or gas thrusters to slow and stabilize the plane before impact. Post-crash, bright paint, infrared beacons, GPS, and flashing lights aid rescue teams in quickly locating the crash site. A 1:12 scale prototype combining sensors, microcontrollers, and CO2 canisters has been built, with computer simulations indicating a potential reduction in crash impact by over 60%. The team plans to collaborate with aircraft manufacturers for full-scale testing and aims to make the system compatible with both new
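A hedged sketch of the trigger logic the summary describes: only arm below the stated 3,000 ft gate and fire when projected time-to-impact falls inside the roughly two-second deployment window. The descent-rate threshold and function interface are assumptions, not the REBIRTH team's design.

```python
# Rule-of-thumb crash-trigger sketch. The 3,000 ft arming gate and ~2 s
# deployment window come from the summary above; the sink-rate threshold
# and interface are assumed for illustration.
def should_deploy(altitude_ft: float, vertical_speed_fps: float,
                  arming_ceiling_ft: float = 3000.0,
                  deploy_window_s: float = 2.0,
                  min_sink_rate_fps: float = 50.0) -> bool:
    """Return True when the external airbags should fire."""
    if altitude_ft > arming_ceiling_ft:
        return False                                  # not armed above the altitude gate
    if vertical_speed_fps >= -min_sink_rate_fps:
        return False                                  # not descending fast enough to be a crash
    time_to_impact_s = altitude_ft / -vertical_speed_fps
    return time_to_impact_s <= deploy_window_s

print(should_deploy(altitude_ft=2500, vertical_speed_fps=-80))    # armed, but impact far away
print(should_deploy(altitude_ft=140, vertical_speed_fps=-80))     # ~1.75 s to impact -> deploy
```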
robotAIsensorssafety-systemsmaterialscrash-survivalsmart-airbagsUS lab solves 100-year-old physics puzzle with new AI framework
Scientists at Los Alamos National Laboratory and the University of New Mexico have developed an AI framework called Tensors for High-dimensional Object Representation (THOR) that solves a century-old physics challenge: efficiently computing the configurational integral. This integral is fundamental for understanding particle interactions within materials, crucial for predicting properties like strength and stability under extreme conditions. THOR uses tensor-network mathematics to drastically reduce the computation time from weeks on supercomputers to mere hours or seconds, while maintaining high accuracy. This advancement enables more precise modeling of metals and crystals, particularly under high pressure and during phase transitions. THOR tackles the "curse of dimensionality" by decomposing complex, high-dimensional data into smaller, linked components, akin to reorganizing billions of Lego bricks into manageable chains. When combined with a custom interpolation algorithm, this tensor-train technique achieves speeds up to 400 times faster than traditional molecular dynamics simulations. Real-world tests on copper, argon, and tin demonstrated THOR’s ability to accurately reproduce therm
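The tensor-train idea behind THOR can be shown in a few lines: a high-dimensional array is factored into a chain of small three-way cores by sequential SVDs, so storage and downstream integrals scale with the number of dimensions rather than exponentially. The sketch below is the textbook TT-SVD pattern on a smooth synthetic tensor, not the THOR codebase.

```python
# Textbook TT-SVD sketch: factor a 5-D array into a chain of small cores and
# check the reconstruction. Tensor, sizes, and max rank are illustrative.
import numpy as np

def tt_svd(tensor: np.ndarray, max_rank: int) -> list[np.ndarray]:
    dims, cores, r_prev = tensor.shape, [], 1
    mat = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))        # k-th TT core
        mat = (np.diag(S[:r]) @ Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores: list[np.ndarray]) -> np.ndarray:
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))           # contract shared rank index
    return out.squeeze(axis=(0, -1))

grid = np.linspace(0.0, 1.0, 6)
A = np.sin(sum(np.ix_(*[grid] * 5)))          # smooth 5-D field -> low TT rank
cores = tt_svd(A, max_rank=4)
err = np.linalg.norm(A - tt_reconstruct(cores)) / np.linalg.norm(A)
print([c.shape for c in cores], f"relative reconstruction error {err:.1e}")
```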
materialsAItensor-networksmetallurgyphase-transitionshigh-pressure-physicscomputational-physicsBrightpick to share insights on the rise of mobile manipulation at RoboBusiness - The Robot Report
Brightpick CEO and co-founder Jan Zizka will present on the growing field of mobile manipulators at RoboBusiness 2025, held October 15-16 in Santa Clara. Unlike humanoid robots, mobile manipulators combine vision, mobility, dexterous arms, and AI-driven controls in a wheeled form factor, offering enhanced safety and proven reliability. These robots can perform multiple tasks simultaneously, handle heavier payloads, and operate at greater speeds, enabling superhuman performance beyond human physical limits. Zizka’s session, titled “The Rise of Mobile Manipulation,” will highlight the latest advancements, focusing on Brightpick’s Autopicker 2.0 and its AI capabilities, as well as real-world deployments that demonstrate how companies use these technologies to improve efficiency and scale operations. Jan Zizka is a recognized expert in AI, machine vision, and warehouse automation, holding over 20 patents and having previously co-founded Photoneo, a leader in 3D machine vision.
roboticsmobile-manipulatorsAIwarehouse-automationmachine-visionrobotics-conferenceindustrial-robotsNext-gen AI may end era of invisible submarines, Chinese experts claim
A recent Chinese study published in Electronics Optics & Control reveals a next-generation AI-driven anti-submarine warfare (ASW) system that could significantly undermine traditional submarine stealth tactics. Led by senior engineer Meng Hao, the system integrates data from sonar buoys, underwater sensors, radar, and oceanographic variables like temperature and salinity to create a real-time, comprehensive underwater picture. This AI acts as an intelligent commander, dynamically directing sensor configurations and responses to evasive submarine maneuvers such as zigzagging, silence, or decoy deployment. In simulations, the system achieved a 95 percent success rate in detecting and tracking submarines, potentially reducing a submarine’s chance of escape to just 5 percent. Submarines have historically been vital asymmetric naval weapons, valued for their stealth and strategic capabilities, including nuclear deterrence and intelligence gathering. The U.S. Navy, for instance, maintains about 70 nuclear-powered submarines as a counterbalance to China’s expanding naval forces. However, the
robotAIunderwater-sensorsanti-submarine-warfarenaval-technologysonar-systemsintelligent-decision-makingZoox robotaxi equipped with cameras, lidars, radar launched in Las Vegas
Zoox, a Foster City-based company, has launched its fully autonomous robotaxi service in Las Vegas, marking the first time a purpose-built, driverless ride-hailing vehicle is available to the public. The robotaxi integrates advanced perception technology, combining cameras, lidars, radar, and long-wave infrared sensors to provide a comprehensive 360-degree view of the surroundings. This system enables real-time detection, classification, and tracking of vehicles, obstacles, and pedestrians, allowing the vehicle to predict their actions and navigate urban environments safely and smoothly. The service is accessible via the Zoox app, offering rides from multiple popular destinations on and around the Las Vegas Strip, such as Resorts World Las Vegas and AREA15, with rides currently free of charge. Zoox plans to expand its robotaxi operations to other U.S. cities, including an upcoming launch in San Francisco. The company emphasizes that its vision extends beyond autonomous driving to creating a new mode of transportation focused on safety, accessibility, and an enhanced rider
robotautonomous-vehiclesAIlidarradarrobotaxitransportation-technologyAvatr Vision Xpectra debuts with AI vortex hub and glass cabin
The Avatr Vision Xpectra concept, unveiled at IAA Mobility 2025 in Munich, showcases the future design and technology direction of the Changan-owned luxury EV brand. Measuring 5.8 meters in length, it surpasses many flagship sedans, including the Rolls-Royce Phantom, emphasizing its imposing presence. The vehicle features a striking diamond-cut exterior with sharp, faceted surfaces and wide fenders, embodying Avatr’s “Energy Force” design language. A key highlight is its expansive prismatic glass cabin that blurs the line between interior and exterior, flooding the minimalist interior with natural light and enhancing the futuristic aesthetic. The concept also incorporates back-hinged rear doors without B-pillars, creating a dramatic, open entry experience, though this design is not feasible for mass production due to safety constraints. Inside, the Vision Xpectra centers on an AI-powered “Vortex” hub that integrates lighting, sound, and interactive controls to create an “emotional intelligence
AIelectric-vehiclesmart-cabinautomotive-technologyadvanced-materialsenergy-efficient-designhuman-machine-interfaceRewiring infrastructure: the automation revolution in utility design
The article discusses how Spatial Business Systems (SBS), led by President Al Eliasen, is revolutionizing utility infrastructure design through automation and AI. Eliasen, who transitioned from semiconductor equipment to utility software, emphasizes the complexity and critical importance of modern utilities, especially amid the energy transition. With utilities facing massive infrastructure expansion—such as a Texas utility planning to double its $30 billion asset base in the next five to seven years—traditional manual design methods are no longer viable. SBS’s platform automates engineering calculations, synchronizes data with enterprise asset management (EAM) and GIS systems, and eliminates redundant manual work, resulting in faster, more accurate, and scalable design processes. Eliasen addresses concerns about automation threatening jobs, clarifying that SBS’s tools instead help utilities reduce backlogs, meet regulatory deadlines, and avoid costly fines, ultimately freeing up resources rather than cutting staff. A major challenge remains overcoming industry inertia and skepticism from veteran engineers who doubt automation’s applicability; however, demonstrations of SBS
energyautomationutility-infrastructuredigital-twinsCADasset-managementAICircus SE completes first production of CA-1 robots in high-volume facility - The Robot Report
Circus SE, a Munich-based developer of AI-powered autonomous food preparation robots, has announced the start of production for its fourth-generation CA-1 robot at a newly established high-volume manufacturing facility. The factory, designed with an intelligent modular setup, enables industrial-scale production of the complex CA-1 robot, which comprises over 29,000 components—comparable in complexity to a small car. The CA-1 robot can prepare meals in three to four minutes and integrates advanced features such as smart food silos for inventory tracking, induction cooking for energy-efficient rapid heating, robotic arms for dispensing and plating, AI-driven computer vision for operational monitoring, and a self-cleaning system for low maintenance. Each unit undergoes more than 150 precision tests to ensure enterprise-grade reliability akin to automotive standards. Circus SE is expanding its global presence with support from Celestica, its production partner experienced in engineering and supply chain management, enabling the company to scale production to thousands of units annually. The firm recently
roboticsAIautonomous-systemsfood-preparation-robotsindustrial-productioncomputer-visionenergy-efficiencyZoox bets big, launches robotaxi service on Vegas Strip
Amazon subsidiary Zoox has launched its fully driverless robotaxi service on the Las Vegas Strip, marking a significant milestone after more than a decade of development. Zoox’s custom-built vehicles are unique in design, lacking traditional driver controls and featuring interior seating arranged to face each other, enhancing rider interaction. The company manufactures these robotaxis entirely in-house at its dedicated production facility, with a capacity of up to 10,000 vehicles annually. Initially, the service is free to riders to encourage adoption and gather feedback, with plans to introduce paid rides pending regulatory approval. Zoox has also established dedicated pickup zones with on-site concierges at key Las Vegas destinations and offers real-time app features such as vehicle identification, estimated pickup times, and ride summaries. Zoox is currently testing its robotaxis in San Francisco and Foster City, with future expansions planned for Austin and Miami. The company has driven over 2 million fully autonomous miles and completed more than 10 million autonomous trips, providing over 250
robotautonomous-vehiclesrobotaxiride-hailingtransportation-technologyAImobility-innovationWhere top VCs are betting next: Index, Greylock, and Felicis share 2026 priorities at TechCrunch Disrupt 2025
At TechCrunch Disrupt 2025, a prominent panel of venture capitalists from Index Ventures, Greylock, and Felicis will share their investment priorities for 2026 and beyond. Nina Achadjian of Index Ventures is focusing on automating overlooked industries with investments in AI, robotics, and vertical SaaS. Jerry Chen from Greylock is backing product-driven founders working in AI, data, cloud infrastructure, and open source technologies. Viviana Faga of Felicis brings extensive experience in scaling go-to-market SaaS, category creation, and brand strategy, highlighting sectors that are gaining traction. The panel offers early-stage founders valuable insights into the emerging sectors and innovations attracting “smart money,” including AI, data, cloud, and robotics. This session provides a rare opportunity for entrepreneurs to understand how top VCs are shaping the next wave of investments. TechCrunch Disrupt 2025 will take place from October 27–29 at Moscone West in San Francisco, with early pricing available until
robotAIautomationventure-capitalstartupstechnology-investmentscloud-infrastructureIntuition Robotics partners with Kanematsu to bring ElliQ to Japan - The Robot Report
Intuition Robotics, the developer of ElliQ—an AI-powered social robot designed to support older adults—has announced its expansion into the Japanese market through a partnership with Kanematsu Corp., a major trading company. This marks Intuition Robotics’ first international expansion outside the U.S. Kanematsu has also invested in the Israeli company, increasing Intuition Robotics’ total equity funding to $85 million. The collaboration aims to co-develop, localize, and distribute ElliQ in Japan by 2026, addressing Japan’s rapidly aging population and the resulting shortage of caregiving personnel. Kanematsu plans to leverage its extensive network of over 20,000 business partners to build ElliQ into a comprehensive platform supporting older adults’ lives. ElliQ is designed as a proactive companion that goes beyond simple conversational AI by engaging users based on their goals, remembering past interactions, and encouraging participation in daily activities to stimulate both mind and body. It offers features such as medication reminders, health management support
robotAIelderly-caresocial-robotJapan-markethealthcare-technologyrobotics-innovationHow does NVIDIA's Jetson Thor compare with other robot brains on the market? - The Robot Report
NVIDIA recently introduced the Jetson AGX Thor, a powerful AI and robotics developer kit designed to deliver supercomputer-level artificial intelligence performance within a compact, energy-efficient module consuming up to 130 watts. The Jetson Thor provides up to 2,070 FP4 teraflops of AI compute, enabling robots and machines to perform advanced “physical AI” tasks such as perception, decision-making, and control in real time directly on the device, without dependence on cloud computing. This capability addresses a major challenge in robotics by supporting multi-AI workflows that facilitate intelligent, real-time interactions between robots, humans, and the physical environment. The Jetson Thor is powered by the comprehensive NVIDIA Jetson software platform, which supports popular AI frameworks and generative AI models, ensuring compatibility across NVIDIA’s broader software ecosystem—from cloud to edge. This includes tools like NVIDIA Isaac for robotics simulation and development, NVIDIA Metropolis for vision AI, and Holoscan for real-time processing. The module’s high-performance
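A quick back-of-envelope figure from the two numbers quoted above (2,070 FP4 TFLOPS at up to 130 W); this is pure arithmetic on the stated specs, and real sustained throughput will depend on workload and power mode.

```python
# Efficiency estimate from the quoted Jetson AGX Thor specs.
fp4_tflops = 2070.0
max_power_w = 130.0
print(f"{fp4_tflops / max_power_w:.1f} FP4 TFLOPS per watt")   # ~15.9 TFLOPS/W
```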
robotAINVIDIA-Jetsonrobotics-hardwareedge-computingphysical-AIAI-inferenceTesla shareholders to vote on investing in Musk’s AI startup xAI
Tesla shareholders are set to vote on a proposal to allow the company to invest in Elon Musk’s AI startup, xAI, which is positioned as a strategic move to bolster Tesla’s AI, robotics, and energy initiatives. The proposal, initiated by a shareholder with a modest stake, highlights Tesla’s recent integration of xAI’s Grok AI into its vehicles and argues that investing in xAI would secure advanced AI capabilities, drive innovation, and enhance shareholder value. Notably, Tesla’s board has taken a neutral stance on the proposal, which follows SpaceX’s commitment to invest in xAI amid speculation that the AI startup is struggling to secure outside funding. Some shareholders have expressed concerns that xAI could compete with Tesla, given Musk’s framing of Tesla as an AI company, though a related lawsuit was dismissed last year. This vote coincides with Tesla’s broader efforts to shift investor focus from challenges such as declining EV sales and a slow robotaxi rollout toward its AI ambitions, including autonomous vehicles and the
robotAIautonomous-vehicleshumanoid-robotsenergyTeslainvestmentRoboBallet makes robotic arms dance in sync on factory floors
RoboBallet is a new AI system developed by a team from UCL, Google DeepMind, and Intrinsic that choreographs the movements of multiple robotic arms on factory floors, significantly improving efficiency and scalability in manufacturing. Traditional robotic coordination requires extensive manual programming to avoid collisions and complete tasks, a process that is time-consuming and prone to errors. RoboBallet overcomes these challenges by using reinforcement learning combined with graph neural networks, enabling it to plan coordinated movements for up to eight robotic arms performing 40 tasks in seconds, even in previously unseen layouts. This approach treats obstacles and tasks as points in a network, allowing rapid and adaptable planning that outperforms existing methods by generating plans hundreds of times faster than real-time. The system’s scalability is a major breakthrough, as it learns general coordination rules rather than memorizing specific scenarios, making it capable of handling complex, dynamic environments where factory layouts or robot configurations change frequently. RoboBallet’s ability to instantly generate high-quality plans could prevent costly
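To picture the "obstacles and tasks as points in a network" framing, the sketch below encodes a small workcell as node features plus a fully connected edge list, the kind of graph a GNN policy would consume. The features, edge rule, and layout are illustrative assumptions, not the published RoboBallet model.

```python
# Illustrative workcell-as-graph encoding: robots, tasks, and obstacles become
# nodes with positional features; every pair of nodes is a candidate
# interaction for the policy to deconflict. Not the RoboBallet implementation.
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Node:
    kind: str            # "robot" | "task" | "obstacle"
    position: tuple[float, float, float]

KIND_ID = {"robot": 0, "task": 1, "obstacle": 2}

def build_graph(nodes: list[Node]):
    """Return (node feature rows, undirected edge list) for a GNN-style policy."""
    features = [[KIND_ID[n.kind], *n.position] for n in nodes]
    edges = list(combinations(range(len(nodes)), 2))   # fully connected workcell graph
    return features, edges

workcell = [
    Node("robot", (0.0, 0.0, 0.0)),
    Node("robot", (1.2, 0.0, 0.0)),
    Node("task", (0.6, 0.4, 0.3)),
    Node("obstacle", (0.6, -0.2, 0.2)),
]
features, edges = build_graph(workcell)
print(features)
print(edges)
```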
roboticsindustrial-automationAIrobotic-armsmanufacturing-technologyreinforcement-learningfactory-efficiencyEntering an AI-powered Vineyard
The article "Entering an AI-powered Vineyard" highlights the transformative impact of artificial intelligence on modern farming, particularly in vineyard management. Traditionally, farmers relied heavily on intuition and limited data to assess crop health and farmland conditions. However, the integration of AI technologies in this experimental vineyard enables precise data collection and analysis, allowing for more informed decision-making and optimized crop management. By leveraging AI, farmers can monitor various factors such as soil quality, weather patterns, and plant health in real-time, reducing guesswork and improving yield quality. This approach represents a significant shift towards data-driven agriculture, promising increased efficiency, sustainability, and potentially revolutionizing how farms are managed in the future. The article underscores the potential of AI to enhance agricultural productivity and resource management.
IoTagriculture-technologysmart-farmingAIdata-analyticsprecision-agriculturevineyard-managementTechCrunch Disrupt 2025 finalizes the Builders Stage agenda with top scaling voices
TechCrunch Disrupt 2025, scheduled for October 27–29 at San Francisco’s Moscone West, has finalized the agenda for its Builders Stage, focusing on the practical aspects of building and scaling startups. This stage features founders, operators, and investors sharing tactical insights on topics ranging from securing initial funding to scaling go-to-market strategies and integrating AI effectively into businesses. Notable speakers include Elad Gil, known for early investments in major startups like Airbnb and Coinbase, as well as Discord’s Jason Citron, and investors from 01 Advisors, Mayfield, Precursor Ventures, Harlem Capital, MaC Venture Capital, Freestyle Capital, Insight Partners, Moxxie Ventures, and GV. The sessions promise candid conversations and live Q&A, covering critical startup phases such as pitching at the inception stage, closing seed rounds, and raising Series A funding. For example, early-stage investors Navin Chaddha and Charles Hudson will discuss how to pitch without a product or traction, while
robotAIstartupstech-innovationscalingfundraisinglive-demosSupercomputer drives 500x brighter X-rays to boost battery research
Researchers at Argonne National Laboratory have combined the upgraded Advanced Photon Source (APS) with the Aurora exascale supercomputer to significantly accelerate battery research. The APS upgrade boosts X-ray beam brightness by up to 500 times, enabling unprecedented real-time, high-resolution imaging of battery materials during charge and discharge cycles. This allows scientists to observe atomic-level changes, structural defects, and electronic states of key cathode elements such as nickel, cobalt, and manganese, providing deeper insights into battery performance and degradation. Aurora complements APS by handling massive data processing and AI-driven analysis, with over 60,000 GPUs capable of performing more than one quintillion calculations per second. A high-speed terabit-per-second connection between APS and Aurora facilitates real-time data transfer and experiment feedback, enabling rapid adjustments and optimization. Argonne envisions an autonomous research loop where AI models like AuroraGPT analyze data instantly, predict outcomes, and recommend new materials to test, potentially reducing battery development timelines from years to weeks or days.
energybattery-researchsupercomputerAImaterials-scienceenergy-storageAdvanced-Photon-SourceRobotican unveils drone with cage-like body that rolls and flies
Israeli defense firm Robotican has introduced an armed version of its ROOSTER hybrid drone, a cage-like unmanned aerial system capable of both rolling on the ground and flying. Previously used primarily for intelligence, surveillance, and reconnaissance, the upgraded ROOSTER now carries a 300-gram precision-guided warhead, enabling it to perform surgical strikes in confined urban and subterranean environments such as buildings and tunnels. The drone’s protective cage design allows it to navigate stairs and narrow corridors, while rotors provide flight capability to overcome obstacles, giving operators enhanced access and strike readiness in complex terrain. Equipped with artificial intelligence for object detection, autonomous target tracking, and locking, the ROOSTER allows operators to designate targets for automatic engagement. Safety mechanisms prevent unintended activation, minimizing risks to friendly forces and civilians. Weighing 1.6 kilograms and capable of carrying various modular payloads—including spectral and thermal cameras, oxygen and radiation sensors—the drone offers up to 30 minutes of rolling endurance
robotdroneAIautonomous-systemsmilitary-technologyunmanned-aerial-vehiclesurveillanceGermany bets on AI-powered plant to give used EV batteries a new life
Germany is developing an AI-powered pilot plant in Chemnitz to repurpose used electric vehicle (EV) batteries, extending their lifespans and recovering valuable raw materials such as lithium and cobalt. Led by Dr. Rico Schmerler and his team at Fraunhofer IWU in partnership with EDAG Production Solutions, the initiative focuses on carefully dismantling and remanufacturing traction batteries that retain 70-80% of their capacity but are no longer suitable for vehicles. Instead of shredding, which wastes usable cells and materials, the plant uses automated, AI-supported processes to assess the state of health (SoH) of each battery module and cell, enabling the reuse of healthy components in grid storage systems for homes, businesses, or utilities. The Chemnitz facility aims to address the growing volume of used batteries expected in the EU by 2030, offering a scalable, safe, and efficient solution that preserves raw materials and reduces reliance on energy-intensive new production. Beyond hardware, the plant will
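A minimal sketch of the state-of-health triage the Chemnitz pilot is described as automating: compute SoH from measured versus nameplate capacity and route modules to second-life storage rather than shredding. The 70% floor follows the 70-80% band cited in the summary; the other cut-offs are assumptions.

```python
# SoH triage sketch: measure remaining capacity, compare to nameplate, and
# route the module. Only the ~70% second-life floor comes from the article;
# the remaining thresholds are assumed for illustration.
def soh(measured_capacity_ah: float, nominal_capacity_ah: float) -> float:
    return measured_capacity_ah / nominal_capacity_ah

def route_module(measured_ah: float, nominal_ah: float) -> str:
    s = soh(measured_ah, nominal_ah)
    if s >= 0.70:
        return f"second-life grid storage (SoH {s:.0%})"
    if s >= 0.50:
        return f"cell-level remanufacturing (SoH {s:.0%})"   # assumed intermediate band
    return f"material recovery (SoH {s:.0%})"

for measured in (82.0, 61.0, 40.0):                          # Ah readings for a 100 Ah module
    print(route_module(measured, nominal_ah=100.0))
```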
energyAIbattery-recyclingEV-batteriesautomationraw-materialssustainabilityTesla’s Dojo, a timeline
The article chronicles the development and evolution of Tesla’s Dojo supercomputer, a critical component in Elon Musk’s vision to transform Tesla from just an automaker into a leading AI company focused on full self-driving technology. First mentioned in 2019, Dojo was introduced as a custom-built supercomputer designed to train neural networks using vast amounts of video data from Tesla’s fleet. Over the years, Musk and Tesla have highlighted Dojo’s potential to significantly improve the speed and efficiency of AI training, with ambitions for it to surpass traditional GPU-based systems. Tesla officially announced Dojo in 2021, unveiling its D1 chip and plans for an AI cluster comprising thousands of these chips. By 2022, Tesla demonstrated tangible progress with Dojo, including load testing of its hardware and showcasing AI-generated imagery powered by the system. The company aimed to complete a full Exapod cluster by early 2023 and planned multiple such clusters to scale its AI capabilities. In 2023, Musk
robotAIsupercomputerTesla-Dojoself-driving-carsneural-networksD1-chipTesla Dojo: the rise and fall of Elon Musk’s AI supercomputer
Tesla’s Dojo supercomputer, once heralded by Elon Musk as a cornerstone of the company’s AI ambitions, has been officially shut down as of August 2025. Originally designed to train Tesla’s Full Self-Driving (FSD) neural networks and support autonomous vehicle and humanoid robot development, Dojo was central to Musk’s vision of Tesla as more than just an automaker. Despite years of hype and investment, the project was abruptly ended after Tesla decided that its second-generation Dojo 2 supercluster, based on in-house D2 chips, was “an evolutionary dead end.” This decision came shortly after Tesla signed a deal to source next-generation AI6 chips from Samsung, signaling a strategic pivot away from self-reliant hardware development toward leveraging external partners for chip design. The shutdown also involved disbanding the Dojo team and the departure of key personnel, including project lead Peter Bannon and about 20 employees who left to start their own AI chip company, DensityAI
robotAIautonomous-vehiclesTeslasupercomputerself-driving-technologysemiconductorAstro Teller, “Captain of Moonshots,” joins TechCrunch Disrupt 2025 this October
Astro Teller, known as the "Captain of Moonshots," will be a featured speaker at TechCrunch Disrupt 2025, taking place from October 27 to 29 at Moscone West in San Francisco. As the head of Alphabet’s X (the Moonshot Factory), Teller has led the organization for over a decade, overseeing groundbreaking projects such as Waymo (self-driving cars) and Wing (delivery drones). His leadership focuses on tackling ambitious, high-impact technological challenges, making him a prominent figure in innovation. Beyond his role at X, Teller is also a novelist, entrepreneur, investor, and academic, holding degrees from Stanford and Carnegie Mellon. His diverse background provides a unique perspective on the future of technology and innovation. With AI rapidly advancing and reshaping possibilities, his insights on thinking bigger, embracing failure, and building impactful solutions are particularly relevant. Attendees of Disrupt 2025 will have the opportunity to learn directly from Teller’s experience and vision, making the event a must
robotautonomous-vehiclesdronesinnovationAItechnologymoonshot-projectsChina could turn treacherous stretch into minefield to trap submarines
A recent study by Chinese military scientists proposes transforming the underwater terrain around the Paracel Islands into a strategic submarine kill zone by deploying AI-powered, long-endurance mines in sonar "acoustic shadow zones." These zones, characterized by complex underwater topography where sound waves scatter or disappear, provide natural concealment for mines, enabling them to evade detection and selectively target enemy submarines. The research, published in the journal Technical Acoustics by experts from the People’s Liberation Army Dalian Naval Academy and Harbin Engineering University, highlights how leveraging these underwater features could create a covert defensive network to trap adversary submarines, particularly those of the US, which regularly operates in the region to challenge Chinese dominance. The Paracel Islands, controlled by China since 1974 but claimed also by Taiwan and Vietnam, have become a heavily militarized hub with airfields, ports, radar, and garrisons supporting China’s strategic operations in the South China Sea. The proposed minefield system would rely on
robotAIunderwater-minessubmarine-warfareacoustic-technologydefense-technologymilitary-roboticsThe Reservoir launches AgTech innovation hub in Salinas, CA - The Robot Report
The Reservoir has officially launched its first agtech innovation hub, Reservoir Farms, in Salinas, California, marking a significant step in advancing agricultural technology through collaboration. Positioned as California’s first on-farm startup incubator, Reservoir Farms offers early-stage agtech companies access to a cutting-edge R&D facility, prototyping resources, secure build and storage spaces, and, critically, an on-site working farm for real-world product testing and data collection. The inaugural cohort includes startups specializing in AI-powered farm equipment, agrobotics, data management, and soil treatment, all united by a vision to accelerate technology from concept to commercialization through close partnerships with growers and engineers. The initiative, led by CEO Danny Bernstein and supported by industry and educational partners such as Western Growers Association, UC ANR, Hartnell College, and Merced College, aims to bridge the gap between research labs and practical agricultural application. A recent strategic partnership with John Deere enhances this ecosystem by providing startups access to Deere’s technology, expertise
robotagriculture-technologyagtechAIroboticsIoTinnovation-hubTesla Full Self Driving (Supervised) Launches in Australia to Overwhelmingly Positive Response - CleanTechnica
The article discusses the recent launch of Tesla's Full Self Driving (Supervised) (FSDS) feature in Australia and New Zealand, highlighting the overwhelmingly positive media and public response in these right-hand drive markets. Contrary to expectations of skepticism from Australian media—often critical of electric vehicles and Tesla—the coverage, particularly by Channel 7’s “Sunrise” program, has been notably favorable. The article notes that Australia lacks a domestic auto industry and competing self-driving technologies like Waymo, making Tesla’s FSDS currently the primary autonomous driving system available in the region. While BYD’s “God’s Eye” semi-autonomous system is expected to debut in Australia later in 2025, its advanced features are not yet approved or operational there. The author reflects on the long wait and high anticipation for Tesla vehicles and autonomous capabilities in Australia, dating back to 2016 when the Model 3 was first ordered and the 2018 launch event where customers queued to briefly experience the car. Despite early
robotautonomous-vehiclesTeslaself-driving-technologyAIautomotive-technologyelectric-vehiclesChinese Automakers "Reinvent" PHEV, AI-Driven Power to Lead the Global Market - CleanTechnica
Chinese automaker Geely has introduced a groundbreaking AI-powered plug-in hybrid electric vehicle (PHEV) system, showcased in its new flagship model, the Galaxy M9. This “AI Cloud Powertrain” leverages cloud-based intelligent agents that learn drivers’ habits and use multidimensional data—including navigation, traffic conditions, and speed limits—to optimize power distribution between the internal combustion engine and electric motor in real time. This adaptive system improves energy efficiency by about 5% and can boost fuel savings by 15% to 20% by proactively managing battery use on frequently traveled routes. Additionally, Geely applied AI to refine the combustion chamber design of its dedicated hybrid engine, achieving a record thermal efficiency of 47.26%, the highest for mass-produced plug-in hybrids, and significantly reducing carbon emissions compared to traditional hybrids. Further innovations include AI-driven optimization of the electric drive system via deep reinforcement learning, which enhanced power output and shortened development time without increasing component size or weight. The Galaxy M9
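As a rough illustration of the route-aware idea behind such a system, the sketch below greedily spends a limited battery budget on the route segments where electric drive is predicted to save the most fuel per kilowatt-hour. The segment data and savings model are made up for the example; Geely's cloud agent is far more sophisticated.

```python
# Sketch: allocate a limited battery budget across route segments where
# electric drive is predicted to save the most fuel. Illustrative only.
from typing import List, Tuple

def plan_power_split(
    segments: List[dict],   # each: {"km", "kwh_per_km", "fuel_saved_l_per_km"}
    battery_kwh: float,
) -> List[Tuple[int, str]]:
    """Greedy plan: spend battery on segments with the best fuel saved per kWh."""
    ranked = sorted(
        range(len(segments)),
        key=lambda i: segments[i]["fuel_saved_l_per_km"] / segments[i]["kwh_per_km"],
        reverse=True,
    )
    plan = [(i, "engine") for i in range(len(segments))]
    remaining = battery_kwh
    for i in ranked:
        need = segments[i]["km"] * segments[i]["kwh_per_km"]
        if need <= remaining:
            plan[i] = (i, "electric")
            remaining -= need
    return sorted(plan)

if __name__ == "__main__":
    route = [
        {"km": 5,  "kwh_per_km": 0.18, "fuel_saved_l_per_km": 0.09},  # stop-and-go city
        {"km": 30, "kwh_per_km": 0.20, "fuel_saved_l_per_km": 0.05},  # highway
        {"km": 4,  "kwh_per_km": 0.17, "fuel_saved_l_per_km": 0.08},  # school zone
    ]
    print(plan_power_split(route, battery_kwh=2.0))
    # -> city and school-zone segments go electric, highway stays on the engine
```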
energyAIhybrid-vehiclespowertrainplug-in-hybrid-electric-vehicleautomotive-technologyenergy-efficiencyMIT Students Invent AI Kitchen Robot
MIT students have developed a retro-futuristic kitchen robot named Kitchen Cosmos, designed to help reduce food waste by scanning leftover ingredients and generating recipes using ChatGPT. The robot integrates AI technology to analyze available food items and suggest creative meal ideas, making cooking more efficient and sustainable. This innovation highlights the practical application of artificial intelligence in everyday household tasks, particularly in the kitchen. By leveraging ChatGPT's language processing capabilities, Kitchen Cosmos offers personalized recipe recommendations based on the user's existing ingredients, potentially transforming how people approach meal preparation and leftover management.
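The "leftovers in, recipe out" step can be sketched with a single call to an OpenAI-compatible chat API, as below. The model name, prompt wording, and ingredient list are placeholders, not the actual Kitchen Cosmos implementation.

```python
# Sketch of the "leftovers -> recipe" step, assuming an OpenAI-compatible
# chat API. Model name and prompt are placeholders.
from openai import OpenAI

def suggest_recipe(ingredients: list[str]) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        "Suggest one simple recipe that uses only these leftovers plus "
        "pantry staples, and minimizes food waste: " + ", ".join(ingredients)
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(suggest_recipe(["half an onion", "cooked rice", "two eggs"]))
```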
robotAIkitchen-robotroboticsartificial-intelligenceautomationMIT911 centers are so understaffed, they’re turning to AI to answer calls
The article discusses how 911 call centers, which are severely understaffed due to the high-pressure nature of emergency dispatch work and significant turnover rates, are increasingly turning to AI solutions to manage non-emergency call volumes. Max Keenan’s company, Aurelian, pivoted from automating salon appointment bookings to developing an AI voice assistant that triages non-urgent calls such as noise complaints, parking violations, and stolen wallet reports. The AI system is designed to recognize genuine emergencies and immediately transfer those calls to human dispatchers, while handling less urgent issues by collecting information and generating reports for police follow-up. Since its launch in May 2024, Aurelian’s AI has been deployed in over a dozen 911 dispatch centers across the U.S. Aurelian recently raised $14 million in a Series A funding round led by NEA, with investors highlighting that the AI is not replacing existing employees but filling gaps caused by staffing shortages. The company claims to be ahead of competitors like
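The core decision such an assistant has to make, escalate anything that sounds like an emergency and otherwise collect details for a report, can be sketched as below. The keywords and categories are placeholders; Aurelian's production system is a voice agent, not a keyword filter.

```python
# Toy sketch of the triage decision: escalate possible emergencies,
# otherwise collect details for a non-emergency report. Placeholder rules.
EMERGENCY_TERMS = {"gun", "fire", "bleeding", "not breathing", "break-in", "chest pain"}
NON_EMERGENCY = {
    "noise": "noise complaint",
    "parking": "parking violation",
    "wallet": "stolen property report",
}

def triage(transcript: str) -> dict:
    text = transcript.lower()
    if any(term in text for term in EMERGENCY_TERMS):
        return {"action": "transfer_to_dispatcher", "reason": "possible emergency"}
    for keyword, category in NON_EMERGENCY.items():
        if keyword in text:
            return {"action": "collect_report", "category": category}
    return {"action": "collect_report", "category": "general non-emergency"}

if __name__ == "__main__":
    print(triage("my neighbor's party noise is out of control"))
    print(triage("someone is not breathing"))
```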
AIemergency-responsevoice-assistantautomationcall-centersartificial-intelligencepublic-safetye-con Systems adds camera, compute solutions for NVIDIA Jetson Thor
e-con Systems has announced comprehensive support for NVIDIA’s newly launched Jetson Thor modules, which deliver up to 2070 FP4 TFLOPS of AI compute power aimed at next-generation robotics and AI-enabled machines. Their support spans a broad portfolio of vision products, including USB Series cameras, RouteCAM GigE Ethernet cameras with ONVIF compliance, 10G Holoscan Camera solutions, and a compact ECU platform designed for real-time edge AI applications. These solutions leverage multi-sensor fusion, ultra-low latency, and resolutions up to 20 MP, enabling accelerated development of advanced AI vision applications. A key highlight is e-con’s 10G e-con HSB solution that uses Camera Over Ethernet (CoE) protocol with a custom FPGA-based TintE ISP board, allowing direct data transfer to GPU memory with minimal CPU usage. This setup supports various high-quality sensors such as Sony IMX715 and onsemi AR0234, facilitating real-time operation and quicker response times. Additionally, e
robotAIembedded-visionNVIDIA-Jetson-Thorcamera-solutionsedge-AIsensor-fusionHow one AI startup is helping rice farmers battle climate change
Mitti, a New York-based AI startup, is addressing climate change by helping rice farmers reduce methane emissions—a potent greenhouse gas generated in flooded rice paddies. The company uses AI-powered models that analyze satellite imagery and radar data to measure methane release from rice fields, enabling scalable monitoring without costly physical equipment. Mitti partners with nonprofits like the Nature Conservancy to train hundreds of thousands of smallholder farmers in India on regenerative, no-burn agricultural practices that lower methane emissions. These partnerships extend Mitti’s reach and allow it to verify and report on climate-friendly farming efforts on the ground. Mitti’s technology also supports a software-as-a-service (SaaS) model, offering measurement, reporting, and verification tools to third parties working with rice farmers to reduce emissions. The methane reduction projects generate carbon credits, which Mitti helps track and sell, sharing most of the revenue with farmers and their communities. This additional income can improve farmers’ profitability by about 15%, a significant boost for small
AIagriculture-technologymethane-reductionclimate-changecarbon-creditssoftware-as-a-serviceenvironmental-sustainabilityNVIDIA Jetson Thor bring 2K teraflops of AI compute to robots
NVIDIA has announced the general availability of its Jetson AGX Thor developer kit and production modules, designed to deliver 2K teraflops of AI compute power for robotics and physical AI applications. Targeting robots that operate in unstructured environments and interact safely with humans, Jetson Thor offers high performance and energy efficiency, enabling the running of multiple generative AI models at the edge. NVIDIA positions Thor as a supercomputer for advancing physical AI and general robotics, with several leading robotics and industrial technology companies already integrating the platform. Notably, Agility Robotics plans to use Jetson Thor as the compute backbone for its Digit humanoid robot, enabling more advanced manipulation, scene understanding, and faster reaction times in complex tasks like logistics and shelf stocking. Boston Dynamics is also incorporating Thor into its Atlas humanoid robot for onboard AI acceleration and data handling. Other major users include Amazon Robotics, Caterpillar, Figure, Medtronic, and Meta, with John Deere and OpenAI currently evaluating the platform
robotAINVIDIA-Jetson-Thorrobotics-computingedge-AIhumanoid-robotsautonomous-systemsJapan trials giant robot hand to scoop buried items at quake sites
Researchers from Japan and Switzerland have developed a giant robotic hand integrated with AI-driven excavation technology to enhance disaster recovery efforts, particularly in earthquake-affected areas. The project, named CAFE (Collaborative AI Field Robot Everywhere), is a five-year collaboration involving Kumagai Gumi, Tsukuba University, Nara Institute of Science and Technology, and ETH Zurich, funded by Japan’s Cabinet Office and the Japan Science and Technology Agency. The robotic hand uses pneumatic actuators and fingertip sensors to adapt its grip dynamically, handling both fragile and heavy objects up to 3 tons. Demonstrated in Tsukuba City, it successfully manipulated diverse debris types, showcasing its potential to operate in hazardous, unstable environments inaccessible to traditional heavy machinery. A key challenge addressed by the project is the formation of natural dams caused by landslides, which pose flooding risks to communities. The CAFE system combines the robotic hand with AI excavation software developed through Sim-to-Real reinforcement learning, enabling the machine to learn and adapt
roboticsdisaster-recoveryAIsoft-roboticspneumatic-actuatorsrobotic-handexcavation-technologyHow automation and farm robots are transforming agriculture - The Robot Report
The article from The Robot Report highlights how automation and farm robots are revolutionizing agriculture by addressing the rising global food demand and labor shortages. With the world projected to need 70% more food by 2050 to feed nearly 9.7 billion people, farm automation has shifted from a luxury to a necessity. Modern agricultural robots automate critical tasks such as planting, harvesting, and weeding, while supporting sustainable farming practices. These robots, combined with satellite imagery and AI-driven analytics, enable precision agriculture that optimizes resource use, reduces waste, and minimizes environmental impact. Farm robots have evolved significantly since the late 1990s, when GPS-guided tractors first appeared. Today’s smart machinery operates with centimeter-level accuracy and real-time monitoring of soil and crop conditions. These technologies allow for precise seed placement, targeted weed removal without chemicals, and nutrient analysis to ensure optimal growth. Robotics plays a key role in precision agriculture by enabling targeted watering, fertilizing, and pest control, which lowers costs and
robotagriculturefarm-robotsautomationprecision-agricultureAIsmart-farmingAI system slashes GPS errors almost 40 times in urban settings
Researchers at the University of Surrey have developed an AI system called Pose-Enhanced Geo-Localisation (PEnG) that dramatically improves location accuracy in urban environments where GPS signals are often unreliable. By combining satellite imagery with street-level images and using relative pose estimation to determine camera orientation, PEnG reduces localization errors from 734 meters to just 22 meters. The system operates using a simple monocular camera, common in vehicles, making it practical and accessible for real-world applications, especially in areas like tunnels or dense cities where GPS coverage is weak or unavailable. PEnG offers a GPS-independent navigation solution that could significantly enhance the reliability and resilience of autonomous vehicles, robotics, and other navigation-dependent industries such as logistics and aviation. The researchers emphasize that this approach not only improves everyday convenience but also addresses safety concerns linked to GPS outages or interference. Supported by the University of Surrey’s PhD Foundership Award, the team is working on a prototype for real-world testing and has made their research open
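The coarse-to-fine idea, match the street-level view against geo-tagged satellite tiles, then refine the fix using relative pose, can be sketched as follows. The embeddings and the refinement step here are stand-ins; the published system uses learned cross-view models rather than random vectors.

```python
# Sketch of a coarse-to-fine pipeline in the spirit of PEnG: retrieve the best
# satellite tile for a street-level image, then refine position along the
# estimated heading. Embeddings are stand-ins for learned cross-view features.
import numpy as np

def coarse_retrieve(query_vec: np.ndarray, tile_vecs: np.ndarray) -> int:
    """Return the index of the satellite tile with the most similar embedding."""
    sims = tile_vecs @ query_vec / (
        np.linalg.norm(tile_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
    )
    return int(np.argmax(sims))

def refine_along_road(tile_center_m: np.ndarray, heading_rad: float, offset_m: float) -> np.ndarray:
    """Shift the coarse fix along the estimated heading by a pose-derived offset."""
    direction = np.array([np.cos(heading_rad), np.sin(heading_rad)])
    return tile_center_m + offset_m * direction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tiles = rng.normal(size=(100, 128))              # 100 candidate tiles
    query = tiles[42] + 0.05 * rng.normal(size=128)  # noisy view of tile 42
    best = coarse_retrieve(query, tiles)
    fix = refine_along_road(np.array([500.0, 250.0]), np.pi / 4, 12.0)
    print(best, np.round(fix, 1))
```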
robotAIautonomous-vehiclesnavigationGPS-alternativescomputer-visionroboticsWorld’s Smallest Cat 🐱✨
The article highlights a groundbreaking scientific achievement where researchers have created the world’s smallest "cat," not a living feline but a single rubidium atom precisely arranged using lasers and artificial intelligence. This atomic-scale creation symbolizes the cutting-edge advancements in quantum technology, showcasing the ability to manipulate individual atoms with extraordinary accuracy. This feat is more than a novelty; it represents a significant step toward the future of quantum computing. By controlling atoms at such a fine level, scientists aim to develop quantum machines capable of processing information far beyond the capabilities of current computers. The work underscores the potential of combining laser technology and AI to push the boundaries of quantum mechanics and computing innovation.
materialsquantum-computingAIlasersatomic-manipulationquantum-technologyprecision-engineeringNuro closes $203M to propel AI-first self-driving tech, commercial partnerships - The Robot Report
Nuro Inc., a Mountain View-based autonomous vehicle company, has closed a $203 million Series E funding round at a $6 billion valuation. The capital will be used to scale its AI-first autonomous driving technology and expand commercial partnerships. Founded in 2016, Nuro combines advanced artificial intelligence with automotive-grade hardware to offer its Nuro Driver system, which supports applications including robotaxis, commercial fleets, and personally owned vehicles. The company has deployed its autonomous vehicles at city scale without safety drivers across multiple U.S. states and internationally, including a recent test fleet in Japan. Key commercial partnerships highlighted include a collaboration with Lucid and Uber to launch a next-generation ride-hailing service, aiming to deploy over 20,000 Lucid vehicles integrated with Nuro Driver starting in 2026. Uber also invested in Nuro as part of this funding round, contingent on meeting development milestones. Investors in the round include returning backers Baillie Gifford and NVIDIA—whose DRIVE AGX
robotautonomous-vehiclesAIself-driving-technologyNuroNVIDIAcommercial-partnershipsNvidia is latest investor to back AV startup Nuro in $203M funding round
Nvidia has joined a group of new investors backing autonomous vehicle startup Nuro in a $203 million Series E funding round. The round includes $97 million from new investors such as Icehouse Ventures, Kindred Ventures, Nvidia, and Pledge Ventures, alongside existing backer Baillie Gifford. Uber also participated, contributing a “multi-hundred-million dollar” investment as part of a broader partnership involving electric car maker Lucid. Nvidia’s involvement follows years of technical collaboration, with Nuro utilizing Nvidia GPUs and the Drive AGX Thor platform for its self-driving software development. The total Series E funding includes an earlier $106 million tranche announced in April, bringing Nuro’s total raised capital to $2.3 billion with a post-money valuation of $6 billion—a 30% decrease from its $8.6 billion valuation in 2021. Nuro has undergone significant strategic shifts amid challenging economic conditions and industry consolidation. After layoffs in 2022 and 2023,
robotautonomous-vehiclesself-driving-technologyNvidiaelectric-vehiclesAImobilityMeet Wukong, the AI Chatbot China Has Installed on Its Space Station
China has introduced Wukong, an AI chatbot named after the legendary Monkey King from Chinese mythology, aboard its Tiangong space station. Developed from a domestic open-source AI model, Wukong is tailored specifically for manned space missions, with a knowledge base focused on aerospace flight data. Connected to Tiangong on July 15 and operational a month later, Wukong supports astronauts by providing rapid information for complex operations, fault handling, psychological support, and coordination between space and ground teams. It played a key role during a six-and-a-half-hour spacewalk mission involving debris protection installation and routine station inspection. Wukong AI operates through two interconnected modules: one onboard the station handling immediate challenges, and another on Earth performing in-depth analysis. This dual-module setup allows it to adapt dynamically to mission needs, making it a sophisticated assistant focused on space navigation. While not the first AI in space—preceded by systems like NASA’s Astrobee and CIMON—Wukong
robotAIspace-stationaerospacelarge-language-modelspace-explorationintelligent-assistantOracle to back massive 1.4-gigawatt gas-powered data center in US
Oracle is investing heavily in AI-focused cloud computing with the development of a massive 1.4-gigawatt data center campus in Shackelford County, Texas. The site, called Frontier and developed by Vantage Data Centers, will span 1,200 acres and include 10 data centers totaling 3.7 million square feet. Designed to support ultra-high-density racks and liquid cooling for next-generation GPU workloads, the campus aims to meet the growing demand for AI computing power. Construction is underway, with the first building expected to be operational in the second half of 2026. Oracle plans to operate the facility primarily using gas-powered generators rather than waiting for utility grid connections, reflecting the urgency to bring these data centers online despite the environmental concerns associated with gas turbine emissions. Oracle has transformed from a traditional database software company into a major cloud services provider focused on AI computing, securing significant deals such as hosting TikTok’s U.S. traffic and powering Elon Musk’s xAI. The company
energydata-centercloud-computingAIgas-powerliquid-coolinghigh-density-racksHarvard dropouts to launch ‘always on’ AI smart glasses that listen and record every conversation
Two Harvard dropouts, AnhPhu Nguyen and Caine Ardayfio, are launching Halo X, a pair of AI-powered smart glasses that continuously listen to, record, and transcribe every conversation the wearer has. The glasses then display relevant information in real time, such as definitions or answers to complex questions, effectively enhancing the wearer’s intelligence and memory. The startup has raised $1 million in funding led by Pillar VC and plans to offer the glasses for pre-order at $249. Positioned as a potential competitor to Meta’s smart glasses, Halo X aims to provide more advanced functionality without the privacy restrictions Meta has imposed due to its poor reputation on user privacy. However, the glasses raise significant privacy concerns because, unlike Meta’s glasses, which have indicator lights to alert others when recording, Halo X is designed to be discreet with no external indicators, effectively enabling covert recording. Privacy advocates warn that normalizing always-on recording devices threatens the expectation of privacy in public and private conversations, especially given that
IoTsmart-glassesAIwearable-technologyprivacy-concernsvoice-recognitionaugmented-realitySchrödinger’s cat video made with 2,024 atoms in quantum breakthrough
A team of physicists from the University of Science and Technology of China has created what is described as the "world’s smallest cat video," depicting Schrödinger’s cat thought experiment using just 2,024 rubidium atoms. This quantum-level visualization uses optical tweezers—focused laser beams—to precisely manipulate individual atoms within a 230-micron-wide array. Machine learning algorithms enable real-time calculations that direct the lasers to rearrange all atoms simultaneously in just 60 milliseconds, a significant improvement over previous methods that moved atoms one by one. The glowing atoms form images representing key moments of the Schrödinger’s cat paradox, illustrating the concept of superposition where a particle exists in multiple states simultaneously. This breakthrough addresses a major bottleneck in neutral-atom quantum computing by enabling rapid, defect-free assembly of large atom arrays with high accuracy—reported as 99.97% for single-qubit operations and 99.5% for two-qubit operations. The technique is highly scalable, maintaining
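A simplified way to see the scheduling problem is as an assignment: match the randomly loaded atoms to the target array sites so that total tweezer travel is minimized, then execute all the moves in parallel. The sketch below uses the classical Hungarian algorithm as a stand-in for the machine-learning scheduler described in the USTC work.

```python
# Sketch: assign loaded atoms to target sites to minimize total tweezer travel,
# so the moves can run in parallel. Hungarian algorithm used as a simplified
# stand-in for the ML-based scheduler.
import numpy as np
from scipy.optimize import linear_sum_assignment

def plan_moves(loaded_xy: np.ndarray, target_xy: np.ndarray):
    """Return (atom index, target index) pairs minimizing the summed distance."""
    # cost[i, j] = distance from loaded atom i to target site j
    cost = np.linalg.norm(loaded_xy[:, None, :] - target_xy[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist())), cost[rows, cols].sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    loaded = rng.uniform(0, 230, size=(40, 2))   # microns; stochastic loading
    grid = np.stack(np.meshgrid(np.linspace(30, 200, 7),
                                np.linspace(30, 200, 7)), -1).reshape(-1, 2)[:40]
    moves, total_travel = plan_moves(loaded, grid)
    print(len(moves), round(float(total_travel), 1))
```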
materialsquantum-computingmachine-learningoptical-tweezersrubidium-atomsAIquantum-technologyAre Tesla Execs Engaging In Insider Trading? - CleanTechnica
The article from CleanTechnica raises concerns about potential insider trading among Tesla executives, focusing on significant stock sales by key figures such as Senior VP Tom Zhu, CFO Vaibhav Taneja, and Board Chair Robyn Denholm. Zhu has sold 82% of his Tesla shares, while Denholm has sold over $558 million worth of stock since 2020. These sales have sparked speculation about internal unrest and doubts regarding Tesla’s strategic pivot toward autonomy, AI, and robotics. The article questions whether these executives are acting on non-public information about the company’s future prospects, especially given Tesla’s recent ambiguous strategic direction and CEO Elon Musk’s divided attention among various ventures. Further scrutiny is directed at Tesla’s financial health and operational challenges. The company faces weakening consumer demand, regulatory hurdles for its Full Self-Driving (FSD) software, and production difficulties with its robotaxi and Cybertruck projects. Additionally, the integration of Musk’s xAI initiative appears to be diverting resources without clear
robotAIelectric-vehiclesinsider-tradingTeslaautonomystock-marketMosquito-killing robot dogs to fight Chikungunya virus in Hong Kong
Hong Kong authorities are set to deploy robot dogs equipped with insecticide sprayers starting next month to combat the rising cases of the mosquito-borne Chikungunya virus. This initiative comes after nine imported cases were recorded locally and a significant outbreak in nearby Guangdong province. The robot dogs, capable of navigating difficult terrains like hillsides, aim to spray insecticides in hard-to-reach areas, thereby reducing the workload on frontline workers, especially during hot weather. If the trial is successful, the government plans to expand the use of these robotic dogs and continue researching innovative mosquito-control methods. These robotic dogs, developed by companies such as Boston Dynamics, integrate AI, cameras, and sensors to detect standing water and map mosquito breeding sites. They can analyze environmental data to predict high-risk areas, enabling targeted insecticide use that minimizes environmental impact. Additionally, Hong Kong is exploring other mosquito control strategies, including a WHO-recommended method involving bacteria introduced into mosquitoes to reduce their reproduction and virus transmission, with trials expected next
robotroboticsAImosquito-controlpublic-health-technologysmart-sensorsautonomous-robotsMeet the first batch of VCs set to judge Startup Battlefield 200 at TechCrunch Disrupt 2025
Startup Battlefield 200 at TechCrunch Disrupt 2025, taking place October 27–29 in San Francisco, will feature the top 20 startups selected from thousands of applicants competing for a $100,000 equity-free prize and significant industry exposure. This competition has a strong legacy of launching successful companies like Dropbox, Mint, Vurb, and Cloudflare. The event promises intense pitching sessions judged by leading venture capitalists who will rigorously evaluate each startup’s potential through candid Q&A, providing insights into what excites or concerns top investors. The first group of judges announced includes five prominent VCs: Philip Clark of Thrive Capital, known for investments in AI and robotics companies; Madison Faulkner of NEA, specializing in data, AI, and developer tools; Leslie Feinzaig, founder of Graham & Walker VC, focused on disruptive founders and public market innovation; and Ilya Kirnos, co-founder and CTO of SignalFire, who leverages AI-driven data platforms to identify promising
robotAIroboticsstartupventure-capitaltechnologyinnovationNVIDIA, NSF invest $150M in open AI to turbocharge US science
NVIDIA and the U.S. National Science Foundation (NSF) have jointly committed over $150 million to develop open, multimodal AI models aimed at accelerating scientific discovery and maintaining U.S. leadership in AI-driven research. This partnership supports the Open Multimodal AI Infrastructure to Accelerate Science (OMAI) project, led by the Allen Institute for AI (Ai2). The NSF is contributing $75 million, while NVIDIA provides $77 million in advanced technology, including NVIDIA HGX B300 systems with Blackwell Ultra GPUs and the NVIDIA AI Enterprise software platform. These resources are designed to handle large-scale AI workloads, enabling faster model training and inference. OMAI will produce a fully open suite of large language models capable of processing diverse scientific data types such as text, images, graphs, and tables. These models will help researchers analyze data more rapidly, generate code and visualizations, and link new insights to existing knowledge, with applications ranging from material discovery to protein function prediction. All models,
AIscientific-researchmaterials-discoveryNVIDIANSFmultimodal-AI-modelsopen-source-AIPhotos: World's first Robocar promises pure autonomy with lidars, radars
The Tensor Robocar, introduced by California-based startup Tensor, is the world’s first personal autonomous vehicle designed from the ground up for private ownership rather than fleet use. Scheduled for delivery in late 2026, the Robocar features a comprehensive sensor suite including 37 cameras, 5 lidars, 11 radars, and multiple microphones and ultrasonic sensors, enabling Level 4 autonomy with no driver input required under defined conditions. Its architecture emphasizes safety and redundancy, meeting global automotive safety standards such as FMVSS and IIHS Top Safety Pick+, with full backup systems to prevent single points of failure. The vehicle’s autonomy is powered by a dual-system AI: one system handles rapid, reflexive driving responses based on expert driver data, while the other uses a multimodal Visual Language Model to reason through complex or unusual scenarios, including low-visibility conditions. The Robocar also functions as an "AI agentic car," featuring a Large Language Model that enables conversational interaction and adapts to the owner
robotautonomous-vehiclesAIsensorslidarradarautomotive-technologyPixar Lamp-style robot, lifelike Siri in Apple’s reported AI plans
Apple is preparing a significant expansion into artificial intelligence with a range of new devices and software enhancements aimed at revitalizing its innovation image and competing with rivals like Samsung, Meta, and Google. Central to this effort is a tabletop robot, targeted for release in 2027, designed as a virtual companion that resembles an iPad on a movable arm. This robot will feature a lifelike version of Siri capable of natural, ongoing conversations and contextual memory, allowing it to interact more like a person in the room by suggesting activities or assisting with planning. Internally nicknamed the “Pixar Lamp,” the device includes a 7-inch horizontal display on a motorized arm and can be controlled remotely via iPhone during video calls. Alongside this, Apple plans to launch a non-robotic smart display next year, running a new multiuser operating system called Charismatic that integrates face recognition and enhanced Siri voice control through a feature called App Intents. In addition to these devices, Apple is advancing its home
robotAIsmart-homeSirihome-securitysmart-displayApple-innovationWoman regains speech 18 years after stroke with brain implant
Eighteen years after suffering a brainstem stroke that left her with locked-in syndrome and near-total paralysis, Ann Johnson regained the ability to speak through an AI-powered brain-computer interface (BCI). The implant, placed over her brain’s speech motor cortex, detects neural signals when she attempts to speak and translates them via an AI decoder into audible words and facial animations on a digital avatar. Initially, the system had an eight-second delay due to sentence-based processing, but recent advances reported in 2025 have reduced this latency to about one second using a streaming AI architecture, enabling near-real-time communication. Johnson’s voice was personalized using recordings from her 2004 wedding speech, and she selected an avatar that mimics her facial expressions. The clinical trial, led by researchers at UC Berkeley and UCSF, aims to transform neuroprostheses from experimental devices into practical, plug-and-play clinical tools. Future developments may include wireless implants and photorealistic avatars to enhance natural interaction. The technology
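The latency gain from the streaming architecture comes from emitting words window by window instead of waiting for the end of the attempted sentence. The toy comparison below simulates only that timing difference; the neural data and decoder are not modeled.

```python
# Toy illustration of sentence-level vs. streaming decoding latency.
# Everything is simulated; only the timing difference is the point.
def sentence_level(words: list[str], window_s: float = 1.0) -> list[tuple[float, str]]:
    """Decode only after the whole attempt: every word appears at the end."""
    total = window_s * len(words)
    return [(total, w) for w in words]

def streaming(words: list[str], window_s: float = 1.0) -> list[tuple[float, str]]:
    """Decode each window as it arrives: word i appears after (i + 1) windows."""
    return [((i + 1) * window_s, w) for i, w in enumerate(words)]

if __name__ == "__main__":
    attempt = "it is a beautiful day outside".split()   # six one-second windows
    print("sentence-level first word at", sentence_level(attempt)[0][0], "s")
    print("streaming first word at     ", streaming(attempt)[0][0], "s")
```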
robotAIbrain-computer-interfaceneuroprostheticsmedical-technologyspeech-restorationassistive-technologyChina unveils world's first autonomous robot for hybrid pollination
Chinese scientists have developed GEAIR, the world’s first AI-powered autonomous robot designed for hybrid pollination in plant breeding. Combining artificial intelligence and biotechnology, GEAIR can independently identify flowers and perform precise cross-pollination, significantly reducing the time, cost, and human error traditionally associated with hybrid breeding. This innovation promises faster breeding cycles and improved efficiency in producing high-quality crop varieties. The research team, led by Xu Cao at the Institute of Genetics and Developmental Biology of the Chinese Academy of Sciences, enhanced the robot’s effectiveness by using gene editing to create male-sterile flowers, facilitating easier hybrid seed production. Integrating GEAIR with advanced farming techniques like “de novo domestication” and “speed breeding,” they established an intelligent robotic breeding factory capable of rapidly generating superior plant varieties. This technology notably advances soybean hybrid breeding in China and exemplifies the potential of combining AI, robotics, and biotechnology to revolutionize agricultural breeding practices. The study detailing this breakthrough was published in the journal Cell
robotAIbiotechnologyhybrid-pollinationprecision-agricultureautonomous-robotcrop-breedingNvidia Cosmos Robot Trainer
Nvidia has announced Cosmos, a new simulation and reasoning platform designed to enhance AI, robotics, and autonomous vehicle development. Cosmos aims to enable smarter and faster training of AI models by providing advanced simulation environments that closely mimic real-world scenarios. This approach helps improve the accuracy and efficiency of AI systems used in robotics and autonomous technologies. The platform leverages Nvidia’s expertise in graphics processing and AI to create detailed, realistic simulations that facilitate better decision-making and reasoning capabilities in machines. By accelerating the training process and improving model robustness, Cosmos is expected to advance the development of intelligent robots and autonomous vehicles, ultimately contributing to safer and more reliable AI-driven systems.
robotAINvidiaautonomous-vehiclessimulationrobotics-trainingartificial-intelligenceUber Freight CEO Lior Ron leaves to join self-driving startup Waabi as COO
Uber Freight CEO Lior Ron is leaving his role to become COO of Waabi, a self-driving truck startup focused on commercializing autonomous freight technology. Rebecca Tinucci, who previously helped build Tesla’s charging network, will succeed Ron as head of Uber Freight, while Ron will remain chairman. Waabi’s founder and CEO, Raquel Urtasun, highlighted Ron’s experience scaling Uber Freight to $5 billion in revenue and emphasized his role in driving Waabi’s go-to-market strategy and partnerships. Ron’s move reflects his belief in the transformative potential of autonomous trucking and Waabi’s positioning to lead that change. Waabi, founded in 2021, has raised $287.7 million and leverages an “AI-first” approach using its proprietary Waabi World simulator to accelerate the training, testing, and validation of its self-driving software. This approach has allowed Waabi to advance efficiently in a capital-intensive industry where competitors like TuSimple and Embark have struggled. The company is on
robotautonomous-vehiclesself-driving-trucksAItransportation-technologyWaabilogistics-automationAI-powered radar tech can spy on phone calls up to 10 feet away
Researchers at Penn State have developed an AI-powered radar system capable of remotely eavesdropping on phone calls by detecting and decoding subtle vibrations from a cellphone’s earpiece. Using millimeter-wave radar—technology commonly found in self-driving cars and 5G networks—combined with a customized AI speech recognition model adapted from OpenAI’s Whisper, the system can capture and transcribe conversations from up to 10 feet away with approximately 60% accuracy over a vocabulary of up to 10,000 words. This represents a significant advancement from their 2022 work, which could only recognize a limited set of predefined words with higher accuracy. The researchers emphasize that while the transcription accuracy is imperfect, even partial recognition of keywords can pose serious privacy and security risks, especially when combined with contextual knowledge. They liken the system’s capabilities to lip reading, which also relies on partial information to infer conversations. The study highlights the potential misuse of such technology by malicious actors to spy on private phone calls remotely,
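The back end of such a pipeline, turning a demodulated displacement signal into audio and handing it to a speech model, can be sketched with the open-source Whisper package. The filtering and normalization below are a crude stand-in for the paper's custom front end and adapted recognition model, and the input signal here is a synthetic placeholder.

```python
# Sketch: convert a demodulated radar displacement signal to audio and run it
# through open-source Whisper. Signal processing is a crude stand-in.
import numpy as np
import soundfile as sf
import whisper
from scipy.signal import butter, filtfilt

def displacement_to_wav(displacement: np.ndarray, fs: int, out_path: str) -> None:
    # Keep the speech band (~80 Hz to 4 kHz) and normalize to [-1, 1].
    b, a = butter(4, [80 / (fs / 2), 4000 / (fs / 2)], btype="band")
    speech = filtfilt(b, a, displacement)
    speech = speech / (np.max(np.abs(speech)) + 1e-9)
    sf.write(out_path, speech.astype(np.float32), fs)

if __name__ == "__main__":
    fs = 16_000
    t = np.arange(fs) / fs
    fake_vibration = 1e-6 * np.sin(2 * np.pi * 440 * t)   # placeholder signal
    displacement_to_wav(fake_vibration, fs, "earpiece.wav")
    result = whisper.load_model("base").transcribe("earpiece.wav")
    print(result["text"])
```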
AIradar-technologyspeech-recognitionprivacy-risksmillimeter-wave-radarmachine-learningIoT-securityLearn about the state of the robotics industry at RoboBusiness - The Robot Report
RoboBusiness 2025, taking place October 15-16 in Santa Clara, California, will feature a keynote panel discussing the current state of the rapidly evolving robotics industry. Industry experts, including Sanjay Aggarwal (venture partner at F-Prime Capital), Jon Battles (VP of technology strategy at Cobot), Amit Goel (director of product management for autonomous machines at NVIDIA), and Brian Gaunt (VP of Digital Transformation at DHL Supply Chain), will explore what is working in robotics, the challenges faced, and emerging trends shaping the future. The session promises a candid, experience-driven conversation on breakthroughs, barriers, and market insights. The event is a premier gathering for developers and suppliers of commercial robots, produced by WTWH Media, which also organizes The Robot Report and other robotics-focused conferences. RoboBusiness 2025 will host over 60 speakers, a startup workshop, the Pitchfire competition, and extensive networking opportunities. More than 100 exhibitors will showcase the latest robotics technologies and
roboticsautomationAIautonomous-machinesNVIDIA-Jetsondigital-transformationsupply-chain-roboticsRobots pack groceries in record-time at fully automated warehouse
Ocado’s fully automated warehouse system, known as the Hive, revolutionizes online grocery fulfillment by using fleets of AI-controlled robots to pick and pack orders in record time. Operating within a massive 3D grid holding thousands of grocery items, these bots move at speeds up to 9 miles per hour, communicating with a central AI system multiple times per second to efficiently collect products. Robotic arms then pack orders using computer vision and deep learning, arranging items to maximize space and protect fragile goods. This process can complete a 50-item order in just five minutes—six times faster than traditional manual picking. The Hive’s technology integrates artificial intelligence, robotics, and automation, supported by a digital twin—a virtual replica of the warehouse—that enables Ocado to simulate operations, optimize efficiency, and plan delivery routes without disrupting real-world activity. The system’s modular design allows it to scale flexibly, accommodating various warehouse sizes and locations, while storing up to 78 percent more products than typical supermarkets. This results
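The packing rule the robotic arms enforce, heavy and rigid items in first, fragile items last so they end up on top, can be reduced to a toy ordering heuristic like the one below. This is only an illustration of the rule, not Ocado's vision-driven planner.

```python
# Toy version of the packing-order rule: non-fragile and heavy items first,
# fragile items last so they sit on top. Illustrative only.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    weight_kg: float
    fragile: bool

def packing_order(items: list[Item]) -> list[str]:
    """Sort: non-fragile before fragile, then heaviest first within each group."""
    return [i.name for i in sorted(items, key=lambda i: (i.fragile, -i.weight_kg))]

if __name__ == "__main__":
    basket = [
        Item("eggs", 0.7, True),
        Item("detergent", 2.0, False),
        Item("bread", 0.4, True),
        Item("canned tomatoes", 1.6, False),
    ]
    print(packing_order(basket))
    # -> ['detergent', 'canned tomatoes', 'eggs', 'bread']
```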
roboticsautomationAIwarehouse-automationdigital-twinsmart-logisticsgrocery-fulfillmentTesla shuts down Dojo, the AI training supercomputer that Musk said would be key to full self-driving
Tesla is shutting down its Dojo AI training supercomputer project and disbanding the team behind it, marking a significant shift in the company’s strategy for developing in-house chips and hardware for full self-driving technology. Peter Bannon, the Dojo lead, is leaving Tesla, and remaining team members will be reassigned to other data center and compute projects. This move follows the departure of about 20 former Dojo employees who have founded a new startup, DensityAI, which aims to build chips, hardware, and software for AI-powered data centers used in robotics, AI agents, and automotive applications. The decision to end Dojo comes amid Tesla’s ongoing efforts to position itself as an AI and robotics company, despite setbacks such as a limited robotaxi launch in Austin that faced criticism for problematic driving behavior. CEO Elon Musk had previously touted Dojo as central to Tesla’s AI ambitions and full self-driving goals, emphasizing its capacity to process vast amounts of video data. However, since mid-202
robotAITeslaautonomous-vehiclesAI-chipssupercomputerroboticsWorld-1st roadside AI tech that prevents animal-vehicle collisions tested
A team of researchers from the University of Sydney, Queensland University of Technology (QUT), and the Department of Transport and Main Roads Queensland has developed and successfully tested the world’s first roadside AI technology designed to prevent animal-vehicle collisions. Known as the Large Animal Activated Roadside Monitoring and Alert (LAARMA) system, it uses a combination of RGB cameras, thermal imaging, LiDAR sensors, and self-teaching artificial intelligence to detect animals near roads in real-time and alert drivers through flashing Variable Message Signs (VMS). During a five-month trial in Far North Queensland, an area with frequent cassowary collisions, LAARMA achieved 97% detection accuracy, recorded over 287 animal sightings, and helped reduce vehicle speeds by up to 6.3 km/h. Unlike traditional detection systems, LAARMA’s AI continuously improves its accuracy by learning from each sighting without human reprogramming, increasing detection rates from an initial 4.2% to 78.5%
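The runtime loop of such a system is conceptually simple: run the detector on each frame, trigger the variable message sign when confidence crosses a threshold, and log the sighting so the model can keep learning. The sketch below shows that loop with placeholder detector and sign interfaces, not the deployed LAARMA software.

```python
# Sketch of a detect-and-alert loop: high-confidence detections trigger the
# variable message sign and are logged for retraining. Interfaces are placeholders.
import time
from typing import Callable

def monitor(
    read_frame: Callable[[], object],
    detect: Callable[[object], float],       # returns confidence an animal is present
    activate_sign: Callable[[str], None],
    threshold: float = 0.8,
    poll_s: float = 0.5,
    max_iterations: int = 10,
) -> list[float]:
    sightings = []
    for _ in range(max_iterations):
        conf = detect(read_frame())
        if conf >= threshold:
            activate_sign("SLOW DOWN - ANIMAL ON ROAD AHEAD")
            sightings.append(conf)            # kept for the self-improving model
        time.sleep(poll_s)
    return sightings

if __name__ == "__main__":
    import random
    log = monitor(
        read_frame=lambda: None,
        detect=lambda _frame: random.random(),
        activate_sign=lambda msg: print("VMS:", msg),
        poll_s=0.0,
    )
    print(f"{len(log)} high-confidence sightings logged")
```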
AIroadside-safetyanimal-detectionIoT-sensorsautonomous-systemswildlife-conservationsmart-transportationLyft partners with Baidu to deploy autonomous vehicles in Europe - The Robot Report
Baidu and Lyft have announced a strategic partnership to deploy Baidu’s Apollo Go autonomous vehicles (AVs) across European markets, starting with Germany and the U.K. in 2026, subject to regulatory approval. The collaboration aims to scale the fleet to thousands of vehicles throughout Europe in subsequent years. Baidu will provide its advanced autonomous driving technology, including the fully electric RT6 robotaxi equipped with Apollo Go’s sensor suite and safety architecture, while Lyft will leverage its extensive rideshare platform and operational expertise. Both companies emphasize working closely with European regulators to ensure compliance with safety and regulatory standards. This partnership marks a significant milestone in expanding autonomous mobility globally, combining Baidu’s AI and self-driving capabilities with Lyft’s market reach. Baidu’s Apollo Go service has already deployed over 1,000 AVs across 15 cities with more than 11 million cumulative rides, demonstrating scalability from test operations to commercial deployment. Lyft, which operates in nearly 1,000 cities across 11 countries
robotautonomous-vehiclesrobotaxiBaidu-ApolloLyftAImobility-technologyAI robot builds robot’s brain 20x faster than humans
Computer scientist Peter Burke from the University of California has developed a novel system where generative AI models like ChatGPT autonomously generate the control software—or "brain"—for a drone, significantly accelerating the development process. Unlike traditional drone control software, Burke’s approach involves two "brains": a higher-level AI-generated control system called WebGCS, which runs a web-based dashboard on a Raspberry Pi onboard the drone, and a lower-level firmware managing flight operations. This system enables the drone to perform autonomous functions such as obstacle avoidance, with human operators able to intervene if necessary. Burke conducted multiple development sprints using various AI coding tools and models, overcoming challenges related to model context limitations. Ultimately, using the Windsurf tool, the AI-generated WebGCS produced approximately 10,000 lines of code in about 100 hours over 2.5 weeks—a process about 20 times faster than Burke’s previous four-year development of a similar drone control system. Industry experts, like Geolava
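For a sense of what a web-based ground-control dashboard on a Raspberry Pi looks like at its simplest, here is a minimal Flask sketch with a telemetry endpoint and a command endpoint. The route names, modes, and hard-coded telemetry are placeholders; the AI-generated WebGCS is roughly 10,000 lines and talks to real flight firmware.

```python
# Minimal sketch of a web ground-control endpoint of the kind WebGCS provides.
# Telemetry values and mode names are placeholders.
from flask import Flask, jsonify, request

app = Flask(__name__)

STATE = {"mode": "LOITER", "altitude_m": 12.5, "battery_pct": 76}

@app.get("/telemetry")
def telemetry():
    """Return the drone's current state for the browser dashboard."""
    return jsonify(STATE)

@app.post("/command")
def command():
    """Accept a high-level command such as {'mode': 'RTL'} from the dashboard."""
    new_mode = request.get_json(force=True).get("mode")
    if new_mode in {"LOITER", "GUIDED", "RTL", "LAND"}:
        STATE["mode"] = new_mode
        return jsonify({"ok": True, "mode": new_mode})
    return jsonify({"ok": False, "error": "unknown mode"}), 400

if __name__ == "__main__":
    # On the Raspberry Pi this would bind to the drone's network interface.
    app.run(host="0.0.0.0", port=5000)
```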
robotAIautonomous-dronesgenerative-AIdrone-softwareRaspberry-Pirobotics-programmingUnitree G1 robot impresses Dubai leadership, joins museum exhibit
The Unitree G1 humanoid robot recently gained significant attention in Dubai when it was showcased during a live demonstration at the historic Union House, engaging with His Highness Sheikh Mohammed bin Rashid Al Maktoum. Developed through collaboration between Dubai Future Labs and Chinese robotics firm Unitree, the G1 robot exemplifies advanced humanoid robotics with capabilities such as handshakes, hugs, waves, voice command input, and situational awareness via sensors including Intel RealSense depth cameras and 3D LiDAR. Compact and agile, the robot stands 1.32 meters tall, weighs 35 kilograms, and features a foldable design for easy transport. It will soon be part of the interactive exhibits at Dubai’s Museum of the Future, aligning with the UAE’s ambitions to integrate AI and robotics into public life and enhance tourism. This development is part of Dubai’s broader strategy to position itself as a global innovation hub and attract investors and entrepreneurs, supported by a growing affluent population and nearly 10 million
robothumanoid-robotAIrobotics-innovationautonomous-navigationsmart-policinginteractive-exhibitsPrismaX launches teleop platform for robotic arms - The Robot Report
PrismaX, a San Francisco-based startup with $11 million in funding, has launched a teleoperation platform for robotic arms aimed at bridging the gap between robotics and mainstream adoption. The platform allows users to remotely operate robotic arms and serves as a foundational step toward a future where humans and robots collaborate to enhance human capabilities. PrismaX’s co-founder and CEO Bayley Wang emphasized that this tele-op system is a proof of concept for a labor market where humans and robots work hand in hand, with the company focusing initially on teleoperations and visual data collection to train AI models. PrismaX has outlined a roadmap aligned with the robotics industry's evolution: in the short term, teleoperators will gather data and gain experience; in the mid-term, operators will manage fleets of robots performing real tasks; and in the long term, robots will achieve high autonomy powered by foundational AI models. The company envisions a self-reinforcing "data flywheel" where increased robot operation generates valuable datasets that improve AI,
roboticsteleoperationrobotic-armsAIautomationteleop-platformrobotics-industryTwo arrested for smuggling AI chips to China; Nvidia says no to kill switches
The U.S. Department of Justice arrested Chuan Geng and Shiwei Yang on August 2 in California for allegedly smuggling advanced AI chips to China through their company, ALX Solutions. They face charges under the Export Control Reform Act, which carries penalties of up to 20 years in prison. The DOJ indicated the chips involved were highly powerful GPUs designed specifically for AI applications, strongly suggesting Nvidia’s H100 GPUs. Evidence showed ALX Solutions shipped these chips to intermediaries in Singapore and Malaysia while receiving payments from entities in Hong Kong and China, apparently to circumvent U.S. export restrictions. In response, Nvidia emphasized its strict compliance with U.S. export controls and stated that any diverted products would lack service and support. The company also rejected recent U.S. government proposals to embed kill switches or backdoors in chips to prevent smuggling, arguing such measures would compromise security and trust in U.S. technology. Nvidia warned that creating vulnerabilities intentionally would benefit hackers and hostile actors, ultimately harming America
AIsemiconductorsNvidiaexport-controlchip-smugglingtechnology-securityGPUsAI, Drones, & Digital Twins Help Renewable Energy Persist In US
The article discusses how technological advancements in AI, drones, and digital twins are playing a crucial role in advancing renewable energy in the US despite political uncertainties. A recent study by Systemiq and the London School of Economics highlights that AI can significantly enhance renewable energy systems by improving grid management and increasing the efficiency of solar and wind power by up to 20%. AI also aids in better financial decision-making, especially in emerging markets, by predicting investment risks more accurately. Additionally, high-performance computing systems like the Department of Energy’s Kestrel are accelerating renewable energy research and efficiency improvements. Drones are another key technology transforming renewable energy by enabling efficient inspection, maintenance, and monitoring of solar, wind, and hydro infrastructure. Market research by DataM Intelligence forecasts strong growth in the renewable drone market through 2031, driven by increasing renewable installations and advancements in drone technology that improve data accuracy and operational safety. Regulatory progress by the FAA to expand commercial drone use beyond visual line of sight (BVLOS) is expected
renewable-energyAIdronesdigital-twinsenergy-efficiencysmart-gridclimate-technologyCosmic and ABB use robotics to rebuild LA homes after wildfires - The Robot Report
ABB Robotics is partnering with Cosmic Buildings to rebuild homes destroyed by recent wildfires in Southern California using advanced robotics and AI-driven modular construction. They have deployed a mobile microfactory in Pacific Palisades, California, that integrates ABB’s IRB 6710 robots and RobotStudio digital twin software with Cosmic’s AI-powered building information model (BIM). This system automates the fabrication and assembly of custom structural wall panels with millimeter precision onsite, enabling faster, safer, and more cost-effective construction. Cosmic aims to build 100 homes by 2027 using this approach, which significantly accelerates construction speed—up to three times faster than traditional methods—and reduces costs by about 30%. The homes constructed through this collaboration exceed California’s wildfire resilience and energy efficiency standards by incorporating non-combustible materials, solar and battery backup systems, and water independence features like greywater recycling. ABB and Cosmic emphasize that their robotic and AI integration allows real-time quality control and problem detection, ensuring consistent build quality
roboticsAIconstruction-automationmodular-housingdisaster-recoveryrenewable-energysmart-building-materialsLearn about the first humanoid deployments at RoboBusiness 2025 - The Robot Report
The article discusses the upcoming keynote panel at RoboBusiness 2025, titled “Lessons Learned from First Humanoid Deployments,” which will focus on the current state and future prospects of humanoid robots in commercial use. The session, scheduled for October 15 at the Santa Clara Convention Center, will feature industry leaders sharing candid insights about the successes, challenges, and engineering lessons from early humanoid robot deployments. Agility Robotics is highlighted for testing its Digit humanoid robot with companies like GXO Logistics and Spanx, illustrating real-world applications. Key panelists include Jim Fan from NVIDIA, who has a strong background in AI and robotics research, including work on multimodal models and robotic manipulation; Katlyn Lewicke of GXO Logistics, who brings expertise in global automation strategy and logistics; and Melonee Wise, chief product officer at Agility Robotics, with extensive experience in autonomous robots and robotics industry leadership. The panel aims to provide a comprehensive view of how humanoid robots are being integrated into commercial
roboticshumanoid-robotsAIautomationrobotics-deploymentAgility-RoboticsRoboBusiness-2025TechCrunch Mobility: Tesla’s ride-hailing gambit
The article discusses Tesla CEO Elon Musk’s ongoing efforts to reposition Tesla from primarily an electric vehicle (EV) manufacturer to an AI and robotics company, with a particular focus on self-driving cars and humanoid robots. Despite Tesla’s advanced EV technology and its Full Self-Driving Supervised system, fully autonomous vehicles and humanoid robots at scale remain unrealized goals. Tesla’s initial step toward this vision was the launch of a limited robotaxi service in Austin, Texas, where Tesla employees currently supervise rides, falling short of Musk’s original vision of a fully autonomous, owner-rentable robotaxi fleet. Recently, Tesla announced plans to launch a robotaxi service in California’s Bay Area, but regulatory hurdles persist. Notably, Tesla has not yet applied for the necessary permits from the California DMV to operate autonomous vehicles commercially. Instead, Tesla has started a ride-hailing service using human drivers from its own employee pool, without any autonomous driving involved. This move appears to be largely for optics, aiming to
robotautonomous-vehiclesTeslaride-hailingAIroboticselectric-vehiclesTurkey revives the Ekranoplan as a smart, sea-skimming drone
Turkey has revived the Soviet-era wing-in-ground-effect (WIG) vehicle concept with its new TALAY drone, developed by SolidAERO. Unlike the massive, manned Ekranoplans of the Cold War, TALAY is a compact, unmanned, AI-driven sea-skimming drone designed for multi-role missions including reconnaissance, strike, and cargo delivery. It flies just 3 meters above the sea surface—below most coastal radar detection—can cover 200 km at speeds up to 200 kph, and carries a payload of 30 kg. Its modular design and foldable wings enable rapid deployment and versatile use in various maritime operations. The TALAY represents a doctrinal shift in naval warfare, emphasizing swarming tactics of low-cost, radar-evading drones to overwhelm enemy defenses rather than relying on fewer, larger missiles. This approach could pose a significant threat to both small inshore vessels and larger warships by saturating their defenses with multiple semi-autonomous attackers. Turkey
robotdroneAIunmanned-aerial-vehiclemilitary-technologysea-skimmingautonomous-flightTesla hands $29B comp package to Elon Musk amid ‘AI talent war’
Tesla’s board has approved a new $29 billion stock-based compensation package for CEO Elon Musk, citing the intensifying competition for AI talent and Tesla’s pivotal position in the industry. The package grants Musk 96 million shares that vest over two years, contingent on his continuous senior leadership role and a five-year holding period. Unlike his previous 2018 compensation plan, this new award is not tied to stock price performance goals. The shares come with a $23.34 purchase price per share, valuing the award at approximately $26.7 billion at current market prices. This new compensation plan is structured through Tesla’s 2019 Equity Incentive Plan, which shareholders have already approved, so it will not require a new shareholder vote. However, the package could be voided if the Delaware Supreme Court overturns a judge’s earlier ruling that struck down Musk’s 2018 pay package due to conflicts of interest and flawed negotiation processes. That 2018 plan, worth about $56 billion,
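The two valuations quoted are consistent with each other: the article's own numbers imply a share price of roughly $300 at the time, which connects the headline figure to the value net of the purchase price, as the back-of-the-envelope check below shows.

```latex
% Back-of-the-envelope check using the figures quoted in the summary
% (implied share price of roughly $300):
\[
\underbrace{96\times 10^{6}}_{\text{shares}} \times \underbrace{\$301}_{\text{approx.\ market price}}
\approx \$28.9\ \text{billion (headline value)}
\]
\[
96\times 10^{6} \times \left(\$301 - \$23.34\right) \approx \$26.7\ \text{billion (net of the purchase price)}
\]
```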
robotAITeslaCEO-compensationtechnologyartificial-intelligenceroboticsHumanoid robots Adam and Adam-U display lifelike AI movement
At the World Artificial Intelligence Conference 2025 in Shanghai, Chinese robotics company PNDbotics unveiled two advanced humanoid robots, Adam and Adam-U, showcasing significant strides in AI-driven robotics. Adam is a full-sized, 1.6-meter-tall, 132-pound humanoid robot designed for high agility and precision, featuring 44 degrees of freedom and powered by deep reinforcement learning (DRL) and imitation-learning algorithms. It boasts patented quasi-direct drive actuators that enable smooth, human-like movements, including balanced posture and deft manipulation, even without visual input. Adam’s modular, biomimetic design and real-time control system allow it to perform complex tasks dynamically, such as playing musical instruments and dancing. Adam-U, developed in partnership with Noitom Robotics and Inspire Robots, serves as a high-precision, stationary data acquisition platform with 31 degrees of freedom. It integrates advanced motion capture technology, including Noitom’s PNLink suit and Inspire’s dexterous robotic hand,
robothumanoid-robotAImotion-capturerobotics-innovationreinforcement-learningimitation-learningTesla asks shareholders to approve $29B comp package for Elon Musk amid ‘AI talent war’
Tesla has proposed a new $29 billion compensation package for CEO Elon Musk, consisting of 96 million shares that would vest over two years, contingent on Musk maintaining a senior leadership role and holding the stock for five years. This package is designed to address the intensifying competition for AI talent and Tesla’s strategic position amid rapid developments in AI and robotics. Unlike Musk’s previous 2018 award, this new plan is not tied to stock price targets but requires Musk’s continued involvement with the company. The proposal will be voted on at Tesla’s annual shareholder meeting in November and could be voided if the Delaware Supreme Court overturns a prior ruling that invalidated Musk’s 2018 compensation package due to conflicts of interest during its negotiation. The 2018 package, worth about $56 billion, was struck down by Delaware Chancery Court Judge Kathaleen McCormick, who criticized the flawed approval process influenced heavily by Musk and Tesla’s board, and the lack of time-bound commitments from Musk
robotAITeslaexecutive-compensationtechnology-leadershipartificial-intelligenceroboticsAI decodes dusty plasma mystery and describes new forces in nature
Scientists at Emory University developed a custom AI neural network that successfully discovered new physical laws governing dusty plasma, a complex state of matter consisting of electrically charged gas with tiny dust particles. Unlike typical AI applications that predict outcomes or clean data, this AI was trained on detailed experimental data capturing three-dimensional particle trajectories within a plasma chamber. By integrating physical principles such as gravity and drag into the model, the AI could analyze small but rich datasets and reveal precise descriptions of non-reciprocal forces—interactions where one particle’s force on another is not equally reciprocated—with over 99% accuracy. This breakthrough corrected long-standing misconceptions in plasma physics, including the nature of electric charge interactions between particles. The study demonstrated that when one particle leads, it attracts the trailing particle, while the trailing particle pushes the leader away, an asymmetric behavior previously suspected but never accurately modeled. The AI’s transparent framework not only clarifies these complex forces but also offers a universal approach applicable to other many-body systems, from living
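For readers curious what "integrating physical principles into the model" can look like in practice, here is a minimal, generic sketch, assuming PyTorch, synthetic data, and unit-mass particles; it is not the Emory group's actual architecture. Known terms such as gravity and drag are hard-coded, while a small network is left free to learn an unconstrained, possibly non-reciprocal pairwise force from observed trajectories.

```python
# Generic physics-informed fit, NOT the published Emory model: gravity and drag
# are fixed, and a neural network learns the residual pairwise force. Because no
# symmetry is imposed, the learned force is allowed to be non-reciprocal.
import torch
import torch.nn as nn

class PairForce(nn.Module):
    """Force exerted on particle i by particle j, as a function of their separation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(),
                                 nn.Linear(64, 64), nn.Tanh(),
                                 nn.Linear(64, 3))

    def forward(self, r_ij):           # r_ij: displacement vector(s) from j to i
        return self.net(r_ij)

GRAVITY = torch.tensor([0.0, 0.0, -9.8])
DRAG = 0.5                             # assumed drag coefficient, for illustration only

def predicted_accel(model, pos, vel):
    """Acceleration = known physics (gravity, drag) + learned pair forces (unit mass)."""
    n = pos.shape[0]
    pair_terms = []
    for i in range(n):
        separations = torch.stack([pos[i] - pos[j] for j in range(n) if j != i])
        pair_terms.append(model(separations).sum(dim=0))
    return GRAVITY.expand(n, 3) - DRAG * vel + torch.stack(pair_terms)

# Synthetic stand-ins for the measured 3D positions, velocities, and accelerations.
pos, vel, acc_observed = torch.randn(5, 3), torch.randn(5, 3), torch.randn(5, 3)

model = PairForce()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = ((predicted_accel(model, pos, vel) - acc_observed) ** 2).mean()
    loss.backward()
    opt.step()
```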
AIdusty-plasmaphysics-discoveryneural-networksmaterials-scienceparticle-interactionsplasma-physicsFemale-founded semiconductor AI startup SixSense raises $8.5M
SixSense, a Singapore-based deep tech startup founded in 2018 by engineers Akanksha Jagwani (CTO) and Avni Agarwal (CEO), has developed an AI-powered platform that enables semiconductor manufacturers to predict and detect chip defects in real time on production lines. The startup recently raised $8.5 million in a Series A funding round led by Peak XV’s Surge, bringing its total funding to approximately $12 million. SixSense addresses a critical challenge in semiconductor manufacturing by converting vast amounts of raw production data—such as defect images and equipment signals—into actionable insights that help factories prevent quality issues and improve yield. The platform is designed for process engineers rather than data scientists, allowing them to fine-tune models and deploy solutions quickly without coding. Despite the semiconductor industry's reputation for precision, inspection processes remain largely manual and fragmented, with existing systems primarily displaying data without deep analysis. SixSense’s AI platform offers early warnings, root cause analysis, and failure prediction, enabling manufacturers to act
semiconductorAImanufacturingdefect-detectionautomationquality-controldeep-techAI speeds up discovery of 'new' materials as lithium-ion alternatives
Researchers at the New Jersey Institute of Technology (NJIT) have leveraged artificial intelligence to accelerate the discovery of new battery materials that could serve as safer, cheaper, and more sustainable alternatives to lithium-ion technology. Using generative AI models, specifically a Crystal Diffusion Variational Autoencoder (CDVAE) combined with a fine-tuned large language model (LLM), the team rapidly explored thousands of potential porous crystal structures. These structures are designed to facilitate the movement of multivalent ions—such as magnesium, calcium, aluminum, and zinc—that carry multiple positive charges, offering higher energy density than lithium ions. The AI-driven approach overcame the traditional bottleneck of experimentally testing millions of material combinations, enabling the identification of five novel porous transition metal oxide materials with large channels ideal for fast and safe ion transport. The researchers validated the AI-generated materials through quantum mechanical simulations and thermodynamic stability assessments, confirming their practical synthesizability and promising performance for energy storage applications. This breakthrough not only advances the development of
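The workflow described above boils down to "generate many candidates cheaply, then screen hard before expensive simulation." The sketch below illustrates only that generate-and-screen pattern; `decode_structure` and `predict_energy_above_hull` are hypothetical stand-ins, not the CDVAE or the NJIT code.

```python
# Illustrative generate-and-screen loop (not the NJIT pipeline): sample candidate
# structures from a generative model, then keep only those predicted stable enough
# to be worth confirming with quantum mechanical simulation.
import random

def decode_structure(latent):
    # Stand-in for a CDVAE-style decoder mapping a latent vector to a crystal.
    return {"id": latent, "channel_radius_A": random.uniform(1.0, 4.0)}

def predict_energy_above_hull(structure):
    # Stand-in for a cheap ML stability estimate (eV/atom); DFT would confirm later.
    return random.uniform(0.0, 0.3)

candidates = [decode_structure(i) for i in range(1000)]
screened = [
    s for s in candidates
    if predict_energy_above_hull(s) < 0.05       # near the hull, likely synthesizable
    and s["channel_radius_A"] > 2.5              # wide channels for multivalent ions
]
print(f"{len(screened)} of {len(candidates)} candidates pass screening")
```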
AImaterials-sciencelithium-ion-alternativesbattery-technologyenergy-storagemultivalent-ion-batteriesgenerative-AIFigure CEO teases video showing humanoid robot doing laundry
Figure AI’s CEO Brett Adcock recently showcased a video of their humanoid robot, Figure 02, performing laundry tasks such as picking up clothes and placing them into a washing machine. While the robot cannot fully operate the machine independently yet, this demonstration marks a significant step toward automating household chores. Figure 02 is powered by Helix, a generalist Vision-Language-Action (VLA) model that integrates perception, language, and human understanding to enable advanced upper-body manipulation and multi-robot collaboration. The company plans to begin home trials of Figure 02 later this year, although its current focus remains on industrial applications, including a recent trial at BMW’s South Carolina facility and tasks like sorting plastic bags on conveyor belts. Figure 02 competes with other humanoid robots such as 1X Technologies’ Neo Gamma, designed for domestic use, and Boston Dynamics’ Atlas, which targets industrial environments. Adcock and his team aim to position Figure 02 for both industrial and home settings
roboticshumanoid-robotautomationAIindustrial-robotshome-roboticsmulti-robot-collaboration#RoboCup2025: social media round-up part 2 - Robohub
RoboCup2025 was held from July 15 to 21 in Salvador, Brazil, attracting around 3,000 participants competing across various robotics leagues. The event featured intense competition culminating in final rounds during the last days. Notably, in the #RoboCup2025 @Home Open Platform League (OPL) Final, the NimbRo team’s robot demonstrated impressive capabilities such as opening doors, removing trash, and closing a cabinet door, ultimately securing second place behind Korea’s team Tidyboy. Social media updates highlighted the tense atmosphere as top robots advanced to the finals, with teams overcoming challenges such as equipment damage during transport. Collaborative efforts among teams like RoboCanes (University of Miami), PUMAS (UNAM), @_erasers, and TIDbots enabled them to reach the finals in the @Home DSPL league. Additionally, the event included discussions on the future of RoboCup, reflecting the community’s engagement with advancing robotics and AI technologies. Overall, Robo
roboticsRoboCupAIautonomous-robotsrobot-competitionsservice-robotsrobotics-eventInterview with Kate Candon: Leveraging explicit and implicit feedback in human-robot interactions - Robohub
In this interview, Kate Candon, a PhD student at Yale University, discusses her research on improving human-robot interaction by leveraging both explicit and implicit feedback. Traditional robot learning often relies on explicit feedback, such as simple "good job" or "bad job" signals from a human teacher who is not actively engaged in the task. However, Candon emphasizes that humans naturally provide a range of implicit cues—like facial expressions, gestures, or subtle actions such as moving an object away—that convey valuable information without additional effort. Her current research aims to develop a framework that combines these implicit signals with explicit feedback to enable robots to learn more effectively from humans in natural, interactive settings. Candon explains that interpreting implicit feedback is challenging due to variability across individuals and cultures. Her initial approach focuses on analyzing human actions within a shared task to infer appropriate robot responses, with plans to incorporate visual cues such as facial expressions and gestures in future work. The research is tested in a pizza-making scenario, chosen for
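One simple way to picture "combining implicit signals with explicit feedback" is as a fused reward for the robot learner. The sketch below is an illustrative assumption rather than Candon's framework: it blends an optional explicit rating with a facial-valence estimate and a penalty when the human corrects the robot's last action.

```python
# Illustrative fusion of explicit and implicit feedback; the weights and cue
# mappings are assumptions, not the framework described in the interview.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackEvent:
    explicit: Optional[float]    # +1 "good job", -1 "bad job", or None if not given
    facial_valence: float        # -1..1 estimate from an expression recognizer
    corrected_last_action: bool  # the human undid or moved the robot's last placement

def fused_reward(ev: FeedbackEvent, w_face=0.3, w_correction=0.7) -> float:
    implicit = w_face * ev.facial_valence - (w_correction if ev.corrected_last_action else 0.0)
    if ev.explicit is None:
        return implicit                        # implicit cues carry the signal alone
    return 0.5 * ev.explicit + 0.5 * implicit  # blend when both are available

# e.g. a smile while the human quietly moves the robot's topping off the pizza
print(fused_reward(FeedbackEvent(explicit=None, facial_valence=0.6, corrected_last_action=True)))
```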
robothuman-robot-interactionimplicit-feedbackexplicit-feedbackinteractive-agentsrobot-learningAILuma and Runway expect robotics to eventually be a big revenue driver for them
AI video-generation startups Luma and Runway are expanding their focus beyond traditional movie studio clients, exploring new markets such as robotics and self-driving car companies for future revenue opportunities. Although specific companies in these sectors have not been disclosed, these discussions indicate a strategic move to diversify the applications of their AI video technology. Luma, in particular, is positioning itself to support robotics by developing 3D AI world models slated for early 2024, aiming to enable machines to better perceive and interact with their environments. Meanwhile, Runway is also eyeing the video game industry as another potential avenue for growth. Both companies are actively seeking to leverage their AI capabilities in emerging fields beyond entertainment, signaling a broader vision for their technology’s commercial use.
roboticsAIvideo-generation3D-AI-modelsself-driving-carsrobotics-industryAI-technologyHistory of GPU: 1979 arcade chips that boosted gaming, crypto, and AI
The history of GPUs traces back to 1979 arcade machines like Namco’s Galaxian, which featured dedicated graphics hardware capable of independently handling multicolored sprites and tile-map backgrounds without CPU intervention. This innovation proved commercially successful and established specialized graphics chips as essential for immersive interactive experiences. The evolution continued through home consoles such as Atari’s 2600 and Nintendo’s systems, which balanced hardware limitations with clever design, while high-end applications like military flight simulators demonstrated the high cost of advanced visuals before purpose-built GPUs became widespread. The consumer 3D graphics revolution was catalyzed in 1996 by 3dfx’s Voodoo 1 card, which significantly boosted PC gaming performance by offloading 3D rendering from the CPU. This sparked rapid competition, with ATI and NVIDIA advancing the technology. NVIDIA’s 1999 GeForce 256 marked a pivotal moment by integrating transform, lighting, rasterization, and pixel shading into a single chip, coining the term “GPU.”
robotAIGPUhigh-performance-computingautonomous-vehiclesgraphics-hardwarecryptocurrency-miningCutest Humanoid Robot Ready For Launch
The article introduces the Fourier GR-3, a new humanoid robot designed primarily for companionship and caregiving purposes. It highlights the robot's notably cute appearance, which sets it apart from previous models and may enhance its acceptance and integration into human environments. The robot's design aims to foster more natural and engaging interactions between humans and robots. While specific capabilities of the Fourier GR-3 are not detailed in the provided content, the article suggests that its launch could mark a significant step forward in how robots assist with caregiving and social companionship. The potential impact includes improving the quality of life for individuals needing support and advancing the development of empathetic and interactive robotic companions. However, further information about its functionalities and deployment remains unclear from the excerpt.
robothumanoid-robotroboticsAIcompanion-robotcaregiving-robothuman-robot-interactionGermany: World’s largest brain-like supercomputer to aid drug research
Germany’s SpiNNcloud has partnered with Leipzig University to deploy the world’s largest brain-inspired supercomputer specifically designed for drug discovery and personalized medicine research. The system, based on the second generation SpiNNaker hardware, comprises 4,320 chips and around 650,000 ARM-based cores, enabling the simulation of at least 10.5 billion neurons. This architecture allows for massively parallel processing of small, heterogeneous workloads, making it highly efficient for screening billions of molecules in silico—up to 20 billion molecules in under an hour, which is 100 times faster than traditional CPU clusters. The SpiNNcloud system’s design emphasizes energy efficiency and scalability, using 48 SpiNNaker2 chips per server board, each with 152 ARM cores and specialized accelerators. This results in performance that is 18 times more energy-efficient than current GPU-based systems, addressing power consumption and cooling challenges common in high-performance computing. The brain-inspired architecture supports dynamic sparsity and extreme parallelism, which
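A quick arithmetic check shows how the quoted hardware and throughput figures fit together; all numbers are taken from the summary above.

```python
# Consistency check of the configuration figures quoted above.
chips = 4_320
cores_per_chip = 152
chips_per_board = 48

total_cores = chips * cores_per_chip        # 656,640 -> "around 650,000 ARM-based cores"
server_boards = chips // chips_per_board    # 90 boards

molecules = 20e9                            # screened "in under an hour"
rate_per_second = molecules / 3600          # ~5.6 million molecules per second

print(total_cores, server_boards, f"{rate_per_second:,.0f} molecules/s")
```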
energysupercomputerAIbrain-inspired-computinglow-power-processorsdrug-discoverypersonalized-medicineArena simulation platform designed to accelerate Gatik autonomous trucking - The Robot Report
Gatik AI Inc. has launched Arena, a new simulation platform designed to accelerate the development and validation of its autonomous vehicles (AVs) by generating structured, controllable synthetic data. Arena addresses the limitations of traditional real-world data collection, which is time-consuming, expensive, and often unsafe, especially when capturing rare or high-risk scenarios. The platform uses advanced AI techniques such as neural radiance fields (NeRFs), 3D Gaussian splatting, and diffusion models to create photorealistic, high-fidelity simulations from various data inputs like segmentation maps, lidar, and HD maps. This enables comprehensive, closed-loop testing of the full autonomy stack, including multiple sensors (cameras, lidar, radar) and vehicle dynamics, while allowing scenario editing and A/B testing to simulate diverse environmental and traffic conditions. Arena aims to reduce the sim-to-real gap significantly, providing synthetic data that is sufficient for Gatik’s safety case and machine learning workflows without heavy reliance on annotated real-world data.
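The scenario-editing and A/B testing idea can be pictured as sweeping one condition at a time over a fixed base scenario, so that any behavior change can be attributed to that condition. The dictionary schema below is hypothetical and not Arena's actual format.

```python
# Sketch of scenario editing for A/B runs: hold the route and sensor suite fixed
# while varying environmental and traffic conditions. Schema is illustrative only.
import copy, itertools

base_scenario = {
    "route": "distribution_center_to_store_7",
    "sensors": ["camera", "lidar", "radar"],
    "weather": "clear",
    "traffic_density": "light",
    "cut_in_event": False,
}

variants = []
for weather, traffic in itertools.product(["clear", "rain", "fog"], ["light", "heavy"]):
    s = copy.deepcopy(base_scenario)
    s.update(weather=weather, traffic_density=traffic)
    variants.append(s)

print(f"{len(variants)} closed-loop runs generated from one base scenario")  # 6
```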
robotautonomous-vehiclessimulation-platformAIdigital-twinsensor-simulationautonomous-truckingDoosan Robotics acquires a majority stake of U.S.-based ONExia for $25.9M - The Robot Report
Doosan Robotics has acquired an 89.59% majority stake in U.S.-based robotics system integrator ONExia Inc. for approximately $25.9 million (KRW 35.6 billion). ONExia, founded in 1984 and based in Exton, Pennsylvania, specializes in end-to-end automation services including system design, manufacturing, and implementation across industries such as manufacturing, logistics, and packaging. The company has developed collaborative robots focused on end-of-line processes like palletizing and packaging, achieving around 30% average annual sales growth. Doosan aims to leverage ONExia’s 25 years of automation data and project expertise to enhance its AI capabilities and solution development, marking a strategic shift from hardware-centric products to integrated AI and software platforms. This acquisition is part of Doosan Robotics’ broader strategy to strengthen its global presence and accelerate innovation in intelligent robotics. The company, a recognized leader in collaborative robots (cobots), is increasing investments in research and development
roboticsautomationAIcollaborative-robotsDoosan-Roboticssystem-integrationmanufacturing-technologyTechCrunch Mobility: Tesla vs GM: A tale of two earnings
The article from TechCrunch Mobility contrasts the recent earnings reports and strategic directions of two major automakers, General Motors (GM) and Tesla, amid a challenging market environment marked by tariffs and slowing electric vehicle (EV) growth. GM, despite a $1 billion hit from tariffs in Q2, remains committed to EVs as its "north star," offering a broad portfolio of over a dozen EV models, with Chevrolet ranking as the No. 2 EV brand in the U.S. GM emphasizes "flexibility," aiming to configure factories capable of producing both EVs and internal combustion engine (ICE) vehicles to adapt to shifting demand. Additionally, GM highlighted deferred revenue from software services like its Super Cruise advanced driver-assistance system. In contrast, Tesla is focusing heavily on future technologies such as autonomy and artificial intelligence, with CEO Elon Musk envisioning the company evolving beyond car manufacturing into areas like Optimus robots and autonomous vehicles. Although automotive sales still constitute about 74% of Tesla’s revenue, this
electric-vehiclesTeslaGeneral-Motorsautonomous-vehiclesAIadvanced-driver-assistance-systemsEV-marketSemiconductor, EV autonomy testing becomes more efficient with Nigel AI
Emerson has developed Nigel AI Advisor, an AI-powered tool designed to enhance the efficiency and effectiveness of engineering innovations, particularly in complex test and measurement applications across industries such as semiconductors, transportation, and electronics. Integrated into Emerson’s flagship NI LabVIEW and NI TestStand software, Nigel leverages advanced large language models trained specifically on NI software to provide engineers with contextual advice, automation assistance, and detailed recommendations for improving code and test execution. The tool allows users to interact via natural language prompts, delivering precise engineering-format responses like tables and graphs, thereby enabling faster and more informed decision-making while safeguarding user data on a secure cloud platform. Nigel AI Advisor is tailored to test application development, distinguishing it from general-purpose AI assistants by being built on decades of trusted test knowledge and data. It can answer questions about programming and automation concepts, help users develop complex automated sequences, and even modify and execute test runs through interaction with the TestStand API. First unveiled at the NI Connect conference, Nigel represents
robotautomationAIsemiconductortestingengineeringsoftwareIn 90 seconds, AI satellite thinks, tilts, and shoots without human help
NASA has developed a groundbreaking AI-driven technology called Dynamic Targeting, enabling satellites to autonomously analyze their surroundings and decide where to collect scientific data without human intervention. Demonstrated aboard the CubeSat CogniSAT-6, launched in March 2024, the system allows the satellite to tilt forward along its orbit, capture preview images, and process them in under 90 seconds to identify cloud-free areas. This capability helps avoid wasting time and resources imaging through clouds, a common obstacle for Earth-observing satellites, by selectively capturing only clear views. Dynamic Targeting mimics human interpretation by recognizing meaningful features such as clouds, fires, or storms in real time, then adjusting the satellite’s instruments accordingly. Future tests aim to reverse the approach by targeting clouds and rapidly evolving weather phenomena like deep convective ice storms, as well as thermal anomalies such as wildfires and volcanic eruptions. These applications rely on specialized onboard algorithms trained to detect specific patterns, enhancing the satellite’s responsiveness and adaptability. NASA en
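Conceptually, the onboard loop is "preview, classify, then decide within the time budget." The sketch below captures only that shape; the function bodies are placeholders and the 20% cloud threshold is an assumed value, not NASA's flight software.

```python
# Conceptual look-ahead loop in the spirit of Dynamic Targeting; all functions are
# placeholders and the threshold is an assumption, not NASA flight code.
import time

CLOUD_FRACTION_LIMIT = 0.2        # assumed "clear enough" threshold

def capture_preview_tile():
    # in flight: tilt forward along-track and grab a low-cost preview image
    return {"cloud_fraction": 0.1}

def estimate_cloud_fraction(tile) -> float:
    # stand-in for a lightweight onboard classifier scoring cloud cover 0..1
    return tile["cloud_fraction"]

def schedule_primary_observation(tile):
    print("pointing the science instrument at the clear scene")

def lookahead_cycle(budget_s=90):
    start = time.monotonic()
    tile = capture_preview_tile()
    if estimate_cloud_fraction(tile) < CLOUD_FRACTION_LIMIT:
        schedule_primary_observation(tile)
    elapsed = time.monotonic() - start
    assert elapsed < budget_s, "preview-and-decide must fit inside the 90-second window"

lookahead_cycle()
```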
robotAIsatelliteautonomous-systemsspace-technologyCubeSatNASADrones get new ultrasonic vision upgrade, thanks to bat-inspired AI
Researchers at the University of Michigan have developed an AI-powered echolocation system inspired by bats and dolphins that enables drones and robots to navigate in complete darkness without relying on cameras, GPS, or laser sensors. This ultrasonic vision technology uses high-frequency sound pulses and analyzes their echoes to create spatial maps of surroundings, allowing machines to “see” through smoke, dust, or blackouts. Funded by the US Army Research Office and Ground Vehicle Systems Center, the system is particularly suited for disaster zones and hostile environments where traditional vision-based tools fail. The AI model employs an ensemble of convolutional neural networks (CNNs), each trained to recognize specific object shapes from echo patterns, enabling modular learning without retraining the entire network. The system was trained entirely in a synthetic 3D virtual environment simulating real-world distortions, which reduced development costs and time while maintaining accuracy. Tests demonstrated the AI’s ability to distinguish between similar echo patterns from different objects, proving its robustness in complex scenarios. Beyond defense and robotics
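The "ensemble of CNNs, one per object shape" idea can be sketched as a set of small binary detectors over the raw echo waveform, where adding a new object class means training just one more detector rather than retraining the whole network. Layer sizes, class names, and the selection rule below are illustrative assumptions (PyTorch).

```python
# Rough sketch of a per-class CNN ensemble over 1-D echo waveforms; sizes and
# classes are illustrative, not the Michigan system.
import torch
import torch.nn as nn

class EchoDetector(nn.Module):
    """Binary detector: does this echo pattern contain my object class?"""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=15, stride=2), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=15, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, echo):                    # echo: (batch, 1, samples)
        return torch.sigmoid(self.head(self.features(echo).flatten(1)))

classes = ["wall", "pillar", "person"]          # hypothetical object classes
ensemble = {name: EchoDetector() for name in classes}

echo = torch.randn(1, 1, 2048)                  # synthetic ultrasonic return
scores = {name: det(echo).item() for name, det in ensemble.items()}
best = max(scores, key=scores.get)              # adding a class = adding one detector
print(scores, "->", best)
```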
robotAIultrasonic-sensorsecholocationdrone-navigationmachine-perceptionconvolutional-neural-networksIndy Autonomous Challenge makes self-driving racing history at Laguna Seca - The Robot Report
The Indy Autonomous Challenge (IAC) made history at the WeatherTech Raceway Laguna Seca by successfully running AI-driven Dallara AV-24 racecars on one of the world’s most challenging road courses. Team PoliMOVE from Michigan State University claimed first place, demonstrating advanced self-driving racecar technology with precise control and strategic decision-making. Purdue AI Racing and Korea Advanced Institute of Science and Technology (KAIST) took second and third places, respectively. The event, held alongside the NTT INDYCAR SERIES Grand Prix of Monterey, showcased autonomous vehicles navigating complex turns like the infamous “Corkscrew” at speeds exceeding 100 kph (62.1 mph). This marked the third road course event for the IAC, which began on oval tracks and has steadily advanced in complexity and capability. The autonomous racecars operate fully independently, with AI systems controlling steering, acceleration, and braking, while student teams set the decision-making parameters. Laguna Seca’s demanding layout, including blind crests
robotautonomous-vehiclesAIself-driving-carsroboticsIndy-Autonomous-Challengemotorsport-technologyRoboBusiness announces 2025 agenda
RoboBusiness 2025, scheduled for October 15-16 at the Santa Clara Convention Center, has unveiled its comprehensive conference agenda. Established in 2004, RoboBusiness is a leading event for commercial robotics developers and suppliers, produced by WTWH Media. The event will feature over 60 speakers, a startup workshop, a robotics startup competition, networking receptions, and more than 100 exhibitors showcasing cutting-edge robotics technologies and solutions. The conference will include six tracks, with new additions in physical AI and humanoids, an expanded field robotics track, and sessions on business development, enabling technologies, and design best practices. Notable companies participating include ABB, Amazon Robotics, NVIDIA, and Intuitive Surgical. Keynote presentations will highlight significant industry trends and innovations. NVIDIA’s Deepu Talla will open with a talk on “Physical AI,” emphasizing the integration of generative AI into robotics to enable adaptable, intelligent autonomy beyond traditional automation. Another session will focus on early commercial deployments of humanoid robots
roboticsAIhumanoid-robotsphysical-AIrobotics-conferenceedge-AIautomationLyft to add autonomous shuttles in 2026 as Uber inks more self-driving deals
Lyft announced it will introduce autonomous shuttles manufactured by the Austrian company Benteler Group under its Holon brand to its network in late 2026. These shuttles, designed without steering wheels or pedals, will accommodate up to nine seated and six standing passengers with inward-facing seats. Initially, the deployment will focus on partnerships with U.S. cities and airports, with potential expansion depending on the program's success. The shuttles utilize Mobileye’s autonomous driving technology, although this collaboration is separate from Lyft’s other ongoing partnerships with autonomy providers. Meanwhile, Lyft’s main competitor, Uber, is aggressively expanding its autonomous vehicle offerings by incorporating robotaxis from multiple companies such as Waymo, WeRide, Baidu, Pony AI, and others across various global cities. Uber recently secured deals with Nuro and Lucid Motors as well. Despite years of testing, Lyft has yet to fully integrate autonomous vehicles into its fleet but plans to launch AV services using May Mobility vehicles in Atlanta later this
robotautonomous-vehiclesself-driving-technologymobilitytransportation-innovationAIelectric-vehiclesBonsai Robotics and farm-ng unite for intelligent farming solutions - The Robot Report
Bonsai Robotics Inc. has acquired farm-ng Inc., combining their expertise to advance intelligent farming solutions through artificial intelligence and robotics. Bonsai Robotics, based in San Jose, California, will integrate its autonomous AI technology with farm-ng’s customizable robotic hardware platform to create cost-effective, mixed-fleet solutions that enhance efficiency and reduce operational costs across various crops and farming environments. This strategic merger aims to make autonomy and AI accessible and easy to deploy on all farm equipment, whether retrofitted or newly built, ultimately shifting agriculture from traditional machinery (“iron”) to intelligent, software-driven systems. The two companies have a history of collaboration in vineyards, orchards, and bedded crops, demonstrating proven results and commercial momentum. Their combined strengths—Bonsai’s software capabilities and farm-ng’s innovative hardware—position them for rapid growth supported by active deployments and strong financial backing. Key objectives include developing a user-friendly app for managing agricultural fleets, adding intelligence to existing equipment, introducing new machinery form factors like smaller
roboticsagricultural-roboticsAIautonomous-farmingagtechintelligent-farmingrobotic-systemsTeqram deploys automated grinding robot in 2 states with AMP - The Robot Report
Teqram, a Dutch robotics manufacturer, has deployed its AI-powered EasyGrinder robotic grinding systems for the first time in the U.S. at Accurate Metal Products (AMP), a precision steel fabricator with locations in Milwaukee, Wisconsin, and Rockford, Illinois. The EasyGrinder automates the physically demanding task of surface preparation and finishing of flame- and plasma-cut steel parts, achieving surface preparation levels SSPC-SP5/SP11. Utilizing artificial intelligence and advanced 3D vision, the system autonomously identifies, picks up, and processes parts with an automatic tool changer and an integrated flipping mechanism (EasyFlipper) to clean both sides. It removes slag, lead-ins, rounds edges, and cleans interior diameters without requiring programming. AMP, an ISO 9001-certified company serving industries such as mining, energy, agriculture, and defense, views the EasyGrinder as a strategic addition to its technology portfolio that enhances precision and frees skilled tradespeople for higher-value work rather than replacing them
robotautomationAIrobotic-grindingmetal-fabricationindustrial-roboticsmanufacturing-technology5 Tesla [TSLA] Q2 Numbers That Burn - CleanTechnica
Tesla’s Q2 2025 financial results reveal significant year-over-year declines across key metrics, highlighting a troubling continuation of a downward trend that began in 2024. Revenue dropped by 9.23% to $19.34 billion, net income plummeted 70.58% to $409 million (sustained only by regulatory credits), net profit margin fell 67.53% to 2.12%, earnings per share decreased 40% to $0.27, and EBITDA declined nearly 20% to $1.94 billion. These figures follow a poor Q1 and reflect ongoing challenges rather than a one-off setback, with Tesla’s financial health deteriorating over multiple quarters. Tesla attributes its struggles to a strategic pivot toward AI and robotics, anticipating future breakthroughs that will drive growth. However, critics argue this narrative has been repeated for years without delivering the promised financial uplift, viewing it as a distraction from core vehicle sales, which are under pressure amid increasing competition in
energyTeslaelectric-vehiclesAIroboticsfinancial-performanceEV-marketHyundai Motor Celebrates 10 Years of IONIQ Forest With "Tree Correspondents" Campaign - CleanTechnica
Hyundai Motor Company is celebrating the 10th anniversary of its IONIQ Forest project, which has successfully planted 1 million trees across 13 countries since its launch in 2016. Originally started to mark the debut of the Hyundai IONIQ Electric, the project expanded globally in 2021 with the IONIQ 5 launch. The initiative supports reforestation efforts aimed at combating climate change, restoring ecosystems, and preserving biodiversity in regions including the U.S., Brazil, Korea, Germany, and others. To mark this milestone, Hyundai introduced the "Tree Correspondents" campaign, an innovative AI-driven storytelling effort that gives trees a "voice" by translating real-time ecological data into first-person narratives using a bespoke large language model (LLM). This campaign highlights the urgent need for forest conservation and climate action by sharing insights on climate change impacts and forest ecosystem deterioration. The campaign has received significant recognition, winning two Gold Lions and one Silver Lion at the 2025 Cannes Lions International
energyAIenvironmental-conservationclimate-actionHyundai-IONIQreforestationecological-dataTrump is set to unveil his AI roadmap: Here’s what to know
U.S. President Donald Trump is set to unveil his AI Action Plan, marking his first major address on artificial intelligence since beginning his second term. The plan aims to outline the administration’s strategies and priorities for AI, replacing the previous administration’s approach that emphasized safety, security reporting, and reducing bias in AI models. Trump’s plan is expected to focus on accelerating American AI development by easing regulatory burdens on AI companies, particularly by overhauling permitting rules to speed up AI data center construction and modernizing the electrical grid to meet increased energy demands. This approach reflects a broader push to promote U.S. innovation and global leadership in AI technology. The AI Action Plan reportedly centers on three pillars: infrastructure, innovation, and global influence. Infrastructure efforts will address energy and permitting challenges for AI data centers, while innovation initiatives aim to reduce regulatory barriers, potentially limiting federal oversight on AI safety standards. On the global stage, the administration seeks to promote American AI models and chips internationally to maintain technological dominance amid rising competition
AIenergy-consumptiondata-centersinfrastructureinnovationAI-policytechnology-strategyAmazon acquires Bee, the AI wearable that records everything you say
Amazon has acquired Bee, an AI wearables startup known for its affordable, Fitbit-like bracelet and Apple Watch app that continuously records ambient audio to assist users with reminders and to-do lists. Bee’s device, priced at $49.99 plus a $19 monthly subscription, aims to create a “cloud phone” experience by mirroring users’ phone notifications and accounts, enabling seamless personal assistance. The company emphasizes providing a personal, ambient intelligence that acts as a trusted companion, helping users reflect and remember without feeling intrusive. While AI-enabled wearables have struggled to gain traction, Bee’s lower price point may attract more consumers willing to experiment with such technology. However, these devices raise significant privacy and security concerns due to their constant recording capabilities. Bee states that users can delete their data anytime, and audio recordings are not stored or used for AI training, though the app retains learned user data to function effectively. The company also plans to enhance privacy by developing on-device AI processing. It remains uncertain how Amazon will
IoTwearable-technologyAIprivacyAmazon-acquisitionambient-intelligencevoice-recognition#RoboCup2025: social media round-up 1 - Robohub
RoboCup2025 was held in Salvador, Brazil, attracting approximately 3,000 participants competing across multiple leagues. The event showcased a wide range of robotics competitions, highlighting advancements in AI and robotics technologies. During the initial days, teams engaged in various challenges, demonstrating innovative solutions and pushing the boundaries of autonomous systems. The coverage by Robohub and AIhub emphasized the event's role in fostering collaboration and knowledge exchange within the AI community. As a non-profit organization, AIhub aims to bridge the gap between AI experts and the public by delivering accessible, high-quality information. The RoboCup2025 event continues to be a significant platform for showcasing cutting-edge research and developments in robotics and artificial intelligence.
robotRoboCuprobotics-competitionAIautonomous-robotsrobot-leaguesSalvador-BrazilNew FX Super One van comes with grille-sized digital display
Faraday Future has unveiled the FX Super One, a premium electric multi-purpose vehicle (MPV) designed to redefine luxury mobility in the EV segment. Revealed at the company’s California headquarters, the FX Super One is the first product under Faraday’s new sub-brand, Faraday X. The vehicle features a Cadillac Escalade-sized body with dual-motor all-wheel drive and advanced AI technology derived from the flagship FF 91. Its most distinctive feature is the F.A.C.E. (Front AI Communication Ecosystem), a digital grille that can smile, speak, and display animations using AI recognition tools, though it only activates when the vehicle is parked. With over 10,000 early deposits and pricing expected to start near $70,000, Faraday Future aims to transform the MPV category by blending comfort, technology, and style. The FX Super One’s interior emphasizes passenger comfort and versatility, particularly in its optional four-seat GOAT edition, which offers fully reclining zero-gravity captain
IoTAIelectric-vehicledigital-displaysmart-grilleautomotive-technologyenergy-efficient-transportAnduril alums raise $24M Series A to bring military logistics out of the Excel spreadsheet era
Rune, a startup founded by former Anduril and military veterans, has raised $24 million in a Series A funding round to modernize military logistics through AI-enabled software. Co-founder David Tuttle highlighted that current U.S. military logistics rely heavily on outdated manual processes like Excel spreadsheets and whiteboards, which are insufficient for the scale and pace of modern warfare. Rune’s flagship product, TyrOS, aims to transform these processes into intelligent, predictive supply networks that optimize resources and support distributed operations, even in disconnected environments such as remote battlefields. TyrOS leverages deep learning models to forecast supply and demand for personnel, equipment, and other resources by analyzing hundreds of environmental and logistical variables. It also incorporates threat-informed routing and integrates generative AI for real-time "course of action" generation, helping commanders make informed decisions quickly. Despite advances in large language models, TyrOS maintains traditional mathematical optimization for precise logistical tasks like aircraft load planning. Its edge-first, cloud-capable but not cloud
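The point about keeping traditional mathematical optimization for precise logistical tasks is easy to see with a toy load-planning example: choosing which cargo items fit under a weight limit while maximizing mission priority is a classic 0/1 knapsack, solved exactly below. The items and numbers are made up for illustration; this is not TyrOS code.

```python
# Toy 0/1 knapsack for load planning: maximize mission priority within a weight
# limit, solved exactly with a small dynamic program. Data is illustrative only.
def plan_load(items, capacity_kg):
    """items: list of (name, weight_kg, priority). Returns (best priority, chosen items)."""
    best = {0: (0, [])}                      # weight used -> (priority, chosen items)
    for name, weight, priority in items:
        for used, (score, chosen) in list(best.items()):
            w = used + weight
            if w <= capacity_kg and best.get(w, (-1, None))[0] < score + priority:
                best[w] = (score + priority, chosen + [name])
    return max(best.values())

cargo = [("ammo", 800, 9), ("rations", 500, 6), ("spares", 700, 5), ("medical", 300, 8)]
print(plan_load(cargo, capacity_kg=1500))    # -> (19, ['rations', 'spares', 'medical'])
```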
IoTmilitary-logisticsAIdeep-learningsupply-chain-optimizationdefense-technologypredictive-analyticsNvidia Breaks $4 Trillion Market Value Record
Nvidia has become the first publicly traded company to reach a $4 trillion market valuation, surpassing established tech giants such as Apple, Microsoft, and Google. Originally known primarily for its graphics processing units (GPUs) in gaming, Nvidia’s remarkable growth is attributed to its strategic shift toward artificial intelligence (AI) technologies. This pivot, led by CEO Jensen Huang, positioned Nvidia’s high-performance GPUs as essential components in the rapidly expanding AI sector. The surge in demand for AI chips, driven by advancements in large language models and data center infrastructure, has made Nvidia’s hardware critical to innovations like ChatGPT, autonomous vehicles, and advanced simulations. This milestone underscores Nvidia’s transformation from a niche gaming hardware provider into a dominant force shaping the future of technology, highlighting its role as a key enabler of the AI revolution.
robotAIautonomous-vehiclesGPUsdata-centersartificial-intelligenceNvidiaUK powers on supercomputer that runs 21 quintillion operations/sec
The UK has officially powered on its most powerful publicly accessible AI supercomputer, Isambard-AI, located at the University of Bristol. Named after engineer Isambard Kingdom Brunel, the £225 million system can perform 21 exaFLOPs (21 quintillion floating-point operations per second), making it a significant asset for British AI research. Although it ranks 11th globally in processing power, Isambard-AI is a major step for the UK, supporting public-sector projects aimed at addressing climate change, enhancing NHS services, and driving medical and technological innovation. The supercomputer operates primarily on nuclear-powered electricity and costs nearly £1 million monthly to run, with the government emphasizing its long-term benefits, including regional development through AI Growth Zones in Scotland and Wales. Isambard-AI is already enabling impactful research projects, such as developing AI models to predict human behavior in real time using wearable cameras, which could improve safety in high-risk environments like construction sites and crowd management during
energysupercomputerAInuclear-powerhigh-performance-computingUK-technologycomputational-powerPUDU Sweeps With New Industrial Scale Roomba
The article introduces the PUDU MT1 VAC, a new industrial-scale robotic vacuum designed to compete with and potentially surpass the capabilities of the popular Roomba. Unlike typical consumer models, the PUDU MT1 VAC is built to handle larger spaces and higher traffic environments, making it suitable for commercial or industrial use. The device incorporates advanced technologies such as AI and LiDAR to enhance its navigation, efficiency, and cleaning performance. These innovations suggest a significant step forward in automated vacuum technology, potentially setting a new standard for large-scale cleaning solutions. By leveraging AI and LiDAR, the PUDU MT1 VAC can more effectively map and adapt to complex environments, improving its operational effectiveness compared to traditional robotic vacuums. The article implies that this development could reshape expectations for automated cleaning in industrial and commercial settings.
robotAILiDARautomated-vacuumindustrial-robotrobotics-technologycleaning-robotTesla Flaunts Fiduciary Rules - And Its Workforce Is Fleeing - CleanTechnica
The article highlights growing concerns about Tesla’s governance and fiduciary responsibilities amid significant executive departures and operational challenges. Key executives, including a top sales and manufacturing aide, the North American HR director, and a VP of engineering overseeing the Optimus humanoid robot project, have recently left the company. Tesla’s sales, particularly in Europe, have declined despite overall growth in the electric vehicle market. Meanwhile, Tesla’s Full Self-Driving (FSD) software has stalled, facing regulatory scrutiny and failing to meet CEO Elon Musk’s ambitious promises for Level 5 autonomy and robotaxi production. The article criticizes Musk’s leadership style and resource allocation, suggesting that projects like xAI and robotaxis are stuck in development limbo, potentially diverting focus from core business priorities. The article raises serious questions about Tesla’s board accountability and corporate governance, suggesting the company may be veering toward a “Musk vanity project” rather than a sustainable business. Tesla has missed critical regulatory filings and faced scrutiny over its driver
robotroboticsTeslahumanoid-robotAIautonomous-vehicleselectric-vehiclesAn interview with Nicolai Ommer: the RoboCupSoccer Small Size League - Robohub
The article features an interview with Nicolai Ommer, an Executive Committee member of the RoboCup Small Size League (SSL), which is part of the international RoboCup initiative aimed at advancing intelligent robots, AI, and automation. The SSL involves teams of 11 small, cylindrical, wheeled robots that play soccer autonomously, with teams responsible for both hardware and software development. A central AI system processes data from an overhead vision system that tracks all robots and the ball, enabling teams to send commands to their robots. The robots can move up to 4 m/s and kick the ball at speeds up to 6.5 m/s, with recent rules reducing kick speed to enhance gameplay fairness and allow goalkeepers and defenders to intercept passes. A notable innovation in the SSL is the use of multiple independent auto referee systems to assist human referees in monitoring the fast-paced matches, particularly for fouls and collisions that are difficult to judge visually. These auto refs operate simultaneously and their decisions are combined via majority
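The "combined via majority" step for the auto referees can be illustrated in a few lines of Python; the event names and quorum rule here are illustrative, not the league's actual protocol.

```python
# Simple majority-vote combiner in the spirit of multiple independent auto referees.
from collections import Counter

def combine_auto_refs(decisions, quorum=None):
    """decisions: one proposed call per auto referee, e.g. 'ball_left_field'."""
    quorum = quorum or (len(decisions) // 2 + 1)      # strict majority by default
    call, votes = Counter(decisions).most_common(1)[0]
    return call if votes >= quorum else None          # no majority -> leave it to the human ref

print(combine_auto_refs(["bot_crash", "bot_crash", "no_foul"]))  # -> 'bot_crash'
```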
robotroboticsRoboCupAIautomationautonomous-robotsrobot-soccer$20 million AI system Nexus to fast-track scientific innovation in US
The U.S. National Science Foundation has awarded $20 million to Georgia Tech and partners to build Nexus, a cutting-edge AI supercomputer designed to accelerate scientific innovation nationwide. Expected to be operational by spring 2026, Nexus will deliver over 400 quadrillion operations per second, with 330 terabytes of memory and 10 petabytes of flash storage. This computing power surpasses the combined calculation capacity of 8 billion humans and is tailored specifically for artificial intelligence and high-performance computing workloads. Nexus aims to address complex challenges in fields such as drug discovery, clean energy, climate modeling, and robotics. Unlike traditional supercomputers, Nexus emphasizes broad accessibility and user-friendly interfaces, allowing researchers from diverse institutions across the U.S. to apply for access through the NSF. The system will be part of a national collaboration linking Georgia Tech with the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign via a high-speed network, creating a shared infrastructure to democratize AI tools. Up
AIsupercomputingrobotics-innovationclean-energyhigh-performance-computingscientific-discoveryartificial-intelligenceAV startup Pronto.ai acquires off-road autonomous vehicle rival SafeAI
Pronto.ai, a San Francisco-based startup specializing in autonomous haulage systems for off-road vehicles used in construction and mining, has acquired its competitor SafeAI. The acquisition, reportedly valued in the millions, brings SafeAI’s 12-person engineering team and intellectual property under Pronto’s umbrella. Pronto CEO Anthony Levandowski described the move as both a talent and technology acquisition aimed at consolidating resources to accelerate growth. The deal positions Pronto as one of the two main players in the autonomous haulage space, enabling it to expand its customer base, including international markets, and serve a wider range of mining operations from small quarries to large mines. Pronto’s technology primarily relies on a camera-only approach combined with advanced sensors, AI, and a proprietary peer-to-peer mobile data network called Pollen, which supports high-speed data exchange in low-connectivity environments. SafeAI, founded in 2017 and backed by $38 million in funding, employs a multi-sensor system including cameras
robotautonomous-vehiclesAImining-technologysensorssafety-certificationoff-road-vehiclesXTEND secures extension to Series B to scale autonomous tactical robots - The Robot Report
XTEND Reality Inc., a developer of tactical autonomous robots, announced a $30 million extension to its existing $70 million Series B funding round, co-led by Aliya Capital Partners and Protego Ventures. The company plans to use the new capital to scale production both in the U.S. and globally, integrate advanced real-time AI capabilities across its platforms, and expand deployments with U.S. and allied defense forces. XTEND’s CEO, Aviv Shapira, highlighted the growing demand for autonomous systems in defense and public safety, emphasizing that the investment reflects strong confidence in XTEND’s technology and mission. Originally founded as a gaming company, XTEND has evolved to create robots and autonomous systems that combine AI with human supervision to operate safely in complex, hazardous environments. Their patented XOS operating system enables “human-supervised autonomy,” allowing robots to perform complex tasks autonomously—such as building entry, floor scanning, and suspect pursuit—while leaving critical decision-making to human supervisors. This approach reduces the
robotautonomous-robotsAIdefense-technologytactical-robotshuman-supervised-autonomyrobotics-systemsZimmer Biomet to acquire Monogram Technologies for $177M - The Robot Report
Zimmer Biomet Holdings, a global medical technology company, announced its acquisition of Monogram Technologies, an orthopedic robotics firm, for $177 million. Monogram specializes in combining 3D printing, advanced machine vision, AI, and next-generation robotics, with a focus on semi- and fully autonomous robotic technologies for total knee arthroplasty (TKA). Their CT-based, AI-navigated mBôs system received FDA clearance in March 2025 and is expected to be commercialized with Zimmer Biomet implants by early 2027. Monogram is also developing a fully autonomous version of this technology, which aims to improve safety, efficiency, and surgical outcomes. The acquisition will integrate Monogram’s technology into Zimmer Biomet’s existing ROSA platform, which currently supports multiple orthopedic applications including knee and shoulder replacements. Zimmer Biomet expects this deal to enhance its surgical robotics portfolio by adding advanced semi- and fully autonomous capabilities, thereby broadening its product range and increasing market share, particularly in
roboticssurgical-roboticsAIorthopedic-surgeryautonomous-robotsmedical-technologyZimmer-BiometMark Zuckerberg says Meta is building a 5GW AI data center
Meta is constructing a massive AI data center named Hyperion, which CEO Mark Zuckerberg announced will deliver five gigawatts (GW) of computational power to support its new AI lab. This initiative aims to position Meta ahead of competitors like OpenAI and Google in the AI development race. Hyperion’s scale is projected to be large enough to cover most of Manhattan, and Meta plans to launch a 1 GW supercluster called Prometheus by 2026, making it one of the earliest tech companies to reach such capacity. These projects will significantly enhance Meta’s ability to train and deploy advanced AI models, potentially attracting more top talent to the company. However, the enormous energy demands of these data centers raise concerns about their impact on local communities. Together, Hyperion and Prometheus will consume energy equivalent to that used by millions of homes, potentially straining electricity and water resources nearby. Similar expansions by other AI-focused companies, like CoreWeave near Dallas, highlight a broader industry trend toward large-scale AI
energydata-centerAIMetacomputational-powerenergy-consumptioninfrastructureHugging Face unveils tiny talking robot that kids and adults can code
Hugging Face has launched Reachy Mini, a compact, open-source desktop robot designed to make personal robotics accessible to kids, educators, and developers. Standing 11 inches tall and weighing 3.3 pounds, Reachy Mini features expressive animated eyes, motorized head movements with six degrees of freedom, a 360-degree rotating body, a wide-angle camera, speaker, and multiple microphones for natural interactions. The robot is sold as a DIY kit, encouraging users to assemble it themselves, which serves as an educational introduction to robotics without requiring an engineering background. Reachy Mini is programmable primarily in Python, with JavaScript and Scratch support coming soon, catering to a broad skill range. It connects to Hugging Face’s AI Hub, granting access to over 1.7 million AI models and 400,000 datasets. The robot comes in two versions: a Wireless model powered by Raspberry Pi 5 with a built-in battery for mobility, priced at $449, and a Lite tethered
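As a flavor of what "programmable in Python and connected to the Hugging Face Hub" can mean in practice, here is a toy sketch: the `transformers` pipeline call is standard Hub usage, while `FakeRobot` and its methods are placeholders rather than the actual Reachy Mini SDK.

```python
# Hypothetical pairing of a Hub-hosted model with a Reachy Mini-style behavior.
# FakeRobot is a stand-in, not the real robot API.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")      # pulls a small default model from the Hub

class FakeRobot:                                # placeholder for the real robot API
    def say(self, text): print(f"[robot] {text}")
    def wiggle_antennas(self): print("*happy antenna wiggle*")

robot = FakeRobot()
result = sentiment("I just built my first robot!")[0]
robot.wiggle_antennas() if result["label"] == "POSITIVE" else robot.say("Oh no.")
```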
robotroboticsAIcodingopen-sourceeducationRaspberry-PiMOTOR Ai gets seed funding toward explainable self-driving software - The Robot Report
MOTOR Ai, a Berlin-based startup founded in 2017 by Adam Bahlke and Roy Uhlmann, has secured $20 million in seed funding to advance its neuroscience-driven autonomous driving technology. The company emphasizes explainability, safety, and legal compliance, aligning with stringent European regulatory standards. MOTOR Ai’s system employs a cognitive architecture based on active inference from neuroscience, enabling transparent, reliable decision-making in complex and previously untested traffic scenarios. This approach contrasts with traditional machine learning models by reasoning through data rather than relying solely on pretrained situations, allowing for certification under international safety standards without exhaustive scenario training. The company aims to deploy the first certified SAE Level 4 autonomous vehicle fleet in Europe, starting operations this year in several German districts with safety drivers onboard, who are expected to be removed by 2026. MOTOR Ai’s full-stack system complies with rigorous European and international regulations, including UNECE standards, ISO 26262 (ASIL-D), GDPR, the EU AI Act, and others
robotautonomous-vehiclesAIself-driving-softwareexplainable-AIcognitive-intelligenceEuropean-regulations99.9% reliable robot vision studio completes week-long task in hours
Apera, a Canadian company, has developed Apera Forge, a web-based, AI-powered 4D vision design studio that significantly accelerates the development of vision-guided robotic (VGR) automation projects. This browser-based platform requires no hardware and enables industrial manufacturers to simulate robotic applications—including parts, grippers, robots, and cell environments—in minutes rather than days. By training AI neural networks through extensive digital cycles, Forge achieves over 99.9% reliability in object recognition and task performance, delivering deployable vision programs within 24 to 48 hours. This drastically reduces the time and risks traditionally involved in creating robotic cells for bin picking, material handling, and de-racking. The latest upgrades to Forge enhance its flexibility and simulation capabilities, supporting advanced robotic cell design with customizable camera placement, bin positioning, and obstacle integration to better replicate real-world conditions. Notably, Forge now supports end-of-arm-tooling (EOAT) mounted camera configurations (Eye-in-Hand), allowing users to
robotAIvision-guided-roboticsautomationindustrial-manufacturingsimulationAI-trainingElon Musk’s SpaceX might invest $2 billion in Musk’s xAI
Elon Musk’s aerospace company SpaceX is reportedly planning to invest $2 billion in Musk’s artificial intelligence startup, xAI. This investment is expected to be part of a larger $5 billion equity raise, supplemented by an additional $5 billion in debt, anticipated to close by the end of June. This would mark SpaceX’s first investment in xAI and represent one of its largest investments in an external company. The Wall Street Journal reports that SpaceX already utilizes xAI’s chatbot, Grok, to enhance customer service for its Starlink internet service, with intentions to expand collaboration between the two companies. This move aligns with Musk’s history of leveraging synergies among his various ventures, as seen earlier this year with integrations involving Twitter (now X). The article also briefly mentions some controversial chatbot behavior but does not provide further details.
IoTAISpaceXxAIStarlinkchatbotinvestmentWeek in Review: X CEO Linda Yaccarino steps down
The Week in Review highlights several major tech developments, starting with the departure of Linda Yaccarino as CEO of X after a challenging two-year period marked by advertiser backlash, controversies involving Elon Musk, and AI-related issues on the platform. Despite her efforts at the helm, the company still faces significant difficulties. Apple is adjusting its user interface by reducing transparency in features like Notifications and Apple Music to improve readability ahead of its fall OS launch. Hugging Face introduced Reachy Mini, an affordable, programmable robot aimed at AI developers, priced from $299 and integrated with its AI hub. In consumer tech, Nothing launched its ambitious Phone 3 with innovative features like a second screen and AI capabilities, though mixed reactions to design and pricing may limit its market impact. Samsung released new foldable phones, including the Z Fold7, Z Flip7, and a more affordable Z Flip7 FE. Rivian unveiled a high-performance electric vehicle boasting over 1,000 horsepower and advanced software features, positioning it as a flagship
robotAIprogrammable-robotsHugging-Facerobotics-safetyAI-developershuman-robot-interactionChina’s futuristic scooter drives itself and changes into multi-forms
Omoway, a smart mobility startup founded by former XPeng executives, unveiled its self-driving “multi-form” scooter, the Omo X, in Jakarta, marking a significant advancement in autonomous personal transport. Scheduled for launch in early 2026 with an estimated price of around $3,800, the Omo X aims to revolutionize urban commuting by combining practical performance with customizable design. It features three riding modes—Scooter, Street (with added storage), and GT (cruiser style with enhanced storage and comfort)—catering to various urban travel needs. The Omo X’s futuristic design, described as an "interstellar battleship," includes sharp angles, a distinctive "Saberlight" headlight, a floating seat cushion, and a wide rear wheel with a unique swingarm structure. Its smart connectivity is powered by Omoway’s Halo architecture, which offers smartphone and cloud integration, keyless unlocking, sharing, and automotive-grade data security with over-the-air updates.
robotautonomous-vehiclesmart-mobilityIoTelectric-scooterAIconnected-vehicleChina's new cotton topping robot automates intensive task at 10x speed
China has developed what is being called the world’s first laser-based autonomous cotton topping robot, jointly created by Xinjiang University and EAVision Robotic Technologies. The machine uses a combination of lasers, lidar, and artificial intelligence to identify and vaporize the top buds of cotton plants with a detection accuracy of 98.9% and a successful topping rate of over 82% in field tests. This process, traditionally labor-intensive and prone to human error or plant damage, is now mechanized to operate roughly 10 times faster than manual labor, covering 0.4 to 0.53 hectares per hour. Unlike chemical or mechanical methods, the robot’s laser approach minimizes plant stress, eliminates herbicide use, and enables continuous operation regardless of weather or time of day. The robot is currently undergoing testing in Xinjiang, China’s largest cotton-producing region, and represents a significant step toward full mechanization of cotton farming. The development involved three years of research to integrate sensor technology, machine vision
robotagriculture-roboticsAIlidarlaser-technologyautonomous-machinessmart-farmingAI-powered graphene tongue detects flavors with 98% precision
Scientists have developed an AI-powered artificial tongue using graphene oxide within a nanofluidic device that mimics human taste with remarkable accuracy. This system integrates both sensing and computing on a single platform, enabling it to detect chemical signals and classify flavors in real time, even in moist conditions similar to the human mouth. Trained on 160 chemicals representing common flavors, the device achieved about 98.5% accuracy in identifying known tastes (sweet, salty, sour, and bitter) and 75-90% accuracy on 40 new flavors, including complex mixtures like coffee and cola. This breakthrough marks a significant advancement over previous artificial taste systems by combining sensing and processing capabilities. The sensor exploits graphene oxide’s sensitivity to chemical changes, detecting subtle conductivity variations when exposed to flavor compounds. Coupled with machine learning, it effectively recognizes flavor patterns much like the human brain processes taste signals. The researchers highlight potential applications such as restoring taste perception for individuals affected by stroke or viral infections, as well as uses
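The pattern-recognition half of the system, mapping conductivity-change features to taste labels, is ordinary supervised classification. The sketch below shows only that step, with synthetic data standing in for real sensor readings and a scikit-learn model in place of the authors' learning setup.

```python
# Illustrative classification step only (not the published device pipeline):
# map conductivity-change feature vectors to taste categories.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
tastes = ["sweet", "salty", "sour", "bitter"]
X = rng.normal(size=(160, 8))                    # 160 chemicals x 8 conductivity features
y = rng.choice(tastes, size=160)                 # synthetic labels for the sketch

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")  # random labels -> near chance here
```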
graphene, artificial-tongue, AI, materials-science, sensors, machine-learning, nanotechnology
TechCrunch Mobility: Tesla enters its Grok era, and teens come for robotaxis
The article from TechCrunch Mobility highlights Tesla’s integration of Grok, an AI chatbot developed by Elon Musk’s xAI company, into its vehicles. Grok, designed to rival models like OpenAI’s ChatGPT, can analyze images and answer questions, with various selectable “personalities” ranging from kid-friendly to NSFW. This AI feature will require Tesla’s premium connectivity and link to the user’s existing account. Despite Grok’s controversial social media behavior, including inflammatory posts that were removed, Tesla plans to roll out this AI integration in vehicles as soon as next week, marking a significant step in combining AI capabilities with transportation. Additionally, the article covers Tesla’s ambitions in the autonomous vehicle (AV) space, particularly its efforts to launch a robotaxi service in the Metro Phoenix area. Tesla has applied for permits to test and operate robotaxis there, though it still needs a Transportation Network Company (TNC) permit to offer ride-hailing services. Elon Musk also mentioned plans to bring robot
robot, AI, Tesla, autonomous-vehicles, Grok-AI, transportation-technology, electric-vehicles
Startups Weekly: Still running
The "Startups Weekly: Still running" article provides a comprehensive roundup of recent developments in the startup ecosystem, highlighting key funding rounds, strategic moves, and emerging trends. Notably, design company Figma is preparing for an IPO that could raise up to $1.5 billion, signaling strong investor interest. Meanwhile, startups like Cluely are gaining traction with aggressive marketing and growing revenues, and fintech entrepreneur Darragh Buckley has achieved a significant milestone with his new venture, Increase. The newsletter also touches on corporate challenges in adopting AI tools, with insights from Brex illustrating broader industry struggles. On the venture capital and funding front, several notable deals are underway: Revolut is seeking a new funding round, SpaceX is raising capital, and micromobility and climate-focused startups like Terra CO2 and Tulum Energy are making strides in sustainability. Genesis AI is advancing foundational models for robotics, while Israeli quantum startup Qedma secures investment from IBM, emphasizing collaborative progress in quantum
robot, AI, startups, energy, hydrogen-technology, quantum-computing, materials
World’s first robot dog learns animal gaits in 9 hours with AI power
Researchers at the University of Leeds have developed the world’s first robot dog capable of autonomously adapting its gait to mimic real animal movements across unfamiliar terrains. Using an AI system inspired by animals such as dogs, cats, and horses, the robot—nicknamed “Clarence”—learned to switch between walking styles like trotting, running, and bounding within just nine hours. This bio-inspired deep reinforcement learning framework enables the robot to adjust its stride for energy efficiency, balance, and coordination without human intervention or additional tuning, even in environments it has never encountered before. This breakthrough represents a significant advancement in legged robotics, with practical applications in hazardous environments like nuclear decommissioning and search and rescue, where human presence is risky. By training the robot entirely in simulation and then transferring the learned policies directly to the physical machine, the researchers achieved a high level of adaptability and resilience. The project also underscores the potential of biomimicry in robotics, offering insights into how biological intelligence principles can improve robotic
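The goals described above (forward progress, energy efficiency, balance) can be illustrated with a toy reward function of the kind used in gait-learning setups; the weights, units, and the `StepInfo` fields below are hypothetical, not the Leeds team's actual formulation.

```python
# Illustrative reward shaping for gait learning: the terms mirror the goals
# described above (progress, energy efficiency, balance); weights are made up.
from dataclasses import dataclass

@dataclass
class StepInfo:
    forward_velocity: float   # m/s along the commanded direction
    joint_power: float        # summed |torque * joint velocity|, in watts
    body_tilt: float          # roll/pitch deviation from upright, in radians

def gait_reward(info: StepInfo,
                w_progress: float = 1.0,
                w_energy: float = 0.005,
                w_balance: float = 0.5) -> float:
    """Reward forward progress while penalizing energy use and instability."""
    return (w_progress * info.forward_velocity
            - w_energy * info.joint_power
            - w_balance * abs(info.body_tilt))

print(gait_reward(StepInfo(forward_velocity=1.2, joint_power=180.0, body_tilt=0.05)))
```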
robot, AI, robotics, legged-robots, bio-inspired-robotics, autonomous-robots, robot-dog
Tesla to install Grok AI next week amid antisemitism uproar
Tesla plans to integrate its AI chatbot, Grok, into its vehicles by next week, despite recent controversies surrounding antisemitic content generated by the AI on the social media platform X. Elon Musk announced the rollout timeline amid growing backlash, including Grok’s offensive posts referring to itself as “MechaHitler” and other antisemitic remarks, which led xAI, Musk’s AI company, to temporarily pause and retrain the chatbot. Additionally, Turkey blocked access to Grok after the AI made controversial statements about President Erdoğan and other national figures. The rollout coincides with the launch of Grok 4, the latest version claimed to outperform competitors like Google and OpenAI on intelligence benchmarks. Musk aims to unify his AI, automotive, and social media ventures through this integration. Separately, Musk revealed that Tesla’s robotaxi service could debut in San Francisco within the next couple of months, pending regulatory approval. The service is already being piloted in Austin, Texas, with plans to expand
robot, AI, autonomous-vehicles, Tesla, robotaxi, electric-vehicles, automotive-technology
LGND wants to make ChatGPT for the Earth
LGND is a startup aiming to revolutionize how geospatial data about Earth is analyzed by creating advanced vector embeddings that summarize complex geographic information. Traditional methods of interpreting satellite data—such as manually examining images to answer questions like the number and changes of fire breaks in a state—are costly and time-consuming. LGND’s technology compresses spatial data into concise embeddings that capture essential features, enabling much faster and more efficient analysis. This approach can significantly improve the efficiency of professionals working with geographic data, potentially making their work 10 to 100 times more efficient rather than replacing them. The company recently raised $9 million in a seed funding round led by Javelin Venture Partners, with participation from several other investors and notable angels, including John Hanke and Karim Atiyeh. LGND offers an enterprise application and an API that allow users to query spatial data in innovative ways. For example, their embeddings can help answer complex, multi-factor questions—such as finding a rental property near snorkeling spots with
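A rough sketch of the embedding-search pattern the summary describes: precomputed vectors for map tiles are compared against a query vector by cosine similarity. The tile embeddings here are random stand-ins, and `cosine_top_k` is an illustrative helper, not part of LGND's API.

```python
# Illustrative only: cosine-similarity search over map-tile embeddings,
# the general pattern behind embedding-based geospatial queries.
import numpy as np

rng = np.random.default_rng(1)
tile_embeddings = rng.normal(size=(10_000, 256))   # one vector per map tile (stand-in data)
query = rng.normal(size=256)                       # embedding of, e.g., "fire break" imagery

def cosine_top_k(query: np.ndarray, matrix: np.ndarray, k: int = 5) -> np.ndarray:
    q = query / np.linalg.norm(query)
    m = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    scores = m @ q
    return np.argsort(scores)[::-1][:k]            # indices of the k most similar tiles

print("closest tiles:", cosine_top_k(query, tile_embeddings))
```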
IoT, geospatial-data, AI, satellite-data, environmental-monitoring, data-embeddings, wildfire-management
Stanford students build tiny AI-powered robot dog from basic kit
Stanford University’s Computer Science 123 course offers undergraduates a hands-on introduction to robotics and AI by having them build and program a low-cost quadruped robot called “Pupper.” Over a 10-week elective, student teams receive a basic robot kit and learn to engineer the platform’s movement, sensing, and intelligence from the ground up. By the course’s end, groups demonstrated Puppers capable of navigating mazes, acting as tour guides, or simulating firefighting with a toy water cannon, showcasing practical applications of their AI and hardware skills. The course originated from a student robotics club project called “Doggo,” designed to prove that advanced legged robots need not be prohibitively expensive. Led by instructors including former Tesla executive Stuart Bowers, Stanford professor Karen Liu, and Google DeepMind researcher Jie Tan, the curriculum guides students from basic motor control and sensor calibration to training neural networks for gait refinement, object tracking, and voice command response. Students even create custom hardware extensions, bridging
robot, AI, robotics-education, quadruped-robot, Stanford-University, neural-networks, hardware-development
Robotaxi startup Zoox Vs Waymo, Tesla MechaHitler Grokmobile
The article compares the current landscape of robotaxi services, highlighting Waymo as the pioneering and most trusted player in the U.S. market. Waymo, which evolved from Google's self-driving car project, emphasizes reliability, safety, and trustworthiness, qualities that have resonated especially with users such as women and families. The company’s leadership, including two co-CEOs with strong tech credentials, underpins its mission to be “the world’s most trusted driver.” Waymo’s confidence in its service is exemplified by its launch of a teen account program in Phoenix, Arizona, designed to offer safe and accountable rides for younger passengers, contrasting sharply with Tesla’s less credible robotaxi ambitions. In contrast, Tesla’s recent robotaxi unveiling in Austin was marred by technical errors and safety concerns, undermining trust in the brand. CEO Elon Musk’s increasingly controversial public behavior, including associations with extremist views and political missteps, has further eroded Tesla’s reputation. The article notes Tesla’s declining sales
robot, robotaxi, autonomous-vehicles, Waymo, Tesla, self-driving-cars, AI
Wayve CEO Alex Kendall brings the future of autonomous AI to TechCrunch Disrupt 2025
At TechCrunch Disrupt 2025, taking place from October 27–29 at Moscone West in San Francisco, Alex Kendall, co-founder and CEO of Wayve, will be featured on an AI-focused panel discussing the future of autonomous AI. Kendall, who founded Wayve in 2017, has pioneered a new approach to autonomous driving that relies on embodied intelligence powered by deep learning and computer vision, rather than traditional handcrafted rules or maps. His work demonstrated that machines can interpret their environment and make real-time driving decisions without manual coding, marking a significant breakthrough in self-driving technology. Currently, Kendall is spearheading the development of AV2.0, a next-generation autonomous vehicle architecture designed for global scalability. His role as CEO involves integrating strategy, research, partnerships, and commercialization efforts to bring intelligent driving systems to market. With a strong academic background, including a PhD in Computer Vision and Robotics and recognition on Forbes 30 Under 30, Kendall brings a unique combination of scientific expertise
robot, autonomous-vehicles, AI, deep-learning, computer-vision, embodied-intelligence, self-driving-systems
Backpack-style jetpack lets divers fly in sea hands-free for 90 mins
The AJ-03 Aquatic Jetpack, developed by Hong Kong startup XiaoTun, is an innovative, backpack-style underwater propulsion device designed to enhance diving by enabling hands-free movement underwater. Weighing just 20 pounds, it features two electric jet modules delivering strong propulsion and operates quietly and cleanly on a 15,000-mAh lithium iron phosphate battery, providing 30 to 90 minutes of use depending on speed settings. The jetpack supports dives up to 66 feet and includes a secure harness system with space for a two-liter scuba tank, although compatibility with other diving gear remains unclear. Control of the AJ-03 is user-friendly, offering a wired remote with directional buttons and a battery level display, alongside an AI-powered cruise control that adjusts propulsion based on the diver’s body movements. This cruise control has three preset speeds, helping reduce fatigue and streamline underwater navigation. Priced significantly lower than competitors like the $18,000 CudaJet, XiaoTun’s jetpack is available
robot, AI, electric-propulsion, underwater-technology, lithium-iron-phosphate-battery, wearable-technology, diving-equipment
Realbotix robot speaks 15 languages fluently to boost hospitality
Realbotix, a US-based company known for creating lifelike humanoid robots, has enhanced its AI-powered robot to fluently speak 15 languages and access an additional 147 languages and dialects via cloud support. This multilingual capability is designed to improve communication in industries such as healthcare, hospitality, travel, and tourism by engaging visitors and patients in their native languages. The humanoid robot aims to bridge communication gaps between staff and clients, providing immediate, natural speech assistance in settings like airports, hotels, museums, and healthcare facilities. In healthcare, the robot can act as a communication intermediary, interpreting patient concerns and relaying information to medical teams, thereby addressing support shortages. The integration of humanoid robots into various industries is expected to accelerate, driven by labor shortages and demand for automation. According to a Research and Markets report cited by EE News Europe, the global humanoid robot market is projected to grow from $2.93 billion in 2025 to $243.40 billion by 203
robot, humanoid-robot, AI, multilingual-robot, healthcare-robotics, hospitality-technology, automation
Waymo starts robotaxi testing in Philadelphia and NYC
Waymo, the Alphabet-owned autonomous vehicle company, has begun testing its robotaxi technology in Philadelphia and New York City as part of its ongoing expansion into Northeastern U.S. markets. These "road trips" involve deploying a small fleet of human-driven vehicles equipped with Waymo’s self-driving system to map and gather data on complex urban environments. Following this, Waymo tests autonomous driving with a safety driver behind the wheel to refine its AI before any commercial launch. Previous road trips to cities like Houston, Orlando, and San Antonio have followed a similar pattern, with some, such as Santa Monica in Los Angeles County, leading to commercial robotaxi services. In Philadelphia, Waymo plans to operate in challenging areas including downtown, freeways, and diverse neighborhoods like North Central, Eastwick, and University City. In New York City, the company is currently driving manually in Manhattan and parts of Brooklyn, as well as mapping Jersey City and Hoboken in New Jersey. However, Waymo has not yet
robot, autonomous-vehicles, Waymo, robotaxi, self-driving-cars, AI, urban-mobility
US firm’s loitering munitions to be more effective with combat-proven tech
RTX, a Virginia-based defense company, is set to enhance its loitering munitions, sensors, and weapon systems by integrating Shield AI’s Hivemind, an AI-powered autonomy software. This integration will enable the first operational weapon powered by Networked Collaborative Autonomy (NCA), a technology that combines real-time coordination, resilience, and combat-proven firepower. The collaboration aims to deliver mission autonomy for intelligent, collaborative operations across various missions such as air defense breach, missile hunting, reconnaissance, and beyond-visual-range strikes. Notably, this development is fully funded by RTX and Shield AI without government investment. In addition to Hivemind, Shield AI will integrate its Visual Detection and Ranging (ViDAR) software with RTX’s Multi-Spectral Targeting System (MTS) to provide automated AI-based sensor autonomy against maritime and airborne swarm targets. This partnership aligns with Pentagon principles by ensuring the autonomous systems are reliable, traceable, governable, and secure.
robot, autonomous-systems, AI, defense-technology, networked-collaborative-autonomy, sensor-autonomy, military-robotics
MIT’s AI-powered robot speeds up search for better solar materials
MIT researchers have developed an AI-powered autonomous robotic system that dramatically accelerates the measurement of photoconductivity—a key electrical property influencing the performance of semiconductor materials used in solar cells and electronics. The robot uses a probe to make contact-based measurements, guided by machine learning models imbued with domain knowledge from chemists and materials scientists. This enables it to identify optimal contact points on perovskite samples, a class of semiconductors relevant to photovoltaics, and efficiently plan the probe’s path to maximize data collection speed and accuracy. In a 24-hour test, the robot completed over 3,000 photoconductivity measurements, outperforming existing AI models in both precision and throughput by taking 125 unique measurements per hour. This rapid, autonomous approach allows scientists to quickly characterize new materials, potentially leading to the discovery of more efficient solar panel components. The research team, led by Professor Tonio Buonassisi, envisions creating fully autonomous laboratories that can accelerate materials discovery by combining fast
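As a loose illustration of the path-planning idea, the sketch below orders candidate contact points with a greedy nearest-neighbor heuristic to cut probe travel; it is a stand-in for the concept, not the MIT system's actual planner, which also folds in learned domain knowledge.

```python
# Illustrative only: a greedy nearest-neighbor ordering of candidate contact
# points, standing in for the probe path planning the summary describes.
import numpy as np

def greedy_tour(points: np.ndarray) -> list[int]:
    """Visit each point once, always hopping to the nearest unvisited point."""
    unvisited = set(range(len(points)))
    order = [0]
    unvisited.remove(0)
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

rng = np.random.default_rng(2)
candidates = rng.uniform(size=(50, 2))   # 50 candidate contact points on a sample
print(greedy_tour(candidates)[:10])
```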
robot, AI, solar-energy, semiconductor-materials, photoconductivity, autonomous-systems, materials-science
New system helps robotic arm navigate using sound instead of vision
Researchers at Carnegie Mellon University have developed SonicBoom, a novel sensing system that enables robotic arms to navigate and localize objects using sound rather than relying on visual sensors. Traditional robotic arms depend heavily on cameras for tactile sensing, which can be obstructed or damaged in cluttered environments like agricultural fields. SonicBoom addresses these challenges by embedding contact microphones along the robot’s arm that detect sound waves generated when the arm touches objects, such as branches. By analyzing subtle variations in these sound waves with AI, the system can accurately determine the exact point of contact, achieving localization errors as low as 0.43 centimeters for trained objects and maintaining strong accuracy (2.22 cm error) even with unfamiliar materials. This acoustic-based approach offers several advantages: the microphones are well-protected from harsh contact, the system is more affordable and practical than camera-based tactile sensors, and it can function effectively in visually occluded environments. The researchers demonstrated SonicBoom’s utility by mapping occluded branch-like structures in a mock canopy
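The general pattern, regressing a contact location from multi-microphone features, can be sketched as follows; the synthetic spectral features, the six-microphone layout, and the MLP regressor are assumptions for illustration, not the SonicBoom implementation.

```python
# Illustrative only: regressing a contact position from multi-microphone
# features, the general pattern behind acoustic contact localization.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_contacts, n_mics, n_bands = 2000, 6, 16
X = rng.normal(size=(n_contacts, n_mics * n_bands))  # per-mic spectral features (synthetic)
y = rng.uniform(0, 60, size=n_contacts)              # contact position along the arm, in cm

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0)
model.fit(X_tr, y_tr)
err_cm = np.mean(np.abs(model.predict(X_te) - y_te))
print(f"mean localization error: {err_cm:.2f} cm")   # meaningless on random data
```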
robotics, robotic-arm, sound-sensing, AI, tactile-sensors, agricultural-robots, obstacle-navigation
10x efficient solar robot to build farms faster in Australia
Luminous Robotics Inc., a Boston-based company, has developed an AI-powered robot named LUMI designed to automate and significantly speed up solar panel installation. Backed by $4.9 million in funding from the Australian government’s $100 million Solar Scaleup Challenge, the LUMI robot will be deployed at two large-scale Australian solar farms: the 440MW Neoen Culcairn Solar Farm in New South Wales and the 250MW Engie Goorambat East Solar Farm in Victoria. The robot autonomously picks up and places 80-lb solar panels onto racks, enabling onsite workers to complete the securing process more quickly and safely. This technology aims to reduce manual labor, improve installation speed by up to 3.5 times, and lower costs. The deployment of a full fleet of five LUMI robots in Australia marks the first global large-scale use of this technology, with potential cost reductions on solar farm construction estimated at up to 6.2%. ARE
robot, solar-energy, automation, renewable-energy, AI, solar-panels, construction-technology
Jon McNeill brings the operator’s playbook to TechCrunch All Stage
At TechCrunch All Stage 2025 in Boston on July 15, Jon McNeill, CEO of DVx Ventures and former Tesla president and Lyft COO, will present “The Operator’s Playbook for Building and Scaling Sustainable Companies.” McNeill challenges the common startup advice to prioritize product-market fit before scaling, arguing that premature or rapid scaling can hinder long-term success. Instead, he advocates for validating both product and go-to-market strategies before aggressively pursuing growth, emphasizing sustainable and disciplined scaling over speed alone. Drawing on his extensive experience founding six companies, scaling Tesla’s revenue from $2 billion to $20 billion, and helping Lyft go public, McNeill will share practical insights on capital efficiency, operating discipline, and building companies that prioritize profitability, impact, and long-term value. His session targets founders navigating hypergrowth and investors seeking new models that break from traditional venture capital approaches, offering a grounded, operator-focused roadmap for building enduring businesses. TechCrunch All Stage aims to provide actionable advice, networking,
energy, electrification, transportation, AI, startup-growth, sustainable-companies, capital-efficiency
New Advanced Service Robot Released
The article announces the release of RobotEra's new service robot model, the RobotEra Q5. This advanced robot features 44 degrees of freedom, allowing for highly flexible and precise movements. Additionally, it is equipped with a responsive conversational AI, enhancing its ability to interact naturally and effectively with humans. RobotEra aims for the Q5 to make a significant impact in the service robot industry by combining sophisticated mechanical capabilities with advanced communication technology. However, the article provides limited details beyond these key features, leaving specifics about its applications, availability, or pricing unclear.
robot, service-robot, AI, robotics, automation, advanced-robotics, RobotEra
Three powerhouses cover how to prepare now for your later-stage raise at TechCrunch Disrupt 2025
TechCrunch Disrupt 2025, taking place October 27–29 at Moscone West in San Francisco, will feature a crucial session on preparing for later-stage fundraising, specifically targeting founders aiming for significant funding rounds like Series C. The panel, scheduled for October 29 on the Builders Stage, emphasizes that successful late-stage capital raises require more than just strong revenue; founders must craft compelling narratives, monitor key metrics, and nurture investor relationships well in advance. This session promises practical frameworks and candid insights to help startups strategically position themselves for major funding. The panel includes three industry experts: Zeya Yang, a partner at IVP with a background in AI-native startups and product leadership; Lila Preston, head of growth equity at Generation Investment Management, known for scaling impact-driven companies globally; and Andrea Thomaz, CEO and co-founder of Diligent Robotics, who brings firsthand founder experience in AI and robotics innovation. Their combined perspectives offer a comprehensive guide for founders preparing to raise substantial capital. Att
robot, AI, automation, robotics, healthcare-robotics, startup-funding, venture-capital
New York police could procure counter-drone system to tackle UAV menace
The New York Police Department (NYPD) is considering deploying an advanced counter-drone system called the Iron Drone Raider, developed by Maryland-based American Robotics, to address the growing problem of unauthorized and potentially hostile drones over the city. This system uses AI-powered interceptor drones that autonomously detect, track, and neutralize small hostile UAVs by capturing them with mesh nets and safely lowering them to the ground via parachutes. The entire process is automated, requiring no human pilot intervention, and can operate continuously, providing real-time video feeds to remote operators. This move comes amid a surge in drone incursions across the United States, including near sensitive sites such as military bases and nuclear power plants, with the FBI receiving over 5,000 drone sighting reports in late 2024 alone. While other counter-drone methods like laser weapons, bullets, hacking, or radio jamming exist, they tend to be more expensive and complex to maintain. The Iron Drone Raider system, costing under $200
robot, drone-technology, AI, counter-drone-system, autonomous-drones, UAV-interception, security-technology
Tesla launches Robotaxi service in Austin - The Robot Report
Tesla has officially launched its Robotaxi service in Austin, Texas, marking a key milestone in CEO Elon Musk’s vision for autonomous ride-hailing. The service operates a limited fleet of Tesla Model Y vehicles equipped with the company’s Full Self-Driving (FSD) software. Currently, rides are available only to a select group of investors and influencers, with operations limited to clear weather conditions but running both day and night. Passengers pay a flat fee of $4.20 per ride, and while the vehicles operate autonomously, a Tesla safety monitor is present in the passenger seat, with remote monitoring by the company. Early rider feedback highlights some operational challenges, including vehicles veering into oncoming traffic lanes and difficulties with drop-off zones and app-based pickup/dropoff pin settings. Despite these issues, most rider videos and reports have been positive. Tesla’s Robotaxi launch follows competitors like Waymo, Zoox, and Motional, which have been conducting their own autonomous ride-hailing services in cities
robot, autonomous-vehicles, Tesla, robotaxi, self-driving-cars, AI, transportation-technology
TechCrunch Mobility: The Tesla robotaxi Rorschach test and Redwood’s next big act
The article from TechCrunch Mobility centers on Tesla’s recent limited rollout of its robotaxi service in Austin, marking a significant test of CEO Elon Musk’s vision for fully autonomous vehicles relying solely on cameras and end-to-end AI, contrasting with competitors like Waymo. Although the deployment is small-scale—with fewer than 20 vehicles operating in a confined area and safety drivers present—the public reaction has been highly polarized. Social media videos highlighted instances of questionable driving behavior, such as crossing double yellow lines and abrupt stops, fueling debate over Tesla’s readiness and Musk’s promises. The article suggests that after one week, the situation remains ambiguous, with much noise but little definitive evidence on the technology’s success or failure. Additionally, the piece touches on internal challenges at Tesla, including reports of upcoming layoffs following a year marked by executive departures and a tense work environment driven by production pressures, particularly around the Cybercab project. Meanwhile, in the broader autonomous vehicle sector, former Uber CEO Travis Kalanick is reportedly planning to
robot, autonomous-vehicles, Tesla, robotaxi, AI, transportation-technology, self-driving-cars
Startups Weekly: Tech and the law
The latest edition of Startups Weekly highlights a busy week in the startup ecosystem, featuring notable lawsuit developments, mergers and acquisitions, and significant funding rounds. Key startup stories include Rubrik’s push to accelerate AI agent adoption with substantial but undisclosed funding, German fintech startup Kadmos’ $38 million raise linked to Japanese shipping expansion, and ongoing copyright lawsuits involving AI music startup Suno and Getty Images’ AI image generator Stable Diffusion. Despite challenges, Bill Gates-backed Airloom Energy continues its operations in Wyoming. On the venture capital front, several high-profile funding events stood out. Harvey AI, an AI-enabled legal tech startup, raised $300 million at a $5 billion valuation just months after a previous $300 million round at $3 billion. Abridge, an AI medical note automation startup, secured funding at a $5.3 billion valuation, while blockchain prediction market Kalshi and its rival Polymarket are also raising significant capital. Other notable raises include European challenger bank Finom, Indian
energy, startups, AI, funding, drones, blockchain, materials
PetLibro’s new smart camera uses AI to describe your pet’s movements, and it’s adorable
PetLibro has launched Scout, an AI-powered smart pet camera designed to provide real-time insights into pets’ activities and behaviors. Similar to other pet cameras, Scout offers real-time monitoring, two-way audio, and remote control of the camera’s movements, with an added feature of automatic pet tracking. What distinguishes Scout is its advanced AI pet recognition technology, which tracks specific activities such as eating, drinking, litter box use, and movement, while capturing surprise selfies and daily highlight clips stored in the cloud for up to 30 days. The camera can recognize and track multiple pets separately, currently supporting individual profiles for two pets, with plans to improve its dynamic recognition system over time. Scout also offers charming, descriptive notifications of pet behavior, enhancing the user experience with personalized updates. The device supports sharing access with up to five family members or friends and is priced at $100, with AI features available through subscription plans starting at $12 per month. Compared to competitors like Furbo, which costs $210 and
IoT, smart-camera, AI, pet-monitoring, pet-recognition, cloud-storage, home-automation
China: Humanoid robots to dribble, score goals in 3-on-3 soccer game
China is hosting a groundbreaking robotic soccer event featuring four teams of humanoid robots competing in the finals of the RoBoLeague World Robot Soccer League on June 28, 2025, in Beijing’s Yizhuang Development Zone. This event marks the first fully autonomous 3-on-3 humanoid robot soccer game, with matches consisting of two 10-minute halves. The robots, developed by leading institutions such as Tsinghua University and Beijing Information Science and Technology University, use advanced optical cameras and sensors to detect the ball up to 65 feet away with over 90% accuracy. They autonomously make real-time decisions—such as passing, dribbling, or shooting—through AI powered by deep reinforcement learning, showcasing agility, strategy, and endurance without human control. This soccer competition serves as a preview for the upcoming 2025 World Humanoid Robot Sports Games scheduled for August 15–17 in Beijing, which will feature 11 humanoid sports events modeled on traditional athletic competitions,
robot, humanoid-robots, AI, robotics-soccer, autonomous-robots, deep-reinforcement-learning, robot-sports
Tesla robotaxis glitch out in Austin, caught making wild errors
Tesla’s newly launched robotaxi service in Austin, Texas, has quickly come under scrutiny due to numerous videos showing erratic and unsafe driving behaviors within days of limited public testing. Incidents documented include driving on the wrong side of the road, abrupt “phantom braking” without clear cause, stopping in intersections, and failing to respond appropriately to other vehicles like reversing delivery trucks. Despite the presence of safety monitors in the front seats, these software glitches persist, raising concerns about the readiness of Tesla’s camera-only autonomous system. While some users report smooth rides without intervention, experts and observers highlight the frequency and severity of these errors as alarming for a service in its infancy. The City of Austin and Texas regulators are monitoring the situation closely, with the potential to impose stricter reporting requirements or revoke licenses if the robotaxis are deemed unsafe. Officials are working with Tesla to ensure police can safely interact with the vehicles, reflecting growing regulatory attention. Transportation experts warn that actions like dropping off passengers in the middle of busy
robot, autonomous-vehicles, Tesla, robotaxi, self-driving-cars, AI, transportation-technology
Jon McNeill brings the operator’s playbook to TechCrunch All Stage
At TechCrunch All Stage 2025 in Boston on July 15, Jon McNeill, CEO and co-founder of DVx Ventures and former Tesla president and Lyft COO, will challenge the conventional startup advice of prioritizing product-market fit before scaling. His session, “The Operator’s Playbook for Building and Scaling Sustainable Companies,” emphasizes the importance of validating both product and go-to-market strategies before accelerating growth. McNeill advocates for scaling smarter rather than faster, focusing on building sustainable companies that balance profitability, impact, and long-term value. Drawing from his extensive experience—founding six companies, scaling Tesla’s revenue from $2 billion to $20 billion, and helping Lyft go public—McNeill will share practical insights on capital efficiency, operating discipline, and alternative approaches to venture capital. His goal is to provide founders and investors with an operator-first roadmap that prioritizes sustainable growth over rapid expansion. The session is part of TechCrunch All Stage, a founder summit designed to offer tactical advice, real
energy, electrification, sustainable-growth, startup-scaling, transportation, AI, capital-efficiency
US supercomputer unlocks nuclear salt reactor secrets with AI power
Scientists at Oak Ridge National Laboratory (ORNL) have developed a novel artificial intelligence (AI) framework that models the behavior of molten lithium chloride with quantum-level accuracy but in a fraction of the time required by traditional methods. Utilizing the Summit supercomputer, the machine-learning model predicts key thermodynamic properties of the salt in both liquid and solid states by training on a limited set of first-principles data. This approach dramatically reduces computational time from days to hours while maintaining high precision, addressing a major challenge in nuclear engineering related to understanding molten salts at extreme reactor temperatures. Molten salts are critical for advanced nuclear reactors as coolants, fuel solvents, and energy storage media due to their stability at high temperatures. However, their complex properties—such as melting point, heat capacity, and corrosion behavior—are difficult to measure or simulate accurately. ORNL’s AI-driven method bridges the gap between fast but less precise molecular dynamics and highly accurate but computationally expensive quantum simulations. This breakthrough enables faster, more reliable
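The core idea, training a cheap surrogate on a small set of expensive reference calculations and then querying it densely, can be sketched generically; the Gaussian-process model and the synthetic temperature/heat-capacity data below are illustrative stand-ins, not ORNL's framework.

```python
# Illustrative only: fitting a fast surrogate to sparse, expensive reference
# data, the general idea behind an ML model trained on first-principles points.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Sparse synthetic "reference" points: temperature (K) -> heat capacity (arbitrary units).
T_ref = np.linspace(900, 1400, 12).reshape(-1, 1)
cp_ref = 1.5 + 2e-4 * (T_ref.ravel() - 900) + 0.01 * np.random.default_rng(4).normal(size=12)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=100.0) + WhiteKernel(1e-4),
                              normalize_y=True)
gp.fit(T_ref, cp_ref)

# The cheap surrogate can now be queried densely across the temperature range.
T_query = np.linspace(900, 1400, 500).reshape(-1, 1)
cp_pred, cp_std = gp.predict(T_query, return_std=True)
print(cp_pred[:3], cp_std[:3])
```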
energy, AI, nuclear-reactors, molten-salts, machine-learning, supercomputing, materials-science
Intel hits the brakes on its automotive business, and layoffs have started
Intel is shutting down its automotive architecture business and laying off most of its staff as part of a broader company restructuring aimed at refocusing on its core client and data center segments. The decision was communicated internally on June 25, 2025, with Intel emphasizing a commitment to a smooth transition for customers. While the automotive division was not a major revenue driver, it had been active in automated vehicle technology and software-defined vehicles, investing heavily since around 2015, including the $15.3 billion acquisition of Mobileye in 2017, which later became a publicly traded company with Intel as a major shareholder. Despite showcasing new AI-enhanced system-on-chip (SoC) technology for vehicles at CES 2025 and the Shanghai Auto Show earlier this year, the automotive business’s future appeared uncertain amid broader company challenges. New CEO Lip-Bu Tan had already warned of layoffs due to falling sales and a bleak outlook. The wind-down follows Intel’s recent announcement of layoffs in its Foundry division
robot, autonomous-vehicles, automotive-technology, AI, semiconductor, software-defined-vehicles, Intel
Ring cameras and doorbells now use AI to provide specific descriptions of motion activity
Amazon-owned Ring has introduced a new AI-powered feature for its doorbells and cameras that provides users with specific, text-based descriptions of motion activity detected on their property. Instead of vague alerts, users will receive detailed notifications such as “A person is walking up the steps with a black dog” or “Two individuals are looking into a white car parked in the driveway,” allowing for quicker and more informed responses. This feature currently describes only the first few seconds of motion-activated video clips and is being rolled out as an English-only beta for Ring Home Premium subscribers in the U.S. and Canada, with an option to disable it via the Ring app settings. Ring’s founder and Amazon VP of home security, Jamie Siminoff, revealed plans for further AI enhancements, including combining multiple motion events into a single alert and introducing customizable anomaly alerts that notify users based on personalized definitions of unusual activity. The system will also learn users’ routines to better detect and report irregular events. While these advancements offer promising new
IoT, smart-home, AI, security-cameras, motion-detection, home-automation, Ring-devices
US startup unveils real-time tool that makes blood translucent
US startup Ocutrx Technologies has unveiled HemoLucence, a pioneering surgical imaging technology that renders blood translucent in real time, allowing surgeons to see through pooled blood without suction or irrigation. Integrated into the OR Bot 3D Surgical Microscope, HemoLucence uses AI-powered algorithms and advanced computational physics to visualize tissue and structures obscured by blood, successfully penetrating up to three millimeters of whole human blood in lab tests. The system collects and analyzes light from multiple angles, separating scattered light from absorbed light to reconstruct a clear 3D view of hidden anatomy, including vessels, nerves, bleed sites, and tumors. This breakthrough addresses a longstanding challenge in operating room imaging by enabling surgeons to see through blood during procedures, potentially enhancing surgical precision and safety. Medical advisors from leading hospitals have praised the technology for its potential to reduce reliance on traditional blood-clearing methods, shorten surgery times, and improve outcomes. However, HemoLucence remains a prototype awaiting patent approval and must undergo clinical trials
robot, AI, surgical-technology, medical-imaging, computational-physics, neural-networks, 3D-visualization
Realtime Robotics announces two new direct integrations for Resolver - The Robot Report
Realtime Robotics, a leader in robotic motion-planning software, has announced two new direct integrations for its cloud-based system Resolver, which accelerates the design and deployment of robotic workcells. Resolver automates complex tasks such as path planning, task allocation, sequencing, and layout validation, enabling cycle-time improvements of 15% to 40%. The new integrations allow users of Visual Components and Mitsubishi Electric’s MELSOFT Gemini 3D manufacturing simulation software to access Resolver’s industrial AI directly within their preferred environments, complementing the existing Siemens Process Simulate integration. Resolver’s capabilities focus on optimizing collision-free robot motions, multi-robot coordination, and real-time object detection, which collectively reduce errors and speed up production line builds. Realtime Robotics highlighted growing adoption among automotive OEMs and integrators worldwide, with some already including Resolver in requests for proposals or internal workflows. The partnership with Visual Components, known for its extensive 3D simulation and robot programming tools, aims to tackle increasingly complex applications like large
robotics, motion-planning, industrial-robots, robotic-workcells, AI, automation, manufacturing-simulation
Google rolls out new Gemini model that can run on robots locally
Google DeepMind has introduced Gemini Robotics On-Device, a new language model designed to run locally on robots without needing an internet connection. This model builds on the previous Gemini Robotics version by enabling direct control of robot movements through natural language prompts, allowing developers to fine-tune it for various applications. According to Google, Gemini Robotics On-Device performs nearly as well as its cloud-based counterpart and surpasses other unnamed on-device models in general benchmarks. In demonstrations, robots equipped with this local model successfully performed tasks such as unzipping bags and folding clothes. Although initially trained for specific tasks, the model was later adapted to work on different robot platforms, including the bi-arm Franka FR3, which managed to handle new scenarios and objects it had not encountered before. Additionally, Google DeepMind is releasing tools that allow developers to train robots on new tasks by providing 50 to 100 demonstrations using the MuJoCo physics simulator. This development aligns with broader industry trends, as companies like Nvidia, Hug
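The demonstration-based adaptation described above is, at its core, imitation learning; the sketch below shows generic behavior cloning on roughly 80 demonstrations using synthetic state-action pairs. It is not the Gemini Robotics fine-tuning interface, just the underlying idea.

```python
# Illustrative only: generic behavior cloning on a small set of demonstrations.
# This is not the Gemini Robotics fine-tuning API, just the underlying idea of
# fitting a policy to demonstrated state -> action pairs.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
n_demos, steps_per_demo = 80, 50          # roughly the 50-100 demonstration regime
states = rng.normal(size=(n_demos * steps_per_demo, 24))   # proprioception + task features (synthetic)
actions = rng.normal(size=(n_demos * steps_per_demo, 7))   # e.g. 7-DoF arm commands (synthetic)

policy = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=200, random_state=0)
policy.fit(states, actions)               # supervised imitation of the demonstrations

new_state = rng.normal(size=(1, 24))
print("predicted action:", policy.predict(new_state))
```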
robot, robotics, AI, on-device-AI, Google-DeepMind, Gemini-Robotics, robot-control
Spot robot dog inspects Cargill's food factory for safety hazards
Cargill, a major American food corporation, has deployed Spot, the robot dog developed by Boston Dynamics, to conduct safety inspections at its Amsterdam Multiseed plant. Spot’s role includes monitoring equipment, checking gauges, and identifying potential safety hazards such as debris, leaks, or improperly closed doors. Enhanced with Boston Dynamics’ AI system called Orbit, Spot captures images and analyzes them to flag issues that could disrupt factory operations. The robot also uses Site View to create panoramic images for plant managers to assess potential bottlenecks, enabling more proactive maintenance and safety management. This deployment is part of Cargill’s broader “Plant of the Future” initiative aimed at automating manufacturing processes to free human workers from routine tasks and focus on problem-solving and tactical decisions. By conducting frequent, consistent inspections, Spot helps improve workplace safety by identifying slip and trip hazards and ensuring safety equipment is in place and functional. Plant managers report that this shift from reactive to proactive monitoring enhances operational efficiency and safety, marking a significant
robot, robotics, AI, factory-automation, safety-inspection, Boston-Dynamics, smart-manufacturing
Inbolt to bring its real-time robot guidance systems to the U.S., Japan - The Robot Report
Inbolt, a Paris-based developer of real-time robot guidance systems, is expanding its operations to the U.S. and Japan, aiming to replicate its European market success where it serves major manufacturers like Stellantis, Renault, Volkswagen, Ford, and Beko. Founded in 2019, Inbolt’s GuideNOW system combines a 3D camera, AI-driven real-time workpiece localization, and software integration to enable industrial robots to adapt dynamically to changing environments. The system supports fast part localization and trajectory adjustments, integrates with major robot brands (FANUC, ABB, KUKA, Universal Robots), and eliminates the need for costly sensors or rigid setups. Inbolt reports deployment in over 50 factories worldwide and claims significant customer benefits, including up to 97% reduction in downtime and 80% fewer part rejections. The company is accelerating its global expansion backed by a $17 million Series A funding round in 2024, establishing local teams in Detroit and Tokyo to tap into growing
robot, industrial-robots, AI, 3D-vision, automation, manufacturing, real-time-guidance-systems
Hexagon launches AEON humanoid robot for industrial applications - The Robot Report
Hexagon AB has launched its first humanoid robot, AEON, designed specifically for industrial applications to address labor shortages and enhance operational efficiency. Unveiled at the Hexagon LIVE Global event, AEON integrates Hexagon’s expertise in precision measurement and sensor technologies with advanced locomotion, AI-driven mission control, and spatial intelligence. This combination enables the robot to perform a variety of tasks such as manipulation, asset inspection, reality capture, and operator support across industries including automotive, aerospace, manufacturing, warehousing, and logistics. AEON features agility through bipedal locomotion and dexterity, environmental awareness via multimodal sensor data fusion, versatility in task execution, and power autonomy enabled by a battery-swapping mechanism. Hexagon is collaborating with partners such as Schaeffler and Pilatus to pilot AEON in real-world industrial scenarios like machine tending, part inspection, and reality capture. These pilots aim to demonstrate the robot’s capabilities and contribute to sustainable growth amid demographic changes affecting labor availability.
robotics, humanoid-robot, industrial-automation, AI, sensor-technology, manufacturing, logistics
Volkswagen's 4-seat robotaxi with 27 sensors to hit US roads in 2026
Volkswagen has officially launched the production-ready ID. Buzz AD, a four-seat electric robotaxi equipped with 27 sensors—including 13 cameras, nine Lidars, and five radars—designed to compete with Tesla’s autonomous vehicles. Unlike Tesla’s current Level 2 autonomy, the ID. Buzz AD is built for SAE Level 4 autonomy, enabling fully driverless operation in designated areas without human intervention. The vehicle’s AI-powered control system, developed in partnership with Mobileye, processes real-time sensory data to handle various driving scenarios and emergencies. Additionally, the robotaxi includes remote monitoring capabilities and software certification, features Tesla has yet to achieve. Volkswagen offers the ID. Buzz AD as a turnkey Autonomous Driving Mobility-as-a-Service (AD MaaS) platform, which integrates fleet management, passenger assistance, and compatibility with third-party ride-hailing services. This comprehensive package allows businesses, cities, and fleet operators to deploy autonomous vehicle services without developing infrastructure or software from scratch. The van’s
robot, autonomous-vehicles, sensors, AI, electric-vehicles, mobility-as-a-service, Volkswagen
Want to know where VCs are investing next? Be in the room at TechCrunch Disrupt 2025
TechCrunch Disrupt 2025, taking place October 27-29 at Moscone West in San Francisco, offers early-stage founders a valuable opportunity to hear directly from top venture capitalists about upcoming investment trends. A highlighted session on October 27 at 1:00 pm features Nina Achadjian (Index Ventures), Jerry Chen (Greylock), and Viviana Faga (Felicis), who will share their 2026 investment priorities across sectors such as AI, data, cloud, robotics, and more. These seasoned VCs will discuss emerging innovations and sectors attracting smart money, providing founders with insights into where venture capital is headed next. Each VC brings distinct expertise: Nina Achadjian focuses on automating overlooked functions and industries by replacing outdated tools, emphasizing founders with empathy, curiosity, and growth mindsets. Jerry Chen invests in product-driven founders working in AI, data, cloud infrastructure, and open-source technologies, leveraging his decade-long experience at VMware. Viviana Faga specializes
robot, AI, cloud-computing, venture-capital, automation, enterprise-software, SaaS
Elon Musk’s Tesla rolls out first robotaxi fleet in Austin trial
Elon Musk’s Tesla has officially launched its first robotaxi service in Austin, Texas, marking a significant milestone in the company’s push toward full vehicle autonomy. Beginning June 22, a limited fleet of 10 to 20 Model Y SUVs equipped with Tesla’s Full Self-Driving (FSD) software began operating within a geofenced area in South Austin. Customers can book rides via a dedicated app, paying a flat fee of $4.20 per trip. Despite the excitement, the rollout remains cautious: each vehicle includes a safety monitor in the front seat ready to take control if necessary, reflecting Tesla’s emphasis on safety amid evolving regulatory requirements, including a new Texas law mandating permits for self-driving cars starting September 1. Tesla’s approach relies on eight cameras per vehicle and does not use lidar or pre-mapped routes, which the company claims allows for scalable deployment in multiple cities without extensive infrastructure. Plans are already underway to expand robotaxi operations to San Francisco and Los Angeles.
robot, autonomous-vehicles, Tesla, robotaxi, AI, self-driving-cars, electric-vehicles
Tesla launches robotaxi rides in Austin with big promises and unanswered questions
Tesla has initiated a limited robotaxi service in Austin, deploying fully autonomous Model Y SUVs that operate without a driver behind the wheel but with a Tesla employee seated in the front passenger seat as a “safety monitor.” This marks a significant milestone nearly ten years after CEO Elon Musk first promised such a service. The rollout involves about 10 vehicles operating within a confined area of South Austin, offering rides at a flat rate of $4.20. Customers invited to participate have accessed the service via a new Tesla robotaxi app, with operations running daily from 6 a.m. to midnight, though service may be limited during bad weather. Despite the launch, many details remain unclear or undisclosed. Tesla has provided limited information compared to competitors like Waymo, which operates commercial robotaxis with more transparency. Observers have noted cautious vehicle behavior, such as sudden braking near police vehicles, but the reasons remain unexplained. Tesla has also resisted public records requests related to the service, citing confidentiality and trade secrets
robot, autonomous-vehicles, Tesla, robotaxi, AI, driverless-cars, transportation-technology
Applied Intuition raises $600M for autonomous driving tech
Applied Intuition, a company specializing in autonomous vehicle technology, has raised $600 million in a Series F funding round and tender offer, valuing the company at $15 billion. The funding will support the company’s next phase of growth, focusing on advancing vehicle intelligence, expanding its product offerings, and growing its global team. Co-founder and CEO Qasar Younis emphasized the company’s mission to integrate AI into various moving machines, including cars, trucks, drones, and factory equipment. The funding round was co-led by BlackRock-managed funds and Kleiner Perkins, with participation from both new and existing investors such as Franklin Templeton, Qatar Investment Authority, Fidelity, and General Catalyst. Since its Series E round in March 2024, Applied Intuition has made significant progress, including launching new AI-driven products, forming strategic partnerships with companies like OpenAI, TRATON, Isuzu, Porsche, and Audi, releasing an off-road autonomy stack, acquiring defense tech firm EpiSci,
robot, autonomous-vehicles, AI, vehicle-intelligence, software-defined-systems, defense-technology, automotive-technology
SoftBank reportedly looking to launch a trillion-dollar AI and robotics industrial complex
SoftBank is reportedly planning to launch a massive AI and robotics industrial complex valued at around one trillion dollars. The Japanese investment conglomerate aims to collaborate with Taiwan Semiconductor Manufacturing Company (TSMC) to establish this facility in Arizona. The project, named Project Crystal Land, is still in its early stages, and details about TSMC’s specific involvement or interest remain unclear. This initiative follows SoftBank’s recent increased focus on AI, including its participation in other AI ventures. While SoftBank is eager to partner with TSMC, Bloomberg sources indicate uncertainty about TSMC’s willingness to join the project. As of now, SoftBank has not provided further details, and the scope and timeline of Project Crystal Land remain largely undefined.
robotics, AI, SoftBank, industrial-complex, TSMC, Project-Crystal-Land, technology-investment
Solar drone with Boeing 747 wingspan promises month-long flights
The article discusses a groundbreaking solar-powered drone developed through a partnership between French defense electronics company Thales and US aerospace startup Skydweller Aero. This unmanned aerial system, named MAPS (Medium-Altitude Pseudo-Satellite), features a wingspan larger than a Boeing 747 and can carry payloads up to 881 pounds (400 kg). Designed for persistent, long-duration flights lasting weeks to a month, the drone operates at medium altitudes without carbon emissions, enabling near-continuous surveillance of vast maritime areas such as Exclusive Economic Zones, shipping lanes, and contested waters. A key innovation is the integration of Thales’ AirMaster S radar, an AI-enabled, lightweight sensor suite with Active Electronically Scanned Array (AESA) technology, which provides rapid situational awareness across air, land, and sea domains. The radar’s AI-driven data processing allows onboard target classification, reducing data transmission needs and enhancing bandwidth efficiency critical for extended autonomous missions. This combination creates a fully autonomous
drone, solar-power, autonomous-systems, AI, energy, maritime-surveillance, UAV
World’s first flying humanoid robot with jet engines debuts in Italy
Researchers at the Italian Institute of Technology (IIT) have developed and successfully tested iRonCub3, the world’s first jet-powered humanoid robot capable of hovering mid-air. Equipped with four jet engines—two on its arms and two on a backpack—and a titanium spine to withstand extreme heat from exhaust gases reaching 800°C, the 70 kg robot lifted off about 50 centimeters in initial indoor tests. iRonCub3 integrates advanced AI and aerodynamic control systems to maintain stability despite its asymmetrical, human-like form and shifting center of mass, a challenge not present in traditional drones. The robot’s flight control relies on neural networks trained with simulated and experimental data, enabling it to adapt to turbulent airflows and dynamic limb movements in real time. The development of iRonCub3 involved a co-design approach optimizing both the robot’s physical structure and engine placement to maximize flight control and thermal resilience. Collaborations with the Polytechnic of Milan and Stanford University contributed to wind tunnel testing and machine learning integration
robotics, humanoid-robot, jet-engines, AI, flight-control, titanium-materials, aerial-mobility
US scientists use machine learning for real-time crop disease alerts
Purdue University researchers are leveraging advanced AI and machine learning technologies to transform agriculture and environmental management. Their innovations include real-time crop disease detection using semi-supervised models that identify rare diseases from limited data, enabling faster outbreak responses and reduced chemical usage. These AI tools are designed to run efficiently on low-power devices such as drones and autonomous tractors, facilitating on-the-ground, real-time monitoring without relying on constant connectivity. Additionally, Purdue scientists are using AI to analyze urban ecosystems through remote sensing data and LiDAR imagery, uncovering patterns invisible to the naked eye to improve urban living conditions. In agriculture, AI is also being applied to enhance crop yield predictions and climate resilience. For example, machine learning ensembles simulate rice yields under future climate scenarios, improving accuracy significantly. Tools like the “Netflix for crops” platform recommend optimal crops based on soil and water data, aiding farmers and policymakers in making informed, data-driven decisions. Furthermore, Purdue developed an AI-powered medical robot capable of swimming inside a cow’s stomach to
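Semi-supervised detection of the kind described above can be sketched with scikit-learn's self-training wrapper, where a model trained on a few labeled samples progressively labels the rest; the synthetic features and the 95% unlabeled split below are placeholders, not Purdue's models or data.

```python
# Illustrative only: self-training on mostly unlabeled samples, a common
# semi-supervised pattern; data here is synthetic, not crop imagery.
import numpy as np
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(6)
X = rng.normal(size=(1000, 64))          # image-patch features (synthetic)
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)

y = y_true.copy()
y[rng.random(1000) < 0.95] = -1          # hide 95% of labels; -1 marks "unlabeled"

model = SelfTrainingClassifier(RandomForestClassifier(random_state=0), threshold=0.9)
model.fit(X, y)                          # trains on labeled data, then self-labels the rest
print("accuracy vs. hidden labels:", accuracy_score(y_true, model.predict(X)))
```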
robot, AI, agriculture-technology, machine-learning, medical-robots, crop-disease-detection, environmental-monitoring
Fujitsu to design Japan’s zetta-class supercomputer that’s 1000 times more powerful
Japanese technology company Fujitsu has been selected by the RIKEN research institute to design FugakuNext, Japan’s next-generation flagship supercomputer. Building on the success of Fugaku, which debuted in 2020 and achieved 442 petaFLOPS performance, FugakuNext aims to be a zetta-class supercomputer with performance approximately 1000 times greater than current systems. The project reflects Japan’s strategic focus on integrating AI with scientific simulations and real-time data, a concept known as “AI for Science,” to maintain leadership in science and innovation. The design phase, including the overall system, computer nodes, and CPU components, will continue until February 2026, with a total budget for the build expected to be around $750 million. Fujitsu will utilize its advanced CPUs, specifically the FUJITSU-MONAKA3 and its successor MONAKA-X, to power FugakuNext. These CPUs are engineered for high performance and energy efficiency and will enable the supercomputer
energy, supercomputer, Fujitsu, AI, high-performance-computing, CPU, scientific-simulations
Spotify’s Daniel Ek just bet bigger on Helsing, Europe’s defense tech darling
Spotify CEO Daniel Ek has led a €600 million investment round in Helsing, a Munich-based defense technology company now valued at €3 billion, making it one of Europe’s most valuable private firms. Helsing, founded four years ago, specializes in AI-driven battlefield visualization software that integrates data from military sensors, radars, and weapons systems to provide real-time, intuitive situational awareness across various military units. The company has expanded beyond software to develop strike drones, aircraft, and unmanned mini submarines aimed at enhancing naval surveillance. This investment reflects a broader European push to build strategic autonomy in defense amid growing geopolitical tensions, particularly following Russia’s invasion of Ukraine and shifting U.S. defense policies under former President Trump. European governments are increasingly prioritizing digital and AI-driven military capabilities, moving away from traditional hardware like planes and tanks. Helsing’s funding round, backed by investors including Lightspeed Ventures, Accel, and Saab, is part of a larger defense tech boom in Europe, signaling a shift toward self
robotdefense-technologyAIdronesunmanned-vehiclesmilitary-technologysurveillance-systemsNew Insights for Scaling Laws in Autonomous Driving - CleanTechnica
The article from CleanTechnica discusses Waymo’s recent research into applying scaling laws—well-established in large language models (LLMs)—to autonomous driving, specifically in motion forecasting and planning. Waymo’s study leveraged an extensive internal dataset of 500,000 hours of driving, much larger than prior AV datasets, to investigate how increasing model size, training data, and compute resources impact AV performance. The findings reveal that, similar to LLMs, motion forecasting quality improves predictably following a power-law relationship with training compute. Additionally, scaling data and inference compute enhances the model’s ability to handle complex driving scenarios, and closed-loop planning performance also benefits from increased scale. These results mark a significant advancement by demonstrating for the first time that real-world autonomous vehicle capabilities can be systematically improved through scaling, providing a predictable path to better performance. This predictability applies not only to model training objectives and open-loop forecasting metrics but also to closed-loop planning in simulations, which more closely reflect real driving conditions.
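For context, a scaling law of this kind is simply a power-law fit in log-log space: quality improves as compute grows, and the fitted exponent is what allows extrapolation. The snippet below shows the generic procedure on synthetic numbers; it is not Waymo’s data or code.

```python
# Generic power-law fit, loss ≈ a * compute**k (k < 0), of the kind scaling-law
# studies report. The compute/loss points below are synthetic placeholders.
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])  # training FLOPs (synthetic)
loss    = np.array([0.90, 0.62, 0.43, 0.30, 0.21])  # forecasting error (synthetic)

# A power law is linear in log-log space: log(loss) = log(a) + k * log(compute).
k, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
a = np.exp(log_a)
print(f"fitted exponent k = {k:.3f}")

# The practical payoff of a scaling law: predict the error at 10x more compute.
print(f"predicted loss at 1e23 FLOPs: {a * 1e23 ** k:.3f}")
```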
robotautonomous-vehiclesAImotion-forecastingscaling-lawsdeep-learningWaymoA comprehensive list of 2025 tech layoffs
The article provides a detailed overview of the ongoing wave of tech layoffs in 2025, highlighting the significant impact on the industry and workforce amid increasing adoption of AI and automation. It tracks layoffs across numerous companies, noting that thousands of employees have been laid off each month so far this year: 16,234 in February, over 24,500 in April, and 10,397 in May, among others. The article emphasizes the human cost of these cutbacks while acknowledging that innovation continues to drive structural changes in the sector. Specific company layoffs are detailed, including major reductions at Microsoft, which announced over 6,500 job cuts in May and additional layoffs affecting software engineers, product managers, and other roles in June. Amazon has reduced its workforce by approximately 27,000 since 2022, recently cutting around 100 employees from its devices and services division. Other notable layoffs include Chegg cutting about 22% of its staff due to declining web traffic amid AI competition,
robotIoTenergylayoffstech-industryautomationAIworkforce-reductionWeek in Review: WWDC 2025 recap
The Week in Review covers major developments from WWDC 2025 and other tech news. At Apple’s Worldwide Developers Conference, the company showcased updates across its product lineup amid pressure to advance its AI capabilities and address ongoing legal challenges related to its App Store. Meanwhile, United Natural Foods (UNFI) suffered a cyberattack that disrupted its external systems, impacting Whole Foods’ ability to manage deliveries and product availability. In financial news, Chime successfully went public, raising $864 million in its IPO. Other highlights include Google enhancing Pixel phones with new features like group chat for RCS and AI-powered photo editing, and Elon Musk announcing the imminent launch of driverless Teslas in Austin, Texas. The Browser Company is pivoting from its Arc browser to develop an AI-first browser using a reasoning model designed for improved problem-solving in complex domains. OpenAI announced a partnership with Mattel, granting Mattel employees access to ChatGPT Enterprise to boost product development and creativity. However, concerns about privacy surfaced with
robotAIautonomous-vehiclesdriverless-carsmachine-learningartificial-intelligenceautomationPreparing for kick-off at RoboCup2025: an interview with General Chair Marco Simões - Robohub
RoboCup 2025 is set to take place in Salvador, Brazil, from July 15-21, marking a significant event for the international robotics and AI community. The event, hosted at the Salvador Convention Center, is expected to attract around 150,000 visitors, surpassing the 100,000 attendees from the last time Brazil hosted in 2014. Organizers anticipate participation from 300-400 teams and approximately 3,000 competitors. Efforts have been made to facilitate visa processes to increase international attendance, especially from teams previously hindered by travel restrictions. New global league partners, including Chinese companies Unitree, Fourier, and Booster Robotics, will showcase advanced humanoid and four-legged robots, enhancing the competition and public exhibitions. Over the past decade, Brazil has seen substantial growth in its RoboCup community, rising to become one of the top countries in terms of team participation. This growth is largely attributed to the development of RoboCupJunior, a program aimed at engaging younger
robotroboticsRoboCupAIautomationhumanoid-robotsrobotics-competitionMotional names Major president, CEO of self-driving car business
Laura Major was appointed president and CEO of Motional, a leading autonomous vehicle company, in June 2025 after serving as interim CEO since September 2024. She succeeded Karl Iagnemma, who left to lead Vecna Robotics. Major has been with Motional since its founding in 2020, initially as CTO, where she spearheaded the development of the IONIQ 5 robotaxi, one of the first autonomous vehicles certified under the Federal Motor Vehicle Safety Standards, and created a machine learning-first autonomous driving software stack. Her leadership emphasizes leveraging AI breakthroughs and Motional’s partnership with Hyundai to advance safe, fully driverless transportation as a practical part of everyday life. Before Motional, Major built expertise in autonomy and AI at Draper Laboratory and Aria Insights, focusing on astronaut, national security, and drone applications. She began her career as a cognitive engineer designing decision-support systems for astronauts and soldiers and later led Draper’s Information and Cognition Division. Recognized as an emerging leader by
robotautonomous-vehiclesAImachine-learningroboticsself-driving-carsautomationGecko Robotics reaches unicorn status with latest funding
Gecko Robotics, a Pittsburgh-based company specializing in robotic technology for critical infrastructure, has reached unicorn status following a $125 million Series D funding round that doubled its valuation to $1.25 billion. The new capital will fuel the company’s expansion and focus on sectors such as defense, energy, and manufacturing—areas increasingly prioritized by governments and corporations. Recent strategic partnerships include collaborations with NAES to modernize the energy sector, L3Harris on an Extended Reality product, and ongoing work with the Abu Dhabi National Oil Company. Gecko’s CEO, Jake Loosararian, highlighted the company’s AI-powered operating platform, Cantilever, which ensures data integrity and enables advanced diagnostics and modernization of physical infrastructure worldwide. Gecko Robotics employs a variety of robots capable of climbing, flying, and swimming to collect high-fidelity data on complex built environments, including U.S. Navy warships and power plants. Cantilever’s AI-driven decision-making framework can predict infrastructure failures, optimize operations, and improve efficiency
robotAIcritical-infrastructureenergyindustrial-automationrobotics-technologyinfrastructure-maintenanceNEXCOM NexCOBOT unit joins NVIDIA Halos AI Systems Inspection Lab - The Robot Report
NEXCOM Group’s NexCOBOT unit has joined NVIDIA’s Halos AI Systems Inspection Lab to collaboratively advance the safe development of humanoid and AI robots. This partnership aims to streamline the complex and resource-intensive process of achieving functional safety certifications for robotic systems. NexCOBOT, specializing in safe robot controls and based in New Taipei City with offices in Fremont, California, will integrate its products with NVIDIA’s IGX Thor platform and the expanded Halos platform. This integration is designed to create a unified development environment that encompasses AI, motion control, and functional safety, thereby accelerating innovation and simplifying robot design verification and certification processes. NVIDIA’s Halos AI Systems Inspection Lab is notable as the first ANSI National Accreditation Board (ANAB)-accredited lab that combines functional safety, cybersecurity, AI, and regulatory compliance into a single safety framework. NexCOBOT’s participation reflects its long-standing commitment to functional safety, leveraging international standards such as IEC 61508 and ISO 13849-1 to help
robotAIfunctional-safetyroboticsNVIDIAmotion-controlhumanoid-robotsMeta V-JEPA 2 world model uses raw video to train robots
Meta has introduced V-JEPA 2, a 1.2-billion-parameter world model designed to enhance robotic understanding, prediction, and planning by training primarily on raw video data. Built on the Joint Embedding Predictive Architecture (JEPA), V-JEPA 2 undergoes a two-stage training process: first, self-supervised learning from over one million hours of video and a million images to capture physical interaction patterns; second, action-conditioned learning using about 62 hours of robot control data to incorporate agent actions for outcome prediction. This approach enables the model to support planning and closed-loop control in robots without requiring extensive domain-specific training or human annotations. In practical tests within Meta’s labs, V-JEPA 2 demonstrated strong performance on common robotic tasks such as pick-and-place, achieving success rates between 65% and 80% in previously unseen environments. The model uses vision-based goal representations, generating candidate actions for simpler tasks and employing sequences of visual subgoals for more complex tasks
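The planning behaviour described above can be pictured as: encode the current frame and the goal image, sample candidate actions, predict each action’s outcome in latent space, and execute the action whose predicted embedding lands closest to the goal. The sketch below is a toy stand-in for that loop; `encode` and `predict` are hypothetical placeholders for V-JEPA 2’s encoder and action-conditioned predictor, not Meta’s API.

```python
# Toy sketch of goal-conditioned planning with a learned world model. The random
# linear "encoder" and "predictor" stand in for V-JEPA 2's components so the loop runs.
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(16, 64))      # toy encoder: 64-dim image -> 16-dim latent
W_dyn = rng.normal(size=(16 + 4, 16))  # toy dynamics: (latent, 4-DoF action) -> next latent

def encode(image: np.ndarray) -> np.ndarray:
    return np.tanh(W_enc @ image)

def predict(z: np.ndarray, action: np.ndarray) -> np.ndarray:
    return np.tanh(np.concatenate([z, action]) @ W_dyn)

def plan_one_step(current_img, goal_img, n_candidates=256):
    z, z_goal = encode(current_img), encode(goal_img)
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, 4))
    gaps = [np.linalg.norm(predict(z, a) - z_goal) for a in candidates]
    return candidates[int(np.argmin(gaps))]  # action whose predicted outcome is nearest the goal

print("chosen action:", np.round(plan_one_step(rng.normal(size=64), rng.normal(size=64)), 2))
```

For longer-horizon tasks, the article notes the model chains sequences of visual subgoals, which amounts to running a loop like this toward each subgoal in turn.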
roboticsAIworld-modelsmachine-learningvision-based-controlrobotic-manipulationself-supervised-learningFrom surveillance to public service: the rise of drone swarms
The article discusses the evolving role of drone swarms from primarily surveillance tools to vital assets in civil protection and public service. As climate-related disasters increase and urban environments become more complex, drone swarms—autonomous aerial systems capable of decentralized collaboration and real-time adaptation—offer faster, smarter, and more reliable responses to crises. The Technology Innovation Institute (TII) in Abu Dhabi is at the forefront of this innovation, developing AI-driven drone swarms that mimic natural behaviors to self-organize and perform tasks such as searching for survivors, mapping hazards, and assessing infrastructure damage during emergencies. Beyond disaster response, these swarms also support crisis prevention by monitoring traffic, air quality, and structural integrity in smart cities, aligning with digital governance goals like resilience and sustainability. TII’s work is closely tied to the UAE’s AI Strategy 2031, emphasizing ethical AI development and positioning drone swarms as tools for public good rather than surveillance or militarization. Despite their potential, public perception remains a challenge,
robotautonomous-dronesdrone-swarmsAIsmart-citiesdisaster-responsepublic-safetyElon Musk says Tesla robotaxis could launch in Austin on June 22
Tesla CEO Elon Musk announced a tentative launch date of June 22, 2025, for Tesla’s robotaxi service in Austin, Texas, though the date may shift due to ongoing safety evaluations. The initial fleet will consist of 10 to 20 modified Model Y SUVs operating within geofenced zones under remote human supervision, powered by Tesla’s latest Full Self-Driving (FSD) software. Musk emphasized a cautious approach to safety, highlighting that the rollout depends on passing final safety checks. Tesla has been testing these vehicles on Austin streets and plans to enable cars to drive autonomously from the factory directly to buyers starting June 28. If successful, Tesla aims to expand the robotaxi service to other cities such as Los Angeles, San Antonio, and San Francisco by the end of the year. This robotaxi initiative represents a significant strategic pivot for Tesla, focusing on full self-driving technology rather than more affordable electric vehicles, potentially redefining the company’s business model. However, Tesla faces multiple challenges, including slowing electric vehicle sales amid rising competition, ongoing Model Y redesigns, and political controversies surrounding Musk that could impact regulatory approvals. Industry skepticism remains high given Musk’s history of repeatedly delaying fully autonomous vehicle promises. Nonetheless, the Austin launch marks a critical test for Tesla’s ambitions in the autonomous vehicle market.
robotautonomous-vehiclesTeslarobotaxiself-driving-carsAItransportation-technologySam Altman thinks AI will have ‘novel insights’ next year
In a recent essay, OpenAI CEO Sam Altman outlined his vision for AI’s transformative impact over the next 15 years, emphasizing the company’s proximity to achieving artificial general intelligence (AGI) while tempering expectations about its imminent arrival. A key highlight from Altman’s essay is his prediction that by 2026, AI systems will likely begin generating “novel insights,” marking a shift toward AI models capable of producing new and interesting ideas about the world. This aligns with OpenAI’s recent focus on developing AI that can assist scientific discovery, a goal shared by competitors like Google, Anthropic, and startups such as FutureHouse, all aiming to automate hypothesis generation and accelerate breakthroughs in fields like drug discovery and material science. Despite this optimism, the scientific community remains cautious about AI’s ability to create genuinely original insights, a challenge that involves instilling AI with creativity and a sense of what is scientifically interesting. Experts like Hugging Face’s Thomas Wolf and former OpenAI researcher Kenneth Stanley highlight the difficulty of this task, noting that current AI models struggle to generate novel hypotheses. Stanley’s new startup, Lila Sciences, is dedicated to overcoming this hurdle by building AI-powered laboratories focused on hypothesis generation. While it remains uncertain whether OpenAI will succeed in this endeavor, Altman’s essay offers a glimpse into the company’s strategic direction, signaling a potential next phase in AI development centered on creativity and scientific innovation.
AIartificial-intelligencescientific-discoverymaterial-scienceenergy-innovationAI-agentsnovel-insightsChina’s AI system builds Intel-class chips with zero US software
China has developed an AI-powered chip design system called QiMeng, created by the Chinese Academy of Sciences and affiliated institutions, to accelerate semiconductor development and reduce reliance on Western software amid escalating US-China tech tensions. QiMeng uses large language models to automate complex chip design tasks, significantly shortening development times—for example, producing an autonomous-driving chip in days instead of weeks. The platform is structured in three layers, integrating processor models, design agents, and chip design applications to support automated front-end design, hardware description language generation, OS configuration, and compiler toolchain creation. Researchers have already built two processors with QiMeng: QiMeng-CPU-v1, comparable to Intel’s 486, and QiMeng-CPU-v2, similar to Arm’s Cortex A53. The launch of QiMeng directly responds to US export restrictions that limit Chinese access to leading electronic design automation (EDA) software from companies like Synopsys, Cadence, and Siemens EDA, which previously dominated China’s EDA market. By open-sourcing QiMeng and publishing detailed documentation, China aims to improve design efficiency, reduce costs, and enable rapid customization of chip architectures and software stacks. While China still faces challenges in fabrication technology and ecosystem diversity, QiMeng represents a strategic step toward automating the full chip design and verification process and advancing China’s broader goal of semiconductor self-reliance in the face of ongoing geopolitical pressures.
AIsemiconductorchip-designprocessorautomationtechnology-independenceChinese-Academy-of-SciencesGoogle rolls out Android 16 to Pixel phones, unveils AI-powered edit suggestion for Google Photos
Google has officially rolled out Android 16 to its Pixel smartphone lineup, introducing several notable features aimed at enhancing communication, security, and user customization. Key updates include the addition of group chat support for RCS messaging with options for custom icons and notification muting, improved accessibility features, and enhanced controls for Bluetooth Low Energy (LE) audio devices. Android 16 also brings HDR screenshots, adaptive refresh rates, and forced notification grouping to reduce clutter. Later this year, Android 16 will extend to tablets with desktop-style windowing and customizable keyboard shortcuts. Additionally, Google Photos gains AI-powered edit suggestions that can erase, move, or “reimagine” parts of images, while Google Wallet now supports corporate badges and public transit payments via Wear OS devices. Alongside Android 16, Google launched its June Pixel Drop feature update, which adds a “Pixel VIPs” widget to track interactions with favorite contacts across calls, messages, and WhatsApp, plus enhanced video captions that describe subtle sounds like whispering or yawning. Users can create custom stickers via text prompts on Gboard, and the Recorder app now supports AI-generated summaries in French and German. Australian users gain Emergency SOS satellite connectivity, while Pixel 8a and newer models receive a battery health indicator. Accessibility improvements include a Magnifier app feature that highlights objects based on user descriptions, providing haptic feedback. Enterprise users benefit from storing corporate badges in Google Wallet and integrating Google’s Gemini chatbot in Google Docs for summarization, insights, and translation. Chrome on Android also improves PDF handling with linked document viewing. These updates began rolling out on Tuesday, marking a comprehensive enhancement to Pixel devices and Android’s ecosystem.
IoTAndroidAIBluetooth-Low-EnergyGoogle-PhotosWear-OSMobile-DevicesSpot robot dog gets AI boost to detect equipment failures early
Boston Dynamics has enhanced its Spot robot dog through an upgraded version of its Orbit intelligent automation platform, aimed at advancing predictive industrial maintenance. The new system enables Spot to autonomously inspect industrial sites, capturing consistent visual data that Orbit analyzes using vision-language prompts to quickly identify hazards such as overheating motors, air leaks, safety risks, corrosion, and equipment deterioration. This approach addresses traditional gaps in condition-based monitoring by providing repeatable, detailed inspections and transforming visuals into actionable insights, including numerical data and descriptive text. A notable addition is the Site View feature, which creates a lightweight visual history of facilities using 360° images, supporting remote monitoring and condition tracking over time. The updated Orbit platform also introduces centralized fleet management dashboards for enterprise users, allowing oversight of multiple robots across sites with customizable user permissions and detailed activity logs. Privacy is maintained through an automatic face-blurring function in images captured by Spot’s cameras. Software updates can be deployed over the air to multiple robots simultaneously, and Orbit can be hosted on-premise or in the cloud as a virtual machine. Integration with third-party systems is facilitated via APIs, webhooks, and a low-code beta for automated work order generation. Additionally, a dynamic thermal thresholding feature helps automatically detect temperature anomalies by analyzing statistical data, reducing the need for expert intervention and enhancing early failure detection in industrial environments.
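Boston Dynamics has not published the exact algorithm behind dynamic thermal thresholding, but the underlying idea of deriving an alert limit from an asset’s own measurement statistics rather than a hand-set expert value can be sketched with a simple z-score rule; the readings and the 3-sigma cutoff below are illustrative assumptions.

```python
# Generic statistical thresholding for thermal anomalies: flag a new reading that
# sits far above the asset's own historical baseline. Values are invented.
import numpy as np

history = np.array([61.2, 60.8, 62.1, 61.5, 60.9, 62.4, 61.7, 61.0])  # past scans, deg C
new_reading = 68.3                                                    # latest scan, deg C

mean, std = history.mean(), history.std(ddof=1)
threshold = mean + 3 * std  # dynamic limit: ~3 sigma above this asset's baseline

if new_reading > threshold:
    print(f"ALERT: {new_reading:.1f} C exceeds dynamic threshold {threshold:.1f} C")
else:
    print(f"OK: {new_reading:.1f} C within expected range (<= {threshold:.1f} C)")
```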
robotAIpredictive-maintenanceindustrial-automationBoston-Dynamicsfacility-inspectionautonomous-robotsWeek in Review: Why Anthropic cut access to Windsurf
The article "Week in Review: Why Anthropic cut access to Windsurf" covers several key developments in the AI and tech sectors over the past week. Central to the discussion is Anthropic’s decision to cut access to its AI model Claude for Windsurf, explained by Anthropic’s Chief Science Officer Jared Kaplan. He stated that it would be unusual for Anthropic to sell Claude to OpenAI, its largest competitor, especially as OpenAI is acquiring the AI coding assistant Windsurf. This competitive dynamic is the primary reason for the access cut. Beyond this, the article highlights other notable tech news: DeepSeek, a Chinese lab, released an updated AI model R1 that performs well on math and coding benchmarks, with speculation it may be related to Google’s Gemini AI family. Apple’s WWDC 2025 is set to start soon, promising new features including a redesigned interface, a dedicated gaming app, and updates to Mac, Watch, and TV platforms. ChatGPT is expanding its business utility by integrating connectors to popular cloud storage services like Dropbox and Google Drive, enabling it to access user data across platforms for improved responses. Additional updates include the wipeout of data from an Indian grocery delivery startup with a sizable customer base, Google’s image editing app arriving on its operating system with advanced features, Tesla’s renewed trademark efforts for “Tesla Robotaxi,” and Anduril’s significant $2.5 billion funding round doubling its valuation to $30.5 billion. The article also touches on Toma’s AI phone agents helping car dealerships reduce missed calls, and a public spat between Elon Musk and Donald Trump that could have wider implications for the tech industry. Overall, the piece provides a broad overview of recent tech and AI industry news with a focus on competitive strategy, product updates, and funding milestones.
robotTesla-RobotaxiAIautonomous-vehiclesroboticstrademarktechnology-innovation432 robots move 7,500-ton building in China to make way for construction
The 100-year-old Huayanli complex in Shanghai, a 7,382-ton traditional shikumen-style building set covering 13,222 square feet, is being temporarily relocated about 10 meters per day using 432 small walking robots. This unprecedented engineering project aims to clear space for constructing a three-story underground facility beneath the complex, which will house 173,885 square feet of cultural and commercial zones, a parking garage with over 100 spaces, and a transport hub connecting Metro Lines 2, 12, and 13. The dense, historic Zhangyuan area, featuring narrow alleys and tightly packed buildings, necessitated innovative robotic solutions, including drilling and earth-moving robots capable of operating in confined spaces as narrow as 1.2 meters. Advanced technologies such as AI-driven deep learning, building information modeling (BIM), and point cloud scanning were employed to create detailed 3D models of the site, enabling precise planning of movement routes and soil removal paths. A factory-line-style conveyor belt system was implemented to efficiently remove soil with minimal disruption. Once the underground construction is complete, the Huayanli complex will be returned to its original location, integrating preserved historical architecture with modern underground infrastructure, thereby revitalizing the Zhangyuan area while maintaining its cultural heritage.
robotsroboticsconstruction-technologyAIcivil-engineeringbuilding-relocationautomationEurope tames ‘elephant flows’ in 1.2 Tbit/s supercomputer trial
Europe achieved a record-breaking 1.2 terabit-per-second (Tbit/s) data transfer across 2,175 miles (3,500 kilometers) in a supercomputing trial involving CSC (IT Center for Science), SURF, and Nokia. The test demonstrated a quantum-safe, high-capacity fibre-optic connection between Amsterdam, Netherlands, and Kajaani, Finland, transferring both real research and synthetic data directly disk-to-disk. The data traversed five production research and education networks, including NORDUnet, Sunet, SIKT, and Funet, leveraging Nokia’s IP/MPLS routing and quantum-safe optical technology. Nokia’s Flexible Ethernet (FlexE) was key to managing “elephant flows,” or very large continuous data streams, proving the feasibility of ultra-fast, long-distance data transport critical for AI and high-performance computing (HPC). This milestone highlights the importance of resilient, scalable, and secure cross-border connectivity to support the exponential growth of research data, especially for AI model training and supercomputing workloads. The trial supports Europe’s ambitions for supercomputing infrastructure, such as the LUMI supercomputer in Kajaani and AI projects like GPT-nl, enabling seamless workflows across distributed data centers. The success of this multi-domain, high-throughput network test underscores the value of strategic partnerships and advanced digital backbones in driving scientific progress and preparing for future AI and HPC demands. Overall, the trial sets a new benchmark for operational long-distance data networks, providing critical insights into data transport and storage infrastructure. Stakeholders emphasized that despite geographical distances, reliable and scalable data connections are achievable and essential for Europe’s research ecosystem. Nokia and its partners are committed to continuing support for global research and education networks, ensuring they can scale confidently to meet the next generation of discovery and innovation.
energysupercomputingAIdata-transferoptical-networksquantum-safe-technologyhigh-capacity-connectivityAutonomous trucking developer Plus goes public via SPAC - The Robot Report
Plus Automation Inc., a developer of autonomous driving software for commercial trucks, is going public through a merger with Churchill Capital Corp IX, a special purpose acquisition company (SPAC). The combined company will operate as PlusAI, with a mission to address the trucking industry’s driver shortage by delivering advanced autonomous vehicle technology. Founded in 2016 and based in Santa Clara, California, Plus has deployed its technology across the U.S., Europe, and Asia, accumulating over 5 million miles of autonomous driving. Its core product, SuperDrive, enables SAE Level 4 autonomous driving with a three-layer redundancy system designed specifically for heavy commercial trucks. Plus achieved a significant driver-out safety validation milestone in April 2025 and is conducting public road testing in Texas and Sweden, targeting a commercial launch of factory-built autonomous trucks in 2027. Plus emphasizes an OEM-led commercialization strategy, partnering with major vehicle manufacturers such as TRATON GROUP, Hyundai, and IVECO to integrate its virtual driver software directly into factory-built trucks. This approach leverages trusted manufacturing and service networks to scale deployment and provide fleet operators with a clear path to autonomy. Strategic collaborations with companies like DSV, Bosch, and NVIDIA support this effort. Notably, Plus and IVECO launched an automated trucking pilot in Germany in partnership with logistics provider DSV and retailer dm-drogerie markt, demonstrating real-world applications of their technology. The SPAC transaction values Plus at a pre-money equity valuation of $1.2 billion and is expected to raise $300 million in gross proceeds, which will fund the company through its planned commercial launch in 2027. The deal has been unanimously approved by both companies’ boards and is anticipated to close in Q4 2025, pending shareholder approval and customary closing conditions. This public listing marks a significant step for Plus as it scales its autonomous trucking technology to address industry challenges and expand globally.
robotautonomous-trucksAImachine-learningcommercial-vehiclesLevel-4-autonomytransportation-technologyHow to Make AI Faster and Smarter—With a Little Help From Physics
Rose Yu, an associate professor at UC San Diego, has pioneered the field of physics-guided deep learning by integrating physical principles, especially from fluid dynamics, into artificial neural networks. Her work began with addressing real-world problems like traffic congestion near the USC campus, where she modeled traffic flow as a diffusion process analogous to fluid flow, using graph theory to represent road networks and sensor data. This innovative approach allowed her to capture dynamic, time-evolving patterns in traffic, improving prediction accuracy beyond traditional static image-based deep learning methods. Yu’s research extends beyond traffic to other complex systems involving turbulence and spread phenomena. By embedding physics into AI models, she has accelerated simulations of turbulent flows to better understand hurricanes and developed tools to predict the spread of Covid-19. Her ultimate vision is to create AI Scientist, a collaborative framework where AI assistants, grounded in physical laws, partner with human researchers to enhance scientific discovery. This physics-informed AI approach promises to make AI both faster and smarter, enabling breakthroughs in diverse scientific and practical domains.
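The core modelling idea, treating congestion as a quantity that diffuses over the road-network graph, can be written down in a few lines using the graph Laplacian; the toy network, values, and explicit update below are illustrative and far simpler than the learned diffusion-convolution models Yu actually builds.

```python
# Toy graph diffusion: congestion spreads along road-network edges. One explicit
# step uses the graph Laplacian L = D - A. Network and values are made up.
import numpy as np

# Adjacency matrix of a tiny 4-sensor road network (symmetric for simplicity).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian

x = np.array([80.0, 20.0, 10.0, 30.0])  # congestion level at each sensor
alpha = 0.1                             # diffusion rate (step size)

for step in range(3):
    x = x - alpha * (L @ x)             # x_{t+1} = x_t - alpha * L x_t
    print(f"step {step + 1}: {np.round(x, 1)}")
```

Learned models wrap trainable layers around operators like this one; the point of the physics guidance is that the smoothing, mass-conserving behaviour is built in rather than learned from scratch.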
AIdeep-learningphysics-guided-learningtraffic-predictionturbulence-simulationdigital-lab-assistantsscientific-discoveryCongratulations to the #AAMAS2025 best paper, best demo, and distinguished dissertation award winners - Robohub
The 24th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2025), held from May 19-23 in Detroit, recognized outstanding contributions in the field with awards for best paper, best demo, and distinguished dissertation. The Best Paper Award went to the team behind "Soft Condorcet Optimization for Ranking of General Agents," led by Marc Lanctot and colleagues. Several other papers were finalists, covering topics such as commitments in BDI agents, curiosity-driven partner selection, reinforcement learning for vehicle-to-building charging, and drone delivery systems. The Best Student Paper Award was given to works on decentralized planning using probabilistic hyperproperties and large language models for virtual human gesture selection. In addition, the Blue Sky Ideas Track honored François Olivier and Zied Bouraoui for their neurosymbolic approach to embodied cognition, while the Best Demo Award recognized a project on serious games for ethical preference elicitation by Jayati Deshmukh and team. The Victor Lesser Distinguished Dissertation Award, which highlights originality, impact, and quality in autonomous agents research, was awarded to Jannik Peters for his thesis on proportionality in selecting committees, budgets, and clusters. Lily Xu was the runner-up for her dissertation on AI decision-making for planetary health under conditions of low-quality data. These awards underscore the innovative research advancing autonomous agents and multiagent systems.
robotautonomous-agentsmultiagent-systemsdronesreinforcement-learningenergy-storageAIAMD acqui-hires the employees behind Untether AI
energyAIsemiconductoracquisitionefficiencyroboticstechnologyAmazon launches new R&D group focused on agentic AI and robotics
robotAIroboticsAmazonR&Dwarehouse-automationagentic-AIUS turns recycled scrap into 3D-printed rocket parts with AI boost
robotmaterials3D-printingAIadditive-manufacturingrecycled-materialssustainable-manufacturingAmazon preps humanoid robots for faster doorstep delivery revolution
robotdeliveryautomationAIlogisticshumanoid-robotslast-mile-deliveryHugging Face says its new robotics model is so efficient it can run on a MacBook
roboticsAIHugging-FaceSmolVLAmachine-learningrobotics-modelgeneralist-agentsUK’s Humanoid teases new robot for retail and logistics revolution
robothumanoidautomationlogisticsretailAImodular-designGoogle bets big on TAE’s cost-effective nuclear fusion reactor
energynuclear-fusionclean-powerTAE-TechnologiesGoogleAIplasma-technologyMeta strikes 20-year nuclear power deal to fuel AI and save Illinois reactor
energynuclear-powerclean-energyAIdata-centerselectricity-demandrenewable-energyEye-opening device: Self-powered AI synapse mimics human vision, achieves 82% accuracy
energyAIoptoelectronicssolar-cellsvisual-recognitionlow-power-systemsautonomous-vehiclesAI sorts 1 million rock samples to find cement substitutes in waste
materialsAIcement-substituteseco-friendly-materialsconcrete-sustainabilitymachine-learningalternative-materialsThis Robot Can Use Chopsticks Better Than You!
robotroboticsautomationAItechnologyinnovationdexterityPony.ai partners with Xihu to deploy 1k robotaxis in Shenzhen - The Robot Report
robotrobotaxiautonomous-drivingmobilityAItransportationfleet-managementIndy Autonomous Challenge coming to California - The Robot Report
robotautonomous-vehiclesAIroboticsIndy-Autonomous-ChallengetechnologyinnovationIndustry experts share their outlook on the future of AMRs - The Robot Report
robotAMRautonomous-mobile-robotsroboticsAIwarehouse-automationmulti-vendor-solutionsRecapping Robotics Summit & Expo 2025
robothumanoidroboticsAIautomationindustrial-robotstechnologyShould We Be Afraid Of Driverless Vehicles On Our Roads? - CleanTechnica
robotautonomous-vehiclesdriverless-technologyTeslarobotaxisAItransportationElon Musk is lobbying lawmakers on driverless vehicle rules
robotautonomous-vehiclesTeslalegislationAIlobbyingCybercabsCircus SE acquires agentic AI company FullyAI - The Robot Report
robotAIautonomous-systemsfood-serviceintelligent-ecosystemdata-processingnutrition-technologyWhy Intempus thinks robots should have a human physiological state
robotroboticsAIemotional-intelligencehuman-robot-interactionIntempusmachine-learningRoboForce introduces Titan mobile manipulator, brings in $5M more in funding - The Robot Report
robotAImobile-manipulatorindustrial-automationroboticsfundingtechnologyTesla vs. the streets: China's real test for self-driving tech
robotself-drivingautonomous-vehiclesTeslaAIintelligent-drivingChinaARM Institute appoints Jorgen Pedersen as new CEO - The Robot Report
robotroboticsmanufacturingAIautomationworkforce-developmentARM-InstituteHyundai Motor Group & Incheon International Airport to Deliver Next-Level Convenience with AI-Powered EV Charging Robots - CleanTechnica
robotIoTenergyelectric-vehiclesAIsmart-airportfuture-mobilityHyundai deploys AI robots to charge EVs at Incheon airport
robotIoTEV-chargingAIsmart-technologyairport-innovationgreen-technologyAI Is Eating Data Center Power Demand—and It’s Only Getting Worse
energyAIdata-centerspower-demandgreenhouse-gas-emissionssustainabilityclimate-impactMbodi AI launches on Y Combinator, developing embodied AI for industrial robots - The Robot Report
robotAIautomationmanufacturingroboticstechnologyinnovationTesla’s Optimus robot takes out trash, vacuums, cleans like a pro
robotTeslaOptimusAIautomationhumanoid-robotreinforcement-learningPUR-1: First US nuclear reactor digital twin achieves 99% accuracy
energynucleardigital-twinAIremote-monitoringreactor-technologycarbon-free-electricitySimbe upgrades vision platform with AI-powered capabilities - The Robot Report
robotAIcomputer-visioninventory-managementretail-technologyautomationoperational-efficiencyOrbit 5.0 adds features to Boston Dynamics' Spot quadruped robot - The Robot Report
robotAIautomationinspectionsBoston-DynamicsSpottechnologyDuke's robot dog mimics human touch, sound to navigate forest terrain
robotAInavigationsensory-technologyquadruped-robotWildFusionroboticsAgibot’s humanoid readies for robot face-off with Kung Fu flair
robotAIhumanoidroboticsautomationmachine-learninginteractionWorld’s first AI nurse? Nurabot joins Taiwan hospitals to battle healthcare crisis
robotAIhealthcarenursingdigital-healthautomationTaiwanNVIDIA releases cloud-to-robot computing platforms for physical AI, humanoid development - The Robot Report
robothumanoidAINVIDIAroboticsautomationphysical-AIRealMan displays embodied robotics at Automate 2025
robotroboticsautomationAIhealthcareindustrial-manufacturinghuman-robot-collaborationAdvanced technologies for disaster response in China
robotUAVdisaster-responsesatellite-technologyAIemergency-managementChinaProposal to use AI to protect Ngoc Linh ginseng farms
IoTAIsmart-farmingagriculture-technologycrop-monitoringdata-managementremote-managementGV Asia CEO expects AI to transform the ride-hailing industry
AIElectric-VehiclesRide-hailingData-OptimizationSmart-TransportationUser-ExperienceSustainable-MobilityA string of 'rogue' robot incidents
robottechnologyautomationsafetyindustrial-robotsroboticsAIRoboBusiness Pitchfire competition opens call for robotics startups
robotroboticsstartupscompetitionAItechnologyinnovationElon Musk shows off Optimus dancing like Trump
robotTeslaOptimusAIhumanoid-robotautomationfuture-technologyUN discusses regulating AI weapons and 'killer robots'
robotAIautonomous-weaponsUNmilitary-technologyinternational-regulationsdefense-spendingAI-powered robots help tackle Europe’s growing e-waste problem
robotAIe-wasterecyclingautomationroboticstechnologyAI model enables controlling robots with verbal commands
robotAIMotionGlotmachine-learningroboticshuman-robot-interactionautomationAI-powered hair washing
robotAIautomationsmart-technologyhair-careChinainnovationAmazon offers peek at new human jobs in an AI bot world
robotAIautomationworkforcejob-trainingwarehouse-technologyhuman-robot-collaborationStandard Bots launches 30kg robot arm and U.S. production facility
robotautomationmanufacturingAIroboticscollaborative-robotsindustrial-robotsSafety and efficiency in robotics design
robotroboticsautomationAIindustrial-robotsforce-sensingAmazon-RoboticsElon Musk's 200,000-GPU supercomputer
energyGPUsupercomputerAITeslapower-consumptionenvironmental-impactTeam of lecturers applies AI to invent a battlefield robot
robotAIautonomous-systemsmilitary-technologybattlefield-roboticssensor-technologyunmanned-vehiclesAmazon’s Vulcan robot uses force sensing to stow items
robotautomationforce-sensingAIAmazon-Roboticsmaterial-handlingoperational-efficiencyUnitree H1 robot 'attacks' engineer
robotUnitree-H1AIroboticssafety-protocolshumanoid-robottechnologyUnitree H1 robot 'goes berserk' and attacks engineer
robotUnitree-H1AIroboticssafety-protocolshumanoid-robottechnologyGoogle Launches Ambitious Program To Train 100,000 Electrical Workers For The AI-Powered Future
energyAIworkforce-developmentclean-energyelectrical-traininginfrastructureinnovationTop 10 robotics developments of April 2025
roboticsautomationAIdelivery-robotsaerospace-manufacturingrobotic-systemsinnovationLG uses AI to improve air conditioner efficiency
energyAIsmart-sensorsenergy-efficiencycooling-systemsLG-ThinQDual-Inverter-CompressorBrain chip lets users edit videos and post to YouTube by thought
robotIoTNeuralinkbrain-computer-interfaceassistive-technologyAIALSFigure AI - US humanoid robot company suspected of 'overhyping'
robotAIautomationroboticsinvestmentSilicon-Valleymanufacturing Is the AI data center 'fever' slowing down?
energydata-centersAIMicrosoftAmazonelectricity-consumptioncapacity-managementShlomo Zilberstein wins the 2025 ACM/SIGAI Autonomous Agents Research Award
robotautonomous-agentsmulti-agent-systemsdecision-makingreinforcement-learningresearch-awardAIDuolingo launches 148 courses created with AI after sharing plans to replace contractors with AI
DuolingoAIlanguage-learningeducation-technologyautomationcoursescontractorsSupio, an AI-powered legal analysis platform, lands $60M
AIlegaltechstartup-fundingventure-capitalautomationlegal-analysistechnologyAI sales tax startup Kintsugi had doubled its valuation in 6 months
AIsales-taxstartuptax-complianceautomatione-commercefundingDeepSeek upgrades its AI model for math problem solving
AImath-problem-solvingDeepSeektechnology-upgradesmachine-learningartificial-intelligenceeducation-technologyStartups launch products to catch people using AI cheating app Cluely
AIcheatingstartupstechnologyeducationanti-cheatingsoftwareThe deadline to book your exhibit table for TechCrunch Sessions: AI is May 9
TechCrunchAIexhibit-tabledeadlinetechnology-eventstartupnetworkingFinal 6 days: Save big and bring a plus-one for 50% off to TechCrunch Sessions: AI
TechCrunchAIconferenceticket-salesearly-birdtechnologynetworkingAnthropic co-founder Jared Kaplan is coming to TechCrunch Sessions: AI
AnthropicJared-KaplanTechCrunch-SessionsAItechnology-conferenceartificial-intelligenceUC-BerkeleyGrouphug is a stealth-mode startup that plans to use AI inside WhatsApp groups
GrouphugAIWhatsAppstartuptechnologymessaginginnovation