RIEM News

Articles tagged with "machine-learning"

  • General Intuition lands $134M seed to teach agents spatial reasoning using video game clips

    General Intuition, a new AI research startup spun out from Medal—a platform for sharing video game clips—has raised $133.7 million in seed funding led by Khosla Ventures and General Catalyst. The company leverages Medal’s extensive dataset of 2 billion annual videos from 10 million monthly users to train AI agents capable of spatial-temporal reasoning, which involves understanding how objects move through space and time. This dataset is considered superior to alternatives like Twitch or YouTube due to its first-person gameplay perspective and the presence of highly selective, edge-case clips that enhance training quality. The startup’s AI models can interpret unseen environments and predict actions based solely on visual input, mimicking human player perspectives and controller inputs, making the technology transferable to real-world applications such as robotic arms, drones, and autonomous vehicles. General Intuition aims to develop general agents that interact with their surroundings, initially focusing on gaming and search-and-rescue drones. Unlike competitors who sell world models, General Intuition’s goal

    robot, AI-agents, spatial-reasoning, drones, autonomous-vehicles, machine-learning, gaming-AI
  • Startup Battlefield company SpotitEarly trained dogs and AI to sniff out common cancers

    SpotitEarly, an Israeli startup founded in 2020, is developing an innovative at-home cancer screening test that leverages trained dogs’ exceptional sense of smell combined with AI technology to detect early-stage cancers from human breath samples. The company employs 18 trained beagles that identify cancer-specific odors by sitting when they detect cancer particles. This canine detection is augmented by an AI platform that monitors the dogs’ behavior, breathing patterns, and heart rates to improve accuracy beyond human observation. A double-blind clinical study involving 1,400 participants demonstrated that SpotitEarly’s method can detect four common cancers—breast, colorectal, prostate, and lung—with 94% accuracy. SpotitEarly recently launched into the U.S. market with $20.3 million in funding and plans to expand its clinical trials, initially focusing on breast cancer before addressing the other cancers. The company aims to offer its multi-cancer screening kits through physicians’ networks starting next year, pricing the initial test at approximately $

    AI, healthcare-technology, cancer-detection, machine-learning, diagnostics, biotechnology, early-screening
  • MIT's high-strength aluminum alloy can withstand high temperatures

    Researchers at MIT have developed a novel printable aluminum alloy that is reportedly five times stronger than traditionally manufactured aluminum and can withstand high temperatures. Using a machine learning (ML)-based approach combined with simulations, the team evaluated only 40 possible material compositions—significantly fewer than the over one million combinations typically required—to identify an optimal mix of aluminum and other elements. This alloy exhibits a high volume fraction of small precipitates, which contribute to its enhanced strength, surpassing previous benchmarks including the wrought Al 7075 alloy. The new alloy, produced via 3D printing rather than conventional casting, benefits from rapid solidification that prevents precipitate growth, resulting in superior mechanical properties. After aging at 400 °C for eight hours, the alloy achieves a tensile strength of 395 MPa at room temperature, about 50% stronger than the best-known printable aluminum alloys. The researchers envision applications in lightweight, temperature-resistant components such as jet engine fan blades—traditionally made from heavier and more expensive

    materials, aluminum-alloy, 3D-printing, high-strength-materials, machine-learning, additive-manufacturing, lightweight-materials
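
    The ML-guided search described above (evaluating roughly 40 candidate compositions instead of sweeping over a million) can be illustrated with a generic surrogate-model loop. The sketch below is not the MIT team's workflow: `simulate_strength`, the four-element composition space, and the 8-per-round evaluation budget are all assumptions standing in for their physics simulations.

```python
# Minimal sketch of surrogate-guided composition screening (illustrative only, not
# the MIT team's workflow). `simulate_strength` is a hypothetical stand-in for an
# expensive physics simulation of precipitate formation and strength.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def simulate_strength(x):
    """Hypothetical 'expensive' evaluation of a composition x (fractions of 4 solutes)."""
    return float(300 + 400 * x[0] * x[1] - 200 * (x[2] - 0.3) ** 2 + 50 * x[3]
                 + rng.normal(0, 5))

# Candidate pool: random solute fractions over four alloying elements (balance = Al).
pool = rng.dirichlet(np.ones(4), size=20_000) * 0.10

# Seed with a few simulated points, then let the surrogate choose what to try next.
tried_x = [pool[i] for i in range(8)]
tried_y = [simulate_strength(x) for x in tried_x]
pool = pool[8:]
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)

for _ in range(4):                                   # 4 rounds x 8 picks ~= 40 simulations
    surrogate.fit(np.array(tried_x), tried_y)
    best = np.argsort(surrogate.predict(pool))[-8:]  # most promising untried compositions
    for i in best:
        tried_x.append(pool[i])
        tried_y.append(simulate_strength(pool[i]))
    pool = np.delete(pool, best, axis=0)

top = int(np.argmax(tried_y))
print(f"best simulated strength {tried_y[top]:.0f} MPa at solute fractions {np.round(tried_x[top], 3)}")
```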
  • New system helps drones recover fast from stealth cyber hijacks

    Researchers at Florida International University have developed SHIELD, a novel real-time defense system that enables drones to detect and recover from cyberattacks while still in flight. Unlike traditional defenses that primarily monitor navigation sensors vulnerable to manipulation (such as GPS spoofing), SHIELD continuously scans a drone’s entire control system—including hardware components like battery levels and processor activity—to identify unusual behavior indicative of an attack. Using machine learning models, SHIELD can recognize different attack patterns, detect cyber intrusions within 0.21 seconds, and initiate recovery procedures within 0.36 seconds, allowing the drone to complete its mission rather than terminating it as a fail-safe. This advancement addresses the growing security risks associated with the expanding use of drones across industries such as delivery, agriculture, infrastructure inspection, and disaster response. As regulatory bodies like the Federal Aviation Administration prepare to increase drone operations, SHIELD’s comprehensive approach provides a crucial safety layer by ensuring drones remain reliable and secure even under stealth cyber hijacks. The research team lik

    robot, drone-security, cybersecurity, IoT-security, machine-learning, real-time-defense, autonomous-systems
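
    The kind of whole-system monitoring described above can be sketched with an off-the-shelf anomaly detector trained on nominal flight telemetry. This is not the FIU SHIELD implementation; the telemetry features, the simulated "hijack" signature, and the choice of IsolationForest are illustrative assumptions.

```python
# Illustrative sketch of flight-telemetry anomaly detection in the spirit of SHIELD
# (not the FIU implementation). An IsolationForest is trained on nominal telemetry
# -- battery voltage, CPU load, gyro rates -- and flags windows that look anomalous.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

def telemetry(n, attack=False):
    """Simulated 10 Hz telemetry: [battery_v, cpu_load, gyro_x, gyro_y]."""
    base = np.column_stack([
        11.1 + 0.05 * rng.standard_normal(n),   # battery voltage (V)
        0.35 + 0.05 * rng.standard_normal(n),   # CPU utilisation
        rng.standard_normal((n, 2)) * 0.1,      # gyro rates (rad/s)
    ])
    if attack:                                   # hypothetical hijack: CPU spike + gyro bias
        base[:, 1] += 0.4
        base[:, 2] += 0.5
    return base

detector = IsolationForest(contamination=0.01, random_state=0).fit(telemetry(5000))

window = telemetry(10, attack=True)              # one second of compromised flight
flags = detector.predict(window)                 # -1 = anomalous, +1 = nominal
if (flags == -1).mean() > 0.5:
    print("anomaly detected -> trigger recovery controller")
```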
  • MIT team creates model to prevent plasma disruptions in tokamaks

    Scientists at MIT have developed a novel method to predict and manage plasma behavior during the rampdown process in tokamak nuclear reactors. Rampdown involves safely reducing the plasma current, which circulates at extremely high speeds and temperatures, to prevent instability that can damage the reactor’s interior. However, the rampdown itself can sometimes destabilize the plasma, causing costly damage. To address this, the MIT team combined physics-based plasma dynamic models with machine learning techniques, training their model on experimental data from the Swiss TCV tokamak. This hybrid approach allowed the model to accurately and quickly predict plasma evolution and potential instabilities during rampdown using relatively small datasets. The new model not only enhances prediction accuracy but also translates these predictions into actionable control instructions, or “trajectories,” that a tokamak’s control system can implement to maintain plasma stability. This capability was successfully tested on multiple TCV experimental runs, demonstrating safer plasma rampdowns and potentially improving the reliability and safety of future nuclear fusion reactors. The research,

    energy, nuclear-fusion, plasma-physics, machine-learning, tokamak, clean-energy, plasma-stability
  • Robot Talk Episode 127 – Robots exploring other planets, with Frances Zhu - Robohub

    In Robot Talk Episode 127, host Claire interviews Frances Zhu from the Colorado School of Mines about the development and application of intelligent robotic systems for space exploration. Frances Zhu, who holds advanced degrees in Mechanical and Aerospace Engineering including a Ph.D. from Cornell University, has a strong background in machine learning, dynamics, systems, and controls engineering. Her previous roles include NASA Space Technology Research Fellow and Assistant Research Professor at the University of Hawaii, where she focused on geophysics and planetology. Since 2025, Zhu has been an Assistant Professor at the Colorado School of Mines, contributing to both the Robotics and Space Resources programs. The episode highlights her expertise in designing autonomous robots capable of exploring other planets, emphasizing the integration of advanced AI and control systems to navigate and operate in challenging extraterrestrial environments. The podcast, Robot Talk, regularly covers topics related to robotics, artificial intelligence, and autonomous machines, providing insights into cutting-edge research and technology in these fields.

    robot, robotics, space-exploration, autonomous-systems, machine-learning, aerospace-engineering, intelligent-robots
  • Mars rovers serve as scientists’ eyes and ears from millions of miles away – here are the tools Perseverance used to spot a potential sign of ancient life - Robohub

    The article discusses a significant update from NASA’s Perseverance Mars rover mission, highlighting the investigation of a distinctive rock outcrop called Bright Angel near Jezero Crater. This outcrop features light-toned rocks with mineral nodules and multicolored, leopard print-like patterns. By integrating data from five scientific instruments aboard Perseverance, scientists concluded that these nodules likely formed through processes that could have involved microorganisms. While this does not constitute direct evidence of past life, it represents a compelling discovery that warrants further study by planetary scientists. The article also explains how scientists interact with rover data, using advanced sensors and instruments as extensions of their own senses to build mental models of the Martian environment. Perseverance’s toolkit includes robotic arms for cleaning and abrading rock surfaces, 19 cameras for detailed imaging—including infrared and magnified views—and spectrometers like SuperCam and SHERLOC that analyze light spectra to detect water-related minerals and organic molecules. Additionally, the RIMFAX radar instrument

    robot, Mars-rover, Perseverance, robotic-sensors, planetary-exploration, machine-learning, space-robotics
  • Rethinking how robots move: Light and AI drive precise motion in soft robotic arm - Robohub

    Researchers at Rice University have developed a novel soft robotic arm that can perform complex tasks such as navigating obstacles or hitting a ball, controlled remotely by laser beams without any onboard electronics or wiring. The arm is made from azobenzene liquid crystal elastomer, a polymer that responds to light by shrinking under blue laser illumination and relaxing in the dark, enabling rapid and reversible shape changes. This material’s fast relaxation time and responsiveness to safer, longer wavelengths of light allow real-time, reconfigurable control, a significant improvement over previous light-sensitive materials that required harmful UV light or slow reset times. The robotic system integrates a spatial light modulator to split a single laser into multiple adjustable beamlets, each targeting different parts of the arm to induce bending or contraction with high precision, akin to the flexible tentacles of an octopus. A neural network was trained to predict the necessary light patterns to achieve specific movements, simplifying the control of the arm and enabling virtually infinite degrees of freedom beyond traditional robots with fixed joints

    robotics, soft-robotics, smart-materials, AI-control, light-responsive-materials, machine-learning, azobenzene-elastomer
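
    The "train a neural network to predict the light pattern for a desired movement" idea above amounts to learning an inverse model. The sketch below uses a made-up linear forward model as a stand-in for the arm's photo-mechanics and a small MLP as the inverse; it is not the Rice team's model or data.

```python
# Toy sketch of learning an inverse "pose -> light pattern" map, in the spirit of the
# Rice arm (not their model or data). A made-up linear forward model stands in for
# the real light-to-bending physics; an MLP learns its inverse from sampled pairs.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
N_BEAMLETS = 8

# Hypothetical forward model: beamlet intensities -> 2-D tip displacement.
W = rng.standard_normal((2, N_BEAMLETS)) * 0.1
def forward(intensities):
    return W @ intensities                     # stand-in for the real photo-mechanics

# Generate training pairs (tip displacement -> intensities that produced it).
I_train = rng.uniform(0, 1, size=(5000, N_BEAMLETS))
tips = np.array([forward(i) for i in I_train])

inverse_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
inverse_net.fit(tips, I_train)                 # learn displacement -> light pattern

target = np.array([[0.05, -0.02]])             # desired tip displacement
pattern = np.clip(inverse_net.predict(target)[0], 0, 1)
print("commanded beamlet intensities:", np.round(pattern, 2))
print("predicted tip reached:", np.round(forward(pattern), 3), "vs target", target[0])
```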
  • NVIDIA launches Newton physics engine and GR00T AI at CoRL 2025 - The Robot Report

    NVIDIA has introduced several advancements to accelerate robotics research, unveiling the beta release of Newton, an open-source, GPU-accelerated physics engine managed by the Linux Foundation. Developed collaboratively with Google DeepMind and Disney Research, Newton is built on NVIDIA’s Warp and OpenUSD frameworks and is designed to simulate physical AI bodies. Alongside Newton, NVIDIA announced the latest version of the Isaac GR00T N1.6 robot foundation model, soon to be available on Hugging Face. This model integrates Cosmos Reason, an open, customizable vision language model (VLM) that enables robots to convert vague instructions into detailed plans by leveraging prior knowledge, common sense, and physics, thus enhancing robots’ ability to reason, adapt, and generalize across tasks. At the Conference on Robot Learning (CoRL) 2025 in Seoul, NVIDIA highlighted Cosmos Reason’s role in enabling robots to handle ambiguous or novel instructions through multi-step inference and AI reasoning, akin to how language models process text. This capability is

    robotics, AI, physics-engine, NVIDIA, robot-simulation, machine-learning, Isaac-GR00T
  • In a first, scientists observe short-range order in semiconductors

    Scientists from Lawrence Berkeley National Laboratory and George Washington University have, for the first time, directly observed short-range atomic order (SRO) in semiconductors, revealing hidden patterns in the arrangement of atoms like germanium, tin, and silicon inside microchips. This breakthrough was achieved by combining advanced 4D scanning transmission electron microscopy (4D-STEM) enhanced with energy filtering to improve contrast, and machine learning techniques including neural networks and large-scale atomic simulations. These methods allowed the team to detect and identify recurring atomic motifs that were previously undetectable due to weak signals and the complexity of atomic arrangements. The discovery of SRO is significant because it directly influences the band gap of semiconductors, a critical property that governs their electronic behavior. Understanding and controlling these atomic-scale patterns could enable the design of materials with tailored electronic properties, potentially revolutionizing technologies such as quantum computing, neuromorphic devices, and advanced optical sensors. While this research opens new avenues for atomic-scale material engineering, challenges

    materials, semiconductors, atomic-order, microscopy, AI, machine-learning, electronic-properties
  • Famed roboticist says humanoid robot bubble is doomed to burst

    Renowned roboticist Rodney Brooks, co-founder of iRobot and former MIT researcher, warns that the current enthusiasm around humanoid robots is overly optimistic and likely to collapse. He criticizes companies like Tesla and Figure for relying on teaching robots dexterity through videos of humans performing tasks, calling this method “pure fantasy thinking.” Brooks highlights the complexity of the human hand, which contains about 17,000 specialized touch receptors—a level of tactile sophistication that no robot currently approaches. Unlike advances in speech recognition and image processing, which benefited from decades of data collection, robotics lacks a comparable foundation of touch data. Brooks also raises safety concerns, noting that full-sized humanoid robots consume large amounts of energy to maintain balance, making falls dangerous. He explains that larger robots would pose exponentially greater risks due to the physics of energy impact. Predicting the future of robotics, Brooks believes that successful robots in 15 years will likely abandon the human form, instead featuring wheels, multiple arms, and specialized sensors tailored to

    robot, humanoid-robots, robotics, machine-learning, robot-safety, robot-dexterity, Rodney-Brooks
  • 'Semi-stable' state identified, boosts solar material's performance

    Researchers at Chalmers University of Technology in Sweden have identified a previously unknown low-temperature phase of formamidinium lead iodide, a key halide perovskite material known for its excellent optoelectronic properties but limited by instability. Using advanced computer simulations enhanced by machine learning, the team revealed that as the material cools, its molecules enter a semi-stable state. This discovery fills a critical gap in understanding the material’s structure and behavior, which is essential for engineering and optimizing halide perovskite-based solar cells and LEDs. The study highlights the challenges of modeling halide perovskites due to their complex nature, requiring powerful supercomputers and extended simulation times. By integrating machine learning, the researchers achieved simulations thousands of times longer and on a much larger atomic scale than before, bringing models closer to real-world conditions. Experimental validation was conducted in collaboration with the University of Birmingham, confirming the simulation results at temperatures as low as -200°C. These insights are expected

    energy, solar-materials, halide-perovskites, formamidinium-lead-iodide, machine-learning, computer-simulation, sustainable-energy
  • The Oakland Ballers let an AI manage the team. What could go wrong?

    The Oakland Ballers, an independent Pioneer League baseball team formed in response to the departure of the Oakland A’s, recently experimented with letting an AI manage their team during a game. Drawing on over a century of baseball data and analytics, including Ballers-specific information, the AI—developed by the company Distillery and based on OpenAI’s ChatGPT—was trained to emulate the strategic decisions of the team’s human manager, Aaron Miles. This experiment leveraged baseball’s inherently data-driven nature and the slower pace of play, which allows for analytical decision-making after each pitch. The AI’s management closely mirrored the choices that Miles would have made, including pitching changes, lineup construction, and pinch hitters, with only one override needed due to a player’s illness. This demonstrated that while AI can optimize decisions by recognizing patterns in data, human ingenuity and judgment remain essential. The Ballers’ willingness to pilot such technology reflects their unique position as a minor league team with major league aspirations and creative flexibility, often

    AI, sports-technology, data-analytics, machine-learning, baseball, artificial-intelligence, sports-management
  • US scientists bring quantum-level accuracy to molecular modeling

    Researchers at the University of Michigan have developed a novel method that significantly enhances the accuracy of molecular modeling by improving density functional theory (DFT), a widely used quantum chemistry simulation approach. DFT simplifies quantum calculations by focusing on electron densities rather than tracking every electron, enabling simulations of larger systems with hundreds of atoms. However, its accuracy has been limited by the need to approximate the exchange-correlation (XC) functional, which governs electron interactions. The University of Michigan team, supported by the US Department of Energy, used quantum many-body theory combined with machine learning to identify a more precise, universal XC functional that can apply broadly across molecules, metals, and semiconductors. This breakthrough addresses a longstanding challenge in quantum chemistry by moving closer to the exact form of the XC functional, which has remained unknown despite its critical role in determining chemical bonds, reactivity, and electrical behavior. The improved functional is material-agnostic, making it valuable for diverse applications such as battery development, drug design, and quantum

    materials-science, quantum-chemistry, density-functional-theory, molecular-modeling, quantum-many-body-problem, exchange-correlation-functional, machine-learning
  • Nvidia eyes $500M investment into self-driving tech startup Wayve

    Nvidia CEO Jensen Huang visited the UK with a commitment to invest £2 billion ($2.6 billion) to boost the country’s AI startup ecosystem, with a potential $500 million strategic investment targeted at Wayve, a UK-based self-driving technology startup. Wayve has signed a letter of intent with Nvidia to explore this investment as part of its next funding round, following Nvidia’s participation in Wayve’s $1.05 billion Series C round in May 2024. The investment is aligned with Nvidia’s broader AI startup funding initiative, which also involves venture capital firms like Accel and Balderton. Wayve is advancing its self-driving technology through a data-driven, self-learning approach that does not rely on high-definition maps, making it adaptable to existing vehicle sensors such as cameras and radar. Wayve’s autonomous driving platform, which has been developed in close collaboration with Nvidia since 2018, currently uses Nvidia GPUs in its Ford Mach E test vehicles. The company recently unveiled its third

    robot, autonomous-vehicles, self-driving-technology, Nvidia, AI, machine-learning, automotive-technology
  • Florida team builds chip to run AI tasks at 100-fold lower power cost

    Researchers at the University of Florida have developed a novel silicon photonic chip that uses light, rather than solely electricity, to perform convolution operations—key computations in AI tasks such as image and pattern recognition. By integrating optical components like laser light and microscopic Fresnel lenses directly onto the chip, the device can execute these operations much faster and with significantly lower energy consumption. Tests demonstrated that the prototype achieved about 98% accuracy in classifying handwritten digits, comparable to conventional electronic chips, while operating at near-zero energy for these computations. A notable innovation of this chip is its ability to process multiple data streams simultaneously through wavelength multiplexing, using lasers of different colors passing through the lenses concurrently. This parallel processing capability enhances efficiency and throughput. The project, involving collaboration with UCLA and George Washington University, aligns with trends in the industry where companies like NVIDIA are already incorporating optical components into AI hardware. The researchers anticipate that chip-based optical computing will become integral to future AI systems, potentially enabling more sustainable scaling of AI technologies

    energy, AI-chip, optical-computing, silicon-photonics, energy-efficiency, machine-learning, semiconductor-materials
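
    The convolution the chip performs with lenses has a standard numerical analogue: a lens pair implements a Fourier transform, so convolution becomes a pointwise product in the frequency domain. The sketch below shows only that identity on synthetic data; it is not a model of the Florida device, and wavelength multiplexing would correspond to running several such products in parallel.

```python
# Numerical analogue of optical convolution: FFT -> multiply -> inverse FFT, which is
# what a lens/modulator pair implements physically. Synthetic image and kernel only.
import numpy as np

rng = np.random.default_rng(3)
image = rng.random((28, 28))                       # e.g. a handwritten digit
kernel = np.zeros((28, 28))
kernel[:3, :3] = np.array([[1, 0, -1],             # simple edge-detection kernel
                           [2, 0, -2],
                           [1, 0, -1]])

# "Optical" path: convolution as a product in the Fourier domain.
conv_fft = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))

# Reference: direct circular convolution, to confirm the two paths agree.
direct = np.zeros_like(image)
for dy in range(3):
    for dx in range(3):
        direct += kernel[dy, dx] * np.roll(np.roll(image, dy, axis=0), dx, axis=1)

print("max difference between FFT and direct convolution:", np.abs(conv_fft - direct).max())
```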
  • #IJCAI2025 distinguished paper: Combining MORL with restraining bolts to learn normative behaviour - Robohub

    The article discusses advancements presented at IJCAI 2025 concerning the integration of Multi-Objective Reinforcement Learning (MORL) with restraining bolts to enable AI agents to learn normative behavior. Autonomous agents, powered by reinforcement learning (RL), are increasingly deployed in real-world applications such as self-driving cars and smart urban planning. While RL agents excel at optimizing behavior to maximize rewards, unconstrained optimization can lead to actions that, although efficient, may be unsafe or socially inappropriate. To address safety, formal methods like linear temporal logic (LTL) have been used to impose constraints ensuring agents act within defined safety parameters. However, safety constraints alone are insufficient when AI systems interact closely with humans, as normative behavior involves compliance with social, legal, and ethical norms that go beyond mere safety. Norms are expressed through deontic concepts—obligations, permissions, and prohibitions—that describe ideal or acceptable behavior rather than factual truths. This introduces complexity in reasoning, especially with contrary-to-duty

    robot, artificial-intelligence, reinforcement-learning, autonomous-agents, safe-AI, machine-learning, normative-behavior
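
    The "restraining bolt" idea above can be sketched as a separate monitor that tracks whether the agent's trace complies with a norm and emits a second reward signal alongside the task reward. The code below is a simplified stand-in for the paper's LTLf/MORL machinery, not its actual construction; the norm, the penalties, and the weighted-sum scalarisation are all assumptions.

```python
# Illustrative "restraining bolt" sketch: a small monitor tracks a norm ("avoid the
# restricted cell; if you enter it, leave within 2 steps") and emits a norm-compliance
# reward alongside the task reward. Simplified stand-in, not the paper's construction.
from dataclasses import dataclass

@dataclass
class NormMonitor:
    steps_in_violation: int = 0

    def step(self, in_restricted_cell: bool) -> float:
        """Return the norm-compliance reward for this transition."""
        if in_restricted_cell:
            self.steps_in_violation += 1
            # Contrary-to-duty flavour: being there is bad, lingering is worse.
            return -1.0 if self.steps_in_violation <= 2 else -5.0
        self.steps_in_violation = 0
        return 0.0

def combined_reward(task_r: float, norm_r: float, w_norm: float = 2.0) -> float:
    """Scalarise the two objectives (one simple MORL strategy: a weighted sum)."""
    return task_r + w_norm * norm_r

# Tiny usage example on a hand-written trace of (task_reward, in_restricted_cell):
monitor = NormMonitor()
trace = [(1.0, False), (1.0, True), (0.5, True), (0.5, True), (1.0, False)]
for t, (task_r, restricted) in enumerate(trace):
    r = combined_reward(task_r, monitor.step(restricted))
    print(f"step {t}: combined reward = {r:+.1f}")
```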
  • Why humanoid robots aren't advancing as fast as AI chatbots - The Robot Report

    The article discusses why humanoid robots are not advancing as rapidly as AI chatbots, despite recent breakthroughs in large language models (LLMs) that power conversational AI. While tech leaders like Elon Musk and Jensen Huang predict humanoid robots will soon perform complex tasks such as surgery or home assistance, robotics experts like UC Berkeley's Ken Goldberg caution that these expectations are overly optimistic. Goldberg highlights a fundamental challenge known as the “100,000-year data gap,” referring to the vast difference between the extensive textual data available to train AI chatbots and the limited physical interaction data available to train robots for real-world tasks. This gap significantly slows the development of robots’ dexterity and manipulation skills, which remain far behind their language processing capabilities. Goldberg emphasizes that the core difficulty lies in robots’ ability to perform precise physical tasks, such as picking up a wine glass or changing a light bulb—actions humans do effortlessly but robots struggle with due to the complexity of spatial perception and fine motor control. This issue, known

    robotics, humanoid-robots, AI-chatbots, machine-learning, automation, robotics-research, artificial-intelligence
  • Forget smartwatches, scientists teach WiFi to monitor heartbeats

    Researchers at the University of California, Santa Cruz have developed Pulse-Fi, a novel system that uses ordinary WiFi signals to monitor heart rate with clinical accuracy, eliminating the need for wearables or specialized medical devices. By leveraging inexpensive hardware like ESP32 chips and Raspberry Pi boards, Pulse-Fi applies machine learning algorithms to detect subtle variations in WiFi signals caused by heartbeats, filtering out noise from movement or environmental factors. Tested on 118 participants, the system achieved heart rate measurements with an error margin of just half a beat per minute after five seconds of processing, maintaining accuracy across different postures and activities. Pulse-Fi represents a significant advancement in non-intrusive health monitoring, potentially transforming everyday WiFi routers into health trackers capable of continuous heart rate monitoring. The technology works by analyzing how radio frequency waves are absorbed and scattered by the human body, with machine learning models trained on data collected alongside standard oximeters to recognize heartbeat-induced signal fluctuations. The system also demonstrated reliable performance up to three meters away

    IoT, WiFi, health-monitoring, Pulse-Fi, machine-learning, wearable-alternatives, wireless-technology
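
    The signal-processing skeleton behind this kind of RF heart-rate sensing is easy to sketch: band-pass an amplitude stream to the plausible cardiac band, then read off the dominant spectral peak. Pulse-Fi itself uses learned models on ESP32/Raspberry Pi data; everything below (sample rate, signal model, filter band) is a synthetic assumption.

```python
# Generic sketch of pulling a heart rate out of a slowly varying RF-amplitude signal.
# Not the Pulse-Fi model; synthetic input only.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 50.0                                   # assumed sample rate of the amplitude stream (Hz)
t = np.arange(0, 30, 1 / fs)                # 30 seconds of data
true_bpm = 72
signal = (0.02 * np.sin(2 * np.pi * (true_bpm / 60) * t)              # heartbeat ripple
          + 0.5 * np.sin(2 * np.pi * 0.25 * t)                        # breathing
          + 0.05 * np.random.default_rng(4).standard_normal(t.size))  # noise

# Band-pass 0.8-3.0 Hz (48-180 bpm) to isolate the cardiac component.
b, a = butter(4, [0.8, 3.0], btype="bandpass", fs=fs)
cardiac = filtfilt(b, a, signal)

spectrum = np.abs(np.fft.rfft(cardiac))
freqs = np.fft.rfftfreq(cardiac.size, d=1 / fs)
est_bpm = 60 * freqs[np.argmax(spectrum)]
print(f"estimated heart rate: {est_bpm:.1f} bpm (true {true_bpm} bpm)")
```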
  • Why Runway is eyeing the robotics industry for future revenue growth

    Runway, a New York-based company known for its AI-powered video and photo generation tools built over the past seven years, is now targeting the robotics industry as a new avenue for revenue growth. The company’s advanced world models—AI models that simulate realistic versions of the real world—have attracted interest from robotics and self-driving car companies seeking scalable and cost-effective training simulations. Runway’s co-founder and CTO, Anastasis Germanidis, explained that while the company initially focused on creative and entertainment applications, inbound requests from robotics firms revealed broader use cases for their technology beyond entertainment. Robotics companies are leveraging Runway’s models to create highly specific training simulations that are difficult, costly, and time-consuming to replicate in real-world environments. These simulations allow for controlled testing of different actions and scenarios without altering other environmental variables, providing valuable insights into outcomes that physical testing cannot easily achieve. Rather than developing separate models for robotics and autonomous vehicles, Runway plans to fine-tune its existing models and is

    robotics, AI-simulation, self-driving-cars, robot-training, visual-generating-tools, robotics-industry, machine-learning
  • Humanoid robot uses human data to master cartwheels and sprints

    Researchers at Cornell University have developed BeyondMimic, a novel framework enabling humanoid robots to perform complex, fluid human-like motions such as cartwheels, sprints, dance moves, and even Cristiano Ronaldo’s “Siu” celebration. Unlike traditional programming methods that require task-specific coding, BeyondMimic uses human motion capture data to train robots through a unified policy, allowing them to generalize and execute new tasks without prior training. This system leverages Markov Decision Processes and hyperparameters to seamlessly transition between diverse movements while preserving the style, timing, and expression of the original human actions. A key innovation in BeyondMimic is the use of loss-guided diffusion, which guides the robot’s real-time movements via differentiable cost functions, ensuring accuracy, flexibility, balance, and stability. The framework supports various real-world robotic controls such as path following, joystick operation, and obstacle avoidance, making it highly adaptable. The entire training pipeline is open-source and reproducible, providing a

    robotics, humanoid-robot, motion-tracking, machine-learning, robot-control, artificial-intelligence, robotics-research
  • US lab prescribes 'medicines' for EV batteries for longer-lasting power

    Scientists at Argonne National Laboratory have developed a machine learning model to identify chemical additives that enhance the performance and longevity of high-voltage lithium-ion batteries, specifically LNMO (lithium, nickel, manganese, oxygen) batteries. These batteries offer higher energy capacity and avoid cobalt, a material with supply chain challenges, but operate at nearly 5 volts—exceeding the stability limit of most electrolytes and causing decomposition issues. To mitigate this, electrolyte additives are used to form a stable interface on electrodes, reducing resistance and degradation. Traditionally, finding effective additives is a slow process, but the Argonne team trained their model on a small dataset of 28 additives and accurately predicted the performance of 125 new combinations, significantly accelerating discovery. The key innovation lies in the model’s ability to link the chemical structure of additives to their impact on battery metrics such as resistance and energy capacity, enabling rapid screening of candidates without extensive experimental trials. This approach demonstrates that a well-chosen, small dataset can train

    energy, batteries, machine-learning, electrolyte-additives, lithium-ion, battery-technology, Argonne-National-Laboratory
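
    The small-data screening workflow described above (train on 28 measured additives, rank 125 unmeasured candidates) has a generic shape that the sketch below illustrates. The molecular descriptors, the synthetic "capacity retention" target, and the gradient-boosting model are assumptions, not Argonne's features or model.

```python
# Shape-of-the-workflow sketch for small-data additive screening (not Argonne's model):
# featurise each additive with a few hypothetical descriptors, fit a regressor on a
# small measured set, then rank a larger pool of unmeasured candidates.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(5)

def descriptors(n):
    """Hypothetical per-additive features: mol. weight, dipole, HOMO-LUMO gap, F count."""
    return np.column_stack([
        rng.uniform(60, 300, n),       # molecular weight
        rng.uniform(0, 5, n),          # dipole moment
        rng.uniform(1, 8, n),          # HOMO-LUMO gap
        rng.integers(0, 7, n),         # fluorine count
    ])

X_measured = descriptors(28)                              # 28 additives with lab data
y_measured = (100 - 5 * X_measured[:, 1] + 2 * X_measured[:, 3]
              + rng.normal(0, 2, 28))                     # synthetic "capacity retention"

model = GradientBoostingRegressor(random_state=0).fit(X_measured, y_measured)

X_candidates = descriptors(125)                           # unmeasured combinations
ranking = np.argsort(model.predict(X_candidates))[::-1]
print("top 5 candidate additives to test next:", ranking[:5])
```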
  • AI Could Snuff Out Wildfires One Power Line at a Time - CleanTechnica

    The article discusses a new project led by the U.S. National Renewable Energy Laboratory (NREL) aimed at preventing wildfires caused by fallen or degraded power lines through the use of artificial intelligence (AI). Each year, a portion of wildfires in the U.S. are triggered by high-impedance (HiZ) faults—small electrical faults where energized conductors contact the ground, producing sparks that can ignite nearby flammable materials. These faults are difficult to detect due to their low energy output. To address this, NREL, funded by the U.S. Army Construction Engineering Research Laboratory, developed machine learning models based on artificial neural networks (ANNs) to identify these faults early and enable utilities to respond quickly, thereby reducing wildfire risks and power outages. NREL partnered with Eaton, a multinational power management company, to simulate various downed conductor scenarios under different environmental conditions, generating extensive datasets. These datasets were integrated into NREL’s PSCAD grid simulation platform to create a large variety of

    energy, artificial-intelligence, machine-learning, power-systems, wildfire-prevention, high-impedance-fault, grid-resilience
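
    A minimal version of "train an ANN on simulated downed-conductor scenarios" looks like the sketch below: hand-made features from synthetic current waveforms feed a small neural-network classifier. The real NREL models are trained on PSCAD/Eaton simulation data; the waveform model, features, and network size here are all assumptions.

```python
# Generic sketch of a HiZ-fault classifier on synthetic waveforms (not NREL's models).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
fs, f0, n_cycles = 3840, 60, 10
t = np.arange(n_cycles * fs // f0) / fs

def waveform(hiz_fault: bool):
    i = 100 * np.sin(2 * np.pi * f0 * t)
    if hiz_fault:                              # low-energy arcing: small erratic harmonics
        i += 3 * np.sin(2 * np.pi * 3 * f0 * t + rng.uniform(0, np.pi))
        i += 2 * rng.standard_normal(t.size)
    return i

def features(i):
    spec = np.abs(np.fft.rfft(i))
    fund = spec[n_cycles]                      # bin of the 60 Hz fundamental
    third = spec[3 * n_cycles]                 # bin of the 3rd harmonic
    return [np.sqrt(np.mean(i ** 2)), third / fund, np.std(np.diff(i))]

X = np.array([features(waveform(hiz)) for hiz in ([True] * 300 + [False] * 300)])
y = np.array([1] * 300 + [0] * 300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy on synthetic scenarios: {clf.score(X_te, y_te):.2f}")
```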
  • MIT roboticists debate the future of robotics, data, and computing - The Robot Report

    At the IEEE International Conference on Robotics and Automation (ICRA), leading roboticists debated the future direction of robotics, focusing on whether advances will be driven primarily by code-based models or data-driven approaches. The panel, moderated by Ken Goldberg of UC Berkeley and featuring experts such as Daniela Rus, Russ Tedrake, Leslie Kaelbling, and others, highlighted a growing divide in the field. Rus and Tedrake strongly advocated for data-centric methods, emphasizing that real-world robotics requires machines to learn from extensive, multimodal datasets capturing human actions and environmental variability. They argued that traditional physics-based models work well in controlled settings but fail to generalize to unpredictable, human-centered tasks. Rus’s team at MIT’s CSAIL is pioneering this approach by collecting detailed sensor data on everyday human activities like cooking, capturing nuances such as gaze and force interactions to train AI systems that enable robots to generalize and adapt. Tedrake illustrated how scaling data enables robots to develop "common sense" for dexter

    robotics, artificial-intelligence, machine-learning, robotics-research, data-driven-robotics, human-robot-interaction, robotic-automation
  • How Elon Musk’s humanoid dream clashes with 100,000-year data reality

    The article discusses the significant challenges facing Elon Musk’s vision of humanoid robots, emphasizing insights from UC Berkeley roboticist Ken Goldberg. Despite advances in large language models (LLMs) trained on vast internet text, robotics lags far behind due to a massive "100,000-year data gap" in the kind of rich, embodied data required for robots to achieve human-like dexterity and reliability. Simple human tasks such as picking up a glass or changing a light bulb involve complex perception and manipulation skills that robots currently cannot replicate. Attempts to use online videos or simulations to train robots fall short because these sources lack detailed 3D motion and force data essential for fine motor skills. Teleoperation generates data but only at a linear, slow rate compared to the exponential data fueling language models. Goldberg highlights a debate in robotics between relying solely on massive data collection versus traditional engineering approaches grounded in physics and explicit world modeling. He advocates a pragmatic middle ground: deploying robots with limited but reliable capabilities to collect real-world

    robotics, humanoid-robots, machine-learning, data-gap, automation, robotics-engineering, artificial-intelligence
  • Boston Dynamics’ robot dog nails daring backflips in new video

    Boston Dynamics has showcased its robot dog, Spot, performing consistent backflips in a new video, highlighting the robot’s advanced agility and refined design. While these gymnastic feats are unlikely to be part of Spot’s routine tasks, they serve a critical engineering purpose: pushing the robot to its physical limits to identify and address potential balance failures. This helps improve Spot’s ability to recover quickly from slips or trips, especially when carrying heavy payloads in industrial settings, thereby enhancing its reliability and durability. The development of Spot’s backflip capability involved reinforcement learning techniques, where the robot was trained in simulations to optimize its movements by receiving rewards for successful actions, akin to training a dog with treats. This iterative process of simulation and real-world testing allows engineers to fine-tune Spot’s behavior and ensure robust performance. Beyond technological advancements, Spot’s agility has also been demonstrated in entertainment contexts, such as performing dance routines on America’s Got Talent, showcasing its versatility. Looking forward, Spot’s ongoing evolution through

    robot, robotics, Boston-Dynamics, robot-dog, reinforcement-learning, machine-learning, quadruped-robot
  • Humanoids, robot dogs master unseen terrains with attention mapping

    Researchers at ETH Zurich have developed an advanced control system for legged robots, including the quadrupedal ANYmal-D and humanoid Fourier GR-1, enabling them to navigate complex and previously unseen terrains. This system employs a machine learning technique called attention-based map encoding, trained via reinforcement learning, which allows the robot to focus selectively on the most critical areas of a terrain map rather than processing the entire map uniformly. This focused attention helps the robots identify safe footholds even in challenging environments, improving robustness and generalization across varied terrains. The system demonstrated successful real-time locomotion at speeds up to 2 meters per second, with notably low power consumption relative to the robot’s motors. While the current approach is limited to 2.5D height-map locomotion and cannot yet handle overhanging 3D obstacles such as tree branches, the researchers anticipate extending the method to full 3D environments and more complex loco-manipulation tasks like opening doors or climbing. The attention mechanism also provides

    robot, humanoid-robots, quadrupedal-robots, machine-learning, reinforcement-learning, attention-mapping, locomotion-control
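
    The core mechanism described above, attending over a local height map so the policy focuses on the patches that matter for footholds, can be sketched as a small PyTorch module. The patch size, embedding width, and single learned query below are illustrative choices, not the ETH Zurich architecture or its hyperparameters.

```python
# PyTorch sketch of attention-based terrain-map encoding (illustrative only).
import torch
import torch.nn as nn

class AttentionMapEncoder(nn.Module):
    def __init__(self, patch: int = 8, dim: int = 64):
        super().__init__()
        self.patch = patch
        self.embed = nn.Linear(patch * patch, dim)          # patch of heights -> embedding
        self.query = nn.Parameter(torch.randn(1, 1, dim))   # learned "where to look" query
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, height_map: torch.Tensor) -> torch.Tensor:
        # height_map: (B, H, W) local 2.5D elevation grid around the robot
        B, H, W = height_map.shape
        p = self.patch
        patches = (height_map
                   .unfold(1, p, p).unfold(2, p, p)         # (B, H/p, W/p, p, p)
                   .reshape(B, -1, p * p))                  # (B, num_patches, p*p)
        tokens = self.embed(patches)                        # (B, num_patches, dim)
        query = self.query.expand(B, -1, -1)
        terrain_code, attn_weights = self.attn(query, tokens, tokens)
        return terrain_code.squeeze(1)                      # (B, dim) fed to the policy

encoder = AttentionMapEncoder()
local_map = torch.randn(2, 32, 32)                          # batch of 32x32 height maps
print(encoder(local_map).shape)                             # torch.Size([2, 64])
```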
  • Wearable robot helps ALS patients regain daily function

    The article discusses a wearable robotic device developed by Harvard bioengineers to assist individuals with movement impairments caused by neurodegenerative diseases like ALS or stroke. The device, designed as a sensor-loaded vest with an inflatable balloon under the arm, provides mechanical assistance to weak limbs, helping users perform daily tasks such as eating, brushing teeth, or combing hair. A key advancement in the latest version is the integration of a machine learning model that personalizes assistance by learning the user’s specific intended movements through motion and pressure sensors. This personalized approach addresses previous challenges where users struggled to control the robot’s movements due to insufficient residual strength. The research, led by Conor Walsh at Harvard’s John A. Paulson School of Engineering and Applied Sciences in collaboration with clinicians from Massachusetts General Hospital and Harvard Medical School, emphasizes a multidisciplinary approach involving both patient and clinician input from the outset. ALS patient Kate Nycz, diagnosed in 2018, has actively contributed to the device’s development through data and user testing

    wearable-robot, assistive-technology, ALS, machine-learning, personalized-robotics, neurorehabilitation, mobility-aid
  • Primech launches upgraded bathroom cleaning robot

    Primech AI has launched the next-generation HYTRON bathroom cleaning robot, designed to autonomously clean bathrooms in challenging environments such as airports, hospitals, hotels, shopping malls, and office buildings. The robot completed its first commercial trials in September 2024, receiving positive customer feedback. Powered by the NVIDIA Jetson Orin Super System-on-Module, the HYTRON offers energy-efficient, real-time data processing and intelligent navigation. It integrates advanced 3D-cleaning capabilities and electrolyzed water technology to deliver consistent, high-quality sanitation while significantly reducing manual labor. The upgraded HYTRON features several technological improvements, including an enhanced AI-powered navigation system for better pathfinding and obstacle detection, stronger cleaning mechanisms for deeper sanitation, refined machine learning algorithms to optimize cleaning patterns, and an improved user interface for easier operational management. Primech executives emphasize that this model represents a major step forward in autonomous cleaning technology, combining functional innovation with striking design to set new industry standards in facility management and robotics.

    robot, autonomous-cleaning, AI-powered-navigation, service-robot, energy-efficiency, machine-learning, smart-facility-management
  • EV batteries could offer longer lifespan, higher safety with new tech

    Researchers at Uppsala University have developed an AI-driven model that significantly enhances the accuracy and robustness of electric vehicle (EV) battery health and lifetime predictions, improving these metrics by up to 65% and 69%, respectively. The model leverages a machine learning framework built on a digital twin approach, which integrates key design parameters with real-world battery behaviors under various fast charging and discharge conditions typical of Nordic climates. This framework enables rapid health assessments within seconds by inferring six critical design parameters from short charging segments, offering a detailed understanding of the chemical processes inside lithium-ion batteries (LiBs) and their aging mechanisms. The study, conducted in collaboration with Aalborg University and published in the journal Energy and Environmental Science, addresses the persistent challenge of EV battery degradation that limits battery lifespan and slows the electrification of transport. By moving beyond treating batteries as “black boxes” and instead modeling their internal chemical reactions, the new approach allows for better battery management and control systems that can extend battery life and improve

    energy, electric-vehicles, battery-technology, AI-in-energy, battery-lifespan, machine-learning, battery-management-systems
  • YC-backed Oway raises $4M to build a decentralized ‘Uber for freight’

    Oway, a San Francisco-based startup founded in 2023 and backed by Y Combinator and General Catalyst, has raised $4 million in seed funding to develop a decentralized freight platform akin to “Uber for freight.” The company aims to tackle inefficiencies in the U.S. trucking industry, where many trucks run with significant empty trailer space, representing a multi-billion dollar opportunity. Oway uses a combination of machine learning and automation to match cargo with available trailer space on long-haul routes, significantly reducing shipping costs. For example, Oway claims it can cut the cost of moving a sub-2,000-pound pallet from Los Angeles to Dallas from about $220 to as low as $60. Central to Oway’s approach are electronic logging devices (ELDs), government-mandated devices installed on trucks to monitor driving hours and locations in real-time. These devices enable Oway to identify trucks with empty space on routes already planned, allowing shippers to place cargo more efficiently and cheaply

    IoT, logistics-technology, electronic-logging-devices, machine-learning, freight-automation, transportation-efficiency, supply-chain-optimization
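
    The matching problem at the heart of this model (place pallets into spare trailer capacity on lanes that are already being driven) can be illustrated with a toy greedy assignment. Oway's actual platform and pricing logic are not public; the data structures and heuristic below are assumptions for illustration.

```python
# Toy greedy matcher: assign pallets to trucks with spare capacity on the same lane.
from dataclasses import dataclass, field

@dataclass
class Truck:
    lane: tuple             # (origin, destination) already planned
    spare_lbs: int          # unused trailer capacity
    loads: list = field(default_factory=list)

@dataclass
class Pallet:
    lane: tuple
    weight_lbs: int
    quoted_price: float

def match(pallets, trucks):
    assignments = []
    for pallet in sorted(pallets, key=lambda p: -p.weight_lbs):   # hardest-to-place first
        candidates = [t for t in trucks
                      if t.lane == pallet.lane and t.spare_lbs >= pallet.weight_lbs]
        if candidates:
            truck = max(candidates, key=lambda t: t.spare_lbs)    # roomiest feasible truck
            truck.spare_lbs -= pallet.weight_lbs
            truck.loads.append(pallet)
            assignments.append((pallet, truck))
    return assignments

trucks = [Truck(("LA", "Dallas"), spare_lbs=6000), Truck(("LA", "Dallas"), spare_lbs=1500)]
pallets = [Pallet(("LA", "Dallas"), 1800, 60.0), Pallet(("LA", "Dallas"), 1200, 55.0)]
for pallet, truck in match(pallets, trucks):
    print(f"{pallet.weight_lbs} lb pallet placed; truck still has {truck.spare_lbs} lb free")
```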
  • Soft robot jacket offers support for upper-limb disabilities

    Researchers at Harvard John A. Paulson School of Engineering and Applied Sciences, in collaboration with Massachusetts General Hospital and Harvard Medical School, have developed a soft, wearable robotic jacket designed to assist individuals with upper-limb impairments caused by conditions such as stroke and ALS. This device uses a combination of machine learning and a physics-based hysteresis model to personalize movement assistance by accurately detecting the user’s motion intentions through sensors. The integrated real-time controller adjusts the level of support based on the user’s specific movements and kinematic state, enhancing control transparency and practical usability in daily tasks like eating and drinking. In trials involving stroke and ALS patients, the robotic jacket demonstrated a 94.2% accuracy in identifying subtle shoulder movements and reduced the force needed to lower the arm by nearly one-third compared to previous models. It also improved movement quality by increasing range of motion in the shoulder, elbow, and wrist, reducing compensatory trunk movements by up to 25.4%, and enhancing hand-path efficiency by up

    soft-robotics, wearable-robots, upper-limb-support, assistive-technology, machine-learning, rehabilitation-robotics, human-robot-interaction
  • Owl-inspired drones aim for agility in cities and efficiency at sea

    Researchers at the University of Surrey are developing owl-inspired fixed-wing drones that combine the endurance of traditional fixed-wing designs with the agility of rotary-wing drones. Their project, called ‘Learning2Fly,’ studies how birds of prey like owls navigate complex environments to enable drones to perch, pivot, and maneuver precisely through cluttered urban airspace or turbulent offshore wind conditions. By integrating experimental flight data with machine learning, the team aims to create drones that can predict and control their motion in real time, overcoming limitations of conventional aerodynamic simulations. The research involves real-world testing of lightweight drone prototypes in Surrey’s motion capture lab, where onboard sensors and high-speed cameras track three-dimensional flight behavior. This data trains machine learning models to anticipate drone responses to sudden air shifts and obstacles, improving reliability in unpredictable environments. Early results are promising, showing improved drone performance in complex conditions. The next phase will involve outdoor trials to validate adaptability to wind turbulence and moving obstacles, potentially enabling a new generation of drones capable of efficient,

    robot, drones, machine-learning, energy-efficiency, urban-delivery, offshore-inspection, fixed-wing-aircraft
  • Boston Dynamics and TRI use large behavior models to train Atlas humanoid - The Robot Report

    Boston Dynamics, in collaboration with Toyota Research Institute (TRI), is advancing the development of large behavior models (LBMs) to enhance the capabilities of its Atlas humanoid robot. Recognizing that humanoid robots must competently perform a wide range of tasks—from manipulating delicate objects to handling heavy items while maintaining balance and avoiding obstacles—Boston Dynamics is focusing on creating AI generalist robots. Their approach involves training end-to-end, language-conditioned policies that enable Atlas to execute complex, long-horizon manipulation tasks by leveraging its full-body mobility, including precise foot placement, crouching, and center-of-mass shifts. The development process involves four key steps: collecting embodied behavior data via teleoperation on both real hardware and simulations; processing and annotating this data for machine learning; training neural network policies across diverse tasks; and evaluating performance to guide further improvements. To maximize task coverage, Boston Dynamics employs a teleoperation system combining Atlas’ model predictive controller with a custom VR interface, enabling the robot to perform tasks

    robotics, humanoid-robots, Boston-Dynamics, AI-in-robotics, machine-learning, robot-manipulation, automation
  • Interview with Haimin Hu: Game-theoretic integration of safety, interaction and learning for human-centered autonomy - Robohub

    In this interview, Haimin Hu discusses his PhD research at Princeton Safe Robotics Lab, which centers on the algorithmic foundations of human-centered autonomy. His work integrates dynamic game theory, machine learning, and safety-critical control to develop autonomous systems—such as self-driving cars, drones, and quadrupedal robots—that are safe, reliable, and adaptable in human-populated environments. A key innovation is a unified game-theoretic framework that enables robots to plan motion by considering both physical and informational states, allowing them to interact safely with humans, adapt to their preferences, and even assist in skill refinement. His contributions span trustworthy human-robot interaction through real-time learning to reduce uncertainty, verifiable neural safety analysis for complex robotic systems, and scalable game-theoretic planning under uncertainty. Hu highlights the challenge of defining safety in human-robot interaction, emphasizing that statistical safety metrics alone are insufficient for trustworthy deployment. He argues for robust safety guarantees comparable to those in critical infrastructure, combined with runtime learning

    robot, human-robot-interaction, autonomous-systems, safety-critical-control, game-theory, machine-learning, autonomous-vehicles
  • Schrödinger’s cat video made with 2,024 atoms in quantum breakthrough

    A team of physicists from the University of Science and Technology of China has created what is described as the "world’s smallest cat video," depicting Schrödinger’s cat thought experiment using just 2,024 rubidium atoms. This quantum-level visualization uses optical tweezers—focused laser beams—to precisely manipulate individual atoms within a 230-micron-wide array. Machine learning algorithms enable real-time calculations that direct the lasers to rearrange all atoms simultaneously in just 60 milliseconds, a significant improvement over previous methods that moved atoms one by one. The glowing atoms form images representing key moments of the Schrödinger’s cat paradox, illustrating the concept of superposition where a particle exists in multiple states simultaneously. This breakthrough addresses a major bottleneck in neutral-atom quantum computing by enabling rapid, defect-free assembly of large atom arrays with high accuracy—reported as 99.97% for single-qubit operations and 99.5% for two-qubit operations. The technique is highly scalable, maintaining

    materials, quantum-computing, machine-learning, optical-tweezers, rubidium-atoms, AI, quantum-technology
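
    The planning step underneath parallel atom rearrangement, deciding which loaded atom goes to which target site so total tweezer travel is small, can be posed as an assignment problem. The sketch below uses SciPy's Hungarian solver on synthetic coordinates; the USTC system pairs such planning with real-time machine learning, which is not reproduced here.

```python
# Assignment-problem sketch of tweezer rearrangement planning (synthetic coordinates).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(7)

loaded = rng.uniform(0, 230, size=(300, 2))      # randomly loaded atoms (um coordinates)
grid = np.linspace(40, 190, 15)
targets = np.array([(x, y) for x in grid for y in grid])   # 15x15 defect-free array

cost = cdist(targets, loaded)                    # travel distance, target site <- atom
rows, cols = linear_sum_assignment(cost)         # optimal one-to-one assignment
moves = [(loaded[c], targets[r]) for r, c in zip(rows, cols)]

print(f"assigned {len(moves)} atoms; mean tweezer travel = {cost[rows, cols].mean():.1f} um")
```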
  • Ultra-fast Airy beams keep network flowing past walls and obstacles

    Researchers at Princeton University have developed a novel wireless communication system that uses ultra-fast Airy beams—curved transmission paths—to navigate around indoor obstacles and maintain uninterrupted high-speed data flow. This innovation addresses a key limitation of sub-terahertz frequency signals, which, while capable of extremely high data rates needed for applications like virtual reality and autonomous vehicles, are easily blocked by walls, furniture, or people. By combining physics-based beam shaping with machine learning, the team trained a neural network to select and adapt the optimal Airy beam in real time, allowing signals to bend around obstacles rather than relying on reflection. To enable this adaptive capability, the researchers created a simulator that models countless indoor scenarios, allowing the neural network to learn effective beam configurations without exhaustive physical testing. This approach leverages physical principles to efficiently train the system, which then rapidly adjusts to dynamic environments, maintaining strong connections even in cluttered spaces. Experimental tests mimicking real-world indoor conditions demonstrated the system’s potential, marking a significant step toward

    IoT, wireless-communication, neural-networks, sub-terahertz, Airy-beams, machine-learning, indoor-networking
  • US lab maps 76,000 lightning pulses to reveal storm power secrets

    Researchers at Los Alamos National Laboratory have compiled the largest-ever dataset of high-frequency lightning radio signals, analyzing over 76,000 trans-ionospheric pulse pairs (TIPPs) to understand how lightning energy radiates depending on its altitude within clouds. TIPPs, the most powerful natural radio signals generated by lightning, were detected using a specialized radio frequency sensor and matched with satellite observations from a geostationary orbit. The study revealed that the relative strength of the two pulses in a TIPP—one direct and one Earth-reflected—is influenced by the lightning’s altitude and its angle relative to the satellite, solving a longstanding question about why the second pulse is sometimes stronger. This research provides new insights into compact intracloud discharges, a fast and short-lived form of lightning, and offers a method to more accurately measure cloud convection heights. Such measurements could improve storm monitoring by indicating rapid changes in storm dynamics. The extensive TIPP database is expected to enhance the accuracy of data from the Global Lightning

    energy, lightning, radio-frequency, machine-learning, satellite-technology, atmospheric-science, storm-research
  • AI helps US fusion lab predict ignition outcomes with 70% accuracy

    Scientists at Lawrence Livermore National Laboratory (LLNL) have developed an AI model that predicts the outcome of inertial confinement nuclear fusion experiments with over 70% accuracy, outperforming traditional supercomputing methods. The deep learning model was trained on a combination of previously collected experimental data, physics simulations, and expert knowledge, enabling it to capture complex parameters and replicate real experiment imperfections. When tested on the National Ignition Facility’s (NIF) 2022 fusion experiment, the AI correctly predicted a 74% probability of a positive ignition outcome, demonstrating its potential to optimize experimental designs before physical trials. Nuclear fusion, which combines light atomic nuclei to release energy, promises a cleaner and more efficient energy source than current nuclear fission plants, producing significantly more energy per kilogram of fuel without radioactive byproducts. The NIF uses powerful lasers to induce fusion in tiny fuel capsules, but due to the limited number of ignition attempts possible annually, optimizing each experiment is critical. The AI model’s ability

    energy, nuclear-fusion, artificial-intelligence, machine-learning, Lawrence-Livermore-National-Laboratory, National-Ignition-Facility, clean-energy
  • How to train generalist robots with NVIDIA's research workflows and foundation models - The Robot Report

    NVIDIA researchers are advancing scalable robot training by leveraging generative AI, world foundation models (WFMs), and synthetic data generation workflows to overcome the traditional challenges of collecting and labeling large datasets for each new robotic task or environment. Central to this effort is the use of WFMs like NVIDIA Cosmos, which are trained on millions of hours of real-world data to predict future states and generate video sequences from single images. This capability enables rapid, high-fidelity synthetic data generation, significantly accelerating robot learning and reducing development time from months to hours. Key components of NVIDIA’s approach include DreamGen, a synthetic data pipeline that creates diverse and realistic robot trajectory data with minimal human input, and GR00T models that facilitate generalist skill learning across varied tasks and embodiments. The DreamGen pipeline involves four main steps: post-training a world foundation model (e.g., Cosmos-Predict2) on a small set of real demonstrations, generating synthetic photorealistic robot videos from image and language prompts, extracting pseudo-actions

    robotics, artificial-intelligence, synthetic-data-generation, NVIDIA-Isaac, foundation-models, robot-training, machine-learning
  • Figure humanoid robot uses Helix AI brain to fold laundry smoothly

    Figure’s humanoid robot, powered by the Helix AI brain, demonstrates advanced capabilities in folding laundry with human-like smoothness and adaptability. Helix is a Vision-Language-Action (VLA) model that integrates perception, language understanding, and learned control to enable robots to follow natural language commands and perform complex tasks without heavy programming or repeated demonstrations. In a recent video, the robot carefully folds towels one by one, handling each item with steady, deliberate movements and stacking them neatly, showcasing its ability to manage unfamiliar household objects through intuitive spoken instructions. Helix’s architecture consists of two core components: System 1 (S1), a fast visuomotor policy that executes real-time actions, and System 2 (S2), a slower, pretrained vision-language model responsible for scene and language comprehension. This design allows the robot to balance quick, precise movements with complex reasoning. The AI model controls the robot’s upper body with high dexterity, enabling fluid wrist, torso, head, and

    robot, humanoid-robot, AI-robotics, machine-learning, automation, vision-language-action-model, robotics-control-systems
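
    The dual-rate structure described above (a slow vision-language module refreshing a latent goal while a fast visuomotor policy closes the control loop every tick) can be shown as a schematic loop. The stub models, loop rates, and latent size below are assumptions for illustration; this is not Figure's Helix.

```python
# Structural sketch of a two-system (slow planner / fast policy) control loop.
import numpy as np

class SlowVLM:                                   # "System 2" stand-in
    def plan(self, image, instruction):
        rng = np.random.default_rng(abs(hash(instruction)) % (2**32))
        return rng.standard_normal(32)           # latent "what to do" vector

class FastPolicy:                                # "System 1" stand-in
    def act(self, observation, latent):
        return 0.01 * latent[:7] + 0.001 * observation[:7]   # 7-DoF arm command

CONTROL_HZ, PLAN_HZ = 200, 8                     # assumed rates, for illustration only
vlm, policy = SlowVLM(), FastPolicy()
latent = None

for tick in range(1000):                         # 5 seconds of control
    observation = np.zeros(64)                   # stand-in for proprioception + vision feats
    if tick % (CONTROL_HZ // PLAN_HZ) == 0:      # slow loop: refresh the latent goal
        latent = vlm.plan(image=None, instruction="fold the towel and stack it")
    command = policy.act(observation, latent)    # fast loop: runs every tick
    # a real system would send `command` to the robot here
print("last command:", np.round(command, 4))
```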
  • AI-powered radar tech can spy on phone calls up to 10 feet away

    Researchers at Penn State have developed an AI-powered radar system capable of remotely eavesdropping on phone calls by detecting and decoding subtle vibrations from a cellphone’s earpiece. Using millimeter-wave radar—technology commonly found in self-driving cars and 5G networks—combined with a customized AI speech recognition model adapted from OpenAI’s Whisper, the system can capture and transcribe conversations from up to 10 feet away with approximately 60% accuracy over a vocabulary of up to 10,000 words. This represents a significant advancement from their 2022 work, which could only recognize a limited set of predefined words with higher accuracy. The researchers emphasize that while the transcription accuracy is imperfect, even partial recognition of keywords can pose serious privacy and security risks, especially when combined with contextual knowledge. They liken the system’s capabilities to lip reading, which also relies on partial information to infer conversations. The study highlights the potential misuse of such technology by malicious actors to spy on private phone calls remotely,

    AI, radar-technology, speech-recognition, privacy-risks, millimeter-wave-radar, machine-learning, IoT-security
  • Robot drummer nails complex songs with 90% human-like precision

    Researchers from SUPSI, IDSIA, and Politecnico di Milano have developed Robot Drummer, a humanoid robot capable of playing complex drum patterns with over 90% human-like rhythmic precision. Unlike typical humanoid robots designed for practical tasks, this project explores creative arts by enabling the robot to perform entire drum tracks across genres such as jazz, rock, and metal. The system translates music into a “rhythmic contact chain,” a sequence of precisely timed drum strikes, allowing the robot to learn human-like drumming techniques including stick switching, cross-arm hits, and movement optimization. The development began from an informal conversation and progressed through machine learning simulations on the G1 humanoid robot. Robot Drummer not only replicates timing but also plans upcoming strikes and dynamically reassigns drumsticks, showing promise for real-time adaptation and improvisation. The researchers aim to transition the system from simulation to physical hardware and envision robotic musicians joining live performances, potentially revolutionizing how rhythm and timing skills are taught

    robothumanoid-robotmachine-learningrobotic-musiciansrobotic-drummingartificial-intelligenceautomation
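
    A minimal sketch of the "rhythmic contact chain" idea from the entry above: the drum track becomes a time-ordered list of required strikes, and a toy timing reward of the kind a reinforcement-learning objective might use scores how close an executed hit lands to its target. Field names, the hand-assignment rule, and the reward shape are assumptions.

```python
# Hedged sketch of a "rhythmic contact chain": the drum track as a time-ordered
# sequence of required strikes, plus a toy timing reward. Field names and the
# reward shape are assumptions, not the published formulation.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Strike:
    time_s: float    # when the hit must land
    drum: str        # e.g. "snare", "hi-hat", "kick"
    hand: str        # "left" or "right" (reassignable during planning)

def contact_chain_from_events(events: List[Tuple[float, str]]) -> List[Strike]:
    """events: (time_s, drum) pairs; assign hands greedily by alternation."""
    chain, hands = [], ("right", "left")
    for i, (t, drum) in enumerate(sorted(events)):
        chain.append(Strike(time_s=t, drum=drum, hand=hands[i % 2]))
    return chain

def timing_reward(executed_time: float, target: Strike, tol: float = 0.05) -> float:
    """Reward peaks at 1.0 for a perfectly timed hit and decays with the error."""
    err = abs(executed_time - target.time_s)
    return max(0.0, 1.0 - err / tol)

# Example: a one-bar rock groove at 120 BPM (0.5 s per beat).
chain = contact_chain_from_events([(0.0, "kick"), (0.5, "snare"),
                                   (1.0, "kick"), (1.5, "snare")])
print(chain[1], timing_reward(1.52, chain[1]))
```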
  • New super-strong hydrogel can help advance biomedical and marine tech

    Researchers at Hokkaido University have developed a new super-strong hydrogel with record-breaking underwater adhesive strength, capable of supporting objects weighing up to 139 pounds (63 kg) on a postage-stamp-sized patch. This hydrogel, inspired by adhesive proteins found in diverse organisms such as archaea, bacteria, viruses, and eukaryotes, was designed by analyzing nearly 25,000 natural adhesive proteins using data mining and machine learning techniques. By replicating key amino acid sequences responsible for underwater adhesion, the team synthesized 180 unique polymer networks, with machine learning further optimizing the hydrogel’s adhesive properties. The resulting material exhibits instant, strong, and repeatable adhesion across various surfaces and water conditions, including fresh and saltwater. The hydrogel’s adhesive strength was demonstrated through practical tests, such as holding a rubber duck firmly on a seaside rock despite ocean tides and waves, and instantly sealing a leaking pipe with a patch that could be reapplied multiple times without loss of effectiveness. Its

    materialshydrogelunderwater-adhesionbiomedical-engineeringpolymer-networksmachine-learningbioinspired-materials
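
    The sketch below illustrates ML-guided screening of adhesive sequences in the spirit of the study above: candidate sequences are featurized by amino-acid composition and ranked by a regressor trained on measured adhesion strengths. The featurization and model choice are stand-ins, not the Hokkaido group's actual pipeline.

```python
# Illustrative sketch of ML-guided screening of adhesive sequences: featurize each
# candidate by amino-acid composition and rank candidates with a regressor trained
# on measured adhesion strengths. Featurization and model are assumptions.
from collections import Counter
import numpy as np
from sklearn.ensemble import RandomForestRegressor

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition_features(seq: str) -> np.ndarray:
    """Fraction of each amino acid in the sequence (a 20-dim feature vector)."""
    counts = Counter(seq)
    return np.array([counts.get(a, 0) / max(len(seq), 1) for a in AMINO_ACIDS])

def rank_candidates(train_seqs, train_strengths, candidate_seqs, top_k=5):
    """Train on measured sequences, then return the top-k candidates by predicted adhesion."""
    X = np.stack([composition_features(s) for s in train_seqs])
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, train_strengths)
    Xc = np.stack([composition_features(s) for s in candidate_seqs])
    preds = model.predict(Xc)
    order = np.argsort(preds)[::-1][:top_k]          # highest predicted adhesion first
    return [(candidate_seqs[i], float(preds[i])) for i in order]
```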
  • Japanese breakthrough could help make a 'fully wireless society'

    A research team at Chiba University, led by Professor Hiroo Sekiya, has developed a machine learning-based design method for wireless power transfer (WPT) systems that maintain stable output regardless of load changes, a property known as load-independent (LI) operation. Traditional WPT systems require precise component values based on idealized equations, but real-world factors like parasitic capacitance and manufacturing tolerances often cause unstable output voltage and loss of zero voltage switching (ZVS), which reduces efficiency. The new approach models the WPT circuit with differential equations incorporating real component behaviors and uses a genetic algorithm to optimize circuit parameters for stable voltage, high efficiency, and low harmonic distortion. Testing their method on a class-EF WPT system, the researchers achieved voltage fluctuations under 5% across varying loads, significantly better than the 18% typical in conventional systems. The system delivered 23 watts at 86.7% efficiency and maintained ZVS under different load conditions, including light loads, due to

    wireless-power-transfermachine-learningload-independent-operationenergy-efficiencyIoT-sensorselectromagnetic-fieldswireless-charging
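
    A hedged sketch of the optimization loop described above: a genetic algorithm evolves circuit parameters so that the simulated output voltage stays flat across a sweep of loads. The `simulate_vout` function is a toy placeholder for the team's differential-equation model of the class-EF converter, and the parameter bounds are invented.

```python
# Hedged sketch: evolve circuit parameters so the simulated output voltage stays
# flat across a range of loads. `simulate_vout` stands in for solving the
# converter's state equations; all bounds and numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
LOADS = np.linspace(5.0, 50.0, 8)            # ohms, example sweep
BOUNDS = np.array([[1e-7, 1e-5],             # L1 (H)
                   [1e-10, 1e-8],            # C1 (F)
                   [1e-10, 1e-8]])           # C2 (F)

def simulate_vout(params: np.ndarray, load: float) -> float:
    """Placeholder for integrating the converter's differential equations for one load."""
    L1, C1, C2 = params
    return 12.0 * (1.0 + 0.2 * np.tanh(load * C1 / C2 - L1 * 1e6))   # toy model

def fitness(params: np.ndarray) -> float:
    vouts = np.array([simulate_vout(params, r) for r in LOADS])
    return -float(np.ptp(vouts) / vouts.mean())      # penalize relative voltage spread

def genetic_search(pop_size=40, generations=60, mutation=0.1):
    pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(pop_size, len(BOUNDS)))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]        # keep the best half
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        children = children * (1 + mutation * rng.standard_normal(children.shape))
        pop = np.clip(np.vstack([parents, children]), BOUNDS[:, 0], BOUNDS[:, 1])
    return pop[np.argmax([fitness(p) for p in pop])]

best = genetic_search()
```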
  • OpenMind wants to be the Android operating system of humanoid robots

    OpenMind, a Silicon Valley startup founded in 2024 by Stanford professor Jan Liphardt, aims to become the "Android operating system" for humanoid robots by developing an advanced OS that enables more natural human-robot interactions. Unlike traditional robots designed for repetitive tasks, OpenMind focuses on creating machines that think and communicate more like humans, facilitating collaboration between humans and robots. Central to this vision is OpenMind’s new protocol called FABRIC, which allows robots to verify identities and share contextual information instantly, enabling rapid learning and seamless communication among machines. For example, robots could share language data with each other to interact with people in multiple languages without direct human teaching. OpenMind plans to launch its first fleet of 10 OM1-powered robotic dogs by September 2025 to gather real-world user feedback and iterate quickly on its technology. The company recently raised $20 million in funding led by Pantera Capital, with participation from Ribbit, Coinbase Ventures, and others, to accelerate product development and

    robothumanoid-robotsrobotic-operating-systemmachine-learningrobot-communicationAI-collaborationrobotic-dogs
  • Fundamental Research Labs nabs $30M+ to build AI agents across verticals

    Fundamental Research Labs, an applied AI research company formerly known as Altera, has raised $33 million in a Series A funding round led by Prosus, with participation from Stripe co-founder Patrick Collison. The company operates with a unique structure, maintaining multiple teams focused on diverse AI applications across different verticals, including gaming, prosumer apps, core research, and platform development. Founded by Dr. Robert Yang, a former MIT faculty member, the startup aims to be a “historical” company by eschewing typical startup norms and is already generating revenue by charging users for its AI agents after a seven-day trial. Among its products, Fundamental Research Labs offers a general-purpose consumer assistant and a spreadsheet-based AI agent called Shortcut, which has demonstrated impressive performance by outperforming first-year analysts from McKinsey and Goldman Sachs in head-to-head evaluations. The company has raised over $40 million to date and is focused on productivity applications as a primary value driver, with long-term ambitions to develop

    robotAI-agentsautomationproductivity-appsdigital-humansmachine-learningrobotics-development
  • Skild AI is giving robots a brain - The Robot Report

    Skild AI has introduced its vision for a generalized "Skild Brain," a versatile AI system designed to control a wide range of robots across different environments and tasks. This development represents a significant step in Physical AI, which integrates artificial intelligence with physical robotic systems capable of sensing, acting, and learning in real-world settings. Skild AI’s approach addresses Moravec’s paradox by enabling robots not only to perform traditionally "easy" tasks (like dancing or kung-fu) but also to tackle complex, everyday challenges such as climbing stairs under difficult conditions or assembling intricate items, tasks that require advanced vision and reasoning about physical interactions. Since closing a $300 million Series A funding round just over a year ago, Skild AI has expanded its team to over 25 employees and raised a total of $435 million. Physical AI is gaining momentum across the robotics industry, with other companies like Physical Intelligence pursuing similar goals of creating a universal robotic brain. This topic will be a major focus at RoboBusiness 202

    robotroboticsartificial-intelligencephysical-AIrobot-controlmachine-learningautomation
  • Google Trains Robot AI With Table Tennis

    Google’s DeepMind has developed a system where two robot arms continuously play table tennis against each other. This setup serves as a training ground for robot AI, allowing the machines to learn and improve their skills through constant practice and real-time interaction. The fast-paced and dynamic nature of table tennis challenges the robots to develop advanced motor control, precise timing, and adaptive strategies, which are crucial capabilities for more complex robotic tasks. By using table tennis as a training environment, DeepMind aims to advance the field of robotics by enhancing AI’s ability to handle unpredictable and rapidly changing scenarios. This approach highlights the potential for robots to acquire sophisticated physical skills through self-play and iterative learning, paving the way for more autonomous and versatile robots in various applications beyond gaming, such as manufacturing, healthcare, and service industries.

    robotartificial-intelligenceroboticsDeepMindrobot-armsmachine-learningautomation
  • China’s humanoid robot achieves human-like motion with 31 joints

    Chinese robotics company PND Robotics, in collaboration with Noitom Robotics and Inspire Robots, has introduced the Adam-U humanoid robot platform, which features 31 degrees of freedom (DOF) enabling human-like motion. The robot includes a 2-DOF head, 6-DOF dexterous hands, a 3-DOF waist with a braking system for safety, and a binocular vision system that mimics human sight. Standing at an adjustable height of 1.35 to 1.77 meters and weighing 61 kilograms, Adam-U cannot walk, as it uses a stationary platform instead of legs. It is designed for precise, flexible operation in dynamic environments and is particularly suited for reinforcement and imitation learning, making it a valuable tool for AI researchers, robotics engineers, and academic institutions. The Adam-U platform integrates hardware and software into a comprehensive ecosystem, including Noitom’s PNLink full-body wired inertial motion capture suit and Inspire Robots’ RH56E2 tactile dexterous


    roboticshumanoid-robotmotion-captureartificial-intelligencemachine-learningreinforcement-learningdata-acquisition
  • MIT vision system teaches robots to understand their bodies

    MIT researchers at CSAIL have developed a novel robotic control system called Neural Jacobian Fields (NJF) that enables robots to learn how their bodies move in response to motor commands purely through visual observation, without relying on embedded sensors or hand-coded models. Using a single camera and random exploratory movements, NJF allows robots—ranging from soft robotic hands to rigid arms and rotating platforms—to autonomously build an internal model of their 3D geometry and control sensitivities. This approach mimics how humans learn to control their limbs by observing and adapting to their own movements, shifting robotics from traditional programming toward teaching robots through experience. NJF’s key innovation lies in decoupling robot control from hardware constraints, enabling designers to create soft, deformable, or irregularly shaped robots without embedding sensors or modifying structures for easier modeling. By leveraging a neural network inspired by neural radiance fields (NeRF), NJF reconstructs the robot’s shape and its response to control inputs solely from visual data. This

    roboticsmachine-learningsoft-roboticsrobotic-control-systemsneural-networks3D-printingcomputer-vision
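
    The core idea behind NJF can be sketched as a learned Jacobian field: a network that maps a 3D point on the robot to a matrix converting motor commands into that point's predicted motion, trained from camera-tracked displacements during random exploration. The PyTorch architecture and loss below are simplifications and assumptions, not MIT's implementation.

```python
# Minimal PyTorch sketch of the Jacobian-field idea behind NJF: a network J(x)
# that maps a 3-D point to a matrix turning motor commands into predicted point
# motion, trained purely from visually tracked motion. Sizes and loss are assumptions.
import torch
import torch.nn as nn

class JacobianField(nn.Module):
    def __init__(self, n_motors: int, hidden: int = 128):
        super().__init__()
        self.n_motors = n_motors
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 * n_motors),      # one 3 x n_motors Jacobian per point
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        return self.net(points).view(-1, 3, self.n_motors)

def training_step(model, optimizer, points, commands, observed_motion):
    """points: (N,3) tracked points; commands: (n_motors,) delta; motion: (N,3)."""
    J = model(points)                                    # (N, 3, n_motors)
    predicted = torch.einsum("nij,j->ni", J, commands)   # (N, 3) predicted displacement
    loss = ((predicted - observed_motion) ** 2).mean()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

model = JacobianField(n_motors=6)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
```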
  • US supercomputer models airflow to reduce jet drag and emissions

    The U.S. Department of Energy’s Argonne National Laboratory is leveraging its Aurora supercomputer, one of the world’s first exascale machines capable of over a quintillion calculations per second, to advance aircraft design by modeling airflow around commercial airplanes. A research team from the University of Colorado Boulder employs Aurora’s immense computational power alongside machine learning techniques to simulate complex turbulent airflow, particularly around airplane vertical tails. These simulations aim to improve predictive models and inform the design of smaller, more efficient vertical tails that maintain effectiveness in challenging flight conditions, such as crosswinds with engine failure, thereby potentially reducing drag and emissions. The researchers use a tool called HONEE to conduct detailed airflow simulations that capture the chaotic nature of turbulence. These high-fidelity simulations train AI-based subgrid stress (SGS) models, which predict the effects of small-scale turbulent air movements often missed in lower-resolution models but critical for accurate airflow prediction. Unlike traditional turbulence modeling that relies on extensive offline data analysis, their approach integrates machine

    energysupercomputingmachine-learningaerospace-engineeringairflow-simulationturbulence-modelingexascale-computing
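
    A minimal sketch of a learned subgrid-stress closure of the kind described above: a small network maps filtered velocity-gradient features at a grid point to the six independent components of the subgrid stress tensor, trained on pairs extracted from high-fidelity simulations. Feature choices and layer sizes are assumptions, not the HONEE/Aurora setup.

```python
# Hedged sketch of a learned subgrid-stress (SGS) closure: a small network that
# maps filtered velocity-gradient features to the six independent SGS stress
# components. Features and sizes are assumptions.
import torch
import torch.nn as nn

class SGSClosure(nn.Module):
    def __init__(self, n_features: int = 9, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, 6),          # tau_xx, tau_yy, tau_zz, tau_xy, tau_xz, tau_yz
        )

    def forward(self, grad_u: torch.Tensor) -> torch.Tensor:
        """grad_u: (batch, 9) flattened filtered velocity-gradient tensor."""
        return self.net(grad_u)

def train_epoch(model, optimizer, loader):
    # `loader` yields (grad_u, tau_true) pairs extracted from high-fidelity runs.
    for grad_u, tau_true in loader:
        loss = nn.functional.mse_loss(model(grad_u), tau_true)
        optimizer.zero_grad(); loss.backward(); optimizer.step()
```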
  • New soft robot arm scrubs toilets and dishes with drill-level force

    Researchers at Northeastern University have developed SCCRUB, a novel soft robotic arm designed to tackle tough cleaning tasks with drill-level scrubbing power while maintaining safety around humans. Unlike traditional rigid industrial robots, SCCRUB uses flexible yet strong components called TRUNC cells—torsionally rigid universal couplings—that allow the arm to bend and flex while transmitting torque comparable to a handheld drill. This combination enables the robot to apply significant force to remove stubborn grime without posing risks typical of hard robotic arms. Equipped with a counter-rotating scrubber brush and guided by a deep learning-based controller, SCCRUB can clean challenging messes such as microwaved ketchup and fruit preserves on glass dishes and toilet seats, removing over 99% of residue in lab tests. The counter-rotating brush design helps maintain firm pressure and stability by canceling frictional forces, enhancing cleaning effectiveness while preserving the arm’s soft and safe nature. The research team envisions expanding SCCRUB’s capabilities to assist humans

    robotsoft-roboticsrobotic-armmachine-learningautomationcleaning-robothuman-robot-interaction
  • New study finds 10 times more seismic activity in Yellowstone using AI

    A recent study led by Professor Bing Li from Western University, Canada, utilized machine learning to analyze 15 years of seismic data from the Yellowstone Caldera, revealing approximately 86,276 earthquakes between 2008 and 2022—about ten times more events than previously recorded. This expanded earthquake catalogue offers a significantly improved understanding of Yellowstone’s seismic activity, highlighting that over half of these earthquakes occur as swarms, which are clusters of small, interconnected tremors occurring within confined areas over short periods. These swarms differ from typical aftershock sequences and provide new insights into the complex underground dynamics of the caldera. The study also found that earthquake swarms beneath Yellowstone occur along relatively young, rough fault structures, contrasting with the smoother, more developed faults in regions like southern California. This distinction helps clarify the unique seismic behavior of Yellowstone. The application of machine learning enabled the detection of many smaller seismic events that manual analysis previously missed, making it possible to build a more comprehensive and reliable seismic catalogue. This

    energymachine-learningseismic-activitygeothermal-energyearthquake-monitoringvolcanic-riskdata-analysis
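
    As an illustration of how machine learning expands a seismic catalogue, the sketch below shows a small 1-D CNN that labels fixed-length, three-component waveform windows as earthquake or noise; sliding such a detector over continuous data surfaces events too small for manual picking. The architecture is an assumption, not the model used in the Yellowstone study.

```python
# Illustrative sketch of ML-based event detection on continuous seismograms: a
# small 1-D CNN labels three-component waveform windows as "earthquake" or "noise".
import torch
import torch.nn as nn

class EventDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, 2)              # noise vs. event logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """x: (batch, 3, window_len) three-component ground-motion windows."""
        return self.classifier(self.features(x).squeeze(-1))

# Sliding a trained detector over years of continuous data yields candidate events
# that are then associated and located to build the expanded catalogue.
detector = EventDetector()
scores = detector(torch.randn(4, 3, 3000))              # e.g. 30 s windows at 100 Hz
```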
  • Turns out quantum secrets can’t be cracked by humans or AI alone

    A team of physicists and machine learning (ML) experts collaborated to solve a longstanding puzzle in condensed matter physics involving frustrated magnets—materials whose magnetic components do not align conventionally and exhibit unusual behaviors. Specifically, they investigated what happens to a quantum spin liquid state in a type of magnet called a "breathing pyrochlore" when cooled near absolute zero. While the spin liquid state, characterized by constantly fluctuating magnetic moments, was known to exist, the researchers had been unable to determine its behavior at even lower temperatures. The breakthrough came through a novel AI-human collaboration. The ML algorithm, developed by experts at LMU Munich, was designed to classify magnetic orders and was particularly interpretable, requiring no prior training and working well with limited data. By feeding Monte Carlo simulation data of the cooling spin liquid into the algorithm, the team identified previously unnoticed patterns. They then reversed the simulations, effectively heating the magnetic state, which helped confirm the nature of the low-temperature phase. This iterative dialogue between

    materialsquantum-materialsmachine-learningcondensed-matter-physicsquantum-magnetsspin-liquidsquantum-computing
  • Liquid AI releases on-device foundation model LFM2 - The Robot Report

    Liquid AI has launched LFM2, its latest Liquid Foundation Model designed for on-device deployment, aiming to balance quality, latency, and memory efficiency tailored to specific tasks and hardware. By moving large generative models from cloud servers to local devices such as phones, laptops, cars, and robots, LFM2 offers millisecond latency, offline functionality, and enhanced data privacy. The model features a new hybrid architecture that delivers twice the decode and prefill speed on CPUs compared to Qwen3 and outperforms similarly sized models across benchmarks in knowledge, mathematics, instruction following, and multilingual capabilities. Additionally, LFM2 achieves three times faster training efficiency than its predecessor. LFM2’s architecture includes 16 blocks combining double-gated short-range convolution and grouped query attention, enabling efficient operation on CPUs, GPUs, and NPUs across various devices. Liquid AI provides three model sizes (0.35B, 0.7B, and 1.2B parameters) available under an open

    robotartificial-intelligenceon-device-AIedge-computingfoundation-modelsmachine-learningAI-deployment
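
    The sketch below gestures at the two ingredients named above, a double-gated short-range convolution and grouped-query attention, as they might look in PyTorch. Dimensions, gating details, and normalization are assumptions; this is not Liquid AI's released architecture, which interleaves 16 such blocks.

```python
# Hedged sketch of a hybrid block mixing a double-gated short-range convolution
# with grouped-query attention (GQA). All details are assumptions, not LFM2 itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedShortConv(nn.Module):
    def __init__(self, dim: int, kernel: int = 4):
        super().__init__()
        self.in_proj = nn.Linear(dim, 3 * dim)                     # value + two gates
        self.conv = nn.Conv1d(dim, dim, kernel, padding=kernel - 1, groups=dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x):                                          # x: (B, T, D)
        v, g_in, g_out = self.in_proj(x).chunk(3, dim=-1)
        v = v * torch.sigmoid(g_in)                                # input gate
        v = self.conv(v.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)  # causal conv
        return self.out_proj(v * torch.sigmoid(g_out))             # output gate

class GroupedQueryAttention(nn.Module):
    def __init__(self, dim: int, n_heads: int = 8, n_kv_heads: int = 2):
        super().__init__()
        self.h, self.kv, self.d = n_heads, n_kv_heads, dim // n_heads
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, self.kv * self.d)
        self.v = nn.Linear(dim, self.kv * self.d)
        self.o = nn.Linear(dim, dim)

    def forward(self, x):                                          # x: (B, T, D)
        B, T, _ = x.shape
        q = self.q(x).view(B, T, self.h, self.d).transpose(1, 2)
        k = self.k(x).view(B, T, self.kv, self.d).transpose(1, 2)
        v = self.v(x).view(B, T, self.kv, self.d).transpose(1, 2)
        k = k.repeat_interleave(self.h // self.kv, dim=1)          # share KV across groups
        v = v.repeat_interleave(self.h // self.kv, dim=1)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.o(out.transpose(1, 2).reshape(B, T, -1))

block_conv = GatedShortConv(dim=256)
block_attn = GroupedQueryAttention(dim=256)
y = block_attn(block_conv(torch.randn(2, 32, 256)))                # (2, 32, 256)
```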
  • New quadruped robot climbs vertically 50 times faster than rivals

    Researchers at the University of Tokyo’s Jouhou System Kougaka Laboratory (JSK) have developed KLEIYN, a quadruped robot capable of climbing vertical walls up to 50 times faster than previous robots. Unlike other climbing robots that rely on grippers or claws, KLEIYN uses a chimney climbing technique, pressing its feet against two opposing walls for support. Its flexible waist joint allows adaptation to varying wall widths, particularly narrow gaps. The robot weighs about 40 pounds (18 kg), measures 2.5 feet (76 cm) in length, and features 13 joints powered by quasi-direct-drive motors for precise movement. KLEIYN’s climbing ability is enhanced through machine learning, specifically Reinforcement Learning combined with a novel Contact-Guided Curriculum Learning method, enabling it to transition smoothly from flat terrain to vertical surfaces. In tests, KLEIYN successfully climbed walls spaced between 31.5 inches (80 cm) and 39.4 inches (

    robotquadruped-robotmachine-learningreinforcement-learningclimbing-robotrobotics-innovationautonomous-robots
  • Pharm Robotics advances automated dairy cow healthcare - The Robot Report

    Pharm Robotics is advancing automated healthcare for dairy cows through its robotic system, Sureshot, which automates the delivery of pharmaceuticals such as vaccines and reproductive products as cows enter the milking parlor. Utilizing RFID scanners, the system identifies each cow, retrieves its medical history, and autonomously administers individualized treatments via industrial robot arms. This automation reduces manual labor, ensures consistent treatment compliance, and automatically records inoculations in dairy management software linked to each cow’s RFID tag. Real-time alerts notify dairy managers of any errors, facilitating prompt intervention. Recent updates to Sureshot include the integration of a low-cost 3D vision system for precise injection site identification and the adoption of the FANUC CRX-20 collaborative robot, which enhances safety with sensor-based shot confirmation. A new machine learning-powered software stack accelerates shot-site acquisition, and injection tooling has been adapted to the updated hardware and software. These advancements have enabled fully automated simulated injections on model cows, marking significant progress toward full automation

    roboticsautomationdairy-farmingRFID-technologymachine-learningcollaborative-robotsanimal-healthcare
  • AI-powered graphene tongue detects flavors with 98% precision

    Scientists have developed an AI-powered artificial tongue using graphene oxide within a nanofluidic device that mimics human taste with remarkable accuracy. This system integrates both sensing and computing on a single platform, enabling it to detect chemical signals and classify flavors in real time, even in moist conditions similar to the human mouth. Trained on 160 chemicals representing common flavors, the device achieved about 98.5% accuracy in identifying known tastes (sweet, salty, sour, and bitter) and 75-90% accuracy on 40 new flavors, including complex mixtures like coffee and cola. This breakthrough marks a significant advancement over previous artificial taste systems by combining sensing and processing capabilities. The sensor exploits graphene oxide’s sensitivity to chemical changes, detecting subtle conductivity variations when exposed to flavor compounds. Coupled with machine learning, it effectively recognizes flavor patterns much like the human brain processes taste signals. The researchers highlight potential applications such as restoring taste perception for individuals affected by stroke or viral infections, as well as uses

    grapheneartificial-tongueAImaterials-sciencesensorsmachine-learningnanotechnology
  • China’s new tech flags failures before lithium battery fully activates

    Chinese researchers from Tsinghua Shenzhen International Graduate School and the Shenzhen Institute of Advanced Technology have developed a predictive model that can forecast lithium metal anode failures within just the first two charging cycles of lithium metal batteries (LMBs). By analyzing electrochemical data from these initial cycles, the model identifies early-stage lithium plating and stripping behaviors that serve as “electrochemical fingerprints” indicative of different failure modes. This approach significantly reduces the time and resources needed for testing compared to traditional post-mortem analyses, which only reveal failure outcomes after degradation has occurred. The model employs machine learning algorithms trained on extensive datasets to classify three main types of battery failure: kinetics degradation, reversibility degradation, and co-degradation. It also demonstrates strong generalizability, accurately predicting failures across various electrolyte formulations, including low- and high-concentration systems based on carbonates, ethers, and siloxanes. Importantly, this predictive method requires no battery disassembly or special instruments, relying solely on cycling data, making

    energylithium-batteriesbattery-failure-predictionenergy-storagemachine-learninglithium-metal-anodebattery-technology
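
    A hedged sketch of the failure-mode prediction idea: summarize the first two cycles' plating and stripping behavior with a few scalar features and train a classifier to predict which of the three failure modes the cell will follow. The feature definitions and the gradient-boosting model are illustrative assumptions, not the published pipeline.

```python
# Sketch of failure-mode classification from an early-cycle "electrochemical
# fingerprint". Feature definitions and the model choice are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

FAILURE_MODES = ["kinetics", "reversibility", "co-degradation"]

def early_cycle_features(voltage: np.ndarray, capacity: np.ndarray) -> np.ndarray:
    """Toy fingerprint from cycles 1-2 of one cell (voltage trace + per-cycle capacity)."""
    return np.array([
        capacity[1] / max(capacity[0], 1e-9),   # cycle-2 / cycle-1 capacity ratio
        float(voltage.max() - voltage.min()),   # polarization window over the two cycles
        float(voltage.mean()),                  # average cell voltage (crude proxy)
    ])

def fit_failure_classifier(cells):
    """cells: list of (voltage_array, capacity_array, failure_mode_label)."""
    X = np.stack([early_cycle_features(v, q) for v, q, _ in cells])
    y = [FAILURE_MODES.index(label) for _, _, label in cells]
    return GradientBoostingClassifier(random_state=0).fit(X, y)
```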
  • New shape memory alloys could build more efficient US fighter jets

    US scientists at Texas A&M University have developed a novel approach to designing high-temperature shape memory alloys (HTSMAs) that could significantly enhance the efficiency and performance of US fighter jets, such as the F/A-18. These alloys enable components like jet wings to change shape—folding via electrical heating and cooling—without relying on heavy mechanical parts. This innovation promises lighter, more energy-efficient jets that can be readied faster for flight, addressing current limitations in aircraft carrier operations. The research team, led by Dr. Ibrahim Karaman and Dr. Raymundo Arroyave, combined artificial intelligence (AI) with high-throughput experimentation using a framework called Batch Bayesian Optimization (BBO). This data-driven method accelerates the discovery of optimal alloy compositions by predicting metal interactions and minimizing costly trial-and-error testing. Their approach not only speeds up materials development but also allows for tailoring alloys to specific functions, such as reducing energy loss or enhancing actuation performance in aerospace, robotics, and medical devices

    materials-scienceshape-memory-alloyshigh-temperature-alloysmachine-learningAI-in-materialsaerospace-materialsenergy-efficiency
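
    The sketch below shows the shape of a Batch Bayesian Optimization step: a Gaussian-process surrogate is fit to already-tested alloy compositions, and the next batch of experiments is chosen by an upper-confidence-bound score over a candidate pool. The target property, composition encoding, and acquisition rule are assumptions, not the Texas A&M framework.

```python
# Minimal sketch of Batch Bayesian Optimization (BBO) for alloy composition search.
# Target property, bounds, and acquisition are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def propose_batch(X_tested, y_tested, candidate_pool, batch_size=4, kappa=2.0):
    """Return `batch_size` candidate compositions with the highest UCB score."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_tested, y_tested)
    mu, sigma = gp.predict(candidate_pool, return_std=True)
    ucb = mu + kappa * sigma                 # favor high predicted value and high uncertainty
    return candidate_pool[np.argsort(ucb)[-batch_size:]]

# Toy usage: 4-element composition fractions (rows sum to 1); the measured values
# stand in for a property such as transformation temperature.
rng = np.random.default_rng(1)
pool = rng.dirichlet(np.ones(4), size=500)
tested, measured = pool[:12], rng.normal(size=12)
next_batch = propose_batch(tested, measured, pool[12:])
```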
  • Surgical robot removes gallbladder without any human assistance

    Researchers at Johns Hopkins University have developed an advanced surgical robot, SRT-H (Hierarchical Surgical Robot Transformer), that autonomously performed a complete 17-step gallbladder removal procedure on a realistic anatomical model without any human intervention. Unlike previous surgical robots, which operated under rigid, pre-marked conditions, SRT-H demonstrated expert-level adaptability by responding to unpredictable anatomical variations, complications, and voice commands, much like a human surgical trainee. This marks a significant milestone in surgical robotics, shifting from tool-assisted precision to intelligent, interactive execution capable of real-time adjustments during surgery. The robot was trained using videos of gallbladder surgeries on pig cadavers, learning through a combination of visual data and spoken feedback, similar to how a junior doctor is trained. Built on machine learning architecture akin to ChatGPT, SRT-H achieved 100% accuracy across multiple tests, even when faced with altered tissue appearance and randomized starting positions. This breakthrough suggests a future where autonomous surgical systems can handle the complexities and unpredict

    robotsurgical-robotautonomous-surgerymedical-roboticsmachine-learningAI-in-healthcarerobotic-surgery
  • Johns Hopkins teaches robot to perform a gallbladder removal on a realistic patient - The Robot Report

    Johns Hopkins University has developed a surgical robot, the Surgical Robot Transformer-Hierarchy (SRT-H), capable of autonomously performing a complex phase of gallbladder removal surgery on a lifelike patient model. Unlike previous robotic systems that operated under rigid, pre-mapped conditions, SRT-H adapts in real time to individual anatomical variations and unexpected scenarios, responding to voice commands and corrections from the surgical team much like a novice surgeon learning from a mentor. Built using machine learning architecture similar to ChatGPT, the robot demonstrates human-like adaptability and understanding, marking a significant advancement toward clinically viable autonomous surgical systems. The robot was trained by analyzing videos of surgeons performing gallbladder surgeries on pig cadavers, supplemented with task-specific captions. It successfully executed a sequence of 17 intricate surgical tasks—such as identifying ducts and arteries, placing clips, and cutting tissue—with 100% accuracy, though it took longer than a human surgeon to complete the procedure. This achievement builds on prior work where the team

    robotsurgical-roboticsautonomous-surgerymachine-learningAI-in-healthcaremedical-robotsrobotic-surgery
  • This Chinese 'school' teaches robots to perform tasks using VR

    A specialized robot training facility in Hefei, China, known as an embodied intelligent robot training environment, is pioneering the use of virtual reality (VR) to teach robots practical skills in real-world scenarios. Human trainers wearing VR headsets guide robot "students" through fine motor tasks such as picking up tools and tightening screws, with each robot receiving around 200 action sequences daily. This hands-on approach allows robots to gather physical data and develop machine learning models that enable them to generalize tasks beyond memorized motions, adapting to variable conditions like different screw types or uneven surfaces. The school serves as China’s first public robot training platform offering shared resources such as computing power, datasets, and realistic simulated environments, which are typically costly for smaller companies to develop independently. It supports multiple business models, allowing companies to co-run, operate independently, or purchase training services. By bridging the gap between simulated training and real-world performance, the initiative aims to accelerate the development of versatile autonomous robots capable of functioning effectively in logistics

    robotrobotics-trainingvirtual-realitymachine-learningautomationindustrial-robotsrobot-education
  • AI-designed material captures 90% of toxic iodine from nuclear waste

    A research team from the Korea Advanced Institute of Science and Technology (KAIST), in collaboration with the Korea Research Institute of Chemical Technology (KRICT), has developed a novel material capable of capturing over 90% of radioactive iodine, specifically isotope I-129, from nuclear waste. I-129 is a highly persistent and hazardous byproduct of nuclear energy with a half-life of 15.7 million years, making its removal from contaminated water a significant environmental challenge. The new material belongs to the class of Layered Double Hydroxides (LDHs), compounds known for their structural flexibility and ability to adsorb negatively charged particles like iodate (IO₃⁻), the common aqueous form of radioactive iodine. The breakthrough was achieved by employing artificial intelligence to efficiently screen and identify optimal LDH compositions from a vast pool of possible metal combinations. Using machine learning trained on experimental data from 24 binary and 96 ternary LDH compositions, the team pinpointed a quinary compound composed of copper

    materialsartificial-intelligencenuclear-waste-cleanupradioactive-iodine-removallayered-double-hydroxidesmachine-learningenvironmental-technology
  • Google DeepMind's new AI lets robots learn by talking to themselves

    Google DeepMind is developing an innovative AI system that endows robots with an "inner voice" or internal narration, allowing them to describe visual observations in natural language as they perform tasks. This approach, detailed in a recent patent filing, enables robots to link what they see with corresponding actions, facilitating "zero-shot" learning—where robots can understand and interact with unfamiliar objects without prior training. This method not only improves task learning efficiency but also reduces memory and computational requirements, enhancing robots' adaptability in dynamic environments. Building on this concept, DeepMind introduced Gemini Robotics On-Device, a compact vision-language model designed to run entirely on robots without cloud connectivity. This on-device model supports fast, reliable performance in latency-sensitive or offline contexts, such as healthcare, while maintaining privacy. Despite its smaller size, Gemini Robotics On-Device can perform complex tasks like folding clothes or unzipping bags with low latency and can adapt to new tasks with minimal demonstrations. Although it lacks built-in semantic safety features found in

    roboticsartificial-intelligencemachine-learningzero-shot-learningDeepMindautonomous-robotson-device-AI
  • Genesis AI launches with $105M seed funding from Eclipse, Khosla to build AI models for robots

    Genesis AI, a robotics-focused startup founded in December by Carnegie Mellon Ph.D. Zhou Xian and former Mistral research scientist Théophile Gervet, has launched with a substantial $105 million seed funding round co-led by Eclipse Ventures and Khosla Ventures. The company aims to build a general-purpose foundational AI model to enable robots to automate diverse repetitive tasks, ranging from laboratory work to housekeeping. Unlike large language models trained on text, robotics AI requires extensive physical-world data, which is costly and time-consuming to collect. To address this, Genesis AI uses synthetic data generated through a proprietary physics engine capable of accurately simulating real-world physical interactions. This engine originated from a collaborative academic project involving 18 universities, with many researchers from that initiative now part of Genesis’s 20+ member team specializing in robotics, machine learning, and graphics. Genesis claims its proprietary simulation technology allows faster model development compared to competitors relying on NVIDIA’s software. The startup operates from offices in Silicon Valley and Paris and

    roboticsartificial-intelligencesynthetic-datamachine-learningrobotics-foundation-modelautomationAI-models-for-robots
  • ChatGPT: Everything you need to know about the AI-powered chatbot

    ChatGPT, OpenAI’s AI-powered text-generating chatbot, has rapidly grown since its launch to reach 300 million weekly active users. In 2024, OpenAI made significant strides with new generative AI offerings and the highly anticipated launch of its OpenAI platform, despite facing internal executive departures and legal challenges related to copyright infringement and its shift toward a for-profit model. As of 2025, OpenAI is contending with perceptions of losing ground in the AI race, while working to strengthen ties with Washington and secure one of the largest funding rounds in history. Recent updates in 2025 include OpenAI’s strategic use of Google’s AI chips alongside Nvidia GPUs to power its products, marking a diversification in hardware. A new MIT study raised concerns that ChatGPT usage may impair critical thinking by showing reduced brain engagement compared to traditional writing methods. The ChatGPT iOS app saw 29.6 million downloads in the past month, highlighting its massive popularity. OpenAI also launched o3

    energyartificial-intelligenceOpenAIGPUsAI-chipspower-consumptionmachine-learning
  • MIT CSAIL's new vision system helps robots understand their bodies - The Robot Report

    MIT CSAIL has developed a novel robotic control system called Neural Jacobian Fields (NJF) that enables robots to understand and control their own bodies using only visual data from a single camera, without relying on embedded sensors or pre-designed models. This approach allows robots to learn their own internal models by observing the effects of random movements, providing them with a form of bodily self-awareness. The system was successfully tested on diverse robotic platforms, including a soft pneumatic hand, a rigid Allegro hand, a 3D-printed arm, and a sensorless rotating platform, demonstrating its robustness across different morphologies. The key innovation of NJF lies in decoupling robot control from hardware constraints, thus enabling more flexible, affordable, and unconventional robot designs without the need for complex sensor arrays or reinforced structures. By leveraging a neural network that combines 3D geometry reconstruction with a Jacobian field predicting how robot parts move in response to commands, NJF builds on neural radiance fields (NeRF) to

    roboticssoft-roboticsrobotic-controlmachine-learningMIT-CSAILNeural-Jacobian-Fieldsautonomous-robots
  • US supercomputer unlocks nuclear salt reactor secrets with AI power

    Scientists at Oak Ridge National Laboratory (ORNL) have developed a novel artificial intelligence (AI) framework that models the behavior of molten lithium chloride with quantum-level accuracy but in a fraction of the time required by traditional methods. Utilizing the Summit supercomputer, the machine-learning model predicts key thermodynamic properties of the salt in both liquid and solid states by training on a limited set of first-principles data. This approach dramatically reduces computational time from days to hours while maintaining high precision, addressing a major challenge in nuclear engineering related to understanding molten salts at extreme reactor temperatures. Molten salts are critical for advanced nuclear reactors as coolants, fuel solvents, and energy storage media due to their stability at high temperatures. However, their complex properties—such as melting point, heat capacity, and corrosion behavior—are difficult to measure or simulate accurately. ORNL’s AI-driven method bridges the gap between fast but less precise molecular dynamics and highly accurate but computationally expensive quantum simulations. This breakthrough enables faster, more reliable

    energyAInuclear-reactorsmolten-saltsmachine-learningsupercomputingmaterials-science
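
    A minimal sketch of a machine-learned interatomic potential of the sort described above: per-atom environment descriptors feed a small network whose per-atom energies sum to a total energy, trained against a limited set of first-principles references. The radial-histogram descriptor here is a crude placeholder for the descriptors actually used.

```python
# Hedged sketch of a machine-learned interatomic potential: per-atom descriptors
# feed a network whose per-atom energies sum to the total energy. The descriptor
# is a crude radial-histogram placeholder; everything here is an assumption.
import torch
import torch.nn as nn

def radial_descriptor(positions: torch.Tensor, i: int, cutoff=6.0, bins=16):
    """Histogram of neighbor distances around atom i (stand-in for real descriptors)."""
    d = torch.linalg.norm(positions - positions[i], dim=1)
    d = d[(d > 1e-6) & (d < cutoff)]
    return torch.histc(d, bins=bins, min=0.0, max=cutoff)

class AtomicEnergyNet(nn.Module):
    def __init__(self, bins: int = 16, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(bins, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def total_energy(self, positions: torch.Tensor) -> torch.Tensor:
        descs = torch.stack([radial_descriptor(positions, i)
                             for i in range(len(positions))])
        return self.net(descs).sum()                 # sum of per-atom contributions

model = AtomicEnergyNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# Training pairs (positions, reference energy) would come from a limited set of
# first-principles calculations; the loss is a simple MSE on total energy.
```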
  • Robot Talk Episode 126 – Why are we building humanoid robots? - Robohub

    The article summarizes a special live episode of the Robot Talk podcast recorded at Imperial College London during the Great Exhibition Road Festival. The discussion centers on the motivations and implications behind building humanoid robots—machines designed to look and act like humans. The episode explores why humanoid robots captivate and sometimes unsettle us, questioning whether this fascination stems from vanity or if these robots could serve meaningful roles in future society. The conversation features three experts: Ben Russell, Curator of Mechanical Engineering at the Science Museum, Maryam Banitalebi Dehkordi, Senior Lecturer in Robotics and AI at the University of Hertfordshire, and Petar Kormushev, Director of the Robot Intelligence Lab at Imperial College London. Each brings a unique perspective, from historical and cultural insights to technical expertise in robotics, AI, and machine learning. Their dialogue highlights the rapid advancements in humanoid robotics and the ongoing research aimed at creating adaptable, autonomous robots capable of learning and functioning in dynamic environments. The episode underscores the multidisciplinary nature

    roboticshumanoid-robotsartificial-intelligenceautonomous-robotsmachine-learningreinforcement-learningrobot-intelligence
  • Cleaner, stronger cement recipes designed in record time by AI

    Researchers at the Paul Scherrer Institute (PSI) have developed an AI-driven approach to design low-carbon cement recipes up to 1,000 times faster than traditional methods. Cement production is a major source of CO₂ emissions, primarily due to the chemical release of CO₂ from limestone during clinker formation. To address this, the PSI team, led by mathematician Romana Boiger, combined thermodynamic modeling software (GEMS) with experimental data to train a neural network that rapidly predicts the mineral composition and mechanical properties of various cement formulations. This AI model enables quick simulation and optimization of cement recipes that reduce carbon emissions while maintaining strength and quality. Beyond speeding up calculations, the researchers employed genetic algorithms to identify optimal cement compositions that balance CO₂ reduction with practical production feasibility. While these AI-designed formulations show promise, extensive laboratory testing and validation remain necessary before widespread adoption. This study serves as a proof of concept, demonstrating that AI can revolutionize the search for sustainable building materials by efficiently navigating complex chemical

    materialscementartificial-intelligencemachine-learninglow-carbonsustainable-materialsconstruction-materials
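
    The "surrogate plus search" workflow above can be sketched as follows: a trained model (standing in for the GEMS-informed neural network) scores a candidate mix for strength and CO₂, and a simple genetic search looks for blends that cut predicted emissions while keeping strength above a target. All component names, coefficients, and thresholds are invented for illustration.

```python
# Illustrative "surrogate + search" sketch for low-carbon cement. The surrogate
# coefficients and the strength/CO2 numbers are invented placeholders.
import numpy as np

rng = np.random.default_rng(0)
COMPONENTS = ["clinker", "slag", "fly_ash", "limestone"]

def surrogate(mix: np.ndarray):
    """Placeholder for the trained NN: returns (strength_MPa, kgCO2_per_tonne)."""
    strength = 60 * mix[0] + 35 * mix[1] + 30 * mix[2] + 10 * mix[3]
    co2 = 800 * mix[0] + 150 * mix[1] + 100 * mix[2] + 50 * mix[3]
    return strength, co2

def score(mix: np.ndarray, min_strength=40.0) -> float:
    s, c = surrogate(mix)
    return -c - 1e3 * max(0.0, min_strength - s)     # minimize CO2, penalize weak mixes

def evolve(pop_size=60, generations=80):
    pop = rng.dirichlet(np.ones(len(COMPONENTS)), size=pop_size)
    for _ in range(generations):
        ranked = pop[np.argsort([score(m) for m in pop])]
        parents = ranked[pop_size // 2:]                              # best half
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        children = np.abs(children + 0.05 * rng.standard_normal(children.shape))
        children /= children.sum(axis=1, keepdims=True)               # fractions sum to 1
        pop = np.vstack([parents, children])
    return max(pop, key=score)

best_mix = evolve()
```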
  • US scientists use machine learning for real-time crop disease alerts

    Purdue University researchers are leveraging advanced AI and machine learning technologies to transform agriculture and environmental management. Their innovations include real-time crop disease detection using semi-supervised models that identify rare diseases from limited data, enabling faster outbreak responses and reduced chemical usage. These AI tools are designed to run efficiently on low-power devices such as drones and autonomous tractors, facilitating on-the-ground, real-time monitoring without relying on constant connectivity. Additionally, Purdue scientists are using AI to analyze urban ecosystems through remote sensing data and LiDAR imagery, uncovering patterns invisible to the naked eye to improve urban living conditions. In agriculture, AI is also being applied to enhance crop yield predictions and climate resilience. For example, machine learning ensembles simulate rice yields under future climate scenarios, improving accuracy significantly. Tools like the “Netflix for crops” platform recommend optimal crops based on soil and water data, aiding farmers and policymakers in making informed, data-driven decisions. Furthermore, Purdue developed an AI-powered medical robot capable of swimming inside a cow’s stomach to

    robotAIagriculture-technologymachine-learningmedical-robotscrop-disease-detectionenvironmental-monitoring
  • Watch: Figure 02 robot achieve near-human package sorting skills

    Figure AI’s humanoid robot, Figure 02, has demonstrated significant advancements in package sorting, achieving near-human speed and dexterity by processing parcels in about 4.05 seconds each with a 95% barcode scanning success rate. This marks a 20% speed improvement over earlier demonstrations despite handling more complex tasks involving a mix of rigid boxes, deformable poly bags, and flat padded envelopes. Key to this progress is the upgraded Helix visuomotor system, which benefits from a six-fold increase in training data and new modules for short-term visual memory and force feedback. These enhancements enable the robot to remember partial barcode views, adjust grips delicately, and manipulate flexible parcels by flicking or patting them for optimal scanning. The improvements highlight the potential of end-to-end learning systems in dynamic warehouse environments, where the robot can adapt its sorting strategy on the fly and even generalize its skills to new tasks, such as recognizing a human hand as a signal for handing over parcels without additional programming

    roboticshumanoid-robotpackage-sortingmachine-learningforce-feedbackvisual-memoryautomation
  • Week in Review: WWDC 2025 recap

    The Week in Review covers major developments from WWDC 2025 and other tech news. At Apple’s Worldwide Developers Conference, the company showcased updates across its product lineup amid pressure to advance its AI capabilities and address ongoing legal challenges related to its App Store. Meanwhile, United Natural Foods (UNFI) suffered a cyberattack that disrupted its external systems, impacting Whole Foods’ ability to manage deliveries and product availability. In financial news, Chime successfully went public, raising $864 million in its IPO. Other highlights include Google enhancing Pixel phones with new features like group chat for RCS and AI-powered photo editing, and Elon Musk announcing the imminent launch of driverless Teslas in Austin, Texas. The Browser Company is pivoting from its Arc browser to develop an AI-first browser using a reasoning model designed for improved problem-solving in complex domains. OpenAI announced a partnership with Mattel, granting Mattel employees access to ChatGPT Enterprise to boost product development and creativity. However, concerns about privacy surfaced with

    robotAIautonomous-vehiclesdriverless-carsmachine-learningartificial-intelligenceautomation
  • Motional names Major president, CEO of self-driving car business

    Laura Major was appointed president and CEO of Motional, a leading autonomous vehicle company, in June 2025 after serving as interim CEO since September 2024. She succeeded Karl Iagnemma, who left to lead Vecna Robotics. Major has been with Motional since its founding in 2020, initially as CTO, where she spearheaded the development of the IONIQ 5 robotaxi, one of the first autonomous vehicles certified under the Federal Motor Vehicle Safety Standards, and created a machine learning-first autonomous driving software stack. Her leadership emphasizes leveraging AI breakthroughs and the company’s partnership with Hyundai to advance safe, fully driverless transportation as a practical part of everyday life. Before Motional, Major built expertise in autonomy and AI at Draper Laboratory and Aria Insights, focusing on astronaut, national security, and drone applications. She began her career as a cognitive engineer designing decision-support systems for astronauts and soldiers and later led Draper’s Information and Cognition Division. Recognized as an emerging leader by


    robotautonomous-vehiclesAImachine-learningroboticsself-driving-carsautomation
  • Meta V-JEPA 2 world model uses raw video to train robots

    Meta has introduced V-JEPA 2, a 1.2-billion-parameter world model designed to enhance robotic understanding, prediction, and planning by training primarily on raw video data. Built on the Joint Embedding Predictive Architecture (JEPA), V-JEPA 2 undergoes a two-stage training process: first, self-supervised learning from over one million hours of video and a million images to capture physical interaction patterns; second, action-conditioned learning using about 62 hours of robot control data to incorporate agent actions for outcome prediction. This approach enables the model to support planning and closed-loop control in robots without requiring extensive domain-specific training or human annotations. In practical tests within Meta’s labs, V-JEPA 2 demonstrated strong performance on common robotic tasks such as pick-and-place, achieving success rates between 65% and 80% in previously unseen environments. The model uses vision-based goal representations, generating candidate actions for simpler tasks and employing sequences of visual subgoals for more complex tasks

    roboticsAIworld-modelsmachine-learningvision-based-controlrobotic-manipulationself-supervised-learning
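
    A hedged sketch of the action-conditioned second training stage described above: a predictor learns to map the current frame embedding plus the robot action to the embedding of the next frame, with the loss computed in latent space. Encoder details, EMA targets, and masking are omitted, and all sizes are assumptions rather than Meta's published configuration.

```python
# Hedged sketch of action-conditioned latent prediction in the spirit of the
# V-JEPA 2 description above. Encoder internals and sizes are assumptions.
import torch
import torch.nn as nn

class ActionConditionedPredictor(nn.Module):
    def __init__(self, latent_dim=256, action_dim=7, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, hidden), nn.GELU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, z_t, action):
        return self.net(torch.cat([z_t, action], dim=-1))

def train_step(encoder, predictor, optimizer, frames_t, frames_t1, actions):
    with torch.no_grad():                       # targets come from a frozen/EMA encoder
        z_t = encoder(frames_t)
        z_t1 = encoder(frames_t1)
    loss = nn.functional.mse_loss(predictor(z_t, actions), z_t1)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# At planning time, candidate action sequences are rolled out in latent space and
# the one whose predicted latent lands closest to the goal embedding is executed.
```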
  • Meta’s new AI helps robots learn real-world logic from raw video

    Meta has introduced V-JEPA 2, an advanced AI model trained solely on raw video data to help robots and AI agents better understand and predict physical interactions in the real world. Unlike traditional AI systems that rely on large labeled datasets, V-JEPA 2 operates in a simplified latent space, enabling faster and more adaptable simulations of physical reality. The model learns cause-and-effect relationships such as gravity, motion, and object permanence by analyzing how people and objects interact in videos, allowing it to generalize across diverse contexts without extensive annotations. Meta views this development as a significant step toward artificial general intelligence (AGI), aiming to create AI systems capable of thinking before acting. In practical applications, Meta has tested V-JEPA 2 on lab-based robots, which successfully performed tasks like picking up unfamiliar objects and navigating new environments, demonstrating improved adaptability in unpredictable real-world settings. The company envisions broad use cases for autonomous machines—including delivery robots and self-driving cars—that require quick interpretation of physical surroundings and real

    roboticsartificial-intelligencemachine-learningautonomous-robotsvideo-based-learningphysical-world-simulationAI-models
  • Meta’s V-JEPA 2 model teaches AI to understand its surroundings

    Meta has introduced V-JEPA 2, a new AI "world model" designed to help artificial intelligence agents better understand and predict their surroundings. This model enables AI to make common-sense inferences about physical interactions in the environment, similar to how young children or animals learn through experience. For example, V-JEPA 2 can anticipate the next likely action in a scenario where a robot holding a plate and spatula approaches a stove with cooked eggs, predicting the robot will use the spatula to move the eggs onto the plate. Meta claims that V-JEPA 2 operates 30 times faster than comparable models like Nvidia’s, marking a significant advancement in AI efficiency. The company envisions that such world models will revolutionize robotics by enabling AI agents to assist with real-world physical tasks and chores without requiring massive amounts of robotic training data. This development points toward a future where AI can interact more intuitively and effectively with the physical world, enhancing automation and robotics capabilities.

    robotartificial-intelligenceAI-modelroboticsmachine-learningautomationAI-agents
  • MIT teaches drones to survive nature’s worst, from wind to rain

    MIT researchers have developed a novel machine-learning-based adaptive control algorithm to improve the resilience of autonomous drones against unpredictable weather conditions such as sudden wind gusts. Unlike traditional aircraft, drones are more vulnerable to being pushed off course due to their smaller size, which poses challenges for critical applications like emergency response and deliveries. The new algorithm uses meta-learning to quickly adapt to varying weather by automatically selecting the most suitable optimization method based on real-time environmental disturbances. This approach enables the drone to achieve up to 50% less trajectory tracking error compared to baseline methods, even under wind conditions not encountered during training. The control system leverages a family of optimization algorithms known as mirror descent, automating the choice of the best algorithm for the current problem, which enhances the drone’s ability to adjust thrust dynamically to counteract wind effects. The researchers demonstrated the effectiveness of their method through simulations and real-world tests, showing significant improvements in flight stability. Ongoing work aims to extend the system’s capabilities to handle multiple disturbance sources, such as shifting payloads, and to incorporate continual learning so the drone can adapt to new challenges without needing retraining. This advancement promises to enhance the efficiency and reliability of autonomous drones in complex, real-world environments.

    dronesautonomous-systemsmachine-learningadaptive-controlroboticsartificial-intelligencemeta-learning
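
    The sketch below illustrates online disturbance adaptation with mirror descent, the family of updates mentioned above: the wind force is modeled as a linear function of features and the model is updated after every control step, with the choice of mirror map (Euclidean or entropic) standing in for the algorithm selection the meta-learned layer would perform. The dynamics, features, and learning rate are toy assumptions.

```python
# Hedged sketch of online disturbance adaptation with mirror descent. The wind
# model, features, and learning rate are toy assumptions, not MIT's controller.
import numpy as np

def mirror_descent_step(theta, grad, lr, mirror="euclidean"):
    if mirror == "euclidean":                    # ordinary gradient step
        return theta - lr * grad
    if mirror == "entropic":                     # exponentiated-gradient step (simplex)
        w = theta * np.exp(-lr * grad)
        return w / w.sum()
    raise ValueError(mirror)

def adapt_online(features, observed_forces, mirror="euclidean", lr=0.1):
    """features: (T, d) per-step wind features; observed_forces: (T,) residual force."""
    theta = np.full(features.shape[1], 1.0 / features.shape[1])
    for phi, f in zip(features, observed_forces):
        pred = theta @ phi
        grad = (pred - f) * phi                  # gradient of 0.5 * (pred - f)^2
        theta = mirror_descent_step(theta, grad, lr, mirror)
    return theta                                 # used to feed forward corrective thrust

# Usage (hypothetical logs): theta = adapt_online(wind_features, residual_forces,
#                                                 mirror="entropic")
```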
  • Flexible soft robot arm moves with light — no wires or chips inside

    Engineers at Rice University have developed a flexible, octopus-inspired soft robotic arm that operates entirely through light beams, eliminating the need for wires or internal electronics. This innovative arm is powered by a light-responsive polymer called azobenzene liquid crystal elastomer, which contracts when exposed to blue laser light and relaxes in the dark, enabling precise bending motions. The arm’s movement mimics natural behaviors, such as a flower stem bending toward sunlight, allowing it to perform complex tasks like obstacle navigation and hitting a ball with accuracy. The control system uses a spatial light modulator to split a laser into multiple adjustable beamlets, each targeting different parts of the arm to flex or contract as needed. Machine learning, specifically a convolutional neural network trained on various light patterns and corresponding arm movements, enables real-time, automated control of the arm’s fluid motions. Although the current prototype operates in two dimensions, the researchers aim to develop three-dimensional versions with additional sensors, potentially benefiting applications ranging from implantable surgical devices to industrial robots handling soft materials. This approach promises robots with far greater flexibility and degrees of freedom than traditional rigid-jointed machines.

    soft-roboticslight-responsive-materialsazobenzene-liquid-crystal-elastomermachine-learningflexible-robot-armremote-control-roboticsbio-inspired-robotics
  • Tiny quantum processor outshines classical AI in accuracy, energy use

    Researchers led by the University of Vienna have demonstrated that a small-scale photonic quantum processor can outperform classical AI algorithms in machine learning classification tasks, marking a rare real-world example of quantum advantage with current hardware. Using a quantum photonic circuit developed at Italy’s Politecnico di Milano and a machine learning algorithm from UK-based Quantinuum, the team showed that the quantum system made fewer errors than classical counterparts. This experiment is one of the first to demonstrate practical quantum enhancement beyond simulations, highlighting specific scenarios where quantum computing provides tangible benefits. In addition to improved accuracy, the photonic quantum processor exhibited significantly lower energy consumption compared to traditional hardware, leveraging light-based information processing. This energy efficiency is particularly important as AI’s growing computational demands raise sustainability concerns. The findings suggest that even today’s limited quantum devices can enhance machine learning performance and energy efficiency, potentially guiding a future where quantum and classical AI technologies coexist symbiotically to push technological boundaries and promote greener, faster, and smarter AI solutions.

    quantum-computingphotonic-quantum-processorartificial-intelligenceenergy-efficiencymachine-learningquantum-machine-learningsupercomputing
  • Beewise brings in $50M to expand access to its robotic BeeHome - The Robot Report

    Beewise Inc., a climate technology company specializing in AI-powered robotic beekeeping, has closed a $50 million Series D funding round, bringing its total capital raised to nearly $170 million. The company developed the BeeHome system, which uses artificial intelligence, precision robotics, and solar power to provide autonomous, real-time care to bee hives. This innovation addresses the critical decline in bee populations—over 62% of U.S. colonies died last year—threatening global food security due to bees’ essential role in pollinating about three-quarters of flowering plants and one-third of food crops. BeeHome enables continuous hive health monitoring and remote intervention by beekeepers, resulting in healthier colonies, improved crop yields, and enhanced biodiversity. Since its 2022 Series C financing, Beewise has become a leading global provider of pollination services, deploying thousands of AI-driven robotic hives that pollinate over 300,000 acres annually for major growers. The company has advanced its AI capabilities using recurrent neural networks and reinforcement learning to mitigate climate risks in agriculture. The latest BeeHome 4 model features Beewise Heat Chamber Technology, which eliminates 99% of lethal Varroa mites without harmful chemicals. The new funding round, supported by investors including Fortissimo Capital and Insight Partners, will accelerate Beewise’s technological innovation, market expansion, and research efforts to further its mission of saving bees and securing the global food supply.

    roboticsartificial-intelligenceautonomous-systemsenergyagriculture-technologymachine-learningclimate-technology
  • XRobotics’ countertop robots are cooking up 25,000 pizzas a month

    XRobotics, a San Francisco-based startup, has developed the xPizza Cube, a compact countertop robot designed to automate key pizza-making tasks such as applying sauce, cheese, and pepperoni. The machine, roughly the size of a stackable washing machine, can produce up to 100 pizzas per hour and is adaptable to various pizza styles, including Detroit and Chicago deep dish. Leasing at $1,300 per month over three years, the robot aims to save pizza makers 70-80% of the labor time involved in repetitive tasks, helping both small pizzerias and large chains improve efficiency without requiring a full overhaul of their kitchen processes. Unlike previous ventures like Zume, which attempted to fully automate pizza production and ultimately failed, XRobotics focuses on assistive technology that integrates into existing kitchens. After initial challenges with a larger, more complex robot, the company pivoted to a smaller, more affordable model launched in 2023, which has since produced 25,000 pizzas monthly. The startup recently raised $2.5 million in seed funding to scale production and expand its customer base. With plans to enter the Mexican and Canadian markets, XRobotics remains committed to the pizza industry, leveraging the large market size and the founders’ personal passion for pizza.

    roboticsautomationfood-technologymachine-learningrestaurant-technologypizza-makingkitchen-robotics
  • Solid-state battery breakthrough promises 50% more range in one charge

    Researchers from Skolkovo Institute of Science and Technology (Skoltech) and the AIRI Institute have achieved a significant breakthrough in solid-state battery technology by using machine learning to accelerate the discovery of high-performance battery materials. Their innovation could enable electric vehicles (EVs) to travel up to 50% farther on a single charge while improving safety and battery lifespan. The team employed graph neural networks to rapidly identify optimal materials for solid electrolytes and protective coatings, overcoming a major hurdle in solid-state battery development. This approach is orders of magnitude faster than traditional quantum chemistry methods, enabling quicker advancement in battery design. A key aspect of the research is the identification of protective coatings that shield the solid electrolyte from reactive lithium anodes and cathodes, which otherwise degrade battery performance and increase short-circuit risks. Using AI, the team discovered promising coating compounds such as Li3AlF6 and Li2ZnCl4 for the solid electrolyte Li10GeP2S12, a leading candidate material. This work not only enhances the durability and efficiency of solid-state batteries but also paves the way for safer, more durable, and higher-performing EVs and portable electronics, potentially reshaping the future of energy storage.

    energysolid-state-batterybattery-materialselectric-vehiclesmachine-learningneural-networksenergy-storage
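
    As a rough picture of the graph-neural-network screening mentioned above, the sketch below implements a tiny message-passing model in plain PyTorch: atoms are nodes, neighbor relations form the adjacency, and a pooled readout predicts one scalar property per structure. Feature sizes and depth are assumptions; production pipelines typically rely on dedicated libraries such as torch_geometric.

```python
# Hedged sketch of a graph-neural-network property predictor for screening solid
# electrolytes and coatings. Sizes and features are assumptions.
import torch
import torch.nn as nn

class TinyCrystalGNN(nn.Module):
    def __init__(self, n_atom_feats=16, hidden=64, n_rounds=3):
        super().__init__()
        self.embed = nn.Linear(n_atom_feats, hidden)
        self.msg = nn.ModuleList([nn.Linear(2 * hidden, hidden) for _ in range(n_rounds)])
        self.readout = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 1))

    def forward(self, atom_feats: torch.Tensor, adjacency: torch.Tensor) -> torch.Tensor:
        """atom_feats: (N, n_atom_feats); adjacency: (N, N) 0/1 neighbor matrix."""
        h = torch.relu(self.embed(atom_feats))
        for layer in self.msg:
            neighbor_sum = adjacency @ h                       # aggregate neighbor states
            h = torch.relu(layer(torch.cat([h, neighbor_sum], dim=-1))) + h
        return self.readout(h.mean(dim=0))                     # one scalar per structure

model = TinyCrystalGNN()
pred = model(torch.randn(24, 16), (torch.rand(24, 24) > 0.8).float())
```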
  • US scientists develop real-time defect detection for 3D metal printing

    Scientists from Argonne National Laboratory and the University of Virginia have developed a novel method to detect defects, specifically keyhole pores, in metal parts produced by laser powder bed fusion 3D printing. Keyhole pores are tiny internal cavities that form when excessive laser energy drills deep, narrow holes that trap gas, compromising the structural integrity and performance of critical components such as aerospace parts and medical implants. The new approach combines thermal imaging, X-ray imaging, and machine learning to predict pore formation in real time by correlating surface heat patterns with internal defects captured via powerful X-rays. Because the method leverages thermal cameras already installed on many 3D printers, internal flaws can be flagged instantly without continuous, expensive X-ray imaging; a minimal sketch of such a thermal-image classifier follows this item’s tags. The AI model, trained on synchronized thermal and X-ray data, can identify pore formation within milliseconds, allowing for immediate intervention. Researchers envision integrating this technology with automatic correction systems that adjust printing parameters or reprint layers on the fly, improving reliability, reducing waste, and enhancing safety in manufacturing mission-critical metal parts. Future work aims to expand defect detection capabilities and develop repair mechanisms during the additive manufacturing process.

    3D-printing, metal-additive-manufacturing, defect-detection, machine-learning, thermal-imaging, X-ray-imaging, materials-science
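
    The snippet below sketches the kind of classifier such a pipeline might use; it is not the published Argonne/UVA model, and the 32x32 patch size and single-channel input are arbitrary illustrative choices. In practice, training labels would come from the synchronized X-ray data described above.

      import torch
      import torch.nn as nn

      class PoreDetector(nn.Module):
          # Small CNN mapping a thermal patch around the melt pool to a pore probability.
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Flatten(),
                  nn.Linear(32 * 8 * 8, 1),            # assumes 32x32 input patches
              )

          def forward(self, frames):                   # frames: (batch, 1, 32, 32)
              return torch.sigmoid(self.net(frames))

      detector = PoreDetector()
      patch = torch.randn(1, 1, 32, 32)                # one thermal frame crop
      print(detector(patch))                           # probability a keyhole pore is forming
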
  • Autonomous trucking developer Plus goes public via SPAC - The Robot Report

    Plus Automation Inc., a developer of autonomous driving software for commercial trucks, is going public through a merger with Churchill Capital Corp IX, a special purpose acquisition company (SPAC). The combined company will operate as PlusAI, with a mission to address the trucking industry’s driver shortage by delivering advanced autonomous vehicle technology. Founded in 2016 and based in Santa Clara, California, Plus has deployed its technology across the U.S., Europe, and Asia, accumulating over 5 million miles of autonomous driving. Its core product, SuperDrive, enables SAE Level 4 autonomous driving with a three-layer redundancy system designed specifically for heavy commercial trucks. Plus achieved a significant driver-out safety validation milestone in April 2025 and is conducting public road testing in Texas and Sweden, targeting a commercial launch of factory-built autonomous trucks in 2027. Plus emphasizes an OEM-led commercialization strategy, partnering with major vehicle manufacturers such as TRATON GROUP, Hyundai, and IVECO to integrate its virtual driver software directly into factory-built trucks. This approach leverages trusted manufacturing and service networks to scale deployment and provide fleet operators with a clear path to autonomy. Strategic collaborations with companies like DSV, Bosch, and NVIDIA support this effort. Notably, Plus and IVECO launched an automated trucking pilot in Germany in partnership with logistics provider DSV and retailer dm-drogerie markt, demonstrating real-world applications of their technology. The SPAC transaction values Plus at a pre-money equity valuation of $1.2 billion and is expected to raise $300 million in gross proceeds, which will fund the company through its planned commercial launch in 2027. The deal has been unanimously approved by both companies’ boards and is anticipated to close in Q4 2025, pending shareholder approval and customary closing conditions. This public listing marks a significant step for Plus as it scales its autonomous trucking technology to address industry challenges and expand globally.

    robot, autonomous-trucks, AI, machine-learning, commercial-vehicles, Level-4-autonomy, transportation-technology
  • Hugging Face says its new robotics model is so efficient it can run on a MacBook

    robotics, AI, Hugging-Face, SmolVLA, machine-learning, robotics-model, generalist-agents
  • Google places another fusion power bet on TAE Technologies

    energy, fusion-power, TAE-Technologies, machine-learning, plasma-technology, investment-in-energy, renewable-energy
  • AI sorts 1 million rock samples to find cement substitutes in waste

    materials, AI, cement-substitutes, eco-friendly-materials, concrete-sustainability, machine-learning, alternative-materials
  • Why Intempus thinks robots should have a human physiological state

    robot, robotics, AI, emotional-intelligence, human-robot-interaction, Intempus, machine-learning
  • Agibot’s humanoid readies for robot face-off with Kung Fu flair

    robot, AI, humanoid, robotics, automation, machine-learning, interaction
  • Robot Talk Episode 121 – Adaptable robots for the home, with Lerrel Pinto

    robot, machine-learning, adaptable-robots, robotics, artificial-intelligence, autonomous-machines, reinforcement-learning
  • Robot see, robot do: System learns after watching how-tos

    robot, artificial-intelligence, machine-learning, imitation-learning, robotics, task-automation, video-training
  • SS Innovations to submit SSi Mantra 3 to FDA in July

    robot, surgical-robotics, telesurgery, FDA-approval, healthcare-technology, machine-learning, modular-design
  • AI model enables controlling robots with words

    robot, AI, MotionGlot, machine-learning, robotics, human-robot-interaction, automation
  • EPS ensures repair and maintenance work at power plants in early 2025

    energy, maintenance, power-plants, reliability, remote-monitoring, operational-efficiency, machine-learning
  • Interview with Amina Mević: Machine learning applied to semiconductor manufacturing

    robot, IoT, energy, materials, machine-learning, semiconductor-manufacturing, virtual-metrology
  • DeepSeek upgrades its AI model for math problem solving

    AI, math-problem-solving, DeepSeek, technology-upgrades, machine-learning, artificial-intelligence, education-technology
  • Meta says its Llama AI models have been downloaded 1.2B times

    Meta, Llama-AI, artificial-intelligence, downloads, technology-news, machine-learning, AI-models
  • Meta previews an API for its Llama AI models

    Meta, Llama-AI, API, artificial-intelligence, technology, machine-learning, software-development
  • Alibaba unveils Qwen 3, a family of ‘hybrid’ AI reasoning models

    Alibaba, Qwen-3, AI-models, hybrid-AI, machine-learning, tech-news, open-source-AI