Articles tagged with "reinforcement-learning"
Coco Robotics taps UCLA professor to lead new physical AI research lab
Coco Robotics, a startup specializing in last-mile delivery robots, has established a new physical AI research lab led by UCLA professor Zhou, who has also joined the company as chief AI scientist. The move aims to leverage the company’s extensive data—spanning millions of miles collected over five years in complex urban environments—to advance autonomous operation of its delivery bots and reduce delivery costs. Coco Robotics co-founder and CEO Zach Rash emphasized that the company now has sufficient data scale to accelerate research in physical AI, particularly in robot navigation and reinforcement learning, areas where Zhou is a leading expert. The new research lab operates independently from Coco Robotics’ partnership with OpenAI, which provides access to language models, while the lab focuses on utilizing the company’s proprietary robot-collected data. Coco Robotics plans to use the insights gained exclusively to enhance its own automation capabilities and improve the efficiency of its local robot models, rather than selling the data. Additionally, the company intends to share relevant research findings with the cities where it operates.
Tags: robotics, artificial-intelligence, autonomous-delivery, physical-AI, robot-navigation, reinforcement-learning, last-mile-delivery

China’s wearable suit trains humanoid robots with high accuracy
Researchers at China’s National University of Defense Technology, in collaboration with Midea Group, have developed HumanoidExo, a wearable suit system designed to train humanoid robots with high accuracy by capturing real-time human motion. Unlike traditional training methods that rely on videos and simulations—often causing robots to lose balance—HumanoidExo uses motion sensors and a LiDAR scanner to track seven arm joints and body movements, providing robots with precise, real-world data. The system’s AI component, HumanoidExo-VLA, combines a Vision-Language-Action model to interpret human tasks and a reinforcement learning controller to maintain robot balance during learning. Testing on the Unitree G1 humanoid robot demonstrated significant improvements: after training with data from five teleoperated and 195 exoskeleton-recorded sessions, the robot’s success rate on a pick-and-place task rose from 5% to nearly 80%, approaching the performance level of 200 human demonstrations. The robot also learned to walk effectively.
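To make the data pipeline concrete, here is a minimal sketch (not the HumanoidExo codebase) of how synchronized exoskeleton streams might be packaged into demonstration pairs for imitation learning. The array shapes and the `make_demonstrations` helper are illustrative assumptions based on the seven tracked arm joints and LiDAR body pose described above.

```python
import numpy as np

def make_demonstrations(joint_angles, lidar_poses):
    """Package synchronized exoskeleton streams into (obs, action) pairs.

    joint_angles: (T, 7) array of the seven tracked arm-joint angles.
    lidar_poses:  (T, 6) array of body pose (xyz + rpy) from the LiDAR scanner.
    The action at step t is the joint configuration the human reached at t+1,
    i.e. standard next-state supervision for behavior cloning.
    """
    obs = np.concatenate([joint_angles[:-1], lidar_poses[:-1]], axis=1)
    actions = joint_angles[1:]          # teacher's next joint target
    return obs, actions

# Example with synthetic data standing in for one recorded session.
T = 500
demo_obs, demo_act = make_demonstrations(
    np.random.randn(T, 7) * 0.1,   # placeholder joint trajectory
    np.random.randn(T, 6) * 0.05,  # placeholder body pose
)
print(demo_obs.shape, demo_act.shape)  # (499, 13) (499, 7)
```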
Tags: robot, humanoid-robots, wearable-suit, motion-capture, AI-training, reinforcement-learning, exoskeleton

Humanoid robot walks naturally down passageway into a 'friends' den
PND Robotics recently showcased its humanoid robot, Adam, which demonstrates a natural, human-like gait and directional sense as it walks down a hallway into a room filled with other robots performing various tasks. Using a proprietary reinforcement learning algorithm, Adam was trained through simulation-to-real-world methods to mimic human walking and movement with impressive fluidity and accuracy. In the video, Adam is greeted by another robot, Adam-U, highlighting PND Robotics’ focus on creating affordable, modular humanoid robots that combine biomimetic design with real-world adaptability and continuous self-learning. Adam and Adam-U made notable public debuts at events in Shanghai and Zhejiang Province, where they impressed audiences by performing human-like actions such as passing objects and navigating environments naturally. PND Robotics aims to democratize personal robotics by developing machines that integrate physical interaction, perception, and learning in a lifelike manner. Compared to other advanced humanoid robots like Cassie, Digit, and HRP-5P, Adam stands out for its human-like gait.
Tags: robot, humanoid-robot, reinforcement-learning, biomimetic-design, modular-robots, personal-robotics, robot-locomotion

China’s humanoid robot Bumblebee now walks with human-like gait
Shanghai Kepler Robotics has unveiled a significant advancement in its humanoid robot K2 “Bumblebee,” showcasing China’s first “hybrid-architecture disturbance-resistant” gait that enables the robot to walk with a natural, human-like straight-knee motion. This breakthrough is achieved through a novel hybrid actuation system combining planetary roller screw linear actuators and rotary actuators in a series-parallel configuration. The linear actuators act as the robot’s primary “leg muscles,” providing walking force, while rotary actuators manage fine adjustments and terrain adaptation. This design offers high energy efficiency (81.3%), precise positioning, and strong load-bearing capacity, allowing Bumblebee to maintain balance over uneven surfaces such as bricks and grass, and to carry payloads up to 30 kilograms (66 pounds). To bridge the gap between simulation and real-world performance, Kepler addressed mechanical and control challenges by integrating reinforcement learning, imitation learning, and torque control, enabling dynamic gait switching and robust stability despite sensor noise and actuator limitations.
Tags: robot, humanoid-robot, hybrid-actuator, locomotion-technology, reinforcement-learning, industrial-robotics, robotic-gait

Humanoid robot HITTER plays table tennis with human-like speed
UC Berkeley has developed a humanoid robot named HITTER that can play table tennis with human-like speed and agility. Demonstrated in a video, HITTER successfully engaged in rallies exceeding 100 shots against human opponents, using its left hand for balance and executing precise, fluid movements. The robot’s performance relies on a dual-system design: a high-level planner that tracks and predicts the ball’s trajectory using external cameras, and a low-level controller that converts these calculations into coordinated arm and leg motions. Trained on human motion data, HITTER can move naturally, reacting to balls traveling up to 5 m/s in under a second. The development team combined model-based planning with reinforcement learning to overcome the challenges of split-second decision-making and unpredictable shots inherent in table tennis. This hybrid approach enabled HITTER to fine-tune its movements through trial and error, resulting in lifelike swings and footwork. HITTER was tested on a general-purpose humanoid platform (likely the Unitree G1).
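As a rough illustration of the high-level planner’s job, the sketch below fits a gravity-only ballistic model to a few noisy camera fixes and predicts where the ball will cross the robot’s hitting plane. The `plane_x` parameter and the drag-free flight model are simplifying assumptions, not details of the Berkeley system.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity, m/s^2

def fit_ballistic(times, positions):
    """Least-squares fit of p(t) = p0 + v0*t + 0.5*g*t^2 to noisy camera fixes."""
    # Subtract the known gravity term, then solve the linear system for p0, v0.
    y = positions - 0.5 * G * times[:, None] ** 2
    A = np.stack([np.ones_like(times), times], axis=1)   # (N, 2)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)         # (2, 3)
    return coef[0], coef[1]                              # p0, v0

def predict_intercept(p0, v0, plane_x=0.0):
    """Time and point at which the ball crosses the hitting plane x = plane_x."""
    t = (plane_x - p0[0]) / v0[0]
    return t, p0 + v0 * t + 0.5 * G * t ** 2

# Synthetic observations of a ball flying toward the robot at ~5 m/s.
ts = np.linspace(0.0, 0.15, 8)
truth = (np.array([2.0, 0.3, 1.1]) + np.outer(ts, [-5.0, 0.2, 1.5])
         + 0.5 * G * ts[:, None] ** 2)
p0, v0 = fit_ballistic(ts, truth + np.random.normal(0, 0.005, truth.shape))
print(predict_intercept(p0, v0))
```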
Tags: robotics, humanoid-robot, reinforcement-learning, AI-planning, human-robot-interaction, table-tennis-robot, robot-motion-control

Inside Singapore's physical AI revolution
The article summarizes Episode 210 of The Robot Report Podcast, which centers on Singapore’s emerging leadership in physical AI and robotics. Key guests from the Singapore Economic Development Board (EDB), Certis Group, and the Home Team Science & Technology Agency discuss Singapore’s strategic initiatives to grow its robotics sector. The country leverages its strong manufacturing base, government incentives, and a collaborative ecosystem involving industry and academia to foster innovation and talent development. Emphasis is placed on the importance of integration, reliability, and scalability for successful deployment of robotics and AI technologies. The episode also covers notable robotics news, including Boston Dynamics’ Spot robot performing a public triple backflip, showcasing advancements in reinforcement learning for robot agility and recovery. Despite the impressive feat, Spot’s performance in America’s Got Talent did not advance to the quarterfinals. Additionally, Intuitive Surgical announced a permanent layoff of 331 employees (about 2% of its workforce) at its Sunnyvale headquarters. Lastly, John Deere announced an expansion of its agricultural technology business.
Tags: robotics, artificial-intelligence, physical-AI, Singapore, Boston-Dynamics, reinforcement-learning, automation

RoboBallet makes robotic arms dance in sync on factory floors
RoboBallet is a new AI system developed by a team from UCL, Google DeepMind, and Intrinsic that choreographs the movements of multiple robotic arms on factory floors, significantly improving efficiency and scalability in manufacturing. Traditional robotic coordination requires extensive manual programming to avoid collisions and complete tasks, a process that is time-consuming and prone to errors. RoboBallet overcomes these challenges by using reinforcement learning combined with graph neural networks, enabling it to plan coordinated movements for up to eight robotic arms performing 40 tasks in seconds, even in previously unseen layouts. This approach treats obstacles and tasks as points in a network, allowing rapid and adaptable planning that outperforms existing methods by generating plans hundreds of times faster than real-time. The system’s scalability is a major breakthrough, as it learns general coordination rules rather than memorizing specific scenarios, making it capable of handling complex, dynamic environments where factory layouts or robot configurations change frequently. RoboBallet’s ability to instantly generate high-quality plans could prevent costly downtime in manufacturing.
Tags: robotics, industrial-automation, AI, robotic-arms, manufacturing-technology, reinforcement-learning, factory-efficiency

#IJCAI2025 distinguished paper: Combining MORL with restraining bolts to learn normative behaviour - Robohub
The article discusses advancements presented at IJCAI 2025 concerning the integration of Multi-Objective Reinforcement Learning (MORL) with restraining bolts to enable AI agents to learn normative behavior. Autonomous agents, powered by reinforcement learning (RL), are increasingly deployed in real-world applications such as self-driving cars and smart urban planning. While RL agents excel at optimizing behavior to maximize rewards, unconstrained optimization can lead to actions that, although efficient, may be unsafe or socially inappropriate. To address safety, formal methods like linear temporal logic (LTL) have been used to impose constraints ensuring agents act within defined safety parameters. However, safety constraints alone are insufficient when AI systems interact closely with humans, as normative behavior involves compliance with social, legal, and ethical norms that go beyond mere safety. Norms are expressed through deontic concepts—obligations, permissions, and prohibitions—that describe ideal or acceptable behavior rather than factual truths. This introduces complexity in reasoning, especially with contrary-to-duty obligations, which specify what ought to happen once another norm has already been violated.
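The sketch below illustrates the restraining-bolt idea in miniature, under heavy simplification: a hand-written finite-state monitor stands in for an automaton compiled from an LTL-style norm, and a fixed linear scalarization stands in for a full MORL solver. The zone labels and weights are invented for illustration.

```python
# A "restraining bolt" in miniature: a finite-state monitor derived from a
# norm (here: "never enter the restricted zone; if you do, leave at once"),
# whose state augments the MDP and whose output is an extra reward signal.
MONITOR = {
    ("ok", "restricted"): ("violating", -1.0),   # prohibition breached
    ("violating", "restricted"): ("violating", -1.0),
    ("violating", "free"): ("ok", +0.5),         # contrary-to-duty repair
    ("ok", "free"): ("ok", 0.0),
}

def step_monitor(mon_state, zone):
    """Advance the norm monitor given the zone the agent just entered."""
    return MONITOR[(mon_state, zone)]

def scalarize(task_reward, norm_reward, weights=(1.0, 2.0)):
    """Linear scalarization of the two MORL objectives (task vs. compliance)."""
    return weights[0] * task_reward + weights[1] * norm_reward

# One illustrative transition: the agent grabs a high task reward by cutting
# through the restricted zone, and the monitor discounts it accordingly.
mon = "ok"
mon, norm_r = step_monitor(mon, "restricted")
print(mon, scalarize(task_reward=3.0, norm_reward=norm_r))  # violating 1.0
```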
Tags: robot, artificial-intelligence, reinforcement-learning, autonomous-agents, safe-AI, machine-learning, normative-behavior

Google DeepMind, Intrinsic build AI for multi-robot planning
The article discusses a new AI-driven approach to programming and coordinating multiple industrial robots in shared workspaces, developed through a collaboration between Google DeepMind Robotics, Intrinsic, and University College London. Traditional methods for robot motion planning rely heavily on manual programming, teach pendants, and trial-and-error, which are time-consuming and become increasingly complex when managing multiple robots to avoid collisions. The researchers introduced "RoboBallet," an AI model that leverages reinforcement learning and graph neural networks (GNNs) to generate collision-free motion plans efficiently. This model represents robots, tasks, and obstacles as nodes in a graph and learns generalized planning strategies by training on millions of synthetic scenarios, enabling it to produce near-optimal trajectories rapidly without manual intervention. Intrinsic, a company spun out of Alphabet’s X in 2021, aims to simplify industrial robot programming and scaling. Their RoboBallet system requires only CAD files and high-level task descriptions to generate motion plans, eliminating the need for detailed coding or fine-tuning.
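As an illustration of that graph representation, the sketch below encodes a toy scene of robots, tasks, and obstacles as feature nodes and runs one round of message passing with random, untrained weights. It shows the shape of the computation, not RoboBallet’s actual architecture; the feature sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: 3 robot nodes, 4 task nodes, 2 obstacle nodes, each a feature row
# (e.g. base pose, goal pose, bounding-box center, flattened to 6 numbers here).
feats = {"robot": rng.normal(size=(3, 6)),
         "task": rng.normal(size=(4, 6)),
         "obstacle": rng.normal(size=(2, 6))}

W_msg = rng.normal(size=(6, 16)) * 0.1   # shared message weights (untrained)
W_upd = rng.normal(size=(22, 16)) * 0.1  # update weights: 6 self + 16 aggregated

def message_pass(robot_feats, other_feats):
    """One GNN round: every robot aggregates messages from all non-robot nodes."""
    msgs = np.tanh(other_feats @ W_msg)             # (N_other, 16)
    agg = msgs.mean(axis=0, keepdims=True)          # permutation-invariant pool
    agg = np.repeat(agg, len(robot_feats), axis=0)
    return np.tanh(np.concatenate([robot_feats, agg], axis=1) @ W_upd)

others = np.concatenate([feats["task"], feats["obstacle"]])
robot_embeddings = message_pass(feats["robot"], others)
print(robot_embeddings.shape)  # (3, 16) -> one action head per robot arm
```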
Tags: robotics, artificial-intelligence, multi-robot-planning, reinforcement-learning, graph-neural-networks, industrial-robots, automation

Humanoid robots lack data to keep pace with explosive rise of AI
The recent International Humanoid Olympiad held in Olympia, Greece, showcased humanoid robots competing in sports like boxing and soccer, highlighting their growing capabilities. Despite these advances, humanoid robots remain significantly behind AI software in learning from data, with experts estimating they are roughly "100,000 years" behind due to limited data availability. Organizers and researchers emphasize that while AI tools benefit from vast datasets enabling rapid advancement, humanoid robots struggle to acquire and process comparable real-world data, which hinders their ability to perform complex, dexterous household tasks. Experts predict that humanoid robots may first find practical use in space exploration before becoming common in homes, a transition expected to take over a decade. To address this gap, researchers are exploring reinforcement learning techniques that allow robots to learn from real-time experiences rather than relying solely on pre-programmed actions. Additionally, innovative approaches such as developing biological computer brains using real brain cells on chips aim to enable robots to learn and adapt more like humans.
Tags: robot, humanoid-robots, artificial-intelligence, robotic-learning, reinforcement-learning, robotic-brain, robotics-competition

Unique robot welded from online parts walks on two legs with ease
MEVITA is a newly developed open-source bipedal robot created by engineers at the University of Tokyo's JSK Robotics Laboratory. It addresses common challenges in DIY robotics platforms by combining durability, simplicity, and accessibility. Unlike many existing designs that rely on fragile 3D-printed parts or complex metal assemblies with hard-to-source components, MEVITA uses sheet metal welding to integrate complex shapes into just 18 unique metal parts, four of which are welded. This approach significantly reduces the number of components, making the robot easier to build using parts readily available through online e-commerce. The robot’s control system leverages advanced AI techniques, specifically reinforcement learning trained in simulation environments (IsaacGym and MuJoCo), before transferring the learned behaviors to the physical robot via Python scripts. This Sim-to-Real transfer enables MEVITA to walk effectively across diverse terrains such as uneven indoor floors, grassy fields, dirt, concrete tiles, and gentle slopes. Safety and control are enhanced by features including wireless control.
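The deployment side of such a Sim-to-Real pipeline can be sketched as follows, under the assumption that the trained policy is exported as plain weight arrays. The `read_observation` and `send_joint_targets` functions are hypothetical placeholders for the robot's actual sensor and motor I/O, and the layer sizes are invented.

```python
import time
import numpy as np

class ExportedPolicy:
    """Stand-in for a policy trained in IsaacGym/MuJoCo, saved as plain
    weight arrays, and reloaded on the robot from a Python script."""
    def __init__(self, weights):
        self.w0, self.b0 = weights["w0"], weights["b0"]
        self.w1, self.b1 = weights["w1"], weights["b1"]

    def act(self, obs):
        h = np.tanh(obs @ self.w0 + self.b0)   # small MLP forward pass
        return h @ self.w1 + self.b1           # joint position targets

# Round-trip the weights the way a Sim-to-Real export might.
np.savez("policy.npz", w0=np.zeros((30, 64)), b0=np.zeros(64),
         w1=np.zeros((64, 10)), b1=np.zeros(10))
policy = ExportedPolicy(np.load("policy.npz"))

def control_loop(read_observation, send_joint_targets, hz=50):
    """Fixed-rate loop: read sensors, query the policy, command the joints."""
    while True:
        send_joint_targets(policy.act(read_observation()))
        time.sleep(1.0 / hz)
```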
Tags: robotics, bipedal-robot, open-source-robot, sheet-metal-welding, AI-control-system, reinforcement-learning, robot-assembly

Boston Dynamics’ robot dog nails daring backflips in new video
Boston Dynamics has showcased its robot dog, Spot, performing consistent backflips in a new video, highlighting the robot’s advanced agility and refined design. While these gymnastic feats are unlikely to be part of Spot’s routine tasks, they serve a critical engineering purpose: pushing the robot to its physical limits to identify and address potential balance failures. This helps improve Spot’s ability to recover quickly from slips or trips, especially when carrying heavy payloads in industrial settings, thereby enhancing its reliability and durability. The development of Spot’s backflip capability involved reinforcement learning techniques, where the robot was trained in simulations to optimize its movements by receiving rewards for successful actions, akin to training a dog with treats. This iterative process of simulation and real-world testing allows engineers to fine-tune Spot’s behavior and ensure robust performance. Beyond technological advancements, Spot’s agility has also been demonstrated in entertainment contexts, such as performing dance routines on America’s Got Talent, showcasing its versatility. Looking forward, Spot’s ongoing evolution through this cycle of simulation training and real-world testing is expected to further improve its agility and reliability.
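A toy version of such a shaped reward might look like the following; the terms and coefficients are illustrative guesses at the kind of signal described (rotation progress, a landing bonus, a small energy penalty), not Boston Dynamics' actual reward function.

```python
import numpy as np

def flip_reward(pitch, pitch_rate, landed_upright, joint_torques):
    """Illustrative shaped reward for learning a backflip in simulation.

    pitch:          accumulated body rotation about the flip axis (rad)
    pitch_rate:     angular velocity (rad/s), rewarded mid-flight
    landed_upright: bool, big terminal bonus for a clean landing
    joint_torques:  per-joint torques, lightly penalized for efficiency
    """
    r = 0.2 * pitch_rate                      # encourage rotating
    r += 2.0 * min(pitch / (2 * np.pi), 1.0)  # progress toward one full flip
    r -= 1e-4 * np.sum(np.square(joint_torques))
    if landed_upright:
        r += 10.0                             # the "treat" at the end
    return r

print(flip_reward(2 * np.pi, 0.0, True, np.zeros(12)))  # 12.0
```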
Tags: robot, robotics, Boston-Dynamics, robot-dog, reinforcement-learning, machine-learning, quadruped-robot

Humanoids, robot dogs master unseen terrains with attention mapping
Researchers at ETH Zurich have developed an advanced control system for legged robots, including the quadrupedal ANYmal-D and humanoid Fourier GR-1, enabling them to navigate complex and previously unseen terrains. This system employs a machine learning technique called attention-based map encoding, trained via reinforcement learning, which allows the robot to focus selectively on the most critical areas of a terrain map rather than processing the entire map uniformly. This focused attention helps the robots identify safe footholds even in challenging environments, improving robustness and generalization across varied terrains. The system demonstrated successful real-time locomotion at speeds up to 2 meters per second, with notably low power consumption relative to the robot’s motors. While the current approach is limited to 2.5D height-map locomotion and cannot yet handle overhanging 3D obstacles such as tree branches, the researchers anticipate extending the method to full 3D environments and more complex loco-manipulation tasks like opening doors or climbing. The attention mechanism also provides insight into the controller’s decision-making, revealing which regions of the terrain map it treats as most important.
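The core idea can be sketched in a few lines: score each patch of the height map against a query derived from the robot's state, and pool patch features by those attention weights. The weights below are random and untrained, and the patch size and dimensions are arbitrary assumptions rather than the ETH Zurich design.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_to_map(robot_state, height_map, patch=8):
    """Attention-based map encoding in miniature: split the height map into
    patches, score each against a query from the robot state, and pool."""
    H, W = height_map.shape
    patches = (height_map.reshape(H // patch, patch, W // patch, patch)
                         .transpose(0, 2, 1, 3)
                         .reshape(-1, patch * patch))          # (P, 64)
    rng = np.random.default_rng(0)                              # untrained demo weights
    Wq, Wk, Wv = (rng.normal(size=s) * 0.1 for s in
                  [(robot_state.size, 32), (patches.shape[1], 32),
                   (patches.shape[1], 32)])
    q = robot_state @ Wq                                        # (32,)
    k, v = patches @ Wk, patches @ Wv                           # (P, 32)
    weights = softmax(k @ q / np.sqrt(32))                      # focus on a few patches
    return weights @ v, weights                                 # encoding + attention map

enc, attn = attend_to_map(np.random.randn(12), np.random.randn(32, 32))
print(enc.shape, attn.argmax())  # compact terrain code + most-attended patch
```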
Tags: robot, humanoid-robots, quadrupedal-robots, machine-learning, reinforcement-learning, attention-mapping, locomotion-control

Video: Swiss robot dog plays perfect badminton match with a human
Researchers at Switzerland’s ETH Zurich have developed a quadruped robot dog named ANYmal, capable of playing badminton with a human at the skill level of a seven-year-old child. ANYmal, created by ANYbotics, uses a sophisticated control system equipped with two cameras to track and predict the shuttlecock’s trajectory. It swings a racket attached to a multi-axis arm to hit the shuttlecock precisely. The robot was trained using reinforcement learning in a virtual environment, where it practiced thousands of rallies to learn positioning, shot accuracy, and anticipatory movement, enabling it to perform with remarkable precision in real-world play. A key challenge addressed in the development was maintaining balance while lunging and moving quickly to return shots. ANYmal’s reinforcement learning algorithm enhances its coordination and stability, allowing it to move with agility and balance comparable to a human player. Originally designed for industrial inspection and navigating rough terrains, including disaster zones, ANYmal’s capabilities have now been extended to dynamic sports environments.
Tags: robot, robotics, reinforcement-learning, quadruped-robot, robot-dog, autonomous-robots, robot-control-systems

China’s robot dog sprints 328 feet in 16.33 seconds, breaks record
China’s Zhejiang University announced that its quadruped robot, White Rhino, set a new Guinness World Record by sprinting 100 meters (328 feet) in 16.33 seconds, surpassing the previous record of 19.87 seconds held by South Korea’s Hound robot. The run took place in Hangzhou and marks a significant advancement in robotic speed, narrowing the gap between machine and human sprint performance (Usain Bolt’s human record is 9.58 seconds). This achievement demonstrates the robot’s explosive power, speed, stability, and precise control during rapid movement. White Rhino was developed through a collaborative effort involving Zhejiang University’s Center for X-Mechanics, School of Aeronautics and Astronautics, and the Hangzhou Global Scientific and Technological Innovation Center. The design process employed a “robot forward design” approach, using comprehensive dynamics simulations and multi-objective optimization to simultaneously refine geometry, motor specifications, and reduction systems. The robot features high-power-density joint actuators.
Tags: robot, quadruped-robot, robotics, actuators, control-algorithms, reinforcement-learning, mechanical-design

Humanoid robots Adam and Adam-U display lifelike AI movement
At the World Artificial Intelligence Conference 2025 in Shanghai, Chinese robotics company PNDbotics unveiled two advanced humanoid robots, Adam and Adam-U, showcasing significant strides in AI-driven robotics. Adam is a full-sized, 1.6-meter-tall, 132-pound humanoid robot designed for high agility and precision, featuring 44 degrees of freedom and powered by deep reinforcement learning (DRL) and imitation-learning algorithms. It boasts patented quasi-direct drive actuators that enable smooth, human-like movements, including balanced posture and deft manipulation, even without visual input. Adam’s modular, biomimetic design and real-time control system allow it to perform complex tasks dynamically, such as playing musical instruments and dancing. Adam-U, developed in partnership with Noitom Robotics and Inspire Robots, serves as a high-precision, stationary data acquisition platform with 31 degrees of freedom. It integrates advanced motion capture technology, including Noitom’s PNLink suit and Inspire’s dexterous robotic hand.
Tags: robot, humanoid-robot, AI, motion-capture, robotics-innovation, reinforcement-learning, imitation-learning

Oli: LimX’s new humanoid robot masters gym, warehouse, dance floor
LimX Dynamics, a Chinese robotics company, has unveiled its full-sized humanoid robot named LimX Oli, designed to advance embodied AI and automation in manufacturing, warehousing, and research. Available in three variants—Lite, EDU, and Super—starting at about $21,800, Oli features a modular arm system with interchangeable attachments such as standard hands, precision grippers, and dexterous robotic hands. This modularity allows the robot to perform a wide range of tasks across different environments, from lifting dumbbells in a gym to sorting items in a warehouse and even performing Chinese kung fu and dancing, showcasing its strength, agility, balance, and full-body disturbance recovery capabilities. Standing 1.65 meters tall with 31 degrees of freedom, Oli is tailored for AI researchers, robotics engineers, and system integrators, offering an open SDK that provides full access to sensor data, joint control, and task scheduling. This flexible hardware-software design and scalable development toolchain make it a powerful platform for embodied-AI research and development.
Tags: robot, humanoid-robot, AI-robotics, modular-robotics, warehouse-automation, reinforcement-learning, embodied-intelligence

China’s humanoid robot stuns by opening car door in a 'world-first'
AiMOGA Robotics has achieved a significant breakthrough with its humanoid robot, Mornine, which autonomously opened a car door inside a functioning Chery dealership in China—marking a world-first in embodied AI. Unlike scripted or teleoperated robots, Mornine used only onboard sensors, full-body motion control, and reinforcement learning to identify the door handle, adjust its posture, and apply coordinated force to open the door without any human input. This task, performed in a live commercial setting, demonstrates advanced autonomy and a shift from simulation-based robotics to real-world service applications. Mornine’s sophisticated sensor suite includes 3D LiDAR, depth and wide-angle cameras, and a visual-language model, enabling real-time perception and continuous learning through a cloud-based training loop. The robot was not explicitly programmed to recognize door handles but learned through millions of simulated cycles, with the learned model transferred to real-world operation via Sim2Real methods. Mornine is currently deployed in multiple Chery 4S dealerships.
Tags: robotics, humanoid-robot, autonomous-robots, AI-robotics, service-robots, reinforcement-learning, sensor-technology

China’s humanoid robot achieves human-like motion with 31 joints
Chinese robotics company PND Robotics, in collaboration with Noitom Robotics and Inspire Robots, has introduced the Adam-U humanoid robot platform, which features 31 degrees of freedom (DOF) enabling human-like motion. The robot includes a 2-DOF head, 6-DOF dexterous hands, a 3-DOF waist with a braking system for safety, and a binocular vision system that mimics human sight. Standing adjustable between 1.35 to 1.77 meters and weighing 61 kilograms, Adam-U cannot walk as it uses a stationary platform instead of legs. It is designed for precise, flexible operation in dynamic environments and is particularly suited for reinforcement and imitation learning, making it a valuable tool for AI researchers, robotics engineers, and academic institutions. The Adam-U platform integrates hardware and software into a comprehensive ecosystem, including Noitom’s PNLink full-body wired inertial motion capture suit and Inspire Robots’ RH56E2 tactile dexterous hand.
Tags: robotics, humanoid-robot, motion-capture, artificial-intelligence, machine-learning, reinforcement-learning, data-acquisition

EngineAI raises nearly $140M to develop legged, humanoid robots - The Robot Report
EngineAI, a Shenzhen-based robotics company, has raised nearly $140 million (RMB 1 billion) through its pre-A++ and A1 funding rounds to advance the development and commercialization of legged humanoid robots. The company plans to use the capital to scale trial production, expand its workforce fivefold, and diversify its product lines, focusing on bipedal and full humanoid robots. EngineAI’s technology combines proprietary joint modules that deliver high power, torque, and precision with a hybrid control system integrating traditional controls and reinforcement learning (RL), enabling lifelike, dynamic movements such as complex dances and sprinting with millimeter-level accuracy. EngineAI aims to penetrate the growing global humanoid robotics market, projected by various analysts to reach anywhere from $15 billion by 2030 to $5 trillion by 2050, driven by demand in manufacturing, logistics, and services. The company employs an “open-source hardware + ecosystem profit-sharing” model to accelerate market adoption through strategic partnerships.
Tags: robotics, humanoid-robots, reinforcement-learning, AI-robotics, robot-hardware, robot-software, robotics-market

Robot Adam grooves on keytar at China’s futuristic music festival
The article highlights the debut of Adam, a full-sized humanoid robot developed by PNDbotics, performing as a keytar player alongside Chinese musician Hu Yutong’s band at the VOYAGEX Music Festival in Changchun, China, on July 12, 2025. Adam impressed the audience with fluid, human-like movements and precise musical timing, showcasing a seamless integration of robotics and live performance art. Standing 1.6 meters tall and weighing 60 kilograms, Adam derives its agility and control from 25 patented quasi-direct drive (QDD) PND actuators with advanced force control, enabling smooth, coordinated motions that closely mimic human dexterity. Powered by a proprietary reinforcement learning algorithm and supported by a robust control system featuring an Intel i7-based unit, Adam demonstrates sophisticated real-time coordination across its limbs and joints. The robot’s modular design enhances its versatility, maintainability, and adaptability to dynamic environments, including congested or uneven terrain.
Tags: robot, humanoid-robot, robotics, artificial-intelligence, reinforcement-learning, actuators, robot-control-systems

New quadruped robot climbs vertically 50 times faster than rivals
Researchers at the University of Tokyo’s Jouhou System Kougaku Laboratory (JSK) have developed KLEIYN, a quadruped robot capable of climbing vertical walls up to 50 times faster than previous robots. Unlike other climbing robots that rely on grippers or claws, KLEIYN uses a chimney climbing technique, pressing its feet against two opposing walls for support. Its flexible waist joint allows adaptation to varying wall widths, particularly narrow gaps. The robot weighs about 40 pounds (18 kg), measures 2.5 feet (76 cm) in length, and features 13 joints powered by quasi-direct-drive motors for precise movement. KLEIYN’s climbing ability is enhanced through machine learning, specifically Reinforcement Learning combined with a novel Contact-Guided Curriculum Learning method, enabling it to transition smoothly from flat terrain to vertical surfaces. In tests, KLEIYN successfully climbed walls spaced between 31.5 inches (80 cm) and 39.4 inches (100 cm) apart.
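A curriculum of this flavor can be caricatured as a simple schedule that tilts the training walls toward vertical only while the policy keeps succeeding; the thresholds and step size below are invented for illustration and are not the paper's actual contact-guided criteria.

```python
def curriculum_wall_angle(success_rate, current_deg, step_deg=5.0):
    """Curriculum sketch: start on flat ground (0 deg) and tilt the training
    walls toward vertical (90 deg) only while the policy keeps succeeding;
    back off when it starts failing."""
    if success_rate > 0.8:
        return min(current_deg + step_deg, 90.0)   # harder: steeper walls
    if success_rate < 0.4:
        return max(current_deg - step_deg, 0.0)    # easier: relax the incline
    return current_deg                              # consolidate at this level

angle = 0.0
for success in [0.9, 0.9, 0.85, 0.5, 0.9]:          # mock evaluation results
    angle = curriculum_wall_angle(success, angle)
print(angle)  # 20.0
```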
Tags: robot, quadruped-robot, machine-learning, reinforcement-learning, climbing-robot, robotics-innovation, autonomous-robots

Swiss robot dog can now pick up and throw a ball accurately like humans
ETH Zurich’s robotic dog ANYmal, originally designed for autonomous operation in challenging environments, has been enhanced with a custom arm and gripper, enabling it to pick up and throw objects with human-like accuracy. The robot’s advanced actuators and integrated sensors allow it to navigate complex terrain while maintaining stability and situational awareness. Unlike traditional factory robots, ANYmal is built to handle unpredictable outdoor conditions, making it suitable for tasks such as industrial inspection, disaster response, and exploration. The research team, led by Fabian Jenelten, trained ANYmal using reinforcement learning within a highly realistic virtual environment that simulated real-world physics. This approach, known as sim-to-real transfer, allowed the robot to practice millions of throws safely and ensured its skills transferred effectively to real-world scenarios. In testing, ANYmal successfully picked up and threw various objects—including balls, bottles, and fruit—across different surfaces and environmental challenges, such as wind and uneven ground, demonstrating adaptability and precise control without pre-programmed steps.
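Sim-to-real training of this kind typically leans on domain randomization, sketched below: each simulated episode draws new physics so the throwing skill cannot overfit a single world. The parameter ranges and the `env.reset(physics)` hook are assumptions for illustration, not ETH Zurich's published values.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_episode_physics():
    """Draw randomized physics for one simulated throwing episode."""
    return {
        "object_mass_kg": rng.uniform(0.05, 1.0),     # ball vs. full bottle
        "friction": rng.uniform(0.4, 1.2),            # gripper/object contact
        "wind_mps": rng.normal(0.0, 1.5, size=3),     # gusts in x, y, z
        "ground_tilt_deg": rng.uniform(-5.0, 5.0),    # uneven terrain
        "motor_delay_ms": rng.uniform(0.0, 20.0),     # actuation latency
    }

for episode in range(3):
    physics = sample_episode_physics()
    # env.reset(physics)  <- hypothetical simulator hook
    print(episode, round(physics["object_mass_kg"], 3))
```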
Tags: robotics, autonomous-robots, reinforcement-learning, legged-robots, robot-manipulation, sim-to-real-transfer, robot-perception

NBC’s AGT pushes Spot to perform under pressure
Boston Dynamics showcased its Spot quadruped robots on NBC’s America’s Got Talent (AGT), performing a live, choreographed dance routine to Queen’s “Don’t Stop Me Now.” Five Spots danced synchronously, using their robot arms to “lip-sync” Freddie Mercury’s vocals, impressing all four AGT judges who voted to advance the act. This high-profile appearance was both an entertainment milestone and a rigorous technical stress test for the robots and engineering team. The performance combined autonomous dancing via proprietary choreography software with teleoperated interactions, pushing Spot’s capabilities with aggressive moves like high-speed spins and one-legged balancing. These advanced maneuvers, enabled by recent improvements in reinforcement learning and dynamic behavior modeling, also enhance Spot’s real-world applications, such as maintaining balance on slippery factory floors. The decision to bring Spot to AGT followed successful live performances at the 2024 Calgary Stampede, which built confidence in managing the technical and logistical challenges of a live broadcast.
Tags: robotics, Boston-Dynamics, Spot-robot, humanoid-robots, robot-performance, autonomous-robots, reinforcement-learning

Robot Talk Episode 126 – Why are we building humanoid robots? - Robohub
The article summarizes a special live episode of the Robot Talk podcast recorded at Imperial College London during the Great Exhibition Road Festival. The discussion centers on the motivations and implications behind building humanoid robots—machines designed to look and act like humans. The episode explores why humanoid robots captivate and sometimes unsettle us, questioning whether this fascination stems from vanity or if these robots could serve meaningful roles in future society. The conversation features three experts: Ben Russell, Curator of Mechanical Engineering at the Science Museum, Maryam Banitalebi Dehkordi, Senior Lecturer in Robotics and AI at the University of Hertfordshire, and Petar Kormushev, Director of the Robot Intelligence Lab at Imperial College London. Each brings a unique perspective, from historical and cultural insights to technical expertise in robotics, AI, and machine learning. Their dialogue highlights the rapid advancements in humanoid robotics and the ongoing research aimed at creating adaptable, autonomous robots capable of learning and functioning in dynamic environments. The episode underscores the multidisciplinary nature of humanoid robotics research.
Tags: robotics, humanoid-robots, artificial-intelligence, autonomous-robots, machine-learning, reinforcement-learning, robot-intelligence

Sweater-wearing humanoid robot gets brain upgrade to clean, cook solo
1X Technologies has introduced Redwood, an advanced AI model powering its humanoid robot NEO, designed to autonomously perform complex household tasks such as laundry, door answering, and home navigation. Redwood is a 160 million-parameter vision-language model that integrates perception, locomotion, and control into a unified system running onboard NEO Gamma’s embedded GPU. This integration enables full-body coordination, allowing NEO to simultaneously control arms, legs, pelvis, and walking commands, which enhances its ability to brace against surfaces, handle higher payloads, and manipulate objects bi-manually. Redwood’s training on diverse real-world data, including both successful and failed task demonstrations, equips NEO with strong generalization capabilities to adapt to unfamiliar objects and task variations, improving robustness and autonomy even in offline or low-connectivity environments. Complementing Redwood, 1X Technologies has developed a comprehensive Reinforcement Learning (RL) controller that expands NEO’s mobility and dexterity for navigating real home environments. This controller supports fluid whole-body movement.
Tags: robot, humanoid-robot, AI-model, robotics-autonomy, motion-control, mobile-manipulation, reinforcement-learning

Chinese firm eases humanoid, legged robot development with new suite
EngineAI Robotics, a Shenzhen-based Chinese firm, has launched EngineAI RL Workspace, an open-source, modular reinforcement learning platform tailored specifically for legged robotics development. This comprehensive suite includes dual frameworks—a training code repository and a deployment code repository—that together provide an end-to-end solution from algorithm training to real-world application. The platform is designed to enhance development efficiency through reusable logic structures, a unified single-algorithm executor for both training and inference, and decoupled algorithms and environments that enable seamless iteration without interface changes. The EngineAI RL Workspace integrates the entire development pipeline with four core components: environment modules, algorithm engines, shared toolkits, and integration layers, each independently encapsulated to facilitate multi-person collaboration and reduce communication overhead. Additional features include dynamic recording systems for capturing training and inference videos, intelligent version management to maintain experiment consistency, and detailed user guides to support rapid onboarding. At CES 2025, EngineAI showcased humanoid robots like the SE01.
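The decoupling described can be pictured with a small registry pattern in which environments and algorithms plug into one executor that serves both training and inference. This is a hypothetical sketch of the design idea, not the EngineAI RL Workspace API; all names are invented.

```python
# Hypothetical sketch: algorithms and environments register against a shared
# interface, and one executor drives both training and inference, so swapping
# either side requires no interface changes.
ALGORITHMS, ENVIRONMENTS = {}, {}

def register(registry, name):
    def deco(cls):
        registry[name] = cls
        return cls
    return deco

@register(ENVIRONMENTS, "flat_walk")
class FlatWalkEnv:
    def reset(self): return [0.0]
    def step(self, action): return [0.0], 1.0, True   # obs, reward, done

@register(ALGORITHMS, "ppo")
class PPO:
    def act(self, obs): return [0.0]
    def update(self, transition): pass                # no-op in this sketch

def run(algo_name, env_name, train=True, episodes=1):
    """Single executor used for both training and inference."""
    algo, env = ALGORITHMS[algo_name](), ENVIRONMENTS[env_name]()
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            action = algo.act(obs)
            obs, reward, done = env.step(action)
            if train:
                algo.update((obs, action, reward))

run("ppo", "flat_walk")
```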
Tags: robotics, humanoid-robots, reinforcement-learning, legged-robots, robot-development, AI-in-robotics, modular-robotics-platform

Chinese firm achieves agile, human-like walking with AI control
Chinese robotics startup EngineAI has developed an advanced AI-driven control system that enables humanoid robots to walk with straight legs, closely mimicking natural human gait. This innovative approach integrates human gait data, adversarial learning, and real-world feedback to refine robot movement across diverse environments, aiming to achieve more energy-efficient, stable, and agile locomotion. EngineAI’s lightweight humanoid platform, the PM01, has demonstrated impressive agility, including successfully performing a frontflip and executing complex dance moves from the film Kung Fu Hustle, showcasing the system’s potential for fluid, human-like motion. The PM01 robot features a compact, lightweight aluminum alloy exoskeleton with 24 degrees of freedom and a bionic structure that supports dynamic movement at speeds up to 2 meters per second. It incorporates advanced hardware such as an Intel RealSense depth camera for visual perception and an Intel N97 processor paired with an NVIDIA Jetson Orin CPU for high-performance processing and neural network training. This combination allows the PM01 to interact effectively with its environment and perform intricate tasks, making it a promising platform for research into human-robot interaction and agile robotic assistants. EngineAI’s work parallels other Chinese developments like the humanoid robot Adam, which uses reinforcement learning and imitation of human gait to achieve lifelike locomotion. Unlike traditional control methods such as Model Predictive Control used by robots like Boston Dynamics’ Atlas, EngineAI’s AI-based framework emphasizes adaptability through real-world learning, addressing challenges in unpredictable environments. While still in the research phase, these advancements mark significant progress toward next-generation humanoid robots capable of natural, efficient, and versatile movement.
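EngineAI’s mix of human gait data and adversarial learning is reminiscent of adversarial motion priors, where a discriminator scores how human-like the robot’s transitions look and its output becomes a style reward added to the task reward. The sketch below uses a tiny untrained linear discriminator purely for illustration; it is not EngineAI’s framework.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(10, 1)) * 0.1   # tiny linear "discriminator" (untrained)

def discriminator(transition):
    """Probability that a gait transition came from human motion-capture data."""
    logit = (transition @ W).item()
    return 1.0 / (1.0 + np.exp(-logit))

def style_reward(transition):
    """GAN-style reward: high when the robot's motion fools the discriminator."""
    return -np.log(max(1.0 - discriminator(transition), 1e-6))

robot_transition = rng.normal(size=10)               # placeholder rollout features
total = 0.5 + 1.0 * style_reward(robot_transition)   # task term + style term
print(round(total, 3))
```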
Tags: robot, humanoid-robot, AI-control, gait-control, reinforcement-learning, robotics-platform, energy-efficient-robotics

Congratulations to the #AAMAS2025 best paper, best demo, and distinguished dissertation award winners - Robohub
The 24th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2025), held from May 19-23 in Detroit, recognized outstanding contributions in the field with awards for best paper, best demo, and distinguished dissertation. The Best Paper Award went to the team behind "Soft Condorcet Optimization for Ranking of General Agents," led by Marc Lanctot and colleagues. Several other papers were finalists, covering topics such as commitments in BDI agents, curiosity-driven partner selection, reinforcement learning for vehicle-to-building charging, and drone delivery systems. The Best Student Paper Award was given to works on decentralized planning using probabilistic hyperproperties and large language models for virtual human gesture selection. In addition, the Blue Sky Ideas Track honored François Olivier and Zied Bouraoui for their neurosymbolic approach to embodied cognition, while the Best Demo Award recognized a project on serious games for ethical preference elicitation by Jayati Deshmukh and team. The Victor Lesser Distinguished Dissertation Award, which highlights originality, impact, and quality in autonomous agents research, was awarded to Jannik Peters for his thesis on proportionality in selecting committees, budgets, and clusters. Lily Xu was the runner-up for her dissertation on AI decision-making for planetary health under conditions of low-quality data. These awards underscore the innovative research advancing autonomous agents and multiagent systems.
Tags: robot, autonomous-agents, multiagent-systems, drones, reinforcement-learning, energy-storage, AI

Tesla’s Optimus robot takes out trash, vacuums, cleans like a pro
Tags: robot, Tesla, Optimus, AI, automation, humanoid-robot, reinforcement-learning

Watch humanoid robots clash in a tug of war, pull cart, open doors
Tags: robot, humanoid, reinforcement-learning, control-system, force-aware, loco-manipulation, CMU

Robot Talk Episode 121 – Adaptable robots for the home, with Lerrel Pinto
Tags: robot, machine-learning, adaptable-robots, robotics, artificial-intelligence, autonomous-machines, reinforcement-learning

Shlomo Zilberstein wins the 2025 ACM/SIGAI Autonomous Agents Research Award
Tags: robot, autonomous-agents, multi-agent-systems, decision-making, reinforcement-learning, research-award, AI