RIEM News

Articles tagged with "AI-control"

  • Figure robot gets AI brain that enables human-like full-body control

    Figure’s humanoid robot has been enhanced with Helix 02, an advanced AI brain that enables unified full-body control, integrating walking, manipulation, and balance through a single neural network. Unlike previous models limited to upper-body tasks, Helix 02 processes raw sensor data—including vision, touch, and proprioception—to coordinate all actuators seamlessly. This system replaces traditional hand-coded controls with learned, human-like motion, allowing the robot to perform complex, continuous tasks autonomously. A key demonstration involved the robot unloading and reloading a dishwasher across a kitchen without resets or human intervention, showcasing its ability to maintain delicate grasps, coordinate both arms, and recover from errors over extended periods. Helix 02 builds on Figure’s earlier Helix AI by introducing System 0, a foundational control layer operating at kilohertz rates to manage balance and coordination, complementing System 1 (full-body motion translation) and System 2 (high-level reasoning and language).

    robot, humanoid-robot, AI-control, full-body-control, loco-manipulation, neural-network, robotics-autonomy
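
A minimal way to picture the three-layer stack is as nested control loops ticking at different rates. In this Python sketch, only System 0's kilohertz-class rate comes from the article; the relative rates chosen for Systems 1 and 2 are illustrative assumptions:

```python
# Toy layered controller: the fast reflex layer runs every tick, while the
# slower layers update motion targets and task-level goals less frequently.
# Only System 0's ~kHz rate is from the article; the other rates are assumed.
def run_layers(ticks: int = 1000) -> dict:
    updates = {"system2_reasoning": 0, "system1_motion": 0, "system0_balance": 0}
    for t in range(ticks):
        if t % 100 == 0:
            updates["system2_reasoning"] += 1  # slow: task/language level
        if t % 10 == 0:
            updates["system1_motion"] += 1     # mid: full-body motion targets
        updates["system0_balance"] += 1        # every tick: balance/coordination
    return updates

counts = run_layers()
print(counts)
```

The point of the hierarchy is that balance never waits on reasoning: the inner loop runs two orders of magnitude more often than the task layer.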
  • 3D magnetic field ‘breakthrough’ for fusion plasma control wins US award

    Three researchers from the US Department of Energy’s Princeton Plasma Physics Laboratory (PPPL)—Seong-Moo Yang, SangKyeun Kim, and Ricardo Shousha—have been awarded the 2025 Kaul Foundation Prize for their pioneering work in optimizing three-dimensional (3D) magnetic fields within tokamaks to control edge instabilities in fusion plasma. Their approach uses real-time artificial intelligence (AI) adjustments to proactively prevent plasma instabilities, such as tearing mode disruptions, which can damage the tokamak and halt the fusion process. This marks a significant advancement over traditional methods that react only after instabilities occur. The team’s research highlights the advantages of 3D magnetic fields over conventional two-dimensional fields for maintaining plasma stability. Due to the complexity of calculating and optimizing these fields, they employed machine learning to forecast potential instabilities and make micro-adjustments in real time. This AI-driven method was validated through international collaboration, incorporating experimental data from South Korea’s KSTAR and the DIII-D tokamak in the United States.

    energy, fusion-energy, plasma-physics, tokamak, magnetic-fields, AI-control, machine-learning
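
The difference between reacting to an instability and pre-empting it with forecast-driven micro-adjustments can be shown with a toy feedback loop. All quantities below are invented for illustration; this is not PPPL's controller:

```python
def simulate(proactive: bool, steps: int = 200) -> int:
    """Count disruptions of a toy instability metric over a fixed run."""
    level = 0.0        # instability proxy; a "disruption" occurs at 1.0
    disruptions = 0
    for _ in range(steps):
        level += 0.05                  # steady drive toward instability
        forecast = level + 0.05        # one-step-ahead ML-style prediction
        if proactive and forecast >= 0.8:
            level -= 0.2               # small pre-emptive 3D-field adjustment
        if level >= 1.0:
            disruptions += 1           # reactive case: act only after the event
            level = 0.0
    return disruptions

print(simulate(proactive=False), simulate(proactive=True))
```

In the reactive run the metric repeatedly crosses the disruption threshold, while the proactive run's small forecast-triggered corrections keep it below the limit for the entire simulation.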
  • Video: NASA's cute cube robot flies autonomously for first time on ISS

    Stanford researchers have successfully demonstrated the first AI-based autonomous flight of Astrobee, a cube-shaped, fan-powered robot aboard the International Space Station (ISS). Astrobee is designed to navigate the ISS’s confined, equipment-filled corridors to perform tasks such as leak detection and supply delivery, potentially reducing astronauts’ workload. The team developed a novel route-planning system using sequential convex programming combined with machine learning, which enables the robot to generate safe and efficient trajectories more quickly by leveraging patterns learned from thousands of previous path solutions. This AI-assisted control marks a significant advancement in space robotics, where limited onboard computing resources and stringent safety requirements have traditionally constrained autonomy. During the ISS experiment, the AI system operated autonomously for four hours with minimal astronaut intervention, under remote supervision. The researchers compared conventional “cold start” planning with the new AI-assisted “warm start” approach, finding that the latter reduced trajectory planning time by 50–60%, especially in complex, cluttered environments. Multiple safety measures ensured safe operation throughout the experiment.

    robotics, autonomous-robots, AI-control, space-robotics, NASA, ISS-technology, machine-learning
  • AI takes control in orbit, speeds ISS flying robot tasks by 60%

    Stanford researchers have successfully demonstrated the first machine-learning-based control system operating aboard the International Space Station (ISS), enabling the free-flying Astrobee robot to navigate the station’s complex interior 50 to 60% faster than traditional methods. The AI system uses a trained model to provide an informed initial guess ("warm start") for motion planning, which is then refined through optimization while maintaining strict safety constraints. This approach addresses the challenges posed by the ISS’s dense, cluttered environment and the limited computational resources available on space hardware. Tested initially on a microgravity-simulating platform at NASA Ames and then on the ISS itself, the AI-powered system allowed astronauts to step back from direct control, with commands issued remotely from NASA’s Johnson Space Center. The success of these tests has elevated the technology to NASA’s Technology Readiness Level 5, indicating operational viability in space. Researchers emphasize that such autonomy will be critical for future space missions, especially as robots operate farther from Earth and require minimal human intervention.

    robotics, space-robotics, AI-control, autonomous-navigation, International-Space-Station, machine-learning, robotic-motion-planning
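
The "cold start" versus "warm start" idea — seed the optimizer with a good initial guess instead of starting from scratch — can be sketched with a toy 1-D trajectory-smoothing problem solved by gradient descent. Nothing here is the actual Astrobee planner (which uses sequential convex programming); the straight-line guess simply stands in for the learned model's prediction:

```python
import numpy as np

def smoothness_cost(path):
    # Objective: sum of squared discrete accelerations along the path.
    return float(np.sum(np.diff(path, 2) ** 2))

def grad(path):
    # Analytic gradient of smoothness_cost; endpoints are fixed boundaries.
    d2 = np.diff(path, 2)
    g = np.zeros_like(path)
    g[:-2] += 2 * d2
    g[1:-1] += -4 * d2
    g[2:] += 2 * d2
    g[0] = g[-1] = 0.0
    return g

def optimize(path, lr=0.05, tol=1e-6, max_iters=5000):
    """Refine a trajectory guess by gradient descent; return (path, iterations)."""
    path = path.copy()
    for it in range(max_iters):
        g = grad(path)
        if np.linalg.norm(g) < tol:
            return path, it
        path -= lr * g
    return path, max_iters

n = 12
cold = np.zeros(n); cold[-1] = 1.0          # naive guess: sit at start, jump at the end
warm = np.linspace(0.0, 1.0, n)             # good guess, standing in for the learned model
_, cold_iters = optimize(cold)
_, warm_iters = optimize(warm)
print(cold_iters, warm_iters)
```

The warm start lands essentially at the optimum and finishes immediately, while the cold start spends its entire iteration budget refining a poor guess — the same effect, in miniature, as the reported 50–60% planning-time reduction.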
  • Bedrock Robotics announces autonomous excavation milestone - The Robot Report

    Bedrock Robotics LLC has achieved a significant milestone in autonomous excavation by deploying its AI-driven excavator technology in partnership with Sundt Construction Inc. on a large-scale project preparing a 130-acre manufacturing facility. The autonomous system has moved over 65,000 cubic yards of earth by loading human-operated dump trucks using workflows identical to manual operations. This advancement addresses a critical industry challenge: retaining skilled operators who often avoid repetitive excavation tasks. By automating the monotonous truck-loading process, Bedrock’s technology allows experienced workers to focus on more specialized and creative construction tasks, enhancing overall project efficiency. Founded in early 2024, Bedrock Robotics rapidly developed its core product, the Bedrock Operator—an AI controller trained through machine learning to operate excavators ranging from 20 to 80 tons. The company’s leadership includes former Waymo engineers who shifted from traditional robotics methods to data-driven approaches, leveraging large-scale datasets to emulate expert operators’ actions. This approach parallels developments in large language models.

    robotics, autonomous-excavation, AI-control, construction-technology, machine-learning, heavy-machinery, automation
  • Rethinking how robots move: Light and AI drive precise motion in soft robotic arm - Robohub

    Researchers at Rice University have developed a novel soft robotic arm that can perform complex tasks such as navigating obstacles or hitting a ball, controlled remotely by laser beams without any onboard electronics or wiring. The arm is made from azobenzene liquid crystal elastomer, a polymer that responds to light by shrinking under blue laser illumination and relaxing in the dark, enabling rapid and reversible shape changes. This material’s fast relaxation time and responsiveness to safer, longer wavelengths of light allow real-time, reconfigurable control, a significant improvement over previous light-sensitive materials that required harmful UV light or slow reset times. The robotic system integrates a spatial light modulator to split a single laser into multiple adjustable beamlets, each targeting different parts of the arm to induce bending or contraction with high precision, akin to the flexible tentacles of an octopus. A neural network was trained to predict the necessary light patterns to achieve specific movements, simplifying the control of the arm and enabling virtually infinite degrees of freedom beyond traditional robots with fixed joints.

    robotics, soft-robotics, smart-materials, AI-control, light-responsive-materials, machine-learning, azobenzene-elastomer
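
The idea of learning an inverse map from desired motion to light pattern can be sketched with a linear stand-in for both the arm and the neural network. The real arm is nonlinear and its intensities are non-negative; the matrix `W`, the beamlet count, and the bend parameterization below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed linear forward model: 4 beamlet intensities produce 2 bend angles.
W = rng.normal(size=(2, 4))

def forward(intensities):
    return W @ intensities            # bend angles produced by a light pattern

# "Training data": random light patterns and the bends they produce.
X = rng.uniform(0, 1, size=(200, 4))  # beamlet intensity patterns
Y = X @ W.T                           # resulting bend angles

# Fit the inverse map (bend -> intensities) by least squares, standing in
# for the trained neural network.
inv_map, *_ = np.linalg.lstsq(Y, X, rcond=None)

target_bend = np.array([0.3, -0.1])
pattern = target_bend @ inv_map       # light pattern predicted for the target
achieved = forward(pattern)
print(np.allclose(achieved, target_bend, atol=1e-6))
```

Because the bend space is lower-dimensional than the intensity space, many light patterns realize the same bend; the least-squares fit picks one, just as the trained network picks one pattern per commanded movement.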
  • Korean humanoid nails Michael Jackson’s Moonwalk dance with AI

    Researchers at the Korea Advanced Institute of Science and Technology (KAIST) have developed a humanoid robot capable of performing advanced lower-body movements, including Michael Jackson’s iconic Moonwalk, high-speed running up to 12 km/h, and complex gait patterns such as duck walking and straight-leg bounds. A demonstration video showcases the robot’s stability and adaptability, highlighting its ability to recover from external pushes and navigate obstacles without relying on vision-based sensors. Instead, the robot uses internal sensing and AI-driven control trained via reinforcement learning to maintain balance and traverse uneven terrain, including stairs and debris. Standing 165 cm tall and weighing 75 kg, the robot’s core mechanical components—motors, reducers, and drivers—were developed in-house, ensuring technological independence. The research team successfully bridged the simulation-to-reality gap, enabling reliable real-world performance. The work will be presented at upcoming robotics conferences CoRL 2025 and Humanoids 2025. Future plans include enhancing the robot’s capabilities.

    robotics, humanoid-robot, AI-control, robot-locomotion, KAIST, robotics-research, robot-stability
  • AI robot arm builds meals and helps users with limited mobility

    Engineers at Virginia Tech have developed an advanced robotic arm designed to assist people with limited mobility in performing everyday tasks, such as preparing meals. The system features adaptive grippers that combine rigid mechanics with soft, switchable adhesives, enabling the robot to handle a wide range of objects—from heavy items like metal pans to delicate ingredients like sprinkles. This innovation addresses the challenge that traditional robots face when gripping irregular or fragile items, by allowing the grippers to switch between strong adhesion and easy release. The robotic arm is controlled via a joystick-style interface, allowing users to guide the robot’s movements while artificial intelligence interprets and completes the tasks. This collaboration was demonstrated through complex activities like assembling a pizza, which involves handling diverse textures and shapes, and building an ice cream sundae with small, delicate toppings. Funded by over $600,000 from the National Science Foundation, the project aims to enhance independence for people with disabilities by making robotic assistance more intuitive and closely aligned with natural human motions.

    robotics, assistive-technology, robotic-arm, adaptive-grippers, AI-control, soft-robotics, disability-aid
  • Chinese firm achieves agile, human-like walking with AI control

    Chinese robotics startup EngineAI has developed an advanced AI-driven control system that enables humanoid robots to walk with straight legs, closely mimicking natural human gait. This innovative approach integrates human gait data, adversarial learning, and real-world feedback to refine robot movement across diverse environments, aiming to achieve more energy-efficient, stable, and agile locomotion. EngineAI’s lightweight humanoid platform, the PM01, has demonstrated impressive agility, including successfully performing a frontflip and executing complex dance moves from the film Kung Fu Hustle, showcasing the system’s potential for fluid, human-like motion. The PM01 robot features a compact, lightweight aluminum alloy exoskeleton with 24 degrees of freedom and a bionic structure that supports dynamic movement at speeds up to 2 meters per second. It incorporates advanced hardware such as an Intel RealSense depth camera for visual perception and an Intel N97 processor paired with an NVIDIA Jetson Orin CPU for high-performance processing and neural network training. This combination allows the PM01 to interact effectively with its environment and perform intricate tasks, making it a promising platform for research into human-robot interaction and agile robotic assistants. EngineAI’s work parallels other Chinese developments like the humanoid robot Adam, which uses reinforcement learning and imitation of human gait to achieve lifelike locomotion. Unlike traditional control methods such as Model Predictive Control used by robots like Boston Dynamics’ Atlas, EngineAI’s AI-based framework emphasizes adaptability through real-world learning, addressing challenges in unpredictable environments. While still in the research phase, these advancements mark significant progress toward next-generation humanoid robots capable of natural, efficient, and versatile movement.

    robot, humanoid-robot, AI-control, gait-control, reinforcement-learning, robotics-platform, energy-efficient-robotics