RIEM News

Articles tagged with "robot-navigation"

  • Coco Robotics taps UCLA professor to lead new physical AI research lab

    Coco Robotics, a startup specializing in last-mile delivery robots, has established a new physical AI research lab led by UCLA professor Bolei Zhou, who has also joined the company as chief AI scientist. The move aims to leverage the company’s extensive data, spanning millions of miles collected over five years in complex urban environments, to advance autonomous operation of its delivery bots and reduce delivery costs. Coco Robotics co-founder and CEO Zach Rash emphasized that the company now has sufficient data scale to accelerate research in physical AI, particularly in robot navigation and reinforcement learning, areas where Zhou is a leading expert. The new research lab operates independently of Coco Robotics’ partnership with OpenAI, which provides access to language models; the lab instead focuses on the company’s proprietary robot-collected data. Coco Robotics plans to use the insights gained exclusively to enhance its own automation capabilities and improve the efficiency of its local robot models, rather than selling the data. Additionally, the company intends to share relevant research findings with the cities where it operates to help address… A loose code sketch of how such logged runs could feed reinforcement learning follows below.

    robotics, artificial-intelligence, autonomous-delivery, physical-AI, robot-navigation, reinforcement-learning, last-mile-delivery
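
    The Coco Robotics item above centers on reinforcement learning over logged robot miles. Below is a loose, hypothetical sketch of the standard pattern (the Transition schema and LoggedRunBuffer are invented for illustration, not Coco Robotics’ actual pipeline): logged delivery runs become transitions that an offline RL learner samples from, instead of exploring on public streets.

```python
import random
from dataclasses import dataclass

# Hypothetical schema: how logged delivery runs could be replayed for
# offline reinforcement learning. Names (Transition, LoggedRunBuffer)
# are invented for illustration, not Coco Robotics' actual stack.

@dataclass(frozen=True)
class Transition:
    obs: tuple        # sensor summary at time t (placeholder)
    action: int       # discretized drive command (placeholder)
    reward: float     # e.g. route progress minus near-miss penalties
    next_obs: tuple   # sensor summary at time t+1

class LoggedRunBuffer:
    """Accumulates transitions from completed runs; learners sample batches."""
    def __init__(self) -> None:
        self._data: list[Transition] = []

    def ingest(self, run: list[Transition]) -> None:
        self._data.extend(run)

    def sample(self, batch_size: int) -> list[Transition]:
        return random.sample(self._data, min(batch_size, len(self._data)))

buf = LoggedRunBuffer()
buf.ingest([Transition((0.0,), 1, 0.5, (0.1,)),
            Transition((0.1,), 2, -0.2, (0.2,))])
print(len(buf.sample(32)))  # 2: batches are capped by what the logs contain
```
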
  • Robots cut 30% travel time using human-like memory in smart factories

    Researchers at South Korea’s Daegu Gyeongbuk Institute of Science and Technology (DGIST) have developed a new “Physical AI” technology that improves the navigation efficiency of autonomous mobile robots (AMRs) in environments such as logistics centers and smart factories. The technology mimics human-like memory by modeling the social phenomenon of spreading and forgetting information, enabling robots to distinguish relevant, real-time obstacles from outdated, unnecessary data. By forgetting obsolete information, such as obstacles that have already been cleared, the robots avoid unnecessary detours, improving movement efficiency and productivity in complex, dynamic settings. Testing in a simulated logistics center showed significant gains: average travel times fell by up to 30.1% and task throughput rose by 18.0% compared with conventional ROS 2 navigation systems. The technology requires only 2D LiDAR sensors, making it cost-effective and easy to integrate as a plugin into existing ROS 2 navigation stacks without hardware modifications. Beyond industrial applications, the approach holds promise… A minimal code sketch of the forgetting mechanism follows below.

    robots, autonomous-mobile-robots, physical-AI, smart-factories, logistics-automation, robot-navigation, collective-intelligence-algorithm
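
    The DGIST result describes obstacle memory that is reinforced by observation and then decays. A minimal sketch of the forgetting half of that idea, assuming an exponential-decay model over a grid costmap (the class and parameters below are invented for illustration, not DGIST’s plugin), might look like this:

```python
import time
from dataclasses import dataclass, field

# Sketch of decay-based obstacle forgetting (assumed model, not DGIST's code):
# cells gain weight when LiDAR re-detects an obstacle and lose weight
# exponentially when they stop appearing, so cleared obstacles eventually
# vanish from memory and no longer force detours.

@dataclass
class ObstacleMemory:
    half_life_s: float = 10.0      # time for an unseen obstacle's weight to halve
    forget_below: float = 0.05     # weight below which a cell is forgotten
    _weights: dict = field(default_factory=dict)  # cell -> (weight, last update)

    def observe(self, cell: tuple, now: float) -> None:
        """Reinforce a grid cell where the 2D LiDAR currently sees an obstacle."""
        w, _ = self._weights.get(cell, (0.0, now))
        self._weights[cell] = (min(1.0, w + 0.5), now)

    def decay(self, now: float) -> None:
        """Exponentially forget cells that have not been re-observed."""
        kept = {}
        for cell, (w, t) in self._weights.items():
            w *= 0.5 ** ((now - t) / self.half_life_s)
            if w >= self.forget_below:
                kept[cell] = (w, now)  # carry the decayed weight forward
        self._weights = kept

    def blocked_cells(self) -> set:
        """Cells the planner should still treat as occupied."""
        return set(self._weights)

mem = ObstacleMemory()
t0 = time.time()
mem.observe((3, 7), t0)      # a pallet is detected at grid cell (3, 7)
mem.decay(t0 + 60.0)         # one minute later it was never re-observed
print(mem.blocked_cells())   # set(): the stale obstacle no longer forces detours
```

    A real ROS 2 integration would package a rule like this as a costmap layer plugin, matching the article’s claim that no hardware changes are needed.
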
  • Watch: China’s MagicBot humanoid robot pulls 551 pounds with ease

    MagicLab, a Chinese robotics startup, has unveiled its AI-enabled humanoid robot, MagicBot, showcasing its strength by pulling a cart carrying three adults weighing approximately 551 pounds (250 kg). The demonstration video shows the robot pulling progressively heavier loads (176 pounds, 375 pounds, and finally 551 pounds) while its walking speed drops from 1.57 mph to 0.67 mph as the weight increases. MagicBot is a third-generation AI-controlled robot designed primarily for industrial automation but versatile enough to perform domestic tasks, public service roles, and specialized functions such as search and rescue. Equipped with 42 degrees of freedom, advanced sensors including LiDAR, RGB-D and fisheye cameras, and ultrasonic sensors, plus a proprietary navigation algorithm, MagicBot achieves human-like movement and situational awareness. It can carry loads of up to 44 lbs per arm and handle delicate objects with sub-millimeter precision, thanks to high-torque servo actuators.

    robot, humanoid-robot, AI-robotics, industrial-automation, MagicBot, robotic-sensors, robot-navigation
  • Amazon-backed firm unveils shared brains for all types of robots

    Skild AI, a robotics startup backed by Amazon and prominent investors including Jeff Bezos, has unveiled Skild Brain, an artificial intelligence model designed to operate across a wide range of robots, from humanoids to quadrupeds and mobile manipulators. The model lets robots think, navigate, and respond with human-like adaptability, allowing them to perform complex tasks such as climbing stairs, maintaining balance after being pushed or kicked, and handling objects in cluttered environments. Skild Brain is continuously improved through data collected from deployed robots, addressing the scarcity of real-world robotics data by combining simulated scenarios, human-action videos, and live feedback. Unlike existing robotics models that rely heavily on vision-language models (VLMs) trained on vast image and text datasets but lack physical action capabilities, Skild Brain is built specifically to overcome that data scarcity and provide true physical common sense. The founders argue that traditional VLM-based approaches are superficial and insufficient for complex robotic tasks, whereas Skild’s shared-brain approach… A toy sketch of the shared-brain pattern follows below.

    robotics, artificial-intelligence, humanoid-robots, robot-navigation, robot-adaptability, Skild-AI, robotics-foundational-model
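
    The Skild Brain summary describes one policy serving many robot bodies. A toy sketch of that cross-embodiment pattern, in which all names, shapes, and the random linear "policy" are invented for illustration (the article does not disclose Skild AI’s actual architecture), could look like:

```python
import numpy as np

# Assumed pattern: a single policy acts in a normalized observation/action
# space; thin per-embodiment adapters map each robot's sensors and motors
# into and out of it. Everything here is a stand-in, not Skild AI's API.

class SharedBrain:
    """Stand-in for a cross-embodiment policy; here just a random linear map."""
    def __init__(self, obs_dim: int = 64, act_dim: int = 16, seed: int = 0):
        self.w = np.random.default_rng(seed).normal(scale=0.1, size=(act_dim, obs_dim))

    def act(self, obs: np.ndarray) -> np.ndarray:
        return np.tanh(self.w @ obs)  # bounded action in the shared space

class EmbodimentAdapter:
    """Maps one robot's sensors/motors into and out of the shared spaces."""
    def __init__(self, n_sensors: int, n_joints: int,
                 obs_dim: int = 64, act_dim: int = 16, seed: int = 1):
        rng = np.random.default_rng(seed)
        self.enc = rng.normal(size=(obs_dim, n_sensors)) / np.sqrt(n_sensors)
        self.dec = rng.normal(size=(n_joints, act_dim)) / np.sqrt(act_dim)

    def to_obs(self, sensors: np.ndarray) -> np.ndarray:
        return self.enc @ sensors

    def to_motors(self, action: np.ndarray) -> np.ndarray:
        return self.dec @ action

brain = SharedBrain()
humanoid = EmbodimentAdapter(n_sensors=120, n_joints=42)  # e.g. a humanoid
quadruped = EmbodimentAdapter(n_sensors=48, n_joints=12)  # e.g. a quadruped
for body, n in [(humanoid, 120), (quadruped, 48)]:
    motors = body.to_motors(brain.act(body.to_obs(np.zeros(n))))
    print(motors.shape)  # (42,) then (12,): one policy, two bodies
```
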
  • Apera AI updates Apera Forge design and AI training studio - The Robot Report

    Apera AI Inc. has released an updated version of Apera Forge, its web-based, no-code design and AI training studio aimed at simplifying 4D vision-guided robotic projects. The latest update enhances advanced robotic cell design capabilities, supports end-of-arm-tooling (EOAT)-mounted camera configurations, and introduces full simulation and AI training for de-racking applications. These improvements let users simulate and validate complex robotic environments (robot, gripper, camera, part geometry, and cell layout) within minutes, cutting development time from weeks or months to hours. Trained AI models developed in Forge reportedly achieve over 99.9% reliability in object recognition and task execution, with complete vision programs ready for deployment within 24 to 48 hours. Key new features include greater flexibility in cell design, allowing arbitrary positioning of cameras and bins, integration of reference CAD files for accurate visualization, and an Obstacle Autopilot for improved robot navigation and collision avoidance. The platform now supports EOAT… A hypothetical sketch of the kind of cell-layout checks such a studio automates follows below.

    robotics, AI-training, vision-guided-robots, robotic-simulation, industrial-automation, end-of-arm-tooling, robot-navigation
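
    The Forge update emphasizes validating camera and bin placement and avoiding collisions before deployment. Here is a hypothetical sketch of two such layout checks; the Pose type, thresholds, and functions are invented for illustration and are not Apera’s API:

```python
import math
from dataclasses import dataclass

# Invented cell-layout checks (not Apera Forge's actual logic): verify a
# camera is within sensing range of the parts bin, and that a planned
# waypoint keeps a safety margin from a declared obstacle.

@dataclass
class Pose:
    x: float
    y: float
    z: float

def in_range(camera: Pose, target: Pose, max_range_m: float = 2.0) -> bool:
    """Is the target close enough to the camera to be imaged reliably?"""
    return math.dist((camera.x, camera.y, camera.z),
                     (target.x, target.y, target.z)) <= max_range_m

def clear_of(waypoint: Pose, obstacle: Pose, margin_m: float = 0.15) -> bool:
    """Does a planned waypoint keep a safety margin from an obstacle?"""
    return math.dist((waypoint.x, waypoint.y, waypoint.z),
                     (obstacle.x, obstacle.y, obstacle.z)) > margin_m

cam = Pose(0.4, 0.0, 1.2)     # EOAT-mounted camera, arbitrarily placed
bin_ = Pose(0.5, 0.1, 0.0)    # parts bin on the cell floor
post = Pose(0.45, 0.05, 0.6)  # a fixture the robot must avoid

print(in_range(cam, bin_))                     # True: bin is imageable
print(clear_of(Pose(0.45, 0.05, 0.55), post))  # False: waypoint too close
```
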
  • Interview with Amar Halilovic: Explainable AI for robotics - Robohub

    Amar Halilovic, a PhD student at Ulm University in Germany, is conducting research on explainable AI (XAI) for robotics, focusing on how robots can generate explanations of their actions, particularly in navigation, that align with human preferences and expectations. His work involves developing frameworks for environmental explanations, especially in failure scenarios, using black-box and generative methods to produce textual and visual explanations. He also studies how to plan explanation attributes such as timing, representation, and duration, and is currently exploring dynamic selection of explanation strategies based on context and user preferences (a toy sketch of this idea follows below). Halilovic finds it particularly interesting how people interpret robot behavior differently depending on urgency or failure context, and how explanation expectations shift accordingly. Moving forward, he plans to extend his framework to enable real-time adaptation, allowing robots to learn from user feedback and adjust explanations on the fly. He also aims to conduct more user studies to validate the effectiveness of these explanations in real-world human-robot interaction settings. His motivation for studying explainable robot navigation stems from a broader interest in human-machine interaction and the importance of understandable AI for trust and usability. Before his PhD, Amar studied Electrical Engineering and Computer Science in Bosnia and Herzegovina and Sweden. Outside of research, he enjoys traveling and photography and values building a supportive network of mentors and peers for success in doctoral studies. His interdisciplinary approach combines symbolic planning and machine learning to create context-sensitive, explainable robot systems that adapt to diverse human needs.

    robotics, explainable-AI, human-robot-interaction, robot-navigation, AI-research, PhD-research, autonomous-robots
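
    To make the "dynamic selection of explanation strategies" concrete, here is a toy rule-based sketch: the attributes mirror those named in the interview (timing, representation, duration), but the selection rules themselves are invented and are not Halilovic’s framework.

```python
from dataclasses import dataclass

# Toy context-to-explanation mapping (invented rules, for illustration only):
# urgent situations get terse, immediate text; failures get richer,
# post-hoc explanations with visuals.

@dataclass(frozen=True)
class Explanation:
    timing: str          # "immediate" or "post-hoc"
    representation: str  # "textual" or "visual+textual"
    duration: str        # "brief" or "detailed"

def select_explanation(urgent: bool, failed: bool) -> Explanation:
    """Pick explanation attributes from the navigation context."""
    if urgent:   # keep it short when time is scarce
        return Explanation("immediate", "textual", "brief")
    if failed:   # failures warrant richer, after-the-fact explanations
        return Explanation("post-hoc", "visual+textual", "detailed")
    return Explanation("post-hoc", "textual", "brief")

print(select_explanation(urgent=True, failed=False))
print(select_explanation(urgent=False, failed=True))
```
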
  • Congratulations to the #ICRA2025 best paper award winners - Robohub

    The 2025 IEEE International Conference on Robotics and Automation (ICRA), held May 19-23 in Atlanta, USA, announced its best paper award winners and finalists across multiple categories. The awards recognized outstanding research contributions in areas such as robot learning, field and service robotics, human-robot interaction, mechanisms and design, planning and control, and robot perception. Each category featured a winning paper along with several finalists, highlighting cutting-edge advances in robotics. Notable winners include "Robo-DM: Data Management for Large Robot Datasets" by Kaiyuan Chen et al. for robot learning, "PolyTouch: A Robust Multi-Modal Tactile Sensor for Contact-Rich Manipulation Using Tactile-Diffusion Policies" by Jialiang Zhao et al. for field and service robotics, and "Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition" by Shengcheng Luo et al. for human-robot interaction. Other winning papers addressed topics such as soft robot worm behaviors, robust sequential task solving via dynamically composed gradient descent, and metrics-aware covariance for stereo visual odometry. The finalists presented innovative work ranging from drone detection to adaptive navigation and assistive robotics, reflecting the broad scope and rapid progress showcased at ICRA 2025.

    robotics, robot-learning, human-robot-interaction, tactile-sensors, robot-automation, soft-robotics, robot-navigation