RIEM News

Articles tagged with "autonomous-navigation"

  • NASA's Perseverance rover completes first Mars drives planned by AI

    NASA’s Perseverance rover has successfully completed the first Mars surface drives planned entirely by artificial intelligence, marking a significant advancement in autonomous space exploration. In early December, the rover followed routes generated by generative AI models, specifically vision-language models that analyzed rover imagery, terrain maps, and hazard data to create safe driving paths without human input. These AI-planned drives occurred on Mars sols 1,707 and 1,709, covering distances of 210 and 246 meters respectively, both executed safely and within operational limits. The project was led by NASA’s Jet Propulsion Laboratory (JPL) in collaboration with Anthropic, which provided the Claude AI models. Before sending commands to Mars, engineers rigorously tested the AI-generated instructions using a digital twin of Perseverance to ensure compatibility and safety, verifying over 500,000 telemetry variables. This cautious approach highlights the potential of AI to reduce the workload of rover operators and accelerate mission timelines, especially given the communication delays caused by the vast
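The verify-before-uplink step described above can be illustrated with a toy sketch: run the AI-generated commands through a digital-twin simulator and reject the plan if any simulated telemetry variable leaves its operational limits. This is not JPL's actual pipeline — the simulator, variable names, and limits below are invented for illustration.

```python
# Illustrative sketch (not JPL's actual pipeline): validate AI-planned drive
# commands against a digital twin by checking that simulated telemetry stays
# within operational limits before anything is uplinked.

def validate_drive(simulate, commands, limits):
    """Run commands through a digital-twin simulator and check every
    telemetry variable against its (lo, hi) operational limit."""
    telemetry = simulate(commands)  # dict: variable name -> simulated value
    violations = {
        name: value
        for name, value in telemetry.items()
        if not (limits[name][0] <= value <= limits[name][1])
    }
    return len(violations) == 0, violations

# Toy digital twin: pretend driving d meters draws 1.2 Wh/m and heats motors.
def toy_twin(commands):
    dist = sum(c["meters"] for c in commands)
    return {"energy_wh": 1.2 * dist, "motor_temp_c": 20 + 0.05 * dist}

limits = {"energy_wh": (0, 400), "motor_temp_c": (-40, 45)}
ok, bad = validate_drive(toy_twin, [{"meters": 210}], limits)
print(ok, bad)  # True, {} — the 210 m drive stays within limits
```

The real system reportedly checked over 500,000 telemetry variables; the structure of the check — simulate, compare against bounds, reject on any violation — is the same idea at toy scale.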

    robot, AI, autonomous-navigation, Mars-rover, space-exploration, NASA, machine-learning
  • LimX unveils operating system for humanoid robots to navigate alone

    Chinese robotics company LimX Dynamics has introduced LimX COSA, an operating system specifically designed for humanoid robots to autonomously navigate and operate in real-world environments. COSA (Cognitive OS of Agents) integrates three layers—motion control, perception, and cognitive decision-making—to enable robots to perceive surroundings, reason, and act without human supervision. The system is built for embodied agents functioning alongside humans, rather than for simulations, and supports complex interactions such as balance on uneven terrain and task execution based on spoken instructions. The humanoid robot Oli, standing about 5 feet 5 inches tall with 31 joints, showcases COSA’s capabilities by independently interpreting commands, planning routes, manipulating objects, and adapting actions in real time. COSA’s architecture tightly couples cognition with physical motion, mirroring the human brain’s integration of reasoning and movement. It incorporates memory for recalling environments and objects, allowing anticipation of future actions, and continuously processes sensor data to adjust balance and gait dynamically. This unified approach

    robot, humanoid-robots, autonomous-navigation, robot-operating-system, motion-control, cognitive-robotics, artificial-intelligence
  • Robots to navigate hiking trails - Robohub

    The article discusses the development of autonomous robots capable of navigating hiking trails, a challenging task due to the unpredictable and varied terrain conditions such as fallen trees, uneven ground, erosion, and environmental changes after storms. The motivation behind this research is the potential real-world applications of such robots, including trail monitoring and maintenance, environmental data collection, search-and-rescue operations, and assisting park staff in remote or hazardous areas. The complexity increases when robots must decide whether to stay on the trail to avoid environmental damage or leave it temporarily for safety reasons. The key innovation presented is a dual perception system combining geometric terrain analysis via LiDAR and semantic terrain detection through camera images. LiDAR provides information on slopes and large obstacles, while semantic segmentation identifies specific terrain types like trails, grass, rocks, and roots. By fusing these data sources into a single traversability map, the robot can better assess safe paths. The researchers created a labeled dataset of hiking trail images and trained a model to recognize trail terrain effectively. Navigation
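The fusion step described above — combining a geometric LiDAR cost with a semantic camera cost into one traversability map — can be sketched as a weighted sum over a grid. The class costs, weights, and slope limit below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical fusion of a geometric (LiDAR slope) cost and a semantic
# (camera segmentation) cost into a single traversability map, in the
# spirit of the dual perception system described above. All numbers are
# illustrative assumptions.
SEMANTIC_COST = {"trail": 0.0, "grass": 0.3, "root": 0.5, "rock": 0.7}

def traversability(slope_deg, labels, w_geom=0.6, w_sem=0.4, max_slope=35.0):
    """Per-cell cost in [0, 1]; 1.0 means impassable."""
    geom = np.clip(slope_deg / max_slope, 0.0, 1.0)   # steeper = worse
    sem = np.vectorize(SEMANTIC_COST.get)(labels)     # terrain-class cost
    cost = w_geom * geom + w_sem * sem
    cost[slope_deg > max_slope] = 1.0                 # hard slope limit
    return cost

slopes = np.array([[5.0, 40.0], [10.0, 20.0]])
labels = np.array([["trail", "rock"], ["grass", "trail"]])
print(traversability(slopes, labels))  # low cost on gentle trail cells
```

A planner would then search this grid for a minimum-cost path, which is where the stay-on-trail versus leave-for-safety trade-off shows up as relative weights.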

    robotics, autonomous-navigation, hiking-trails, LiDAR, semantic-segmentation, terrain-analysis, search-and-rescue
  • Brunswick unveils largest-ever CES display of AI-powered marine tech

    At CES 2026, Brunswick Corporation, the world’s largest marine technology company, unveiled its largest-ever display focused on AI-powered marine innovations, highlighting how artificial intelligence, autonomy, and electrification are transforming boating. The exhibit showcased the global launch of Sea Ray’s most technologically advanced SLX model, featuring an automotive-inspired helm with integrated controls and dual Simrad NSX ULTRAWIDE displays connected to Mercury Marine propulsion. Central to the display was the Simrad AutoCaptain system, an assisted autonomous navigation technology designed to help operators with complex tasks such as docking and route planning, enhancing safety and accessibility for a broader range of users. Brunswick’s CES presence also emphasized its ACES strategy—Autonomous and Assisted, Connected, Electrified, and Shared—through various products including the NAVAN C30, which integrates solar panels and autonomous support to improve accessibility, and the Lund Crossover XS, a family-oriented fish-and-ski boat equipped with advanced Lowrance and Mercury Marine technologies. Additionally,

    robot, AI, autonomous-navigation, electric-propulsion, marine-technology, IoT, energy
  • Photos: This solar robot is a power station on wheels that tracks sunlight autonomously

    Jackery unveiled the Solar Mars Bot at CES 2026, an innovative autonomous solar-powered robot designed to shift solar energy capture from fixed panels to a mobile platform. Equipped with AI-enhanced computer vision, the bot can independently navigate its environment, track the sun’s position using a 60-degree tilt and full-angle tracking, and follow a designated user if needed. Its wheels and solar panels rotate automatically to optimize sunlight exposure, significantly improving charging efficiency compared to stationary systems. The Solar Mars Bot houses a substantial 5000Wh modular LiFePO4 battery system, capable of powering a small cabin for two to three days and supporting loads up to 3000W, making it suitable for off-grid living, residential backup during outages, and outdoor events. Built with durable materials like impact-resistant plastics and aluminum alloys, it can traverse rugged terrain with about 5cm ground clearance and obstacle avoidance sensors, enhancing its utility in inspection, rescue, and leisure scenarios. The device features retractable 300W

    robot, energy, solar-power, autonomous-navigation, battery-technology, renewable-energy, materials
  • Small firefighting robot detects fires using radar and AI without GPS

    The Hong Kong Polytechnic University (PolyU) and its startups received three prestigious awards at the CES Innovation Awards 2026, notably for their Smart Firefighting Robot. This compact, AI-enabled robot tank uses millimeter-wave radar SLAM technology to navigate smoke-filled environments without relying on GPS or vision systems. It can detect flames, classify burning materials in real time using deep learning, and autonomously select the optimal extinguishing agents. Equipped with onboard sensors and a closed-loop vision–actuation system, the robot provides live updates to control centers, enhancing firefighting efficiency and safety by reducing human exposure to hazardous conditions. In addition to the firefighting robot, PolyU was recognized for two healthcare innovations: the Powered Rehab Skateboard, a lightweight, portable device aiding stroke patients in upper limb rehabilitation, and the FattaLab Fatty Liver Diagnostic Device, the world’s first handheld smart system for rapid fatty liver screening with medical-grade accuracy. These awards underscore PolyU’s commitment to impactful research and innovation

    robot, AI, firefighting-robot, mm-wave-radar, SLAM, autonomous-navigation, smart-sensors
  • China's wheeled robot dog climbs stairs at 5 feet per second in demo

    Pudu Robotics recently released a video showcasing its PUDU D5 wheeled quadruped robot climbing stairs at a speed of 1.5 meters per second (nearly 5 feet per second) in real time, without edits. The robot demonstrates a hybrid locomotion system, seamlessly switching between wheels on flat terrain and legs for stair climbing, enabling efficient navigation of mixed environments with smooth surfaces and sudden elevation changes. This hybrid approach distinguishes the D5 from other quadrupeds that rely solely on legged movement, emphasizing speed and fluidity. Unveiled in December, the PUDU D5 Series includes two configurations: a fully legged version and a wheeled variant optimized for mixed terrain. Designed for autonomous operation in complex outdoor and industrial settings, the D5 integrates powerful onboard computing using NVIDIA’s Orin platform and an RK3588 chip, supporting real-time mapping, obstacle avoidance, and path planning without constant human supervision. Its 360-degree perception system combines fisheye

    robot, quadruped-robot, autonomous-navigation, hybrid-locomotion, industrial-robotics, AI-computing, LiDAR-sensors
  • Lightweight crawling robot navigates tight spaces without extra motors

    Researchers at the University of Genoa have developed Porcospino Flex, a lightweight, bio-inspired crawling robot designed to navigate tight pipes and debris-filled environments without the need for extra motors. Measuring 670 mm long and weighing just 3.6 kilograms, the robot draws inspiration from millipedes’ segmented bodies and porcupines’ spines. Its core feature is a single 3D-printed thermoplastic polyurethane (TPU) spine with 15 grooves that enable up to 120° passive bending, allowing the robot to absorb impacts and adapt its shape naturally when encountering obstacles. This flexible spine design reduces weight and enhances durability compared to previous ABS-based models. Porcospino Flex is powered by four gear motors: two for forward movement and two that pull internal ropes to control bending. The robot’s end sections house essential electronics, including batteries, control drivers, and a Raspberry Pi 4 for operation management. Its broad spines help grip uneven surfaces such as loose soil, grass

    robot, robotics, bio-inspired-robot, 3D-printing, flexible-spine, inspection-robot, autonomous-navigation
  • Mass-production of hospitality humanoid robots begins at Chinese firm

    Chinese company Zerith Robotics has begun mass-producing its H1 service humanoid robots, scaling production to over 100 units per month within a year. Priced at approximately RMB 99,000 (around $13,600), the H1 is positioned at the lower end of the humanoid market, driving strong demand with orders exceeding RMB 100 million. The robots are already deployed in commercial settings across major Chinese cities like Beijing and Shenzhen, performing autonomous cleaning and sanitation tasks in shopping malls and other indoor public and enterprise locations. Distinct from general-purpose bipedal robots, the H1 features a wheeled base with a height-adjustable upper body and two articulated arms, optimized for indoor service and housekeeping roles. It uses advanced sensors including 3D LiDAR and depth cameras for navigation and obstacle avoidance, runs on the ROS2 framework, and offers up to four hours of continuous operation per charge. Zerith’s focus is on reliable, sustained task execution rather than experimental capabilities, signaling a transition

    robot, humanoid-robot, service-robot, robotics-manufacturing, autonomous-navigation, ROS2, indoor-service-robot
  • Noetix unveils humanoid robot receptionist with lifelike face

    Chinese robotics startup Noetix has launched Hobbs W1, a humanoid service robot designed for public-facing roles such as reception and guidance in hospitality, retail, education, and corporate environments. Hobbs W1 features a lifelike female-styled bionic head combined with an interactive display, dexterous six-degree-of-freedom hands, and five-degree-of-freedom robotic arms, enabling it to perform natural gestures, hand over items, and carry out light physical tasks. The robot also boasts fully autonomous navigation, emotion recognition, natural conversation abilities, and real-time information synchronization, allowing it to operate independently in complex indoor settings while supporting human workers by handling routine tasks. In addition to Hobbs W1, Noetix recently introduced Bumi, a child-sized humanoid robot priced under US$1,400, following a US$41 million pre-Series B funding round. This pricing significantly disrupts the typical high cost of humanoid robots, which often reach six figures. No

    robot, humanoid-robot, service-robot, autonomous-navigation, robotics-startup, bionic-head, dexterous-robotic-arms
  • China's humanoid robot handles rough terrain with human-like motion

    Chinese robotics company LimX Dynamics has introduced significant advancements in its full-size humanoid robot, Oli, demonstrating impressive human-like mobility across challenging terrains such as loose sand, rocks, unstable boards, and debris. Equipped with 31 finely tuned joints and a sophisticated perception system—including depth cameras and a motion-tracking unit—Oli continuously processes environmental data to maintain balance and adapt its movements in real time. During tests, the robot successfully compensated for shifting surfaces and obstacles, adjusting its gait dynamically to stay upright and stable without hesitation. Additional capabilities like object pickup and full-body stretching suggest practical applications in navigating cluttered or uneven environments and performing complex tasks. Oli, standing 165 centimeters tall and weighing 55 kilograms, features 31 degrees of freedom that enable fine motor skills through interchangeable end-effectors. Its modular design supports rapid disassembly and component swapping, facilitating accelerated research and development. The robot’s mobility is powered by high-fidelity sensors—including a 6-axis IMU, Intel RealSense depth

    robotics, humanoid-robot, motion-control, sensors, autonomous-navigation, modular-design, artificial-intelligence
  • US: Robot dog gets AI power to carry out rescue missions effectively

    Texas A&M University engineering students have developed a memory-based navigation framework that significantly enhances the capabilities of AI-powered robotic dogs for rescue missions. Unlike traditional robots that merely follow commands, this system enables the robot to see, remember locations, and make real-time decisions using a multimodal large language model (MLLM) that integrates visual input, voice commands, and advanced path planning. The robot can navigate chaotic, GPS-denied environments such as disaster zones by recalling previously traveled routes, avoiding obstacles instantly, and employing high-level reasoning to optimize its movements. This approach represents a novel integration of visual memory and language-model-based navigation within a modular platform, improving efficiency and adaptability in unpredictable settings. Supported by the National Science Foundation, the team demonstrated how the robot’s AI blends reactive behaviors with deliberate planning, making it a smarter, more intuitive partner for search-and-rescue teams, emergency crews, and disaster response units. Beyond emergency applications, the technology holds promise for broader use cases including hospital and warehouse operations, mobility
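The "remember locations and recall routes" idea can be sketched as a topological memory: store visited places as graph nodes, record traversed edges, and recall the shortest known route with a graph search. The place names and graph below are invented; the Texas A&M system layers an MLLM on top of something far richer than this.

```python
from collections import deque

# Toy sketch of route memory: remembered traversals form a graph, and
# recall is a breadth-first search over it. Purely illustrative.
class RouteMemory:
    def __init__(self):
        self.edges = {}  # place -> set of adjacent places

    def remember(self, a, b):
        """Record that the robot traveled directly between a and b."""
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def recall(self, start, goal):
        """Shortest remembered route via BFS; None if no route is known."""
        prev, frontier = {start: None}, deque([start])
        while frontier:
            node = frontier.popleft()
            if node == goal:                 # walk back through predecessors
                path = []
                while node is not None:
                    path.append(node)
                    node = prev[node]
                return path[::-1]
            for nxt in self.edges.get(node, ()):
                if nxt not in prev:
                    prev[nxt] = node
                    frontier.append(nxt)
        return None

mem = RouteMemory()
for a, b in [("entry", "hall"), ("hall", "stairs"), ("stairs", "exit"),
             ("hall", "collapsed-wing")]:
    mem.remember(a, b)
print(mem.recall("entry", "exit"))  # ['entry', 'hall', 'stairs', 'exit']
```

In a GPS-denied disaster zone, this kind of memory is what lets the robot retrace a safe route instead of re-exploring from scratch.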

    robot, AI, rescue-robots, autonomous-navigation, robotic-dog, disaster-response, multimodal-language-model
  • AI takes control in orbit, speeds ISS flying robot tasks by 60%

    Stanford researchers have successfully demonstrated the first machine-learning-based control system operating aboard the International Space Station (ISS), enabling the free-flying Astrobee robot to navigate the station’s complex interior 50 to 60% faster than traditional methods. The AI system uses a trained model to provide an informed initial guess ("warm start") for motion planning, which is then refined through optimization while maintaining strict safety constraints. This approach addresses the challenges posed by the ISS’s dense, cluttered environment and the limited computational resources available on space hardware. Tested initially on a microgravity-simulating platform at NASA Ames and then on the ISS itself, the AI-powered system allowed astronauts to step back from direct control, with commands issued remotely from NASA’s Johnson Space Center. The success of these tests has elevated the technology to NASA’s Technology Readiness Level 5, indicating operational viability in space. Researchers emphasize that such autonomy will be critical for future space missions, especially as robots operate farther from Earth and require minimal
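The warm-start idea — a learned model supplies an initial trajectory that an optimizer then refines under constraints — can be shown at toy scale. Here the "model's guess" is hand-coded, the optimizer is plain numeric gradient descent, and the cost, obstacle, and weights are all invented; the Stanford system uses a trained network and a proper constrained solver.

```python
import numpy as np

# Minimal warm-start illustration: refine a 2D trajectory by gradient
# descent on smoothness plus an obstacle penalty, starting from an
# initial guess. All parameters are illustrative assumptions.
OBSTACLE, RADIUS = np.array([0.5, 0.0]), 0.3

def cost(path):
    smooth = np.sum(np.diff(path, axis=0) ** 2)          # short and smooth
    d = np.linalg.norm(path - OBSTACLE, axis=1)
    collide = np.sum(np.maximum(RADIUS - d, 0.0) ** 2)   # penetration depth
    return smooth + 10.0 * collide

def refine(path, iters=200, lr=0.01, eps=1e-5):
    path = path.copy()
    for _ in range(iters):
        grad = np.zeros_like(path)
        for i in range(1, len(path) - 1):    # endpoints stay fixed
            for j in range(path.shape[1]):   # one-sided numeric gradient
                p = path.copy()
                p[i, j] += eps
                grad[i, j] = (cost(p) - cost(path)) / eps
        path -= lr * grad
    return path

start, goal = np.array([0.0, 0.0]), np.array([1.0, 0.0])
warm = np.linspace(start, goal, 8) + np.array([0.0, 0.25])  # guess: detour
warm[0], warm[-1] = start, goal
print(cost(warm), cost(refine(warm)))  # refinement lowers the cost
```

The payoff reported above — 50 to 60% faster planning — comes from the optimizer starting near a good solution instead of from scratch, which matters on compute-limited space hardware.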

    robotics, space-robotics, AI-control, autonomous-navigation, International-Space-Station, machine-learning, robotic-motion-planning
  • Video: China firm unveils combat-ready humanoid robot fighter

    China’s Shenzhen-based company EngineAI has introduced the T800, a full-scale humanoid robot designed for combat and dynamic physical demonstrations. Unveiled at the World Robot Conference in Beijing, the T800 stands 5.6 feet tall, weighs 165 pounds, and features 29 degrees of freedom plus highly dexterous hands. Built with aviation-grade aluminum and equipped with an active cooling system, it can sustain high-intensity operations for up to four hours. The robot incorporates advanced perception technologies like 360-degree LiDAR and stereo vision, and powerful joint motors capable of complex movements such as flying kicks and rapid directional changes. Its computing system combines an Intel N97 base unit with an NVIDIA AGX Orin module, delivering 275 TOPS of AI processing power, and supports secondary development with an integrated remote controller. Despite its impressive hardware and athletic capabilities, the T800’s software ecosystem remains unclear. EngineAI has not provided detailed information on software development kits, APIs, or programming tools

    robot, humanoid-robot, robotics, AI-processing, lithium-battery, aviation-grade-materials, autonomous-navigation
  • US robot dog patrols massive construction sites for faster progress

    FieldAI, an AI robotics company, has partnered with DPR Construction, a major U.S. contractor, to automate construction site processes using autonomous quadruped robots equipped with FieldAI’s autonomy software, Field Foundation Models™. Deployed at a DPR jobsite in Santa Clara, California, the Boston Dynamics Spot robot autonomously conducted extensive surveys and data collection, capturing over 45,000 photos, walking more than 100 miles, and mapping large interior and roofing areas. This system addresses inefficiencies in manual documentation, labor shortages, safety hazards, and operational delays by generating real-time, structured digital records that support decision-making, risk detection, and long-term project documentation. Traditionally, construction documentation involves engineers manually capturing 360° photos over days or weeks, resulting in outdated data and slow progress. FieldAI’s robot navigates dynamic, GPS-free environments autonomously, adapting to daily site changes and performing tasks such as progress tracking, hazard detection, material movement monitoring, and security checks. DPR

    robotics, construction-automation, quadruped-robot, AI-robotics, autonomous-navigation, construction-site-monitoring, labor-shortage-solutions
  • Teaching robots to map large environments - Robohub

    The article discusses a new AI-driven system developed by MIT researchers to enable robots to efficiently map large and complex environments in real-time. Traditional machine-learning models for simultaneous localization and mapping (SLAM) are limited by their capacity to process only a small number of images at once, which restricts their usefulness in time-sensitive scenarios like search-and-rescue missions. To address this, the MIT team created a method that incrementally generates smaller submaps from onboard camera images and then stitches these submaps together to form a complete 3D reconstruction of the environment. This approach allows the system to handle thousands of images quickly and estimate the robot’s position simultaneously, without requiring calibrated cameras or expert tuning. The system’s simplicity, speed, and accuracy make it well-suited for real-world applications beyond disaster response, including extended reality devices and industrial robotics. By enabling rapid and scalable 3D mapping, the technology helps robots navigate complex spaces more effectively while maintaining ease of implementation. The research, led by graduate student
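The stitching step — each submap carries a rigid transform linking it to its predecessor, and the global map is built by composing those transforms — can be sketched in 2D. In the real system the transforms are estimated by aligning overlapping imagery; here they are simply given, and everything below is an illustrative toy.

```python
import numpy as np

# Toy submap stitching: each submap holds points in its own frame plus
# the rigid transform (R, t) mapping that frame into the previous one.
# Stitching composes the transforms and merges points into one cloud.
def rot2d(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def stitch(submaps):
    """submaps: list of (points Nx2, R 2x2, t 2). Returns global Nx2 cloud."""
    R_g, t_g = np.eye(2), np.zeros(2)
    merged = []
    for points, R, t in submaps:
        R_g, t_g = R_g @ R, R_g @ t + t_g    # compose with the running pose
        merged.append(points @ R_g.T + t_g)  # express points in global frame
    return np.vstack(merged)

a = (np.array([[0.0, 0.0], [1.0, 0.0]]), np.eye(2), np.zeros(2))
b = (np.array([[0.0, 0.0], [1.0, 0.0]]), rot2d(np.pi / 2), np.array([2.0, 0.0]))
cloud = stitch([a, b])
print(np.round(cloud, 2))  # submap b's points land rotated, offset to x = 2
```

Because each new submap only needs its relative pose to the previous one, the system can keep ingesting images incrementally instead of re-solving one giant alignment problem — the property that makes it scale to thousands of images.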

    robotics, artificial-intelligence, 3D-mapping, SLAM, machine-learning, search-and-rescue-robots, autonomous-navigation
  • Top 10 smartest robot dogs in the world redefining technology

    The article highlights the top 10 smartest robot dogs worldwide, emphasizing their diverse applications and technological advancements that are redefining robotics. Initially developed for military and industrial use, these robotic dogs now serve in various roles such as industrial inspection, security, logistics, and companionship. Boston Dynamics’ Spot leads the pack with its agility, AI autonomy, and ability to operate in hazardous environments like oil rigs and nuclear plants, making it a vital tool for industrial automation. Similarly, ANYbotics’ ANYmal excels in extreme conditions, autonomously detecting faults in chemical plants and mines, enhancing safety and productivity. Other notable models include Unitree B2, which balances performance and affordability for logistics and monitoring tasks, and Ghost Robotics’ Vision 60, designed for defense and security with modular payload capabilities for surveillance in harsh terrains. On the companion side, Sony’s Aibo stands out by providing emotional support through interactive, lifelike behavior, catering to households and individuals unable to keep real pets. Collectively, these robot dogs

    robot, robotics, robot-dogs, industrial-automation, AI, autonomous-navigation, inspection-robots
  • China tests robot dogs to unlock moon’s secrets, help build lunar base

    China is developing robotic dogs to explore lunar lava tubes—underground tunnels formed by ancient volcanic activity—that could serve as protective sites for future moon bases. Researchers from Peking University have created two types of robot dogs, named “Anteater” and “Salamander,” each with specialized features to navigate the moon’s challenging terrain. These robots are equipped with autonomous navigation, obstacle avoidance, 3D mapping, and embodied intelligence, enabling them to scout narrow, uneven, and low-light environments similar to lunar caves. Testing is currently underway in a lava tube–like cave in Northeastern China, chosen for its geological similarity to expected lunar conditions. The motivation behind this research is China’s ambition to establish a crewed lunar base within these underground caves, which offer natural protection from radiation, micrometeorites, and extreme temperature fluctuations on the moon’s surface. More than 200 pits and large underground cavities have been identified on the moon, with NASA confirming a significant cavity in 2024. These

    robotics, lunar-exploration, robot-dogs, autonomous-navigation, space-technology, AI-robotics, lunar-base-development
  • New disaster-response robot hauls 330-lb across rubble to save lives

    Researchers in Germany have developed ROMATRIS, an AI-supported semi-autonomous robot designed to aid disaster relief efforts by transporting heavy equipment—up to 150 kilograms (approximately 330 pounds)—across challenging and hazardous terrain inaccessible to conventional vehicles or stretchers. The project is a collaboration between the German Research Center for Artificial Intelligence (DFKI) and the Federal Agency for Technical Relief (THW). ROMATRIS combines rugged mechanical design with advanced sensor technologies, including depth cameras, ultrasonic and laser sensors, and neural networks that enable gesture recognition and autonomous navigation. This allows emergency personnel to control the robot intuitively via hand gestures or remote control, or to set it to follow or shuttle modes for autonomous operation. The robot was tested extensively in field scenarios at THW training centers, with input from over 20 volunteers across 14 THW local associations, ensuring it meets real-world civil protection needs. The system demonstrated its capability to transport bulky equipment such as generators, pumps, and hoses across rough terrain

    robot, robotics, disaster-response, AI, autonomous-navigation, gesture-recognition, emergency-services
  • OpenMind launches OM1 Beta open-source, robot-agnostic operating system - The Robot Report

    OpenMind has launched OM1 Beta, described as the world’s first open-source, robot-agnostic operating system designed to enable intelligent robots to perceive, reason, and act without being limited by proprietary ecosystems. The San Francisco-based company aims to address robotics fragmentation by providing a universal platform that supports diverse robot types—including quadrupeds, humanoids, wheeled robots, and drones—and integrates AI models from OpenAI, Gemini, DeepSeek, and xAI. Key features include natural voice and vision communication, autonomous navigation with real-time SLAM and lidar support, preconfigured agents for popular robot platforms, simulation capabilities via Gazebo, and cross-platform compatibility delivered through Docker. OM1 Beta is supported by OpenMind’s decentralized FABRIC coordination layer, which ensures secure machine identity and enables global collaboration among smart systems. The platform offers developers a streamlined path to build intelligent behaviors and applications without needing to piece together disparate tools and drivers. By releasing OM1 as open-source on GitHub, OpenMind aims

    robot, open-source-software, robot-operating-system, AI-integration, autonomous-navigation, robot-interoperability, machine-intelligence
  • Robots explore lunar caves using advanced autonomous descent system

    Scientists have successfully tested autonomous robots exploring lava tubes in a volcanic cave on Lanzarote, chosen for its similarity to underground structures on Mars and the moon. These natural lava tubes, formed by flowing lava that leaves hollow tunnels, are considered promising sites for future extraterrestrial exploration because they could shield astronauts from extreme temperatures, radiation, and meteorite impacts, as well as potentially harbor microbial life. The 21-day field trials involved two rovers collaboratively mapping the cave entrance, deploying a sensor-laden cube to create a 3D model, and performing a coordinated descent into the cave, with the smaller rover detaching to travel 235 meters while building a 3D map of the tunnel. The experiments demonstrated the feasibility of robotic cooperation and 3D mapping in dark, confined environments, though challenges remain. Moisture affected ground-penetrating radar accuracy, some sensors experienced interference, and autonomous navigation without human intervention still requires more advanced algorithms and reliable inter-robot communication. Despite these hurdles, the

    robots, autonomous-robots, lunar-exploration, cave-mapping, space-robotics, autonomous-navigation, extraterrestrial-exploration
  • Orbbec touts Pulsar ME450 as a multi-pattern 3D lidar - The Robot Report

    At the World Robot Conference 2025 in Beijing, Orbbec introduced the Pulsar ME450, a novel multi-pattern 3D lidar sensor designed to enhance robotic perception by allowing users to switch scanning modes without changing devices. The sensor combines a micro-electromechanical systems (MEMS) mirror with motorized azimuth control to offer configurable scanning patterns and an adjustable vertical field of view. This design enables the Pulsar ME450 to adapt to diverse robotics applications, such as smart forklifts, logistics robots, lawn mowers, and surveying equipment, by supporting non-repetitive, non-dense repetitive, and dense repetitive scanning modes. The sensor delivers millimeter-level precision and high-fidelity 3D reconstruction, maintaining stable performance across various materials and interference conditions. Orbbec emphasizes that the Pulsar ME450’s flexibility addresses the evolving demands of robotics, balancing the need for fast, real-time obstacle avoidance with detailed mapping capabilities. By integrating multiple scanning patterns into a single device, the

    robotics, lidar, 3D-perception, MEMS-technology, autonomous-navigation, sensor-technology, robotics-applications
  • Unitree G1 robot impresses Dubai leadership, joins museum exhibit

    The Unitree G1 humanoid robot recently gained significant attention in Dubai when it was showcased during a live demonstration at the historic Union House, engaging with His Highness Sheikh Mohammed bin Rashid Al Maktoum. Developed through collaboration between Dubai Future Labs and Chinese robotics firm Unitree, the G1 robot exemplifies advanced humanoid robotics with capabilities such as handshakes, hugs, waves, voice command input, and situational awareness via sensors including Intel RealSense depth cameras and 3D LiDAR. Compact and agile, the robot stands 1.32 meters tall, weighs 35 kilograms, and features a foldable design for easy transport. It will soon be part of the interactive exhibits at Dubai’s Museum of the Future, aligning with the UAE’s ambitions to integrate AI and robotics into public life and enhance tourism. This development is part of Dubai’s broader strategy to position itself as a global innovation hub and attract investors and entrepreneurs, supported by a growing affluent population and nearly 10 million

    robot, humanoid-robot, AI, robotics-innovation, autonomous-navigation, smart-policing, interactive-exhibits
  • Unitree’s glass-shattering robot dog scales slopes, carries loads

    Unitree Robotics has unveiled its latest quadruped robot dog, the A2, designed for demanding industrial applications with enhanced mobility, endurance, and performance. Weighing about 82 pounds (37 kg), the A2 features 12 degrees of freedom and powerful motors delivering up to 180 Nm of torque, enabling it to carry loads up to 55 pounds (25 kg) and support standing loads of 220 pounds (100 kg). The robot can navigate challenging terrain, including climbing 45° slopes, ascending 30 cm stairs, and traversing rough pathways with agility. Equipped with front and rear industrial-grade LiDAR sensors, an HD camera, and a front light, the A2 can detect and respond to its environment in real time, ensuring precise movement and stability. The A2 demonstrates remarkable agility and durability, as showcased in a promotional video where it performs backflips, balances on one leg, and even crashes through glass without losing functionality. Its 12 high-density motors allow

    robot, quadruped-robot, industrial-robot, LiDAR, robot-dog, robotics, autonomous-navigation
  • Insect-inspired drones get AI brains to race through tight spaces

    Researchers at Shanghai Jiao Tong University have developed an innovative AI-based system that enables drone swarms to navigate complex, cluttered environments at high speeds without expensive hardware or human control. Unlike traditional modular drone navigation systems that separate tasks like mapping and obstacle detection—often leading to slow reactions and accumulated errors—the team created a compact, end-to-end neural network using differentiable physics. This approach allows the system to learn flight control directly through simulation and backpropagation, significantly improving learning speed and real-world performance. The drones rely on ultra-low-resolution 12x16 pixel depth cameras, inspired by insect compound eyes, to make real-time navigation decisions, achieving speeds up to 20 meters per second and a 90% success rate in cluttered spaces, outperforming previous methods. A key advantage of this system is its low cost and efficiency: the neural network runs on a $21 development board without requiring a graphics processing unit, making large-scale swarm deployment more accessible. The AI was trained entirely in simulation
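The end-to-end idea — a single compact network maps a 12x16 depth image directly to a flight command, with no separate mapping or obstacle-detection stage — can be shown at toy scale. The weights below are random and untrained; the actual system learns them in simulation through differentiable physics, which this sketch does not attempt.

```python
import numpy as np

# Toy end-to-end policy: a tiny fully connected network from a 12x16
# depth image straight to [forward_speed, yaw_rate]. Architecture and
# sizes are illustrative assumptions, and the weights are untrained.
rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (32, 12 * 16))   # hidden layer weights
W2 = rng.normal(0, 0.1, (2, 32))         # output layer weights

def policy(depth_12x16):
    x = depth_12x16.reshape(-1)           # flatten the 192 depth pixels
    h = np.tanh(W1 @ x)                   # single hidden layer
    return W2 @ h                         # raw 2D command

depth = rng.uniform(0.2, 5.0, (12, 16))  # fake metric depth image
cmd = policy(depth)
print(cmd.shape)  # (2,)
```

The point of the tiny input and network is deployability: a forward pass this small runs comfortably on a $21 board with no GPU, which is what makes large swarms economical.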

    robotics, drone-technology, swarm-intelligence, artificial-intelligence, autonomous-navigation, AI-in-robotics, lightweight-AI-systems
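    The core idea behind the drone system above — backpropagating a flight loss through a differentiable physics model to train the controller directly — can be illustrated with a deliberately tiny sketch. Everything here is a hypothetical stand-in: the real SJTU system uses a learned neural policy, a full quadrotor model, and simulated 12x16 depth cameras, whereas this toy uses a linear policy, a one-step point-mass model, and an analytic gradient in place of autodiff.

    ```python
    import numpy as np

    # Toy end-to-end training through differentiable dynamics (illustrative only;
    # all parameters and the linear policy are assumptions, not the SJTU design).
    rng = np.random.default_rng(0)

    H, W = 12, 16                          # ultra-low-resolution depth image
    D = H * W
    depth = rng.uniform(0.5, 5.0, size=D)  # fake depth observation (meters)

    Wp = rng.normal(0, 0.01, size=(3, D))  # linear policy: depth -> velocity command
    x = np.zeros(3)                        # drone position
    goal = np.array([5.0, 0.0, 1.0])       # hypothetical waypoint
    dt, lr = 0.1, 1e-3

    for step in range(200):
        u = Wp @ depth                     # velocity command from the policy
        x_next = x + u * dt                # one differentiable point-mass step
        err = x_next - goal
        loss = err @ err                   # squared distance-to-goal loss
        # Backprop through the dynamics, written out by hand for a linear policy:
        # dL/dWp = 2 * dt * err (outer) depth
        grad = 2.0 * dt * np.outer(err, depth)
        Wp -= lr * grad

    print(f"final loss: {loss:.6f}")
    ```

    Because the simulator step is differentiable, the loss gradient flows straight into the policy weights — the same property that lets the full neural-network version learn fast in simulation and transfer to real drones.
    
    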
  • China’s ‘slim-waisted’ humanoid robot debuts with human-like skills

    China’s Robotera has unveiled the Q5 humanoid robot, a slim-waisted, 1650 mm tall machine weighing 70 kg, designed for practical deployment in sectors like healthcare, retail, tourism, and education. Featuring 44 degrees of freedom (DoF), including the highly dexterous 11-DoF XHAND Lite robotic hand, Q5 excels in precise manipulation and smooth navigation within complex indoor environments. Its compact size and fused LiDAR with stereo vision enable autonomous movement with minimal human oversight. The robot supports full-body teleoperation via VR and sensor gloves and interacts through AI-powered natural dialogue, facilitating responsive, context-aware communication. Powered by the EraAI platform, Q5 integrates a complete AI lifecycle from teleoperation data collection to model training and closed-loop learning, offering over four hours of runtime on a 60V supply. Its 7-DoF robotic arms have a reach extending beyond two meters, allowing it to handle objects at various heights safely and compliantly.

    robot, humanoid-robot, AI-robotics, autonomous-navigation, robotic-manipulation, teleoperation, service-robots
  • A European Startup's Spacecraft Made It to Orbit. Now It's Lost at Sea

    The Exploration Company, a European startup focused on developing orbital spacecraft, recently conducted a test flight of its 2.5-meter diameter demonstration vehicle, Mission Possible, launched aboard SpaceX's Transporter 14 mission. The flight achieved several key milestones: successful launch, nominal payload operation in orbit, stabilization after separation, reentry, and reestablishment of communication post-blackout. However, contact was lost shortly before the capsule's planned ocean touchdown, likely due to a failure in the deployment of its parachutes—critical for safe recovery. The parachutes, sourced from US-based Airborne Systems and with proven flight heritage, were expected to deploy between Mach 0.8 and Mach 0.6, but the vehicle was ultimately lost at sea, marking a partial failure in the mission’s recovery objective. Despite this setback, the company communicated transparently and promptly, acknowledging the partial success and ongoing investigation into the root cause. Mission Possible was developed rapidly and cost-effectively, with a budget of

    robot, autonomous-navigation, spacecraft, spaceflight, parachute-deployment, orbital-flight, aerospace-materials
  • Robots get brain-like navigation to run for days using 90% less power

    Researchers at the QUT Centre for Robotics have developed a brain-inspired robot navigation system called Locational Encoding with Neuromorphic Systems (LENS) that operates on less than 10% of the energy required by conventional navigation systems. By mimicking the brain's efficient processing, LENS uses specialized algorithms that encode information as electrical spikes, similar to neuronal signals. This neuromorphic computing approach cuts the energy consumed by visual localization by up to 99%, enabling robots to operate longer and travel farther on limited power supplies. The system demonstrated effective location recognition along an 8 km route while requiring only 180 KB of storage, roughly 300 times less than traditional systems. LENS achieves its efficiency through a combination of technologies, including an event camera that continuously detects pixel-level brightness changes rather than capturing full images, closely replicating human visual processing. This "movement-focused" data is then processed by a spiking neural network on a low-power chip within a compact system. Such

    robot, energy-efficiency, neuromorphic-computing, autonomous-navigation, spiking-neural-networks, event-camera, low-power-robotics
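    The event-camera-plus-spiking-network pipeline described in the LENS summary can be sketched in a few lines. This is a hedged illustration, not the LENS implementation: the event threshold, leaky integrate-and-fire parameters, sensor size, and random weights below are all assumptions chosen only to show how pixel-level change events drive spiking neurons.

    ```python
    import numpy as np

    # Illustrative event-camera + spiking-neural-network pipeline
    # (parameters are assumptions, not the actual LENS system).
    rng = np.random.default_rng(1)

    def events_from_frames(prev, curr, thresh=0.2):
        """Emit +1/-1 events where log-brightness change exceeds a threshold,
        mimicking an event camera's pixel-level change detection."""
        dlog = np.log(curr + 1e-6) - np.log(prev + 1e-6)
        return np.where(dlog > thresh, 1, np.where(dlog < -thresh, -1, 0))

    def lif_step(v, inp, leak=0.9, v_th=1.0):
        """One leaky integrate-and-fire update: decay, integrate, spike, reset."""
        v = leak * v + inp
        spikes = (v >= v_th).astype(float)
        v = np.where(spikes > 0, 0.0, v)   # reset neurons that fired
        return v, spikes

    H, W, N = 8, 8, 16                     # tiny sensor and neuron counts
    weights = rng.normal(0, 0.5, size=(N, H * W))
    v = np.zeros(N)

    prev = rng.uniform(0.1, 1.0, (H, W))
    total_spikes = 0
    for _ in range(50):                    # frame stream -> events -> spikes
        curr = np.clip(prev + rng.normal(0, 0.1, (H, W)), 0.05, 1.0)
        ev = events_from_frames(prev, curr).reshape(-1)
        v, s = lif_step(v, weights @ ev)
        total_spikes += int(s.sum())
        prev = curr

    print(f"total spikes over 50 frames: {total_spikes}")
    ```

    The efficiency argument follows directly from this structure: pixels that do not change emit no events, and neurons that do not spike consume almost no energy on neuromorphic hardware, so computation scales with scene motion rather than frame rate.
    
    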