Articles tagged with "human-robot-interaction"
Robot Talk Episode 142 – Collaborative robot arms, with Mark Gray - Robohub
The article summarizes Episode 142 of the Robot Talk podcast, featuring a conversation between host Claire and Mark Gray from Universal Robots. Mark Gray, with 30 years of experience in automation and robotics, discusses Universal Robots' lightweight collaborative robot arms (cobots) designed to work safely alongside humans. As the UK country manager and the company's first UK employee, Gray has led projects with prominent research institutions such as the Advanced Manufacturing Research Centre (AMRC), The Manufacturing Technology Centre (MTC), the National Robotarium, and Bristol Robotics Lab, highlighting the integration of cobots in advanced manufacturing and research environments. Robot Talk is a weekly podcast exploring robotics, AI, and autonomous machines, with recent episodes covering diverse topics such as human interaction with robot swarms, robotic agility inspired by animals, advanced robot hearing, and AI-powered robotic dogs for emergency response. The podcast serves as a platform to showcase cutting-edge developments in robotics and their practical applications across various fields.
robotics, collaborative-robots, cobots, automation, artificial-intelligence, robotic-arms, human-robot-interaction
US firm unveils small humanoid robot butler for household chores
Fauna, a New York-based robotics startup, has unveiled Sprout, a compact humanoid robot designed specifically for operation in everyday human environments such as homes, schools, offices, and service spaces. Unlike traditional industrial robots adapted for public use, Sprout is built from the ground up with safety, interaction, and accessibility as priorities. Standing 3.5 feet tall, it features a lightweight, soft exterior with quiet actuation and avoids sharp edges, enabling safe close physical proximity without safety cages. Its simple one-degree-of-freedom grippers support basic tasks like fetching objects and hand-offs, while the robot is engineered to fall, crawl, and recover without damage. Sprout also incorporates an expressive face to facilitate intuitive, nonverbal human-robot communication. Sprout is positioned as a developer-centric platform, offering whole-body behaviors such as walking, kneeling, crawling, compliant physical interaction, and fall recovery, alongside core capabilities like teleoperation, mapping, navigation, and expressive interaction primitives.
robot, humanoid-robot, service-robot, human-robot-interaction, robotics-platform, home-automation, robot-safety
Watch: ALLEX shows how humanoid robots can shake hands safely
ALLEX is a Korean-developed humanoid robot unveiled by WIRobotics at CES 2026, designed to enable safe and natural physical interaction between humans and robots. Its standout feature is a high degree of force sensitivity and control, allowing it to detect forces as low as 100 gram-force without tactile sensors while exerting up to 40 newtons of fingertip force. This capability enables ALLEX to perform human-like tasks such as shaking hands with a controlled grip that adjusts in real time, balancing strength and flexibility to avoid injury. The robot’s hands and arms are back-drivable, meaning they can be safely pushed or guided, and its arm system features low friction and low rotational inertia to facilitate smooth, fluid motion suitable for close human interaction. ALLEX’s design includes 15 degrees of freedom, gravity compensation from the waist to upper body, and a lightweight build: its hand weighs about 1.5 pounds and the shoulder-down assembly about 11 pounds.
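As a concrete, heavily simplified illustration of real-time grip adjustment, here is a proportional force loop with invented thresholds and gain; it is a sketch of the general idea described above, not WIRobotics' controller:

```python
# Toy grip loop: ramp fingertip force toward a comfortable target and back
# off when the human squeezes harder, never exceeding the 40 N ceiling
# mentioned above. All numbers are invented for illustration.
GRAM_FORCE_N = 9.81e-3  # one gram-force expressed in newtons

def handshake_step(measured_n, target_n=5.0, floor_n=100 * GRAM_FORCE_N,
                   max_n=40.0, gain=0.2):
    """Return the next commanded fingertip force given the sensed force."""
    if measured_n < floor_n:      # below the ~100 gf detection floor:
        return 0.0                # no hand sensed yet, stay slack
    command = measured_n + gain * (target_n - measured_n)
    return min(max(command, 0.0), max_n)

# The human squeezes harder over three control ticks; the command yields.
for sensed in (1.2, 5.8, 9.0):
    print(f"sensed {sensed:.1f} N -> command {handshake_step(sensed):.2f} N")
```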
robot, humanoid-robot, force-control, human-robot-interaction, robotics, tactile-sensing, automation
Robot Talk Episode 141 – Our relationship with robot swarms, with Razanne Abu-Aisheh - Robohub
In Robot Talk Episode 141, Claire interviews Razanne Abu-Aisheh, a Senior Research Associate at the University of Bristol’s Centre for Sociodigital Futures, about human interactions with robot swarms. Abu-Aisheh’s research focuses on how collective behaviors of robot swarms shape human perceptions and experiences. She emphasizes the importance of community-centered design, collaborating with diverse communities to envision inclusive and meaningful futures involving robotics. Abu-Aisheh’s broader work aims to integrate robot swarms into real-world environments while prioritizing human-centered design principles. The episode highlights the evolving relationship between people and autonomous robot groups, exploring how these interactions can be designed to foster acceptance and usability. Overall, the discussion underscores the significance of involving communities in the development process to ensure robot swarms meet societal needs and values.
robot, robot-swarms, robotics, autonomous-machines, human-robot-interaction, artificial-intelligence, community-centred-design
UK lab’s humanoid robots get NVIDIA grant to turn sound into motion
Chengxu Zhou, an associate professor at UCL Computer Science, has received an NVIDIA Academic Grant to advance his research on real-time, audio-driven whole-body motion for humanoid robots. The grant provides critical resources, including two NVIDIA RTX PRO 6000 GPUs and two Jetson AGX Orin devices, which will accelerate training and deployment cycles by enabling faster iteration and reducing the gap between simulation and real-robot testing. Zhou’s project, called Beat-to-Body, aims to develop humanoid robots that respond dynamically to audio cues such as tempo, accents, and loudness fluctuations, allowing them to adapt their movements in real time rather than following pre-scripted commands. The Beat-to-Body system leverages large-scale simulation training with GPU compute and low-latency inference directly on the robot, minimizing dependence on offboard processing and enhancing responsiveness to sound. This approach aligns with recent research demonstrating that robots can generate expressive locomotion and gestures from music and speech without predefined motion templates.
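As a rough illustration of the audio front end such a system needs, the sketch below uses the librosa library to estimate tempo and a loudness envelope from an audio buffer and maps them to hypothetical motion parameters. This is a sketch under stated assumptions, not the Beat-to-Body code; the footfall-per-beat mapping and gain are invented:

```python
# Estimate tempo and loudness from an audio buffer, then derive parameters
# a gait controller could consume. The mapping (one footfall per beat,
# loudness -> gesture amplitude) is an invented example.
import numpy as np
import librosa

sr = 22050
t = np.arange(0, 4.0, 1 / sr)
# Stand-in for a microphone buffer: a synthetic 120 BPM click track.
audio = (0.5 * np.sin(2 * np.pi * 880 * t)
         * (np.mod(t, 0.5) < 0.02)).astype(np.float32)

tempo, _ = librosa.beat.beat_track(y=audio, sr=sr)
loudness = librosa.feature.rms(y=audio)[0]             # frame-wise RMS envelope

step_rate_hz = float(np.atleast_1d(tempo)[0]) / 60.0   # one footfall per beat
gesture_gain = float(np.clip(loudness.mean() * 10, 0.2, 1.0))
print(f"step rate {step_rate_hz:.2f} Hz, gesture gain {gesture_gain:.2f}")
```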
robotics, humanoid-robots, NVIDIA, machine-learning, real-time-motion, audio-driven-control, human-robot-interaction
Humanoid robot masters lip-sync, predicts face reaction with new system
Researchers at Columbia University’s Creative Machines Lab have developed an advanced humanoid robot named Emo that can synchronize lifelike lip movements with speech audio and anticipate human facial expressions in real time. Emo features significant hardware improvements over its predecessor Eva, including 26 actuators for asymmetric facial expressions and flexible silicone skin manipulated by magnets for precise control. Equipped with high-resolution RGB cameras in its eyes, Emo uses a dual neural network framework: one model predicts its own facial movements, while another anticipates the human interlocutor’s expressions. This allows Emo to perform coexpressions—mirroring human facial reactions before they fully manifest—across multiple languages, including those not in its training data. The system’s predictive model, trained on 970 videos from 45 participants, analyzes subtle initial facial changes to forecast target expressions with high speed and accuracy, running at 650 frames per second. The inverse model executes motor commands at 8,000 fps, enabling Emo to generate facial expressions within 0.002 seconds.
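Schematically, that dual-network pipeline can be pictured as below; the architectures, sizes, and landmark input are invented stand-ins for Emo's actual models, meant only to show how a predictor and an inverse model chain together:

```python
# Schematic sketch of the dual-model idea: one network forecasts the
# person's upcoming expression from early facial motion, a second
# "inverse" network turns the expression to mirror into motor commands.
import torch
import torch.nn as nn

N_LANDMARKS, N_EXPR, N_ACTUATORS = 68 * 2, 16, 26   # hypothetical sizes

predictor = nn.Sequential(          # early facial motion -> expression logits
    nn.Linear(N_LANDMARKS, 64), nn.ReLU(), nn.Linear(64, N_EXPR))
inverse_model = nn.Sequential(      # target expression -> 26 motor commands
    nn.Linear(N_EXPR, 64), nn.ReLU(), nn.Linear(64, N_ACTUATORS), nn.Tanh())

def coexpress(early_landmarks):
    """Anticipate the person's expression, then command the face motors."""
    with torch.no_grad():
        expr = torch.softmax(predictor(early_landmarks), dim=-1)
        return inverse_model(expr)  # normalized commands in [-1, 1]

commands = coexpress(torch.randn(1, N_LANDMARKS))
print(commands.shape)               # torch.Size([1, 26])
```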
robot, humanoid-robot, facial-robotics, human-robot-interaction, motor-control, neural-networks, real-time-expression
Robots learn human touch with less data using adaptive motion system
Researchers at Keio University in Japan have developed an adaptive motion reproduction system that enables robots to perform human-like grasping and manipulation using minimal training data. Traditional robotic systems struggle to adjust when objects vary in weight, stiffness, or texture, limiting their use to controlled factory environments. The new system leverages Gaussian process regression to model complex nonlinear relationships between object properties and human-applied forces, allowing robots to infer human motion intent and adapt their movements to unfamiliar objects in dynamic, real-world settings such as homes and hospitals. Testing showed that this approach significantly outperforms conventional motion reproduction and imitation learning methods, reducing position and force errors by substantial margins both within and beyond the training data range. By requiring less data and lowering machine learning costs, the technology has broad potential applications, including life-support robots that must adapt to diverse tasks. This advancement builds on Keio University’s expertise in force-tactile feedback and haptic technologies and represents a key step toward enabling robots to operate reliably in unpredictable environments.
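To make the mechanism concrete, here is a minimal sketch of Gaussian process regression mapping object properties to a demonstrated grip force, using scikit-learn and invented numbers. It illustrates the technique named above, not the Keio implementation:

```python
# Fit a GP on a handful of human demonstrations (object properties ->
# applied grip force), then query it for an unfamiliar object. The
# predictive uncertainty is what lets a controller grasp cautiously
# when the model is unsure.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical training data: [weight_kg, stiffness_N_per_mm] per object,
# paired with the grip force (N) a human demonstrator used.
X_train = np.array([[0.1, 0.5], [0.3, 2.0], [0.5, 5.0], [0.8, 8.0]])
y_train = np.array([2.0, 4.5, 7.0, 10.0])

# RBF kernel captures smooth nonlinear trends; WhiteKernel absorbs noise.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-2),
                              normalize_y=True)
gp.fit(X_train, y_train)

# Predict grip force for an object outside the training set.
force, std = gp.predict(np.array([[0.4, 3.0]]), return_std=True)
print(f"suggested grip force: {force[0]:.2f} N (+/- {std[0]:.2f})")
```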
robotics, adaptive-motion, machine-learning, Gaussian-process-regression, human-robot-interaction, robotic-manipulation, automation
China’s new humanoid robot senses delicate touch with soft skin tech
Chinese startup Matrix Robotics has unveiled MATRIX-3, its third-generation humanoid robot that marks a significant advancement in physical artificial intelligence. Unlike previous robots limited to pre-set tasks, MATRIX-3 is designed for adaptive, real-world interaction, aiming to operate safely and autonomously in everyday commercial, medical, and home environments. The robot features a biomimetic “skin” made of flexible woven fabric embedded with distributed sensors, enabling it to detect soft touch and real-time impacts, thus enhancing safety during human–robot interaction. Its tactile sensor arrays in the fingertips can sense pressures as low as 0.1 newtons, and combined with an advanced vision system, MATRIX-3 can assess object properties and handle fragile or deformable items reliably. MATRIX-3 also boasts human-like dexterity and mobility, with a 27-degree-of-freedom hand that mimics human anatomy and uses lightweight cable-driven actuators for precise, fast movements. Its full-body motion is powered by linear actuators.
robotics, humanoid-robot, biomimetic-skin, tactile-sensors, artificial-intelligence, human-robot-interaction, dexterous-manipulation
Fourier's humanoid robot brings 'warm tech companionship' to CES 2026
At CES 2026, Chinese robotics firm Fourier made its US debut with the GR-3, a full-size humanoid "Care-bot" designed to deliver warm, human-centered interaction through advanced perception and intuitive intelligence. Standing 165 cm tall and weighing 156 pounds, GR-3 integrates a Full-Perception Multimodal Interaction System that combines sight, sound, and touch, enabling it to localize voices, recognize faces, track movements, and respond naturally in real time. Its hybrid control architecture blends reflexive responses with advanced language-model reasoning, allowing it to engage in natural conversation, emotional reassurance, and routine assistance. The robot's expressive gestures, animated facial interface, and 31 pressure sensors create lifelike reactions, while its soft-touch shell and warm design foster a familiar, comforting presence suitable for homes, public spaces, and assisted living environments. In addition to GR-3, Fourier unveiled a doll-sized companion robot prototype at CES 2026, embodying the same design philosophy.
robotics, humanoid-robot, human-robot-interaction, AI-companionship, CES-2026, multimodal-perception, assistive-technology
New cyber pet for home companionship aims to strengthen family bonds
At CES 2026 in Las Vegas, Chinese brand OLLOBOT introduced a new type of emotionally supportive robot designed as a cyber-pet for home companionship. Unlike traditional humanoid robots, OLLOBOT focuses on creating warm, humorous, and emotionally engaging interactions to strengthen family bonds. The robot adapts easily to users through an embodied intelligence system powered by a Vision-Language-Action (VLA) model, which processes multimodal inputs—such as sight, sound, and touch—in real time. This allows the cyber-pet to perceive user moods, activities, and environmental factors, enabling proactive assistance like reminders and personalized interactions. OLLOBOT aims to bridge the gap between technology and family life by encouraging intentional interaction, especially among children who might otherwise be absorbed by screens. It communicates in a unique “pet language” that sparks curiosity and prompts parent-child conversations. The robot also functions as a digital assistant, offering timely reminders to help maintain family connections. Privacy is also a key feature of the design.
robot, embodied-intelligence, home-companionship, AI-assistant, cyber-pet, human-robot-interaction, CES-2026
Video: Humanoid robot kicks teleoperator's groin in demo-gone-wrong
During a public demonstration of Unitree’s G1 humanoid robot, a teleoperator wearing a motion capture suit attempted a martial arts-style kick and was inadvertently struck by the robot. Because the robot mirrors the operator’s movements exactly and both faced the same direction, the robot lifted its leg in sync and its foot hit the operator’s groin. The operator collapsed in pain while the robot mimicked his posture, creating a viral moment that highlighted the risks of human-robot interaction when movements are mirrored without spatial adjustment. Unitree recently introduced the G1-D, a wheeled humanoid robot designed for data collection, AI training, and practical tasks in industrial and service environments. The G1 robot itself has been showcased performing advanced martial arts maneuvers, including kicks, spins, and flips, demonstrating impressive agility and balance. However, some viewers have questioned the practical applications of these demonstrations, as Unitree markets the G1 primarily as a research and education platform rather than a consumer home assistant.
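The "spatial adjustment" point can be made concrete with a toy retargeting sketch: copying joint targets verbatim is only appropriate when operator and robot share an orientation and a clear workspace, and a face-to-face setup needs a left/right swap. All names below are illustrative, not Unitree's software:

```python
# Toy pose retargeting: verbatim copy vs. mirrored mapping. A real system
# would also offset the robot's workspace away from the operator.
def retarget(operator_joints, facing_same_direction):
    if facing_same_direction:
        return dict(operator_joints)          # verbatim copy, as in the demo
    mirrored = {}
    for name, angle in operator_joints.items():
        # Swap the left/right prefix for a face-to-face configuration.
        if name.startswith("left_"):
            mirrored["right_" + name[5:]] = angle
        elif name.startswith("right_"):
            mirrored["left_" + name[6:]] = angle
        else:
            mirrored[name] = angle
    return mirrored

kick = {"right_hip_pitch": 1.2, "right_knee": 0.4}
print(retarget(kick, facing_same_direction=True))   # robot kicks in sync
print(retarget(kick, facing_same_direction=False))  # face-to-face mapping
```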
robotics, humanoid-robot, Unitree-G1, motion-capture, human-robot-interaction, AI-training, robot-agility
Robohub highlights 2025 - Robohub
The article "Robohub highlights 2025" provides a comprehensive review of notable contributions and activities featured on Robohub throughout the year. It highlights a variety of research and discussions from global experts in robotics and AI, including innovative frameworks for robot manipulation learned solely from language instructions, as presented by Jiahui Zhang and Jesse Zhang, and RobustDexGrasp, a novel grasping framework introduced by Hui Zhang at CoRL 2025. The article also covers insightful interviews and podcasts, such as conversations with Heather Knight on integrating performing arts methods into robotics, and Professor Marynel Vázquez on human-robot interactions and social navigation by robots. Further, the summary touches on advancements in reliable controller design under uncertain environments (IJCAI 2025), reinforcement learning guided by social and ethical norms, and scalable deep learning for human activity recognition using wearable sensors. It also features updates from RoboCup 2025, including award-winning research, AI applications in the Small Size League, and the Robo
robotics, robot-manipulation, human-robot-interaction, reinforcement-learning, AI-in-robotics, RoboCup, robot-grasping
HFT Stuttgart's Patrick Planing on why good technology still fails
Patrick Planing, a professor of business psychology at Stuttgart Technology University of Applied Sciences (HFT Stuttgart) and former innovation manager at Mercedes-Benz, argues that the success or failure of new technologies hinges less on technical readiness and more on human factors—specifically how people feel about using the technology and whether they perceive a reason to change their behavior. Drawing from his experience with innovations like autonomous vehicles, air taxis, and delivery robots, Planing emphasizes that social norms, risk perception, and lived experience critically influence technology adoption. He highlights that engineering excellence alone rarely ensures acceptance, as technologists often underestimate the complexity of human behavior and social dynamics. Planing’s insights stem from his early work at Mercedes-Benz, where he noticed a disconnect between available advanced automotive technologies and actual user adoption. Despite technical capabilities such as autonomous driving features, many drivers preferred the sensory experience of manual driving and found automated systems unappealing. This realization led him to focus on understanding what mobility solutions people genuinely want, rather than assuming that technical capability alone would drive adoption.
robot, autonomous-vehicles, innovation-management, human-robot-interaction, technology-adoption, mobility-solutions, business-psychology
US humanoid robot hands out swag before Christmas using advanced brain
The article highlights a recent demonstration of Figure AI’s humanoid robot, Figure 03, showcased by CEO Brett Adcock in a video posted just before Christmas. The robot, powered by the company’s proprietary Helix Vision-Language-Action (VLA) model, demonstrated its ability to answer questions about its origin and capabilities, as well as perform practical tasks such as visually recognizing and handing over medium and large-sized shirts correctly. Figure 03 represents the latest generation of Figure AI’s humanoids, featuring advanced visual recognition, smoother coordination, and a softer, safer design compared to its predecessors. Despite its impressive task execution and conversational abilities, the robot exhibited a noticeable speech latency of 2 to 3 seconds per response, which drew mixed reactions from viewers and highlighted an ongoing challenge in humanoid robotics—making interactions feel natural and fluid. Released in October, Figure 03 is smaller and lighter than earlier models, equipped with enhanced audio clarity and wireless charging through coils in its feet.
robot, humanoid-robot, visual-recognition, AI, automation, robotics-technology, human-robot-interaction
The science of human touch – and why it’s so hard to replicate in robots - Robohub
The article by Perla Maiolino from the University of Oxford explores the complexity of human touch and the challenges in replicating it in robots. While robots have advanced significantly in visual perception and navigation, their ability to touch objects gently, safely, and meaningfully remains limited. Human touch is highly sophisticated, involving multiple types of mechanoreceptors in the skin that detect various stimuli such as vibration, stretch, and texture. Moreover, touch is an active sense, involving constant movement and adjustment to transform raw sensory input into perception. Replicating this dynamic and distributed sensory system across a robot’s entire soft body, and enabling it to interpret the rich sensory data, presents a formidable challenge. The article also highlights the concept of distributed or embodied intelligence, where behavior emerges from the interaction between body, material, and environment rather than centralized brain control. The octopus is cited as an example, with most of its neurons located in its limbs, allowing local adaptation and movement. This principle is influential in soft robotics.
robotics, soft-robotics, tactile-sensors, artificial-skin, embodied-intelligence, human-robot-interaction, sensor-technology
Holiday prep goes robotic as Christmas machines tackle decor and meals
As the Christmas season approaches, robotics and autonomous systems are increasingly being employed to handle festive preparations, blending holiday traditions with advanced automation. HEBI Robotics demonstrated this trend with their mobile manipulator Treadward, equipped with a 7-DoF arm, which efficiently decorated a Christmas tree and staged festive scenes in just two days. The robot showcased impressive strength, coordination, and adaptability, highlighting the potential of mobile manipulators to assist in real-world holiday tasks. Meanwhile, Germany’s FZI Research Center explored the challenges of robotic meal preparation under the guidance of large language models and human teleoperation. Their staged demonstration humorously illustrated how small miscommunications between AI instructions and robotic execution can lead to chaotic outcomes, while emphasizing ongoing research in robotic manipulation, human-robot interaction, and AI decision support. Additionally, Fraunhofer IOSB presented an autonomous system that assembled and decorated a large outdoor Christmas tree using coordinated multi-robot operations, including autonomous cranes and quadruped robots, underscoring the potential of coordinated multi-robot autonomy for large-scale outdoor tasks.
robotics, autonomous-systems, AI, robotic-manipulation, teleoperation, human-robot-interaction, automation
Generations in Dialogue: Human-robot interactions and social robotics with Professor Marynel Vasquez - Robohub
The article discusses the fourth episode of the AAAI podcast series "Generations in Dialogue: Bridging Perspectives in AI," which features a conversation between host Ella Lan and Professor Marynel Vázquez, a computer scientist and roboticist specializing in Human-Robot Interaction (HRI). The episode explores Professor Vázquez’s research journey and evolving perspectives on how robots navigate social environments, particularly in multi-party settings. Key topics include the use of graph-based models to represent social interactions, challenges in recognizing and addressing errors in robot behavior, and the importance of incorporating user feedback to create adaptive, socially aware robots. The discussion also highlights potential applications of social robotics in education and the broader societal implications of human-robot interactions. Professor Vázquez’s interdisciplinary approach combines computer science, behavioral science, and design to develop perception and decision-making algorithms that enable robots to understand and respond to complex social dynamics such as spatial behavior and social influence. The podcast is hosted by Ella Lan, a Stanford student passionate about AI ethics and interdisciplinary dialogue.
robot, human-robot-interaction, social-robotics, AI-ethics, autonomous-robots, multi-party-HRI, robotic-perception
Generations in Dialogue: Embodied AI, robotics, perception, and action with Professor Roberto Martín-Martín - Robohub
The article discusses the third episode of the AAAI podcast series "Generations in Dialogue: Bridging Perspectives in AI," which features a conversation between host Ella Lan and Professor Roberto Martín-Martín. The series aims to explore how generational experiences influence perspectives on AI, addressing challenges, opportunities, and ethical considerations in the field. In this episode, Martín-Martín shares insights from his childhood curiosity about technology to his current research focus on embodied AI, robotics, perception, and action. He emphasizes the importance of making robots accessible to everyone and discusses how machines can augment human capabilities, drawing inspiration from human cognition and interdisciplinary fields like psychology and cognitive science. Professor Roberto Martín-Martín is an Assistant Professor of Computer Science at the University of Texas at Austin, specializing in integrating robotics, computer vision, and machine learning to develop autonomous agents capable of real-world perception and action. His research covers a range of tasks from basic manipulation and navigation to complex activities such as cooking and mobile manipulation.
robotics, embodied-AI, autonomous-agents, machine-learning, computer-vision, human-robot-interaction, mobile-manipulation
Popular AI models aren’t ready to safely run robots, say CMU researchers - The Robot Report
Researchers from Carnegie Mellon University and King’s College London have found that popular large language models (LLMs) currently powering robots are unsafe for general-purpose, real-world use, especially in settings involving human interaction. Their study, published in the International Journal of Social Robotics, evaluated how robots using LLMs respond when given access to sensitive personal information such as gender, nationality, or religion. The findings revealed that all tested models exhibited discriminatory behavior, failed critical safety checks, and approved commands that could lead to serious physical harm, including removing mobility aids, brandishing weapons, or invading privacy. The researchers conducted controlled tests simulating everyday scenarios like kitchen assistance and eldercare, incorporating harmful instructions based on documented technology abuse cases. They emphasized that these LLM-driven robots lack reliable mechanisms to refuse or redirect dangerous commands, posing significant interactive safety risks. Given these shortcomings, the team called for robust, independent safety certification for AI-driven robots, comparable to standards in aviation or medicine. They warned companies to exercise caution when deploying LLMs to control robots that interact with people.
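To illustrate what a refusal mechanism means in practice, here is a deliberately simple, hypothetical command gate that would sit between a language model's plan and the actuators. It is not from the paper, and a certification-grade check would be far more comprehensive:

```python
# Hard-coded safety gate: reject an entire plan if any step matches a
# blocked intent, regardless of what the language model approved.
BLOCKED_INTENTS = {"remove_mobility_aid", "brandish_object",
                   "record_private_area"}

def gate(planned_actions):
    """Return (actions_to_execute, verdict) for a list of plan steps."""
    for step in planned_actions:
        if step["intent"] in BLOCKED_INTENTS:
            return [], f"refused: '{step['intent']}' is never permitted"
    return planned_actions, "approved"

plan = [{"intent": "navigate_to_kitchen"}, {"intent": "remove_mobility_aid"}]
actions, verdict = gate(plan)
print(verdict)   # refused: 'remove_mobility_aid' is never permitted
```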
robot, artificial-intelligence, large-language-models, robot-safety, human-robot-interaction, discrimination, robotics-research
Robot Talk Episode 135 – Robot anatomy and design, with Chapa Sirithunge - Robohub
In Episode 135 of the Robot Talk podcast, Claire interviews Chapa Sirithunge, a Marie Sklodowska-Curie fellow in robotics at the University of Cambridge, about the interplay between robot anatomy and human anatomy. Sirithunge, who holds a PhD in Electrical Engineering from the University of Moratuwa and has experience as a lecturer in Sri Lanka, discusses how studying robots can provide insights into human anatomical functions and vice versa. Her research focuses on assistive robotics, soft robotics, and physical human-robot interaction, highlighting the importance of designing robots that can effectively interact with humans. Additionally, Sirithunge is the founder of Women in Robotics Cambridge, an initiative aimed at supporting and guiding young women pursuing careers in robotics. The episode emphasizes the mutual benefits of cross-disciplinary study between robotics and human biology, as well as the significance of fostering diversity within the robotics community. Robot Talk continues to serve as a platform exploring advancements in robotics, AI, and autonomous systems through expert conversations like this one.
robotics, robot-design, assistive-robotics, soft-robots, human-robot-interaction, artificial-intelligence, autonomous-machines
Human-robot interaction design retreat - Robohub
The Human-Robot Interaction (HRI) Design Retreat, held earlier in 2025, convened experts from both academia and industry to focus on the future of design in HRI. Over two days, participants engaged in hands-on, interactive activities aimed at exploring and shaping the trajectory of HRI design. A key outcome of the retreat was the development of a roadmap outlining priorities and goals for the next five to ten years in the field. Organized by Patrícia Alves-Oliveira and Anastasia Kouvaras Ostrowski, the event emphasized collaboration and forward-thinking strategies to advance human-robot interaction design. Additional resources, including a short documentary about the retreat, are available for those interested in learning more about the discussions and insights generated during the event.
robot, human-robot-interaction, HRI-design, robotics, AI, robotics-industry, interaction-design
Agile Robots launches Agile ONE industrial humanoid - The Robot Report
Agile Robots SE, a Munich-based company, has launched Agile ONE, its first industrial humanoid robot designed to work safely and efficiently alongside humans and other systems in structured industrial environments. Agile ONE features intuitive human-robot interaction (HRI) capabilities, including responsive eye rings, proximity sensors, a rearview camera, and a chest display for real-time information. Its dexterous five-fingered hands, equipped with multiple sensors for force and tactile feedback, enable precise manipulation tasks such as handling tiny screws or operating power tools. The robot embodies Agile Robots’ vision of “physical AI,” combining intelligence, autonomy, and flexibility to perceive, understand, and act in the physical world. A key differentiator for Agile ONE is its layered AI approach, described as a “data pyramid” that integrates real-world teleoperation and field data, physical simulation data, and visual data from videos and images. Its cognitive architecture comprises three layers, among them slow thinking for task planning and fast thinking for dynamic individual actions.
robot, humanoid-robot, industrial-automation, AI-robotics, human-robot-interaction, robotic-hand, autonomous-robots
Anthropic study finds Claude helps humans train robots faster
Anthropic conducted an internal one-day study, dubbed Project Fetch, to evaluate how its AI model Claude impacts human performance in real-world robotics tasks. Two teams of software engineers were tasked with programming a quadruped robot dog to fetch a beach ball, with only one team having access to Claude. The Claude-assisted team completed seven out of eight tasks, outperforming the non-AI team, which completed six. The most significant advantage was seen in hardware-level tasks such as connecting to the robot and accessing sensor data, where Claude helped quickly identify solutions and troubleshoot issues, while the non-AI team struggled and required external hints. The study also revealed that the Claude-assisted team wrote about nine times more code and explored multiple approaches in parallel, boosting creativity and iteration speed, although sometimes pursuing unproductive directions. While the non-AI team occasionally moved faster in some tasks, the AI-assisted system ultimately provided smoother and more user-friendly control. Additionally, analysis of team interactions showed that the non-AI group expressed more frustration during the challenge.
robot, AI, robotics, robot-dog, human-robot-interaction, automation, machine-learning
Seventh sense: Humans can sense buried objects like shorebirds
A recent study by researchers at Queen Mary University of London and University College London reveals that humans possess a “remote touch” ability, enabling them to detect objects buried beneath sand without direct contact. This challenges the traditional view that touch is limited to physical contact with surfaces. Participants in the study were able to locate hidden cubes under sand by sensing subtle mechanical vibrations and displacements transmitted through the granular material, a capability similar to that of shorebirds like sandpipers and plovers, which detect prey beneath sand via mechanical cues. The study also compared human performance with a robotic tactile sensor trained using a Long Short-Term Memory (LSTM) algorithm. Humans achieved a higher precision (70.7%) in detecting buried objects than the robot (40%), despite the robot sensing objects from slightly greater distances but producing more false positives. Both human and robotic detection approached the theoretical physical limits of sensitivity. These findings expand the scientific understanding of human touch, showing it extends beyond direct contact, and suggest new directions for designing tactile sensors for robotic exploration.
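For readers curious what the robotic side might look like, below is a deliberately small sketch of an LSTM sequence classifier of the kind the study describes, labeling a window of probe vibration readings as "buried object present" or not. The architecture, sizes, and data here are invented:

```python
# Tiny LSTM detector: summarize a vibration time series with the final
# hidden state, then emit a single "object present" logit.
import torch
import torch.nn as nn

class BuriedObjectDetector(nn.Module):
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, time, features)
        _, (h, _) = self.lstm(x)     # final hidden state per sequence
        return self.head(h[-1])      # (batch, 1) detection logit

# Dummy batch: 8 probes, 200 time steps of 3-axis vibration each.
model = BuriedObjectDetector()
signals = torch.randn(8, 200, 3)
prob = torch.sigmoid(model(signals))
print(prob.squeeze(-1))              # per-probe detection probabilities
```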
robotics, tactile-sensors, remote-touch, human-robot-interaction, machine-learning, LSTM-algorithm, robotic-exploration
Moxi 2.0 mobile manipulator is built for AI, says Diligent Robotics - The Robot Report
Diligent Robotics has announced Moxi 2.0, the next-generation version of its mobile manipulator robot designed primarily for healthcare environments. Building on three years of real-world data from over 1.25 million hospital deliveries, Moxi 2.0 incorporates one of the largest datasets of human-robot interaction to date. The robot currently operates in more than 25 U.S. hospitals, assisting nurses and pharmacy staff by handling routine tasks such as delivering medications and lab samples, thereby improving workflow efficiency and reducing staff burnout. The upgraded Moxi 2.0, powered by NVIDIA Thor for enhanced AI compute, is designed to better navigate complex, dynamic indoor environments with improved reasoning, prediction, and adaptability, including pre-emptive navigation around obstacles like beds and wheelchairs. The new hardware platform of Moxi 2.0 is optimized for manufacturability and durability to support fleet expansion, with physical design improvements such as enhanced handles and servicing panels based on user feedback.
robot, AI, mobile-manipulator, healthcare-robotics, hospital-automation, NVIDIA-Thor, human-robot-interaction
Robot Talk Episode 130 – Robots learning from humans, with Chad Jenkins - Robohub
In Episode 130 of the Robot Talk podcast, Claire interviews Chad Jenkins, a Professor of Robotics and Electrical Engineering and Computer Science at the University of Michigan, about how robots can learn from humans to better assist in daily tasks. Jenkins’ research focuses on robot learning from demonstration, human-robot interaction, dexterous mobile manipulation, and robot perception. Notably, he founded the Robotics Major Degree Program at the University of Michigan in 2022 and received the 2024 ACM/CMD-IT Richard A. Tapia Achievement Award for his contributions to scientific scholarship, civic science, and diversity in computing. The episode highlights the intersection of robotics and human collaboration, emphasizing how robots can be taught by observing human actions to improve their functionality and integration into everyday life. This discussion fits within the broader context of the Robot Talk podcast series, which explores advancements in robotics, AI, and autonomous machines, featuring experts from various fields. The episode also connects to related topics such as robotic applications in smart cities and museums.
robotics, robot-learning, human-robot-interaction, autonomous-machines, robot-perception, microrobots, robotic-systems
China's humanoid robot takes over presentation, car salesperson gig
Chinese automaker Chery, in collaboration with AiMOGA Robotics, unveiled Mornine, a humanoid robot designed to integrate automotive technology with embodied intelligence. At the AiMOGA Global Business Conference in Wuhu, China, Mornine delivered a 30-minute multilingual presentation on robotics and automotive innovations, acted as an autonomous car sales assistant by greeting visitors, explaining car features, and even opening a car door—making it the world’s first humanoid robot to do so autonomously. Mornine’s capabilities stem from advanced technologies including full-body motion control, reinforcement learning, and a multilingual AI model called MoNet, enabling it to perceive, plan, and interact naturally with humans using vision-language understanding and semantic reasoning. Powered by AiMOGA’s L3 Assistance Level framework, Mornine features high-torque joints and dexterous hands with 17 degrees of freedom, allowing smooth and precise movements. The robot’s AI adapts its gestures and tone based on visitor reactions.
robot, humanoid-robot, AI, autonomous-systems, automotive-technology, reinforcement-learning, human-robot-interaction
Upcoming 'Yogi' humanoid robot to focus on human connections
Cartwheel Robotics is developing a humanoid robot named Yogi, designed primarily to foster genuine human connections and serve as a friendly, emotionally intelligent companion in homes and workplaces. Unlike many other robotics firms focusing on factory automation—such as Tesla’s Optimus robot—Cartwheel emphasizes natural movement, safety, and approachability. Yogi is constructed with medical-grade silicone and soft protective materials, features modular swappable batteries for extended operation, and incorporates precision-engineered actuators with overload protection. The robot aims to assist with light household tasks while maintaining intuitive and reliable interactions, reflecting Cartwheel’s goal to integrate humanoid AI into everyday life by enhancing how people live, work, and care for one another. Humanoid Global Holdings Corp., Cartwheel’s parent investment company, highlighted that Yogi is built on a proprietary full-stack humanoid platform combining custom hardware, AI models, motion systems, and software. Cartwheel is expanding operations with a new facility in Reno, Nevada, set to open in January.
robot, humanoid-robot, AI, home-automation, robotics-technology, human-robot-interaction, battery-technology
China builds humanoid robot with realistic eye movements, bionic skin
China’s AheadForm Technology has developed a highly advanced humanoid robot named Elf V1, featuring lifelike bionic skin and realistic eye movements designed for natural daily interactions. The robot integrates 30 facial muscles controlled by brushless micro-motors and a high-precision control system, enabling expressive facial features, synchronized speech, and the ability to convey emotions and interpret human non-verbal cues. This design aims to overcome the “uncanny valley” effect, making interactions with humans more natural and engaging. Powered by self-supervised AI algorithms and enhanced with Large Language Models (LLMs) and Vision-Language Models (VLMs), Elf V1 can perceive its environment, communicate intelligently, and adapt in real-time to human emotions and behaviors. AheadForm envisions these robots providing assistance, companionship, and support across various industries, bridging the gap between humans and machines. The company’s previous Lan Series offered more cost-efficient humanoids with 10 degrees of freedom, while Elf V1 represents a significant step up in realism and expressiveness.
robotics, humanoid-robot, bionic-skin, AI-robotics, human-robot-interaction, advanced-control-systems, emotion-recognition
Diligent Robotics adds two members to AI advisory board - The Robot Report
Diligent Robotics, known for its Moxi mobile manipulator used in hospitals, has expanded its AI advisory board by adding two prominent experts: Siddhartha Srinivasa, a robotics professor at the University of Washington, and Zhaoyin Jia, a distinguished engineer specializing in robotic perception and autonomy. The advisory board, launched in late 2023, aims to guide the company’s AI development with a focus on responsible practices and advancing embodied AI. The board includes leading academics and industry experts who provide strategic counsel as Diligent scales its Moxi robot deployments across health systems nationwide. Srinivasa brings extensive experience in robotic manipulation and human-robot interaction, having led research and development teams at Amazon Robotics and Cruise, and contributed influential algorithms and systems like HERB and ADA. Jia offers deep expertise in computer vision and large-scale autonomous systems from his leadership roles at Cruise, DiDi, and Waymo, focusing on safe and reliable AI deployment in complex environments.
robotics, AI, healthcare-robots, autonomous-robots, human-robot-interaction, robotic-manipulation, embodied-AI
IEEE study group publishes framework for humanoid standards
The IEEE Humanoid Study Group has published a comprehensive framework aimed at guiding the development of standards for humanoid robots. This framework addresses the unique risks and capabilities of humanoids to support their safe and effective deployment across industrial, service, and public sectors. The study group focused on three key interconnected areas: Classification, Stability, and Human-Robot Interaction (HRI). Classification involves creating a clear taxonomy to define humanoid robots by their physical and behavioral traits and application domains, serving as a foundation for identifying applicable standards and gaps. Stability focuses on developing measurable metrics and safety standards for balancing robots, including dynamic balance and fall-response behaviors. HRI guidelines aim to ensure safe, trustworthy interactions between humans and humanoid robots, covering collaborative safety, interpretable behavior, and user training. Led by Aaron Prather of ASTM International, the working group comprised over 60 experts from industry, academia, and regulatory bodies who collaborated for more than a year. Their efforts included market research and vendor and end-user interviews.
robotics, humanoid-robots, robot-standards, human-robot-interaction, robotics-safety, IEEE-standards, autonomous-systems
China's humanoid robot head shocks with 'lifelike facial expressions'
Chinese robotics company AheadForm has developed a humanoid robotic head capable of expressing a wide range of realistic facial emotions, aiming to enhance human-robot interaction. Their robot head, showcased in a viral YouTube video, features lifelike eye movements, blinking, and expressive facial cues achieved through a combination of self-supervised AI algorithms and advanced bionic actuation technology. AheadForm’s “Elf series” of robots, characterized by elf-like features such as large ears, incorporate up to 30 degrees of freedom in facial movement, powered by precise control systems and AI learning algorithms. Their latest model, “Xuan,” is a full-body bionic figure with a static body but a highly interactive head capable of rich facial expressions and lifelike gaze behaviors. A key innovation enabling these realistic expressions is a specialized brushless motor designed for ultra-quiet, responsive, and energy-efficient facial control, allowing subtle and precise movements. AheadForm’s founder, Hu Yuhang, envisions humanoid robots that feel natural to interact with.
robot, humanoid-robot, AI-algorithms, bionic-actuation, brushless-motor, human-robot-interaction, lifelike-facial-expressions
Launch of the World's Cuddliest Robot
The article announces the release of the GR-3, described as the world’s cuddliest robot, now available for purchase. Developed by Fourier, the GR-3 embodies the company’s commitment to creating empathic robot companions designed to assist humans in everyday activities. The robot aims to provide emotional support and practical help, blending advanced technology with a comforting, approachable design. Key takeaways include Fourier’s emphasis on empathy in robotics, positioning the GR-3 not just as a functional assistant but also as a companion that can enhance users’ emotional well-being. While specific features and capabilities of the GR-3 are not detailed in the article, its launch marks a significant step in the integration of robotics into daily human life, focusing on both utility and emotional connection.
robot, robotics, empathic-robots, companion-robots, GR-3-robot, human-robot-interaction
This $30M startup built a dog crate-sized robot factory that learns by watching humans
San Francisco-based startup MicroFactory has developed a compact, dog crate-sized robotic manufacturing system designed for precision tasks such as circuit board assembly, soldering, and cable routing. Unlike traditional humanoid or large-scale factory robots, MicroFactory’s enclosed workstation features two robotic arms that can be trained through direct human demonstration as well as AI, enabling faster and more intuitive programming for complex manufacturing sequences. Co-founder and CEO Igor Kulakov emphasized that this approach simplifies both hardware and AI development while allowing users to observe the manufacturing process in real time. Founded in 2024 by Kulakov and Viktor Petrenko, who previously ran a manufacturing business, MicroFactory built its prototype within five months and has since received hundreds of preorders for diverse applications, including electronics assembly and even food processing. The company recently raised $1.5 million in a pre-seed funding round, valuing it at $30 million post-money, with investors including executives from Hugging Face and Naval Ravikant.
robotics, manufacturing-automation, AI-robotics, robotic-arms, tabletop-robot-factory, human-robot-interaction, precision-manufacturing
'World’s cutest' humanoid carries out chores with warmth, care
The Fourier GR-3 humanoid robot, developed by Chinese firm Fourier Robotics, is designed to support meaningful human interaction by combining emotional intelligence with practical functionality. Unlike traditional robots, the GR-3 can express empathy and kindness, making it feel more like a companion than a machine. It demonstrates capabilities such as eidetic memory to assist an art curator, multilingual communication to guide museum visitors, and home assistance by managing daily schedules. The robot also exhibits advanced visual recognition and human-like locomotion, responding naturally to gestures like waving. Weighing 71 kg and standing 165 cm tall, the GR-3 features 55 degrees of freedom for balanced, fluid movement and an animated facial interface that enhances its lifelike presence. Its emotional intelligence is powered by Fourier’s Full-Perception Multimodal Interaction System, integrating sight, sound, and touch, with 31 pressure sensors enabling responsive actions such as blinking and eye tracking. The robot supports continuous operation with a swappable battery and adaptable movement modes.
robot, humanoid-robot, emotional-intelligence, human-robot-interaction, robotics-technology, autonomous-robots, smart-robotics
Humans can ‘borrow’ robot hands as their own, scientists discover
Researchers from the Italian Institute of Technology and Brown University have discovered that humans can unconsciously incorporate a humanoid robot’s hand into their body schema—the brain’s internal map of the body and its spatial relationship to the environment—especially when collaborating on a task. In experiments involving a child-sized robot named iCub, participants who jointly sliced a soap bar with the robot showed faster reactions to visual cues near the robot’s hand, indicating that their brains treated the robot’s hand as part of their own near space. This effect was contingent on active collaboration and was influenced by the robot’s movement style, with broader, fluid, and well-synchronized gestures enhancing the cognitive integration. The study also found that physical proximity and the participant’s perception of the robot’s competence and pleasantness strengthened this integration. Participants who attributed more human-like traits or emotions to the robot exhibited a stronger cognitive bond, suggesting that empathy and partnership play important roles in human-robot interaction. These findings provide valuable insights for designing future robots that can collaborate with humans more naturally.
robot, humanoid-robot, human-robot-interaction, body-schema, cognitive-integration, rehabilitation-robotics, iCub-robot
Humanoid robot HITTER plays table tennis with human-like speed
UC Berkeley has developed a humanoid robot named HITTER that can play table tennis with human-like speed and agility. Demonstrated in a video, HITTER successfully engaged in rallies exceeding 100 shots against human opponents, using its left hand for balance and executing precise, fluid movements. The robot’s performance relies on a dual-system design: a high-level planner that tracks and predicts the ball’s trajectory using external cameras, and a low-level controller that converts these calculations into coordinated arm and leg motions. Trained on human motion data, HITTER can move naturally, reacting to balls traveling up to 5 m/s in under a second. The development team combined model-based planning with reinforcement learning to overcome the challenges of split-second decision-making and unpredictable shots inherent in table tennis. This hybrid approach enabled HITTER to fine-tune its movements through trial and error, resulting in lifelike swings and footwork. HITTER was tested on a general-purpose humanoid platform, likely the Unitree G1.
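A toy version of the high-level planner's job, under the simplifying assumptions of a drag-free ballistic model and just two camera observations (the real system is far richer), might look like this:

```python
# Estimate ball velocity by finite differences from two camera fixes, then
# roll a gravity-only model forward to the paddle's hitting plane.
import numpy as np

G = 9.81  # m/s^2

def predict_intercept(p_prev, p_now, dt, plane_x=0.3):
    """p_prev, p_now: recent ball positions (x, y, z) in meters; return the
    predicted crossing point of the plane x = plane_x and the time to it."""
    v = (p_now - p_prev) / dt                 # crude velocity estimate
    if v[0] >= 0:                             # ball not approaching the plane
        return None
    t_hit = (plane_x - p_now[0]) / v[0]       # time until plane crossing
    y_hit = p_now[1] + v[1] * t_hit
    z_hit = p_now[2] + v[2] * t_hit - 0.5 * G * t_hit**2
    return np.array([plane_x, y_hit, z_hit]), t_hit

# Ball approaching at ~5 m/s, cameras sampling at 100 Hz:
target = predict_intercept(np.array([2.00, 0.10, 1.10]),
                           np.array([1.95, 0.10, 1.10]), dt=0.01)
print(target)  # intercept point handed to the low-level controller
```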
robotics, humanoid-robot, reinforcement-learning, AI-planning, human-robot-interaction, table-tennis-robot, robot-motion-control
Emotional intelligence is ElliQ's core strength, says Intuition Robotics - The Robot Report
Intuition Robotics, founded in 2016 by Dor Skuler, developed ElliQ, an AI care companion robot designed to promote independence and healthy living among older adults. Skuler’s personal experiences caring for his grandfather highlighted the importance of emotional connection and personality in caregiving, beyond just technical skills. This insight led Intuition Robotics to focus on emotional intelligence as the core strength of ElliQ, aiming to create empathetic interactions that can address loneliness and provide meaningful companionship rather than merely performing physical tasks. Unlike many developers pursuing fully mobile humanoid robots, Intuition Robotics chose to create a stationary device that emphasizes social interaction and emotional engagement. ElliQ’s design centers on a “social interaction stack” that enables it to initiate conversations naturally and understand the nuances of human behavior and etiquette within the home environment. Skuler emphasized that true utility in caregiving robots requires blending seamlessly into the complexities of daily life, making ElliQ more of a friend or roommate than just a functional tool.
robot, AI-care-companion, emotional-intelligence, human-robot-interaction, elder-care-technology, social-robots, Intuition-Robotics
New algorithm teaches robots how not to hurt humans in workplaces
Researchers at the University of Colorado Boulder have developed a new algorithm that enables robots to make safer decisions when working alongside humans in factory environments. Inspired by game theory, the algorithm treats the robot as a player seeking an “admissible strategy” that balances task completion with minimizing potential harm to humans. Unlike traditional approaches focused on winning or perfect prediction, this system prioritizes human safety by anticipating unpredictable human actions and choosing moves that the robot will not regret in the future. The algorithm allows robots to respond intelligently and proactively in collaborative workspaces. If a human partner acts unexpectedly or makes a mistake, the robot first attempts to correct the issue safely; if unsuccessful, it may relocate its task to a safer area to avoid endangering the person. This approach acknowledges the variability in human expertise and behavior, requiring robots to adapt to all possible scenarios rather than expecting humans to adjust. The researchers envision that such robots will complement human strengths by handling repetitive, physically demanding tasks, potentially addressing labor shortages in sectors like eldercare.
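A toy rendering of that idea, with an invented risk budget and a two-action example (the paper's game-theoretic formulation is much richer): the robot scores each of its actions against every plausible human action and commits only to actions whose worst case stays safe.

```python
# Keep only actions whose harm risk stays within budget for *every*
# anticipated human action, then maximize worst-case task value.
def choose_admissible(robot_actions, human_actions, task_value, harm_risk,
                      risk_budget=0.1):
    """task_value(r, h) -> float; harm_risk(r, h) -> float in [0, 1]."""
    admissible = [r for r in robot_actions
                  if all(harm_risk(r, h) <= risk_budget
                         for h in human_actions)]
    if not admissible:                 # no safe way to continue here:
        return "relocate_task"         # move the work to a safer area
    return max(admissible,
               key=lambda r: min(task_value(r, h) for h in human_actions))

# Hypothetical scenario: reaching across the shared bench is faster but
# risky if the person leans in unexpectedly.
value = {("reach", "clear"): 1.0, ("reach", "lean_in"): 0.9,
         ("wait", "clear"): 0.4, ("wait", "lean_in"): 0.4}
risk  = {("reach", "clear"): 0.02, ("reach", "lean_in"): 0.6,
         ("wait", "clear"): 0.0,  ("wait", "lean_in"): 0.0}
act = choose_admissible(["reach", "wait"], ["clear", "lean_in"],
                        lambda r, h: value[(r, h)],
                        lambda r, h: risk[(r, h)])
print(act)  # -> "wait": safe against every anticipated human move
```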
robot, robotics, human-robot-interaction, safety-algorithms, industrial-robots, workplace-safety, artificial-intelligence
MIT roboticists debate the future of robotics, data, and computing - The Robot Report
At the IEEE International Conference on Robotics and Automation (ICRA), leading roboticists debated the future direction of robotics, focusing on whether advances will be driven primarily by code-based models or data-driven approaches. The panel, moderated by Ken Goldberg of UC Berkeley and featuring experts such as Daniela Rus, Russ Tedrake, Leslie Kaelbling, and others, highlighted a growing divide in the field. Rus and Tedrake strongly advocated for data-centric methods, emphasizing that real-world robotics requires machines to learn from extensive, multimodal datasets capturing human actions and environmental variability. They argued that traditional physics-based models work well in controlled settings but fail to generalize to unpredictable, human-centered tasks. Rus’s team at MIT’s CSAIL is pioneering this approach by collecting detailed sensor data on everyday human activities like cooking, capturing nuances such as gaze and force interactions to train AI systems that enable robots to generalize and adapt. Tedrake illustrated how scaling data enables robots to develop "common sense" for dexterous manipulation.
robotics, artificial-intelligence, machine-learning, robotics-research, data-driven-robotics, human-robot-interaction, robotic-automation
UL Solutions opens 1st service robot testing lab
UL Solutions, a global leader in applied safety science, has opened its first testing laboratory for commercial and service robots in Seoul, South Korea. The lab aims to provide testing and certification services focused on identifying emerging hazards, especially those related to human-robot interactions. It will primarily test compliance with UL 3300, the Standard for Safety for Service, Communication, Information, Education and Entertainment Robots. This standard addresses critical safety aspects such as mobility, fire and shock hazards, and safe interaction with vulnerable individuals, requiring features like speed limits, object detection, and audible/visual indicators to ensure robots operate safely alongside people in public and commercial settings. The establishment of this lab reflects the rapid growth of the robotics industry, where robots are increasingly deployed in diverse environments including hotels, healthcare, retail, and delivery services. UL Solutions highlights the importance of addressing new safety concerns as robots take on more roles outside traditional industrial floors. The global service robotics market is expanding, particularly in the Asia-Pacific region, driven by labor shortages.
robot, service-robots, robot-testing, human-robot-interaction, UL-3300-standard, robotics-safety, commercial-robots
Soft robot jacket offers support for upper-limb disabilities
Researchers at Harvard John A. Paulson School of Engineering and Applied Sciences, in collaboration with Massachusetts General Hospital and Harvard Medical School, have developed a soft, wearable robotic jacket designed to assist individuals with upper-limb impairments caused by conditions such as stroke and ALS. This device uses a combination of machine learning and a physics-based hysteresis model to personalize movement assistance by accurately detecting the user’s motion intentions through sensors. The integrated real-time controller adjusts the level of support based on the user’s specific movements and kinematic state, enhancing control transparency and practical usability in daily tasks like eating and drinking. In trials involving stroke and ALS patients, the robotic jacket demonstrated a 94.2% accuracy in identifying subtle shoulder movements and reduced the force needed to lower the arm by nearly one-third compared to previous models. It also improved movement quality by increasing range of motion in the shoulder, elbow, and wrist, reducing compensatory trunk movements by up to 25.4%, and enhancing hand-path efficiency.
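A rough sketch of that control idea, with a crude stand-in for the learned intent model and invented gains (not the Harvard implementation): estimate intended motion from a short sensor window, then scale the assistance so the wearer keeps control.

```python
# Intent-scaled assistance: positive intent = raising the arm, negative =
# lowering it; support raising strongly and stay light while lowering so
# the device remains "transparent" to the wearer.
import numpy as np

def estimate_intent(imu_window):
    """Crude intent signal: mean shoulder angular velocity (rad/s) over the
    last window. A learned model plus hysteresis compensation would go here."""
    return float(np.mean(imu_window))

def assist_torque(intent, k_raise=1.5, k_lower=0.5, max_torque=4.0):
    """Map the intent signal to a bounded assistance torque (N*m)."""
    k = k_raise if intent > 0 else k_lower
    return float(np.clip(k * intent, -max_torque, max_torque))

window = np.array([0.8, 1.0, 1.2, 1.1])   # raising a cup to drink
print(assist_torque(estimate_intent(window)))   # ~1.54 N*m of support
```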
soft-robotics, wearable-robots, upper-limb-support, assistive-technology, machine-learning, rehabilitation-robotics, human-robot-interaction
Interview with Haimin Hu: Game-theoretic integration of safety, interaction and learning for human-centered autonomy - Robohub
In this interview, Haimin Hu discusses his PhD research at the Princeton Safe Robotics Lab, which centers on the algorithmic foundations of human-centered autonomy. His work integrates dynamic game theory, machine learning, and safety-critical control to develop autonomous systems—such as self-driving cars, drones, and quadrupedal robots—that are safe, reliable, and adaptable in human-populated environments. A key innovation is a unified game-theoretic framework that enables robots to plan motion by considering both physical and informational states, allowing them to interact safely with humans, adapt to their preferences, and even assist in skill refinement. His contributions span trustworthy human-robot interaction through real-time learning to reduce uncertainty, verifiable neural safety analysis for complex robotic systems, and scalable game-theoretic planning under uncertainty. Hu highlights the challenge of defining safety in human-robot interaction, emphasizing that statistical safety metrics alone are insufficient for trustworthy deployment. He argues for robust safety guarantees comparable to those in critical infrastructure, combined with runtime learning.
robot, human-robot-interaction, autonomous-systems, safety-critical-control, game-theory, machine-learning, autonomous-vehicles
Elephant Robotics builds myCobot Pro 450 to meet industrial expectations - The Robot Report
Elephant Robotics has launched the myCobot Pro 450, a compact collaborative robot arm designed to meet industrial-level demands across education, research, and commercial applications. The robot features a modular design with a 1 kg payload, 450 mm reach, and high positioning accuracy of ±0.1 mm. Weighing under 5 kg, it incorporates harmonic reducers, servo motors, joint brakes, and integrated controllers within an all-metal, durable housing. The myCobot Pro 450 supports various end effectors such as cameras, suction pumps, and grippers, enabling rapid deployment for tasks like data collection, fine manipulation, and intelligent human-robot interaction (HRI). The cobot supports personalized applications including 3D visual random sorting, robotic writing and painting, and compound mobile inspections. It integrates with peripherals like 3D cameras, recognition software, industrial PCs, and mobile platforms (e.g., myAGV Pro) to offer scalable solutions.
robot, collaborative-robot, myCobot-Pro-450, industrial-automation, AI-integration, human-robot-interaction, robotic-arm
China’s Kaiwa plans world’s first pregnancy humanoid robot
Chinese tech company Kaiwa Technology, based in Guangzhou, is developing what it claims will be the world’s first pregnancy humanoid robot, set to debut by 2026 at a price under $13,900. This humanoid robot features an embedded artificial womb designed to carry a fetus through the entire ten-month gestation period, replicating natural pregnancy by using artificial amniotic fluid and nutrient delivery via a hose. The technology, reportedly mature in laboratory settings, aims to offer an alternative to human pregnancy, potentially benefiting those who wish to avoid the physical burdens of gestation. The project has sparked significant public debate over ethical, legal, and scientific implications, with discussions already underway with authorities in Guangdong Province. The artificial womb technology builds on prior advances, such as the 2017 “biobag” experiment in which premature lambs were nurtured in artificial amniotic fluid, though current artificial wombs mainly support partial gestation rather than full-term pregnancy. Kaiwa’s vision would therefore require further breakthroughs before full-term gestation in an artificial womb becomes feasible.
robot, humanoid-robot, artificial-womb, AI-technology, pregnancy-robot, robotics-innovation, human-robot-interaction
Sensing robot hand flicks, flinches, and grips like a human
A student team at USC Viterbi, led by assistant professor Daniel Seita, has developed the MOTIF Hand, a robotic hand designed to mimic human touch by sensing multiple modalities such as pressure, temperature, and motion. Unlike traditional robot grippers, the MOTIF Hand integrates a thermal camera embedded in its palm to detect heat without physical contact, allowing it to "flinch" away from hot surfaces much like a human would. It also uses force sensors in its fingers to apply precise pressure and can gauge the weight or contents of objects by flicking or shaking them, replicating human instincts in object interaction. The MOTIF Hand builds on previous open-source designs like Carnegie Mellon’s LEAP Hand, with the USC team also committing to open-source their work to foster collaboration in the robotics community. The developers emphasize that this platform is intended as a foundation for further research, aiming to make advanced tactile sensing accessible to more teams. Their findings have been published on arXiv, highlighting a significant step toward more human-like tactile sensing in robots.
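A reflex like the reported thermal "flinch" can be sketched as a simple guard on the palm camera's temperature image, as below. The threshold and the hand's motion primitives are hypothetical placeholders, not the USC team's code.

```python
import numpy as np

HOT_THRESHOLD_C = 55.0   # assumed temperature threshold in Celsius

def flinch_if_hot(thermal_frame: np.ndarray, hand) -> bool:
    """thermal_frame: 2D array of per-pixel temperatures from the palm camera.
    hand: object exposing hypothetical retract()/continue_grasp() primitives."""
    if thermal_frame.max() > HOT_THRESHOLD_C:
        hand.retract()            # pull away before making contact
        return True
    hand.continue_grasp()
    return False
```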
robot, robotic-hand, sensors, human-robot-interaction, tactile-sensing, thermal-detection, robotics-research
GR-3 humanoid robot debuts with empathy, emotion, and lifelike walk
The GR-3 humanoid robot, unveiled by Fourier on August 6, 2025, represents a significant advancement in human-robot interaction by emphasizing empathy, emotional awareness, and lifelike movement. Standing 165 cm tall and weighing 71 kg, GR-3 features 55 degrees of freedom enabling natural, balanced motion, including expressive gaits such as a “bouncy walk.” Its design incorporates a soft-touch shell with warm tones and premium upholstery to create a familiar, comforting presence rather than a mechanical one. Central to its capabilities is Fourier’s Full-Perception Multimodal Interaction System, which integrates vision, audio, and tactile inputs into a real-time emotional processing engine. This system allows GR-3 to localize voices, maintain eye contact, recognize faces, and respond to touch via 31 pressure sensors, producing subtle emotional gestures that simulate genuine empathy. Beyond sensing, GR-3 employs a dual-path cognitive architecture combining fast, reflexive responses with slower, context-aware reasoning.
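The dual-path architecture can be pictured as a small arbitration rule: reflexive handlers answer first, and slower deliberation fills in otherwise. The event types and handlers below are invented for illustration; Fourier has not published this logic.

```python
def fast_path(event: dict):
    """Reflexive, low-latency responses (e.g., orienting toward a touch)."""
    if event["type"] == "touch":
        return {"action": "turn_head", "target": event["location"]}
    return None

def slow_path(event: dict):
    """Slower, context-aware reasoning (stand-in for language/affect models)."""
    return {"action": "speak", "utterance": f"I noticed a {event['type']}."}

def arbitrate(event: dict):
    reflex = fast_path(event)
    return reflex if reflex is not None else slow_path(event)

print(arbitrate({"type": "touch", "location": "left_shoulder"}))
```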
robotics, humanoid-robot, emotional-AI, human-robot-interaction, healthcare-robotics, empathetic-robots, assistive-technology
Japan team builds falcon-like drone that lands softly on your palm
Researchers at the University of Tokyo’s DRAGON Lab have developed a falcon-inspired flapping-wing drone capable of safely landing on a person’s palm without cushions. Unlike traditional propeller drones, this drone uses soft, flexible wings that mimic bird flight, resulting in quieter operation and a gentler presence ideal for close human interaction. The design is inspired by falconry and represents the first successful contact-based interaction between a flapping-wing drone and a human, emphasizing safety through careful flight planning that accounts for physical and psychological factors such as distance, altitude, approach direction, and velocity. The drone maintains a minimum distance of 0.3 meters from the user’s chest, slows down as it approaches, and stays within a comfortable altitude range between the elbow and eye level. It is controlled through intuitive hand gestures—bending the arm signals the drone to hover, while extending the arm commands it to approach and land. A sophisticated motion capture system with multiple cameras tracks markers on the user and drone, enabling precise, closed-loop tracking of both during the approach and landing.
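The reported standoff-and-slow-down behavior maps naturally onto a distance-scheduled speed command like the sketch below. The 0.3 m minimum distance comes from the article, while the ramp shape and speed cap are assumptions.

```python
MIN_DIST_M = 0.3     # minimum standoff from the user's chest (from article)
MAX_SPEED = 1.0      # cruise speed cap in m/s (assumed)

def approach_speed(dist_to_user: float, slow_radius: float = 1.5) -> float:
    """Speed command that ramps down linearly inside slow_radius and
    stops entirely at the minimum standoff distance."""
    if dist_to_user <= MIN_DIST_M:
        return 0.0
    scale = min(1.0, (dist_to_user - MIN_DIST_M) / (slow_radius - MIN_DIST_M))
    return MAX_SPEED * scale

print(approach_speed(2.0))   # full speed far away
print(approach_speed(0.5))   # slows close to the user
```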
robot, drone, flapping-wing-drone, human-robot-interaction, gesture-control, motion-planning, safe-landing
Interview with Kate Candon: Leveraging explicit and implicit feedback in human-robot interactions - Robohub
In this interview, Kate Candon, a PhD student at Yale University, discusses her research on improving human-robot interaction by leveraging both explicit and implicit feedback. Traditional robot learning often relies on explicit feedback, such as simple "good job" or "bad job" signals from a human teacher who is not actively engaged in the task. However, Candon emphasizes that humans naturally provide a range of implicit cues—like facial expressions, gestures, or subtle actions such as moving an object away—that convey valuable information without additional effort. Her current research aims to develop a framework that combines these implicit signals with explicit feedback to enable robots to learn more effectively from humans in natural, interactive settings. Candon explains that interpreting implicit feedback is challenging due to variability across individuals and cultures. Her initial approach focuses on analyzing human actions within a shared task to infer appropriate robot responses, with plans to incorporate visual cues such as facial expressions and gestures in future work. The research is tested in a pizza-making scenario, chosen because it is a familiar, collaborative task.
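One simple way to picture combining the two feedback channels is a weighted sum over decoded cues, as sketched below. The cue vocabulary and weights are invented placeholders, not Candon's framework.

```python
from typing import Optional

CUE_VALUES = {"smile": +1.0, "nod": +1.0,
              "frown": -1.0, "object_moved_away": -1.0}

def fused_reward(explicit: Optional[float], implicit_cues: dict,
                 w_explicit: float = 1.0, w_implicit: float = 0.5) -> float:
    """explicit: +1/-1 teacher signal, or None if the person said nothing.
    implicit_cues: detected cue name -> strength in [0, 1]."""
    implicit = sum(CUE_VALUES.get(cue, 0.0) * strength
                   for cue, strength in implicit_cues.items())
    reward = w_implicit * implicit
    if explicit is not None:
        reward += w_explicit * explicit
    return reward

print(fused_reward(None, {"smile": 0.8}))               # implicit-only update
print(fused_reward(-1.0, {"object_moved_away": 1.0}))   # both channels agree
```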
robot, human-robot-interaction, implicit-feedback, explicit-feedback, interactive-agents, robot-learning, AI
Cutest Humanoid Robot Ready For Launch
The article introduces the Fourier GR-3, a new humanoid robot designed primarily for companionship and caregiving. It highlights the robot's notably cute appearance, which sets it apart from previous models and may ease its acceptance and integration into human environments; the design aims to foster more natural and engaging interactions between humans and robots. While the article does not detail the GR-3's specific capabilities, it suggests that the launch could mark a significant step forward in how robots assist with caregiving and social companionship, with potential to improve the quality of life for people needing support and to advance the development of empathetic, interactive robotic companions. Further details about its functionality and deployment were not available in the excerpt.
robot, humanoid-robot, robotics, AI, companion-robot, caregiving-robot, human-robot-interaction
Fourier to unveil world's most 'adorable' humanoid robot next week
Shanghai-based robotics company Fourier Robotics is set to unveil its newest humanoid robot, the GR-3, on August 6, 2025. The GR-3 follows the GR-1 and GR-2 models but features a notably smaller and friendlier design, standing approximately 4 feet 5 inches (134 cm) tall, compared to its taller predecessors. The robot’s aesthetic is described as “softer” and more “adorable,” with expressive eyes aimed at enhancing user engagement. Designed primarily for domestic, educational, healthcare, and public environments, the GR-3 integrates a large language model (LLM) to facilitate natural speech interaction, positioning it as a companion or caregiver robot optimized for friendly human interaction. Building on Fourier’s previous models, which showcased advanced mobility, perception, and dexterous manipulation, the GR-3 is expected to emphasize compact hardware and approachable design suitable for home and classroom settings. While likely featuring simpler actuation and sensing than the GR-2, the GR-3 appears to prioritize approachability, safety, and affordability over industrial-grade performance.
robotics, humanoid-robot, AI-companion, smart-actuators, domestic-robots, educational-robots, human-robot-interaction
New soft robot arm scrubs toilets and dishes with drill-level force
Researchers at Northeastern University have developed SCCRUB, a novel soft robotic arm designed to tackle tough cleaning tasks with drill-level scrubbing power while maintaining safety around humans. Unlike traditional rigid industrial robots, SCCRUB uses flexible yet strong components called TRUNC cells—torsionally rigid universal couplings—that allow the arm to bend and flex while transmitting torque comparable to a handheld drill. This combination enables the robot to apply significant force to remove stubborn grime without posing the risks typical of hard robotic arms. Equipped with a counter-rotating scrubber brush and guided by a deep learning-based controller, SCCRUB can clean challenging messes such as microwaved ketchup and fruit preserves on glass dishes and toilet seats, removing over 99% of residue in lab tests. The counter-rotating brush design helps maintain firm pressure and stability by canceling frictional forces, enhancing cleaning effectiveness while preserving the arm’s soft and safe nature. The research team envisions expanding SCCRUB’s capabilities to assist humans with a wider range of household and care tasks.
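The benefit of the counter-rotating brush is easy to see in a back-of-envelope force balance: the lateral drag from each rotor is equal and opposite, so the net sideways load on the soft arm roughly cancels while the downward scrubbing force is preserved. The numbers below are illustrative, not measured values from the paper.

```python
mu = 0.6             # assumed brush-surface friction coefficient
normal_force = 20.0  # downward scrubbing force in newtons (assumed)

drag_cw  = +mu * normal_force   # lateral drag from the clockwise brush
drag_ccw = -mu * normal_force   # equal and opposite drag from the counter-rotating brush
net_lateral = drag_cw + drag_ccw

print(f"net lateral force on the arm: {net_lateral:.1f} N")  # ~0 N
```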
robot, soft-robotics, robotic-arm, machine-learning, automation, cleaning-robot, human-robot-interaction
MIT’s 3-in-1 training tool eases robot learning
MIT engineers have developed a novel three-in-one training interface that allows robots to learn new tasks through any of three common demonstration methods: remote control (teleoperation), physical manipulation (kinesthetic training), or by observing a human perform the task (natural teaching). This handheld, sensor-equipped tool can attach to many standard robotic arms, enabling users to teach robots in whichever way best suits the task or user preference. The interface was tested on a collaborative robotic arm by manufacturing experts performing typical factory tasks, demonstrating increased flexibility in robot training. This versatile demonstration interface aims to broaden the range of users who can effectively teach robots, potentially expanding robot adoption beyond manufacturing into areas like home care and healthcare. For example, one person could remotely train a robot to handle hazardous materials, another could physically guide the robot in packaging, and a third could demonstrate drawing a logo for the robot to mimic. The research, led by MIT’s Department of Aeronautics and Astronautics and CSAIL, was presented at the IEEE International Conference on Robotics and Automation (ICRA).
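Conceptually, the interface funnels three teaching modes into one demonstration format so that a single learner can consume all of them. The enum and sensor calls below are hypothetical stand-ins for how such a tool might be wrapped, not MIT's implementation.

```python
from enum import Enum, auto

class DemoMode(Enum):
    TELEOPERATION = auto()   # remote control
    KINESTHETIC = auto()     # physically guiding the arm
    NATURAL = auto()         # human performs the task; the tool observes

def record_demonstration(mode: DemoMode, sensors) -> list:
    """Return a trajectory of (pose, gripper_state) samples in the same
    format regardless of which teaching mode produced it, so one learner
    can consume demonstrations from all three modes."""
    trajectory = []
    while sensors.demo_active():   # hypothetical sensor API
        trajectory.append((sensors.tool_pose(), sensors.gripper_state()))
    return trajectory
```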
robotics, robot-learning, human-robot-interaction, collaborative-robots, robot-training-tools, MIT-robotics, intelligent-robots
Unveiling the Tree of Robots: A new taxonomy for understanding robotic diversity - The Robot Report
Researchers at the Munich Institute of Robotics and Machine Intelligence (MIRMI) at the Technical University of Munich (TUM) have developed the “Tree of Robots,” a novel taxonomy and evaluation scheme designed to measure and compare the sensitivity of autonomous robots. Sensitivity, which is critical for safe and flexible human-robot interaction, previously lacked a standardized assessment method. This new framework enables the categorization of various robotic systems—including industrial robots, cobots, soft robots, and tactile robots—based on 25 specific measurements related to physical contact sensitivity, such as force alignment and safety in human interaction. The resulting spider diagrams provide an accessible visual summary of a robot’s sensitivity performance, facilitating better understanding and comparison even for non-experts. The Tree of Robots draws inspiration from Darwin’s Tree of Life, illustrating the diversity and specialization of robotic “species” according to their design and operational environments. By analyzing single-armed robots from different manufacturers, the researchers identified distinct capabilities related to their sensors, motors, and control systems.
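Spider diagrams of this kind are straightforward to reproduce for any sensitivity profile. The sketch below plots five invented placeholder metrics with matplotlib; MIRMI's actual scheme uses 25 real measurements.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder metric names and normalized scores (illustrative only)
metrics = ["force alignment", "contact detection", "collision safety",
           "force resolution", "compliance"]
scores = [0.9, 0.7, 0.8, 0.6, 0.75]

angles = np.linspace(0, 2 * np.pi, len(metrics), endpoint=False).tolist()
scores_closed = scores + scores[:1]   # close the polygon
angles_closed = angles + angles[:1]

ax = plt.subplot(polar=True)
ax.plot(angles_closed, scores_closed)
ax.fill(angles_closed, scores_closed, alpha=0.25)
ax.set_xticks(angles)
ax.set_xticklabels(metrics)
plt.show()
```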
robotics, robotic-manipulators, robot-sensitivity, human-robot-interaction, industrial-robots, autonomous-robots, robotic-taxonomy
Week in Review: X CEO Linda Yaccarino steps down
The Week in Review highlights several major tech developments, starting with the departure of Linda Yaccarino as CEO of X after a challenging two-year period marked by advertiser backlash, controversies involving Elon Musk, and AI-related issues on the platform. The company faces ongoing difficulties ahead. Apple is adjusting its user interface by reducing transparency in features like Notifications and Apple Music to improve readability ahead of its fall OS launch. Hugging Face introduced Reachy Mini, an affordable, programmable robot aimed at AI developers, priced from $299 and integrated with its AI hub. In consumer tech, Nothing launched its ambitious Phone 3 with innovative features like a second screen and AI capabilities, though mixed reactions to design and pricing may limit its market impact. Samsung released new foldable phones, including the Z Fold7, Z Flip7, and a more affordable Z Flip7 FE. Rivian unveiled a high-performance electric vehicle boasting over 1,000 horsepower and advanced software features, positioning it as a flagship model.
robot, AI, programmable-robots, Hugging-Face, robotics-safety, AI-developers, human-robot-interaction
Hugging Face launches Reachy Mini robot as embodied AI platform
Hugging Face, following its acquisition of Pollen Robotics in April 2025, has launched Reachy Mini, an open-source, compact robot designed to facilitate experimentation in human-robot interaction, creative coding, and AI. Standing 11 inches tall and weighing 3.3 pounds, Reachy Mini features motorized head and body rotation, expressive animated antennas, and multimodal sensing via an integrated camera, microphones, and speakers, enabling rich AI-driven audio-visual interactions. The robot is offered as a kit in two versions, encouraging hands-on assembly and deeper mechanical understanding, and will provide over 15 robot behaviors at launch. A key advantage of Reachy Mini is its seamless integration with Hugging Face’s AI ecosystem, allowing users to utilize advanced open-source models for speech, vision, and personality development. It is fully programmable in Python, with planned future support for JavaScript and Scratch, catering to developers of varying skill levels. The robot’s hardware, software, and simulation environment are open source, encouraging community-driven extension and reuse.
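A behavior script for such a platform might look like the following. Note that the module, class, and every method here are hypothetical placeholders meant to convey the Python programming model the article describes; consult the actual Hugging Face / Pollen Robotics SDK for the real API.

```python
from reachy_mini import ReachyMini   # hypothetical import, not the real SDK

robot = ReachyMini()

def greet():
    robot.antennas.wiggle()                        # expressive antenna animation (hypothetical)
    robot.head.look_at(robot.camera.last_face())   # face the detected person (hypothetical)
    robot.speaker.say("Hello! I'm Reachy Mini.")   # speech output (hypothetical)

greet()
```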
robot, embodied-AI, open-source-robotics, human-robot-interaction, AI-powered-robot, programmable-robot, Hugging-Face-robotics
Robot Talk Episode 125 – Chatting with robots, with Gabriel Skantze - Robohub
In episode 125 of the Robot Talk podcast, Claire interviews Gabriel Skantze, a Professor of Speech Communication and Technology at KTH Royal Institute of Technology. Skantze specializes in conversational AI and human-robot interaction, focusing on creating natural face-to-face conversations between humans and robots. His research integrates both verbal and non-verbal communication elements, such as prosody, turn-taking, feedback, and joint attention, to improve the fluidity and naturalness of spoken interactions with robots. Skantze also co-founded Furhat Robotics in 2014, where he continues to contribute as Chief Scientist. Furhat Robotics develops social robots designed to engage in human-like conversations, leveraging Skantze’s expertise in computational models of spoken interaction. The episode highlights ongoing advancements in conversational systems and the challenges involved in making robot communication more natural and effective, emphasizing the importance of combining multiple communication cues to enhance human-robot interaction.
robot, robotics, conversational-AI, human-robot-interaction, speech-communication, autonomous-machines, Furhat-Robotics
Tesla sues former Optimus engineer over alleged trade secret theft
Tesla has filed a lawsuit against Zhongjie “Jay” Li, a former engineer in its Optimus humanoid robotics program, accusing him of stealing trade secrets related to advanced robotic hand sensors. Li, who worked at Tesla from August 2022 to September 2024, allegedly downloaded confidential information onto personal devices and conducted research on humanoid robotic hands and startup funding sources during his final months at the company. Shortly after his departure, Li founded a startup called Proception, which claims to have developed advanced humanoid robotic hands resembling Tesla’s designs. The complaint highlights that Proception was incorporated less than a week after Li left Tesla and publicly announced its achievements within five months, raising concerns about the misuse of Tesla’s proprietary technology. Tesla’s Optimus program, launched in 2021, has faced development challenges and delays, with Elon Musk indicating in mid-2024 that the company would continue work on the project despite earlier setbacks. The lawsuit underscores ongoing tensions in the competitive field of humanoid robotics.
robot, humanoid-robotics, Tesla-Optimus, robotic-hand-sensors, trade-secret-theft, robotics-startup, human-robot-interaction
Sensitive skin to help robots detect information about surroundings
Researchers from the University of Cambridge and University College London have developed a highly sensitive, low-cost, and durable robotic skin that can detect various types of touch and environmental information similarly to human skin. This flexible, conductive skin is made from a gelatine-based hydrogel that can be molded into complex shapes, such as a glove for robotic hands. Unlike traditional robotic touch sensors that require multiple sensor types for different stimuli, this new skin acts as a single sensor capable of multi-modal sensing, detecting taps, temperature changes, cuts, and multiple simultaneous touches through over 860,000 tiny conductive pathways. The team employed a combination of physical testing and machine learning to interpret signals from just 32 electrodes placed at the wrist, enabling the robotic skin to process more than 1.7 million data points across the hand. Tests included exposure to heat, gentle and firm touches, and even cutting, with the collected data used to train the system to recognize different types of contact efficiently. While not as sensitive as human skin, the design offers a durable, low-cost route to multi-modal tactile sensing for robots.
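The interpretation step, mapping 32 electrode channels to contact types, is essentially a supervised classification problem, as the toy sketch below illustrates with random placeholder data. The team's actual features, labels, and model are not specified in the summary.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_ELECTRODES = 32                 # electrodes at the wrist (from the article)

# Placeholder training data: random readings and invented contact labels
X = np.random.rand(1000, N_ELECTRODES)
y = np.random.choice(["tap", "firm_press", "heat", "cut"], size=1000)

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict(X[:3]))         # classify three new contact events
```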
robotics, robotic-skin, sensors, flexible-materials, conductive-hydrogel, multi-modal-sensing, human-robot-interaction
Interview with Amar Halilovic: Explainable AI for robotics - Robohub
Amar Halilovic, a PhD student at Ulm University in Germany, is conducting research on explainable AI (XAI) for robotics, focusing on how robots can generate explanations of their actions—particularly in navigation—that align with human preferences and expectations. His work involves developing frameworks for environmental explanations, especially in failure scenarios, using black-box and generative methods to produce textual and visual explanations. He also studies how to plan explanation attributes such as timing, representation, and duration, and is currently exploring dynamic selection of explanation strategies based on context and user preferences. Halilovic finds it particularly interesting how people interpret robot behavior differently depending on urgency or failure context, and how explanation expectations shift accordingly. Moving forward, he plans to extend his framework to enable real-time adaptation, allowing robots to learn from user feedback and adjust explanations on the fly. He also aims to conduct more user studies to validate the effectiveness of these explanations in real-world human-robot interaction settings. His motivation for studying explainable robot navigation stems from a broader interest in human-machine interaction and the importance of understandable AI for trust and usability. Before his PhD, Amar studied Electrical Engineering and Computer Science in Bosnia and Herzegovina and Sweden. Outside of research, he enjoys traveling and photography and values building a supportive network of mentors and peers for success in doctoral studies. His interdisciplinary approach combines symbolic planning and machine learning to create context-sensitive, explainable robot systems that adapt to diverse human needs.
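A context-driven choice of explanation attributes of the kind Halilovic describes can be prototyped as a simple rule table, as below. The rules and context fields are invented for illustration, not his framework.

```python
def choose_explanation(context: dict) -> dict:
    """Pick explanation attributes (timing, representation, duration)
    from a context dict; thresholds and fields are illustrative."""
    urgent = context.get("urgency", 0.0) > 0.7
    failed = context.get("failure", False)
    return {
        "timing": "immediate" if (urgent or failed) else "on_request",
        "representation": "visual" if context.get("user_is_nearby") else "textual",
        "duration": "brief" if urgent else "detailed",
    }

print(choose_explanation({"urgency": 0.9, "user_is_nearby": True}))
```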
robotics, explainable-AI, human-robot-interaction, robot-navigation, AI-research, PhD-research, autonomous-robots
Pepper humanoid robot powered by ChatGPT conducts real-world interaction
Researchers from the University of Canberra showcased Pepper, a humanoid robot integrated with ChatGPT, at an Australian innovation festival to study public reactions to AI-powered social robots in real-world settings. Pepper captures audio from users, transcribes it, generates responses via ChatGPT, and communicates back through text-to-speech. The trial involved 88 participants who interacted with Pepper, many for the first time, providing feedback that revealed a broad spectrum of emotions including curiosity, amusement, frustration, and unease. The study underscored the importance of first impressions and real-world contexts in shaping societal acceptance of humanoid robots, especially as they become more common in sectors like healthcare, retail, and education. Key findings highlighted four main themes: user suggestions for improvement, expectations for human-like interaction, emotional responses, and perceptions of Pepper’s physical form. Participants noted a disconnect between Pepper’s human-like appearance and its limited interactive capabilities, such as difficulties in recognizing facial expressions and following social norms like turn-taking. Feedback also pointed to technical and social challenges, including the need for faster responses, greater cultural and linguistic inclusivity—particularly for Indigenous users—and improved accessibility. The study emphasizes that testing social robots “in the wild” provides richer, human-centered insights into how society may adapt to embodied AI companions beyond controlled laboratory environments.
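One plausible implementation of the described loop, sketched with the OpenAI Python client: transcribe captured audio, query a chat model, and forward the reply to a text-to-speech stage. The model names are assumptions (the article specifies only "ChatGPT"), and speak() here is a print stand-in for Pepper's TTS engine.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def speak(text: str) -> None:
    print("Pepper says:", text)  # stand-in for Pepper's text-to-speech engine

def pepper_turn(audio_file_path: str) -> str:
    # 1. Transcribe the captured audio
    with open(audio_file_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)
    # 2. Generate a reply with a chat model (model choice is an assumption)
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are Pepper, a friendly social robot."},
            {"role": "user", "content": transcript.text},
        ],
    )
    text = reply.choices[0].message.content
    # 3. Speak the reply back to the visitor
    speak(text)
    return text
```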
robot, humanoid-robot, ChatGPT, AI-powered-robots, human-robot-interaction, social-robotics, SoftBank-Robotics
Congratulations to the #ICRA2025 best paper award winners - Robohub
The 2025 IEEE International Conference on Robotics and Automation (ICRA), held from May 19-23 in Atlanta, USA, announced its best paper award winners and finalists across multiple categories. The awards recognized outstanding research contributions in areas such as robot learning, field and service robotics, human-robot interaction, mechanisms and design, planning and control, and robot perception. Each category featured a winning paper along with several finalists, highlighting cutting-edge advancements in robotics. Notable winners include "Robo-DM: Data Management for Large Robot Datasets" by Kaiyuan Chen et al. for robot learning, "PolyTouch: A Robust Multi-Modal Tactile Sensor for Contact-Rich Manipulation Using Tactile-Diffusion Policies" by Jialiang Zhao et al. for field and service robotics, and "Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition" by Shengcheng Luo et al. for human-robot interaction. Other winning papers addressed topics such as soft robot worm behaviors, robust sequential task solving via dynamically composed gradient descent, and metrics-aware covariance for stereo visual odometry. The finalists presented innovative work ranging from drone detection to adaptive navigation and assistive robotics, reflecting the broad scope and rapid progress in the robotics field showcased at ICRA 2025.
robotics, robot-learning, human-robot-interaction, tactile-sensors, robot-automation, soft-robotics, robot-navigation
Why Intempus thinks robots should have a human physiological state
robot, robotics, AI, emotional-intelligence, human-robot-interaction, Intempus, machine-learning
What’s coming up at #ICRA2025?
robot, robotics, automation, ICRA2025, human-robot-interaction, soft-robotics, multi-robot-systems
AI model enables controlling robots with verbal commands
robot, AI, MotionGlot, machine-learning, robotics, human-robot-interaction, automation
Robot Talk Episode 110 – Designing ethical robots, with Catherine Menon
robot-ethics, assistive-technology, autonomous-systems, AI-safety, human-robot-interaction, ethical-design, public-trust-in-AI