Articles tagged with "robot-safety"
US firm unveils small humanoid robot butler for household chores
Fauna, a New York-based robotics startup, has unveiled Sprout, a compact humanoid robot designed specifically for everyday human environments such as homes, schools, offices, and service spaces. Unlike traditional industrial robots adapted for public use, Sprout is built from the ground up with safety, interaction, and accessibility as priorities. Standing 3.5 feet tall, it features a lightweight, soft exterior with quiet actuation and no sharp edges, allowing it to operate safely in close physical proximity to people without safety cages. Its simple one-degree-of-freedom grippers support basic tasks like fetching objects and hand-offs, and the robot is engineered to fall, crawl, and recover without damage. Sprout also incorporates an expressive face to facilitate intuitive, nonverbal human-robot communication. Fauna positions Sprout as a developer-centric platform, offering whole-body behaviors such as walking, kneeling, crawling, compliant physical interaction, and fall recovery, alongside core capabilities like teleoperation, mapping, navigation, and expressive interaction primitives.
Tags: robot, humanoid-robot, service-robot, human-robot-interaction, robotics-platform, home-automation, robot-safety

Popular AI models aren’t ready to safely run robots, say CMU researchers - The Robot Report
Researchers from Carnegie Mellon University and King’s College London have found that popular large language models (LLMs) currently powering robots are unsafe for general-purpose, real-world use, especially in settings involving human interaction. Their study, published in the International Journal of Social Robotics, evaluated how LLM-driven robots respond when given access to sensitive personal information such as gender, nationality, or religion. All tested models exhibited discriminatory behavior, failed critical safety checks, and approved commands that could lead to serious physical harm, including removing mobility aids, brandishing weapons, or invading privacy. The researchers ran controlled tests simulating everyday scenarios like kitchen assistance and eldercare, incorporating harmful instructions drawn from documented cases of technology abuse. They emphasized that these LLM-driven robots lack reliable mechanisms to refuse or redirect dangerous commands, posing significant interactive safety risks. Given these shortcomings, the team called for robust, independent safety certification for AI-driven robots, comparable to standards in aviation or medicine, and warned companies to exercise caution when using LLMs as the sole decision-making layer for robots that interact with people.
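To make the missing safeguard concrete, here is a minimal, hypothetical sketch of a deterministic safety gate that screens LLM-proposed actions before execution; the action categories and the ProposedAction interface are invented for illustration and are not from the study.

```python
# Hypothetical sketch (not from the study): a deterministic gate that screens
# LLM-proposed robot actions before execution, illustrating the kind of
# refusal mechanism the researchers found missing.
from dataclasses import dataclass

# Action classes the gate never passes through, regardless of the LLM's output.
# These mirror the harms named in the study's findings.
DENYLIST = {
    "remove_mobility_aid",   # e.g., taking away a wheelchair or cane
    "brandish_object",       # wielding knives or other weapon-like items
    "covert_recording",      # photographing or recording without consent
}

@dataclass
class ProposedAction:
    name: str
    target: str
    rationale: str  # the LLM's stated justification, kept for audit logs

def safety_gate(action: ProposedAction) -> tuple[bool, str]:
    """Return (approved, reason). Denied actions are refused outright
    rather than rephrased or delegated back to the model."""
    if action.name in DENYLIST:
        return False, f"refused: '{action.name}' is a prohibited action class"
    return True, "passed deterministic checks"

# Example: an unsafe command of the kind the study found LLMs willing to approve.
approved, reason = safety_gate(
    ProposedAction("remove_mobility_aid", target="user_wheelchair",
                   rationale="user asked me to tidy the room")
)
print(approved, reason)  # False refused: ...
```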
Tags: robot, artificial-intelligence, large-language-models, robot-safety, human-robot-interaction, discrimination, robotics-research

Figure humanoid robot hand showed skull-cracking force in trials, whistleblower warns
Figure AI, a leading California-based humanoid robot company, is facing a lawsuit from its former head of product safety, Robert Gruendel, who claims he was wrongfully terminated after raising serious safety concerns. Gruendel warned top executives, including CEO Brett Adcock and chief engineer Kyle Edelberg, that the company’s robots possessed enough force to cause severe physical harm, citing an incident in which a malfunctioning robot gouged a steel refrigerator door. Despite his documented safety complaints, Gruendel alleges his warnings were dismissed and he was fired under the pretext of a vague “change in business direction.” Figure AI disputes these claims, stating Gruendel was terminated for poor performance, and plans to challenge the allegations in court. Gruendel seeks economic, compensatory, and punitive damages, emphasizing that California law protects whistleblowers who report unsafe practices. The lawsuit also accuses Figure AI of undermining a safety plan Gruendel developed for major investors by removing critical elements that had influenced their decision to fund the company.
Tags: robot, humanoid-robots, robot-safety, Figure-AI, product-safety, whistleblower, robotics-industry

Disney trains robots to fall, roll, and land safely without damage
Disney researchers, collaborating with university engineers, have developed a reinforcement learning-based system that enables bipedal robots to fall safely by controlling their landing poses to protect sensitive components. Traditional robots often suffer damage from uncontrolled falls due to stiff joints or flailing limbs, leading to costly repairs. Instead of resisting gravity, the new approach teaches robots to absorb impacts by rolling or shifting limbs during a fall so they land in stable, damage-minimizing positions, prioritizing damage prevention over strict balance control. Training involved thousands of simulated falls with randomized velocities and directions, allowing the robot to learn a variety of safe landing strategies. A scoring system rewarded moves that reduced impact forces and protected vulnerable parts like the head and battery pack, while penalizing erratic motions. The researchers generated 24,000 stable poses, including artist-designed ones within realistic joint limits, to expand the robot’s repertoire of safe landings. After two days of training on powerful GPUs, the learned policy was transferred to a real 16-kilogram robot.
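For a sense of how such a scoring system might be shaped, here is a minimal reward-function sketch in the spirit of the summary; the weights, body-part names, and state fields are assumptions, not Disney's published implementation.

```python
# Minimal sketch of a fall-recovery reward: reward low impact forces and
# protected vulnerable parts, penalize flailing. All constants are invented
# for illustration.
import numpy as np

PROTECTED = ("head", "battery_pack")  # parts the policy should keep off the ground

def fall_reward(contact_forces: dict[str, float],
                joint_velocities: np.ndarray,
                landed_stable: bool) -> float:
    # Penalize total force transmitted through all contact points.
    impact_penalty = 0.01 * sum(contact_forces.values())
    # Heavily penalize any contact on protected components.
    protected_penalty = 5.0 * sum(contact_forces.get(p, 0.0) for p in PROTECTED)
    # Penalize erratic motion: large joint velocities during the fall.
    erratic_penalty = 0.1 * float(np.linalg.norm(joint_velocities))
    # Bonus for ending in one of the precomputed stable poses.
    stable_bonus = 10.0 if landed_stable else 0.0
    return stable_bonus - impact_penalty - protected_penalty - erratic_penalty

# One simulated landing: moderate forces on the forearms, none on the head.
r = fall_reward({"left_forearm": 120.0, "right_forearm": 90.0},
                np.zeros(12), landed_stable=True)
print(r)
```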
Tags: robotics, reinforcement-learning, robot-safety, bipedal-robots, robot-fall-protection, Disney-research, robot-simulation

How Iowa State Lab helps humanoid robots master balance and safety
Engineers at Iowa State University are advancing humanoid robots' physical intelligence, focusing on self-balancing and the precise execution of complex tasks such as walking, grasping, and navigation. These capabilities are essential for robots to function effectively in real-world human environments. Bowen Weng, an assistant professor and roboticist at Iowa State, emphasizes that physical intelligence, which comes automatically to humans, is a remarkable skill that robots must develop to operate smoothly and safely alongside people. Humanoid robots, designed with human-like forms, aim to assist in roles including research, hazardous jobs, and everyday tasks, while addressing societal concerns about automation and human-robot interaction. Weng co-authored two significant studies in this field. The first evaluates the stability and performance of two commercial quadruped robots, Ghost Robotics Vision 60 and Boston Dynamics Spot, under dynamic naval conditions, finding that Vision 60 exhibits superior balance and lower torque demands. The second focuses on the importance of repeatable and reliable risk assessment protocols for robots.
Tags: robotics, humanoid-robots, robot-balance, physical-intelligence, automation, robot-stability, robot-safety

The world is just not quite ready for humanoids yet
The article highlights skepticism from experts about the current state and near-term prospects of humanoid robots, despite significant investment and hype in the sector. Rodney Brooks, a renowned roboticist and iRobot co-founder, warns of an investment bubble, emphasizing that humanoids still lack the dexterity and fine motor skills necessary for practical use. Other AI and robotics experts echo this caution, noting that widespread adoption of humanoid robots is unlikely for several years, if not over a decade. Fady Saad, a robotics-focused venture capitalist, points to limited market opportunities beyond niche applications like space exploration and raises serious safety concerns about humanoids operating alongside humans, especially in homes. The timeline for achieving functional, commercially viable humanoid robots remains uncertain, complicating investment decisions given the lifecycles of venture capital funds. Nvidia’s AI research leaders compare the current enthusiasm for humanoids to the early excitement around self-driving cars, which have yet to achieve full global scalability despite years of development; the sheer complexity of humanoid robotics suggests a similarly long road to maturity.
Tags: robotics, humanoid-robots, artificial-intelligence, robotics-investment, robot-safety, automation, robotics-technology

Famed roboticist says humanoid robot bubble is doomed to burst
Renowned roboticist Rodney Brooks, co-founder of iRobot and former MIT researcher, warns that the current enthusiasm around humanoid robots is overly optimistic and likely to collapse. He criticizes companies like Tesla and Figure for relying on teaching robots dexterity through videos of humans performing tasks, calling this method “pure fantasy thinking.” Brooks highlights the complexity of the human hand, which contains about 17,000 specialized touch receptors, a level of tactile sophistication that no robot currently approaches. Unlike advances in speech recognition and image processing, which benefited from decades of data collection, robotics lacks a comparable foundation of touch data. Brooks also raises safety concerns, noting that full-sized humanoid robots consume large amounts of energy to maintain balance, making falls dangerous, and that larger robots pose disproportionately greater risks because the energy released in a fall grows steeply with size. Predicting the future of robotics, Brooks believes that successful robots in 15 years will likely abandon the human form, instead featuring wheels, multiple arms, and specialized sensors tailored to their specific tasks.
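To make the size argument concrete, a back-of-envelope scaling sketch (my gloss, not Brooks's exact figures):

```latex
% For geometrically similar robots of characteristic size L:
\[
  m \propto L^{3}, \qquad h_{\mathrm{CoM}} \propto L
  \quad\Longrightarrow\quad
  E_{\mathrm{fall}} = m\,g\,h_{\mathrm{CoM}} \propto L^{4}
\]
% Doubling a robot's stature therefore multiplies the energy released in a
% fall by roughly 2^4 = 16, which is why a full-sized humanoid falling near
% people is qualitatively more dangerous than a small one.
```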
Tags: robot, humanoid-robots, robotics, machine-learning, robot-safety, robot-dexterity, Rodney-Brooks

How to make robots predictable with a priority based architecture and a new legal model - The Robot Report
The article discusses the challenge of ensuring predictable and safe behavior in increasingly autonomous robots, such as Tesla's Optimus humanoid and Waymo's driverless cars. Traditional robotic control systems rely on predefined scripts or reactive responses to commands, which can lead to conflicting actions and hesitation in complex, dynamic environments. Such unpredictability poses significant safety risks, especially when robots receive simultaneous or contradictory commands or when technical faults occur. To address these issues, the author’s team developed a priority-based control architecture that moves beyond simple stimulus-response behavior. The system evaluates every event through mission and subject filters, considering environmental context and potential consequences before execution. The architecture features two interlinked hierarchies: a mission hierarchy that ranks goals from fundamental safety rules (e.g., “Do not harm a human”) down to user-set goals and current tasks, and a hierarchy of interaction subjects that prioritizes commands by source, giving highest priority to owners or operators and lower priority to external parties. The aim is to let robots act predictably and safely even when commands conflict or faults arise, as sketched below.
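As an illustration of how two-hierarchy arbitration could work, here is a minimal sketch; the enum levels, ordering, and tie-breaking are assumptions for illustration, not the authors' actual architecture.

```python
# Illustrative two-hierarchy arbitration: mission level dominates, the
# command's source breaks ties. All names and levels are invented.
from enum import IntEnum
from dataclasses import dataclass

class Mission(IntEnum):          # mission hierarchy: higher = more fundamental
    CURRENT_TASK = 1
    USER_SET_GOAL = 2
    SAFETY_RULE = 3              # e.g., "Do not harm a human"

class Subject(IntEnum):          # interaction-subject hierarchy
    EXTERNAL_PARTY = 1
    OPERATOR = 2
    OWNER = 3

@dataclass
class Command:
    description: str
    mission: Mission
    subject: Subject

def arbitrate(commands: list[Command]) -> Command:
    """Pick the single command to execute. Lower-priority conflicting
    commands are simply not executed, instead of causing hesitation."""
    return max(commands, key=lambda c: (c.mission, c.subject))

# A stranger's request loses to the owner's goal, and any task loses to a
# safety rule, regardless of who issued it.
winner = arbitrate([
    Command("fetch package", Mission.CURRENT_TASK, Subject.EXTERNAL_PARTY),
    Command("stop: person in path", Mission.SAFETY_RULE, Subject.OPERATOR),
    Command("vacuum living room", Mission.USER_SET_GOAL, Subject.OWNER),
])
print(winner.description)  # "stop: person in path"
```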
Tags: robotics, autonomous-robots, priority-based-control, Tesla-Optimus, robot-safety, humanoid-robots, autonomous-systems