
Popular AI models aren’t ready to safely run robots, say CMU researchers - The Robot Report
Source: roboticsbusinessreview
Author: @therobotreport
Published: 11/30/2025

Researchers from Carnegie Mellon University and King’s College London have found that the popular large language models (LLMs) currently powering robots are unsafe for general-purpose, real-world use, especially in settings involving human interaction. Their study, published in the International Journal of Social Robotics, evaluated how LLM-driven robots respond when given access to sensitive personal information such as gender, nationality, or religion. All tested models exhibited discriminatory behavior, failed critical safety checks, and approved commands that could lead to serious physical harm, including removing mobility aids, brandishing weapons, or invading privacy.

The researchers conducted controlled tests simulating everyday scenarios such as kitchen assistance and eldercare, incorporating harmful instructions drawn from documented cases of technology abuse. They emphasized that these LLM-driven robots lack reliable mechanisms to refuse or redirect dangerous commands, posing significant interactive safety risks.

Given these shortcomings, the team called for robust, independent safety certification for AI-driven robots, comparable to standards in aviation or medicine, and warned companies to exercise caution when deploying LLM-controlled robots in settings involving people.

Tags

robot, artificial-intelligence, large-language-models, robot-safety, human-robot-interaction, discrimination, robotics-research