RIEM News

Humanoid robot masters lip-sync, predicts face reaction with new system

Source: Interesting Engineering
Author: @IntEngineering
Published: 1/14/2026

Researchers at Columbia University’s Creative Machines Lab have developed an advanced humanoid robot named Emo that can synchronize lifelike lip movements with speech audio and anticipate human facial expressions in real time. Emo features significant hardware improvements over its predecessor, Eva, including 26 actuators for asymmetric facial expressions and flexible silicone skin manipulated by magnets for precise control.

Equipped with high-resolution RGB cameras in its eyes, Emo uses a dual neural network framework: one model predicts its own facial movements, while another anticipates the human interlocutor’s expressions. This allows Emo to perform coexpressions, mirroring human facial reactions before they fully manifest, across multiple languages, including ones absent from its training data.

The system’s predictive model, trained on 970 videos from 45 participants, analyzes subtle initial facial changes to forecast target expressions with high speed and accuracy, running at 650 frames per second. The inverse model executes motor commands at 8,000 frames per second, enabling Emo to generate facial expressions within 0.002 seconds.
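The dual-network pipeline described above can be sketched as a simple perceive–predict–actuate loop. This is a minimal illustrative sketch, not the authors' implementation: the function names, the toy linear extrapolation, and the actuator mapping are all assumptions; only the 26-actuator count and the two-model split come from the article.

```python
# Hypothetical sketch of Emo's dual-model coexpression loop.
# Names and math are illustrative assumptions, not the published system.

NUM_ACTUATORS = 26  # from the article: 26 actuators for asymmetric expressions


def predict_target_expression(early_landmarks):
    """Stand-in for the predictive model (reported ~650 fps): extrapolate
    subtle initial facial changes into a forecast target expression."""
    # Toy extrapolation: amplify small early deviations from neutral (0.0),
    # clamped to a normalized [0, 1] expression intensity.
    return [min(1.0, 2.5 * x) for x in early_landmarks]


def inverse_model(target_expression):
    """Stand-in for the inverse model (reported ~8,000 fps): map a desired
    expression vector to per-actuator motor commands."""
    # Toy mapping: tile the expression vector across the 26 actuators.
    return [target_expression[i % len(target_expression)]
            for i in range(NUM_ACTUATORS)]


def coexpression_step(early_landmarks):
    """One perceive -> predict -> actuate cycle: mirror the human's
    expression before it fully forms."""
    target = predict_target_expression(early_landmarks)
    return inverse_model(target)


# Usage: faint onset of a smile as small positive landmark deltas.
commands = coexpression_step([0.1, 0.2, 0.05])
print(len(commands))  # one motor command per actuator
```

The key design point the article implies is the separation of concerns: the forward/predictive model handles anticipation from visual cues, while the inverse model handles the expression-to-motor mapping, letting each run at its own rate.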

Tags

robot, humanoid-robot, facial-robotics, human-robot-interaction, motor-control, neural-networks, real-time-expression