RIEM News

Articles tagged with "embodied-AI"

  • TechCrunch Mobility: ‘Physical AI’ enters the hype machine

    The article from TechCrunch Mobility highlights the growing prominence of "physical AI" or "embodied AI" showcased at the 2026 Consumer Electronics Show (CES) in Las Vegas. With traditional U.S. automakers notably absent, the event was dominated by autonomous vehicle technology firms, Chinese automakers, and companies specializing in AI-driven robotics and automotive chips. Physical AI refers to AI systems integrated with sensors, cameras, and motor controls that enable machines—such as humanoid robots, drones, and autonomous vehicles—to perceive and interact with the physical world. Hyundai, for example, featured a range of robots, including those from its subsidiary Boston Dynamics, and innovations like an autonomous vehicle charging robot and a four-wheel electric platform called Mobile Eccentric Droid (MobEd), set for production in 2026. The enthusiasm around humanoid robots was significant, with industry leaders like Mobileye’s Amnon Shashua acknowledging the hype while affirming the long-term reality and potential of humanoid robotics.

    robot, autonomous-vehicles, physical-AI, embodied-AI, robotics, electric-vehicles, sensors
  • ByteDance backs China’s new humanoid robot maker in funding round

    Chinese robotics startup X Square Robot has secured $143.3 million (1 billion yuan) in a Series A++ funding round led by major investors including ByteDance, HSG (formerly Sequoia Capital China), and government-backed firms such as Beijing Information Industry Development Investment Fund and Shenzhen Capital Group. Founded in 2023, X Square specializes in humanoid robots and embodied AI, aiming for applications in homes, hotels, and logistics. The company is known for its Quanta X1 and X2 wheeled humanoid robots with dexterous hands, powered by its proprietary vision–language–action (VLA) model called WALL-A. This model integrates world models and causal reasoning to enhance robots’ ability to generalize and perform complex tasks in unstructured environments without prior training. X Square’s product lineup includes the Quanta X1, a wheeled bimanual robot with 20 degrees of freedom and a working range of up to 1 meter, and the more advanced Quanta X2.

    robotics, humanoid-robots, embodied-AI, artificial-intelligence, robotics-startup, robotic-manipulation, autonomous-robots
  • China's ice cream-making humanoid robot wows crowds at US tech show

    At CES 2026 in Las Vegas, PaXini Tech showcased its tactile humanoid robot TORA-ONE performing a complete ice cream-making workflow autonomously, demonstrating the practical application of touch-driven intelligence beyond research settings. The company presented its full embodied intelligence stack, including advanced tactile sensors, robotic hands, humanoid platforms, and large-scale data systems. Originating from Japan’s Sugano Laboratory, PaXini focuses on enabling AI systems to understand the physical world through high-precision touch, force, and motion sensing. Central to PaXini’s technology are its independently developed tactile sensors, such as the PX-6AX-GEN3, which provide multidimensional force sensing with exceptional resolution and repeatability. These sensors, along with wrist and joint force sensing, allow robots to perceive contact similarly to human touch (a simplified grasp-control sketch follows the tags below). The company also introduced the DexH13 dexterous hand, featuring over a thousand tactile processing units, capable of delicate manipulation tasks like grasping irregular objects and turning knobs.

    robot, humanoid-robot, tactile-sensors, embodied-AI, robotics-technology, dexterous-robotic-hand, CES-2026
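
    The touch-driven grasping described above can be pictured as a minimal control sketch: close the hand until the normal-force reading indicates firm contact, then tighten only when shear readings suggest slip. The thresholds, step sizes, and function names are assumptions for illustration, not PaXini's actual interface.

    ```python
    TARGET_FORCE_N = 2.0    # assumed normal force for a stable grasp
    SLIP_DELTA_N = 0.3      # assumed shear spike that signals slip

    def grasp_step(normal_force: float, shear_force: float, grip_pos: float) -> float:
        """Return the next gripper position given current tactile readings."""
        if normal_force < TARGET_FORCE_N:
            return grip_pos + 0.001      # not yet in firm contact: keep closing
        if shear_force > SLIP_DELTA_N:
            return grip_pos + 0.0005     # tangential spike suggests slip: tighten
        return grip_pos                  # stable hold: do nothing
    ```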
  • Chinese humanoid robot achieves world’s first embroidery feat in demo

    On December 22, China’s TARS Robotics demonstrated a significant breakthrough in embodied artificial intelligence by showcasing a humanoid robot performing hand embroidery with both hands. The robot threaded a needle and stitched a logo on soft, flexible material with sub-millimeter precision, a task previously considered too delicate and complex for automation. This achievement addresses a long-standing challenge in robotics—ultra-fine manipulation involving precise vision, adaptive force control, and coordinated bimanual movement—opening new possibilities for automating intricate manual tasks such as wire harness assembly and handling soft materials in manufacturing. The success stems from TARS Robotics’ DATA AI PHYSICS approach, which integrates real-world data collection via their SenseHub platform, embodied AI modeling through the TARS AWE 2.0 World Engine, and physical robotic systems designed with minimal digital-to-physical gaps. This closed-loop system enables the AI to learn generalizable physical skills rather than isolated tasks, allowing the robots to reliably execute complex movements in real environments.

    robotics, humanoid-robot, AI-robotics, precision-automation, industrial-robotics, embodied-AI, TARS-Robotics
  • Generations in Dialogue: Embodied AI, robotics, perception, and action with Professor Roberto Martín-Martín - Robohub

    The article discusses the third episode of the AAAI podcast series "Generations in Dialogue: Bridging Perspectives in AI," which features a conversation between host Ella Lan and Professor Roberto Martín-Martín. The series aims to explore how generational experiences influence perspectives on AI, addressing challenges, opportunities, and ethical considerations in the field. In this episode, Martín-Martín shares insights from his childhood curiosity about technology to his current research focus on embodied AI, robotics, perception, and action. He emphasizes the importance of making robots accessible to everyone and discusses how machines can augment human capabilities, drawing inspiration from human cognition and interdisciplinary fields like psychology and cognitive science. Professor Roberto Martín-Martín is an Assistant Professor of Computer Science at the University of Texas at Austin, specializing in integrating robotics, computer vision, and machine learning to develop autonomous agents capable of real-world perception and action. His research covers a range of tasks from basic manipulation and navigation to complex activities such as cooking and mobile manipulation.

    robotics, embodied-AI, autonomous-agents, machine-learning, computer-vision, human-robot-interaction, mobile-manipulation
  • China firm gets funding to mass-produce embodied-AI humanoid robots

    Chinese robotics company RobotEra has secured nearly RMB 1 billion (approximately USD 140 million) in a Series A+ funding round led by Geely Capital, with participation from BAIC Capital, Alibaba Group, Haier Capital, and other global investors. The funding arrives with the company already holding around USD 70 million in commercial orders for 2025, signaling strong industrial confidence in RobotEra’s vision and product line. The company’s portfolio includes a dexterous robotic hand (XHAND1), a wheeled service robot, and a full-size bipedal humanoid robot (RobotEra L7), designed for diverse applications from industrial tasks to service deployment. The RobotEra L7 humanoid robot stands about 171 cm tall, weighs 65 kg, and features 55 degrees of freedom with joint torque up to 400 N·m. It can perform dynamic athletic movements such as sprinting at 14.4 km/h, 360° spins, and breakdancing maneuvers.

    robotics, humanoid-robots, embodied-AI, industrial-automation, robotic-hands, service-robots, AI-robotics
  • China’s Xiaomi taps ex-Musk engineer to advance robot hand tech

    China’s Xiaomi has hired Zach Lu Zeyu, a former senior robotics engineer from Elon Musk’s Tesla Optimus humanoid robot team, to lead the development of its dexterous robot hand technology. Lu’s expertise in dexterous grasping and tactile sensing—critical capabilities that enable robots to manipulate objects with human-like precision and sensitivity—signals Xiaomi’s strong commitment to advancing embodied AI and robotics. This move is part of Xiaomi’s broader strategy to become a major player in the global humanoid robotics market, following its initial ventures into electric vehicles and robotics prototypes such as a quadrupedal robot dog and a humanoid robot. Xiaomi’s recruitment drive includes over 200 robotics-related roles and recent hires like AI researcher Luo Fuli, underscoring its ambition to build a world-class robotics team. The company also released MiMo-Embodied, an open-source foundation model combining autonomous driving and embodied AI technologies. This expansion occurs amid a competitive U.S.-China race in humanoid robotics.

    robotics, humanoid-robots, dexterous-hand, tactile-sensing, Xiaomi, robotics-engineering, embodied-AI
  • Video: Speedy 'drone painter' covers 200 sqft per minute with ease

    Lucid Bots has introduced a new painting module for its Sherpa drone, enabling it to spray paint and coat building exteriors with remarkable speed and efficiency. Previously used for cleaning windows and exteriors, the Sherpa drone can now cover over 200 square feet per minute, operating up to 160 feet high with continuous power via a tether. This advancement allows a single operator to manage the drone easily, completing jobs up to three times faster and at about half the traditional cost, significantly improving productivity and safety in high-risk, labor-intensive exterior work. The Sherpa drone leverages embodied AI, which enables it to interact with the environment by adjusting for factors like wind and surface texture to apply paint evenly. This capability addresses critical labor shortages and safety concerns in the construction industry, where many skilled workers are retiring and tasks often involve hazardous conditions. The modular design of the painting attachment allows existing Sherpa users to upgrade without purchasing new equipment, facilitating adoption. Applications are already underway in stadium waterproofing and graffiti removal.

    robotics, drones, embodied-AI, construction-automation, industrial-painting, labor-safety, modular-robotics
  • Lucid Bots brings embodied AI to commercial painting - The Robot Report

    Lucid Bots Inc., a Charlotte-based robotics company founded in 2018, has introduced new painting and coating capabilities for its Sherpa Drone, originally designed for exterior building cleaning. The drone uses a power tether to stay aloft while lifting a hose from the ground to supply water or paint, with the paint reservoir remaining on the ground. The system features automation-assisted controls such as “Distance Lock,” which uses onboard sensors to maintain the optimal distance and angle between the spray nozzle and the surface, ensuring consistent coverage and minimizing overspray (a simplified control sketch follows the tags below). The drone’s design also incorporates military-grade nano-coatings to prevent paint from adhering to its surface, facilitating easy cleanup. Lucid Bots aims to address the growing demand for automation in large-scale commercial and industrial infrastructure projects amid significant labor shortages and safety concerns in construction. With over 40% of construction workers expected to retire by 2031, the company leverages embodied AI—robots capable of navigating and manipulating the physical world—to perform dangerous and demanding tasks like painting.

    robotics, drones, embodied-AI, automation, commercial-painting, industrial-robots, infrastructure-maintenance
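
    As a rough illustration of what a "Distance Lock"-style behavior involves, here is a minimal proportional-derivative loop that holds a fixed standoff distance from the surface using a range-sensor reading. The target distance, gains, limits, and names are assumptions for the sketch, not Lucid Bots' implementation.

    ```python
    TARGET_STANDOFF_M = 0.45   # assumed optimal nozzle-to-surface distance
    KP, KD = 1.2, 0.4          # illustrative PD gains

    def distance_lock_step(measured_m: float, prev_error_m: float, dt: float):
        """Return (velocity command toward/away from the wall, current error)."""
        error = measured_m - TARGET_STANDOFF_M
        d_error = (error - prev_error_m) / dt
        cmd = KP * error + KD * d_error
        cmd = max(-0.5, min(0.5, cmd))   # clamp the command for safety near the wall
        return cmd, error
    ```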
  • Diligent Robotics adds two members to AI advisory board - The Robot Report

    Diligent Robotics, known for its Moxi mobile manipulator used in hospitals, has expanded its AI advisory board by adding two prominent experts: Siddhartha Srinivasa, a robotics professor at the University of Washington, and Zhaoyin Jia, a distinguished engineer specializing in robotic perception and autonomy. The advisory board, launched in late 2023, aims to guide the company’s AI development with a focus on responsible practices and advancing embodied AI. The board includes leading academics and industry experts who provide strategic counsel as Diligent scales its Moxi robot deployments across health systems nationwide. Srinivasa brings extensive experience in robotic manipulation and human-robot interaction, having led research and development teams at Amazon Robotics and Cruise, and contributed influential algorithms and systems like HERB and ADA. Jia offers deep expertise in computer vision and large-scale autonomous systems from his leadership roles at Cruise, DiDi, and Waymo, focusing on safe and reliable AI deployment in complex environments.

    robotics, AI, healthcare-robots, autonomous-robots, human-robot-interaction, robotic-manipulation, embodied-AI
  • Icarus raises $6.1M to take on space’s “warehouse work” with embodied-AI robots

    Icarus, a startup founded by Ethan Barajas and Jamie Palmer, has raised $6.1 million in seed funding to develop intelligent, dexterous robots aimed at automating the labor-intensive cargo logistics tasks aboard the International Space Station (ISS). After interviewing astronauts, the founders identified that astronauts, trained experts with advanced backgrounds, spend much of their time unpacking, repacking, and stowing cargo arriving every 60 days rather than conducting scientific experiments. To address this inefficiency, Icarus is creating robots equipped with two arms and jaw grippers designed specifically for cargo handling tasks, starting with simpler robotic designs rather than humanoid forms to achieve about 80% of the needed dexterity. The company has demonstrated promising results with a terrestrial teleoperation demo involving unzipping and repacking real ISS cargo bags and plans to conduct flight testing through a parabolic flight campaign followed by a one-year demonstration aboard the ISS via Voyager Space’s commercial Bishop airlock.

    robotics, embodied-AI, space-robots, cargo-logistics, teleoperation, bimanual-manipulation, space-technology
  • X Square Robot debuts foundation model for robotic butler after $100M Series A - The Robot Report

    X Square Robot, a Shenzhen-based startup founded in 2023, has raised $100 million in Series A+ funding and introduced Wall-OSS, an open-source foundational AI model designed for robotic platforms, alongside its Quanta X2 humanoid robot. The company aims to advance household humanoid robotics by addressing key limitations in current robotic AI, such as over-reliance on task-specific training and excessive focus on bipedal locomotion. Instead, X Square Robot emphasizes generalized training in manipulation with robotic hands and reasoning across diverse robot forms to enable robots to perform unpredictable real-world tasks, like serving food, which traditional warehouse-focused training does not prepare them for. Wall-OSS is built on what X Square Robot claims to be the world’s largest embodied intelligence dataset and is designed to overcome challenges like catastrophic forgetting (loss of previously learned knowledge when training on new data) and modal decoupling (misalignment of vision, language, and action); a generic replay-based mitigation for forgetting is sketched after the tags below. The multimodal model is trained on vision-language-action data.

    robotics, humanoid-robots, embodied-AI, foundation-model, robotic-butler, AI-training, open-source-robotics
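
    Catastrophic forgetting, which the summary says Wall-OSS is designed to overcome, is commonly mitigated with experience replay: mixing a fraction of earlier-task samples into every new-task batch so previously learned skills keep receiving gradient signal. The sketch below shows only this generic batching idea and is not X Square Robot's method.

    ```python
    import random

    def mixed_batch(new_task_data, replay_buffer, batch_size=32, replay_frac=0.25):
        """Sample a training batch that mixes new-task and earlier-task data."""
        n_replay = min(int(batch_size * replay_frac), len(replay_buffer))
        batch = random.sample(new_task_data, batch_size - n_replay)
        batch += random.sample(replay_buffer, n_replay)  # keep old skills alive
        random.shuffle(batch)
        return batch
    ```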
  • FieldAI raises $405M to build universal robot brains

    FieldAI, a robotics AI company, announced a $405 million funding raise to develop universal "robot brains" capable of controlling diverse physical robots across varied real-world environments. The latest funding round, including a $314 million tranche co-led by Bezos Expeditions, Prysm, and Temasek, adds to backing from investors such as Khosla Ventures and Intel Capital. FieldAI’s core innovation lies in its "Field Foundation Models," which integrate physics-based understanding into embodied AI—AI that governs robots physically navigating environments—enabling robots to quickly learn, adapt, and manage risk and safety in new settings. This physics-informed approach contrasts with traditional AI models that often lack risk awareness, making FieldAI’s robots better suited for complex and potentially hazardous environments. Founder and CEO Ali Agha emphasized that their goal is to create a single, general-purpose robot brain that can operate across different robot types and tasks, with a built-in confidence measure to assess decision reliability and manage safety thresholds (a minimal gating sketch follows the tags below).

    robot, artificial-intelligence, embodied-AI, robotics-safety, robot-learning, AI-models, robotics-technology
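
    The "built-in confidence measure" described above reads like confidence-gated action selection: execute the policy's action only when its self-reported reliability clears a safety threshold, otherwise fall back to a safe behavior. The policy stand-in, threshold, and command names below are assumptions, not FieldAI's design.

    ```python
    CONFIDENCE_THRESHOLD = 0.8   # assumed safety threshold

    def propose_action(observation):
        """Stand-in for a foundation-model policy that also self-reports
        a reliability score in [0, 1]."""
        return {"cmd": "advance"}, 0.65

    def act_safely(observation):
        action, confidence = propose_action(observation)
        if confidence < CONFIDENCE_THRESHOLD:
            return {"cmd": "stop_and_replan"}   # low reliability: degrade gracefully
        return action
    ```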
  • Ai2 says new MolmoAct 7B model brings AI into the physical world - The Robot Report

    The Allen Institute for AI (Ai2) has introduced MolmoAct 7B, an embodied AI model designed to bring advanced artificial intelligence into the physical world by enabling robots to perceive and interact with their surroundings more intelligently. Unlike traditional models that convert language instructions directly into movements, MolmoAct processes 2D visual inputs to generate 3D spatial plans, allowing robots to understand spatial relationships and plan actions accordingly. This model emphasizes transparency, safety, and adaptability, providing step-by-step visual reasoning that lets users monitor and adjust robot behavior in real time. Ai2 describes MolmoAct as an “action reasoning model” (ARM) that interprets high-level natural language commands and breaks them down into a sequence of spatially grounded decisions, enabling complex tasks like sorting objects to be executed as structured sub-tasks (a toy version of this pipeline follows the tags below). MolmoAct 7B was trained on an open dataset of approximately 12,000 robot episodes captured in real-world household environments, such as kitchens and bedrooms, showcasing diverse tasks.

    robot, embodied-AI, MolmoAct-7B, spatial-reasoning, action-reasoning-model, AI-robotics, visual-waypoint-planning
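
    The action-reasoning flow the summary describes, instruction to inspectable waypoint trace to motion commands, can be caricatured in a few lines. The waypoints here are hard-coded stand-ins for what the model would infer from pixels and language; the point is only that the intermediate trace is visible and editable before execution.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Waypoint:
        x: float
        y: float
        z: float
        note: str   # human-readable step in the reasoning trace

    def decompose(instruction: str):
        """Hard-coded stand-in for the model's 2D-image -> 3D-plan step."""
        return [
            Waypoint(0.30, 0.10, 0.25, "move above red block"),
            Waypoint(0.30, 0.10, 0.05, "descend and grasp"),
            Waypoint(0.55, -0.20, 0.25, "carry to bin"),
            Waypoint(0.55, -0.20, 0.10, "release"),
        ]

    def execute(trace):
        for wp in trace:   # users could inspect or edit the trace before this loop
            print(f"-> ({wp.x:.2f}, {wp.y:.2f}, {wp.z:.2f}) {wp.note}")

    execute(decompose("put the red block in the bin"))
    ```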
  • ShengShu Technology launches Vidar multi-view physical AI training model - The Robot Report

    ShengShu Technology, a Beijing-based company founded in March 2023 specializing in multimodal large language models, has launched Vidar, a multi-view physical AI training model designed to accelerate robot development. Vidar, which stands for “video diffusion for action reasoning,” leverages a combination of limited physical training data and generative video simulations to train embodied AI models. Unlike traditional methods that rely heavily on costly, hardware-dependent physical data collection or purely simulated environments lacking real-world variability, Vidar creates lifelike multi-view virtual training environments. This approach allows for scalable, robust training of AI agents capable of real-world tasks, reducing physical-data requirements to between 1/80 and 1/1,200 of those of industry-leading models. Built on ShengShu’s flagship video-generation platform Vidu, Vidar employs a modular two-stage learning architecture that separates perceptual understanding from motor control (a toy version of the split follows the tags below). In the first stage, large-scale general and embodied video data train the perceptual component.

    robot, embodied-AI, AI-training-model, simulation, generative-video, robotics-development, physical-AI
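
    A minimal sketch of the two-stage split the summary describes: a perceptual stage that imagines future video frames, and a motor stage that recovers the actions explaining consecutive frames. Both functions are toy stand-ins under that assumption, not ShengShu's architecture or API.

    ```python
    import numpy as np

    def predict_future_frames(context: np.ndarray, horizon: int) -> np.ndarray:
        """Stage 1 stand-in: a video-diffusion model would generate novel frames."""
        return np.repeat(context[-1:], horizon, axis=0)   # toy: freeze the last frame

    def inverse_dynamics(frame_t: np.ndarray, frame_t1: np.ndarray) -> np.ndarray:
        """Stage 2 stand-in: recover the action explaining the frame change."""
        return np.zeros(7)   # e.g., a 7-DoF arm command

    context = np.zeros((4, 64, 64, 3))            # 4 observed context frames
    future = predict_future_frames(context, 8)    # imagined multi-view rollout
    actions = [inverse_dynamics(a, b) for a, b in zip(future[:-1], future[1:])]
    ```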
  • TRI: pretrained large behavior models accelerate robot learning

    The Toyota Research Institute (TRI) has advanced the development of Large Behavior Models (LBMs) to accelerate robot learning, demonstrating that a single pretrained LBM can learn hundreds of tasks and acquire new skills using 80% less training data. LBMs are trained on large, diverse datasets of robot manipulation, enabling general-purpose robots to perform complex, long-horizon behaviors such as installing a bike rotor. TRI’s study involved training diffusion-based LBMs on nearly 1,700 hours of robot data and conducting thousands of real-world and simulation rollouts, revealing that LBMs consistently outperform policies trained from scratch, require 3-5 times less data for new tasks, and improve steadily as more pretraining data is added. TRI’s LBMs use a diffusion transformer architecture with multimodal vision-language encoders and a transformer denoising head, processing inputs from wrist and scene cameras, proprioception, and language prompts to predict short action sequences (a toy denoising loop follows the tags below). The training data combines real-world teleoperation data with simulated demonstrations.

    robotics, large-behavior-models, robot-learning, pretrained-models, Toyota-Research-Institute, autonomous-robots, embodied-AI
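
    Diffusion-policy inference of the kind the summary attributes to TRI's LBMs can be sketched as iterative denoising of a short action chunk conditioned on encoded observations. The encoder, denoiser, shapes, and update rule below are toy placeholders, not TRI's model.

    ```python
    import numpy as np

    CHUNK_LEN, ACTION_DIM, STEPS = 16, 7, 10   # illustrative sizes

    def encode(obs) -> np.ndarray:
        """Stand-in for the vision-language + proprioception encoders."""
        return np.zeros(512)

    def denoise(actions: np.ndarray, cond: np.ndarray, t: int) -> np.ndarray:
        """Stand-in for the transformer denoising head (predicts noise)."""
        return np.zeros_like(actions)

    def sample_action_chunk(obs) -> np.ndarray:
        cond = encode(obs)
        actions = np.random.randn(CHUNK_LEN, ACTION_DIM)   # start from pure noise
        for t in reversed(range(STEPS)):
            actions = actions - denoise(actions, cond, t) / STEPS   # toy update rule
        return actions   # short future action sequence, executed then re-planned
    ```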
  • Hugging Face launches Reachy Mini robot as embodied AI platform

    Hugging Face, following its acquisition of Pollen Robotics in April 2025, has launched Reachy Mini, an open-source, compact robot designed to facilitate experimentation in human-robot interaction, creative coding, and AI. Standing 11 inches tall and weighing 3.3 pounds, Reachy Mini features motorized head and body rotation, expressive animated antennas, and multimodal sensing via an integrated camera, microphones, and speakers, enabling rich AI-driven audio-visual interactions. The robot is offered as a kit in two versions, encouraging hands-on assembly and deeper mechanical understanding, and will provide over 15 robot behaviors at launch. A key advantage of Reachy Mini is its seamless integration with Hugging Face’s AI ecosystem, allowing users to utilize advanced open-source models for speech, vision, and personality development. It is fully programmable in Python (a hypothetical control snippet follows the tags below), with planned future support for JavaScript and Scratch, catering to developers of varying skill levels. The robot’s hardware, software, and simulation environment are open source.

    robot, embodied-AI, open-source-robotics, human-robot-interaction, AI-powered-robot, programmable-robot, Hugging-Face-robotics
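
    Since Reachy Mini is described as fully programmable in Python with access to Hugging Face models, a control script might look roughly like the following. The ReachyMini class and its motion methods are hypothetical stand-ins for the robot SDK; only transformers.pipeline is the real Hugging Face API.

    ```python
    from transformers import pipeline   # real Hugging Face library

    class ReachyMini:                   # hypothetical stand-in for the robot SDK
        def look_at(self, x, y, z):     # assumed head-pointing call
            print(f"looking at ({x}, {y}, {z})")
        def wiggle_antennas(self):      # assumed expressive gesture call
            print("antennas wiggling")

    robot = ReachyMini()
    asr = pipeline("automatic-speech-recognition")   # open-source ASR model

    def on_audio(audio_path: str) -> None:
        text = asr(audio_path)["text"]
        if "hello" in text.lower():
            robot.look_at(0.5, 0.0, 0.2)
            robot.wiggle_antennas()
    ```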
  • China's humanoid robot with full embodied AI works at auto factory

    China has deployed AlphaBot2, a general-purpose humanoid robot with full embodied AI, in an automotive factory operated by Dongfeng Liuzhou Motor Co. Developed by Shenzhen-based AI² Robotics, AlphaBot2 performs diverse tasks such as quality inspection, assembly, logistics, and maintenance. This deployment marks the first full-scenario validation of a domestically developed embodied AI model in China’s automotive sector. The robot leverages real factory data to continuously improve its spatial intelligence and learning capabilities through a feedback loop with AI² Robotics’ self-developed embodied large model, enhancing its efficiency, precision, and adaptability in complex, evolving manufacturing environments (a toy version of this loop follows the tags below). AlphaBot2 is powered by the advanced GOVLA AI model, a Vision-Language-Action system built on the AI²R Brain platform, enabling near-human dexterity and full-body coordination with over 34 degrees of freedom. It features 360° spatial sensing, autonomous navigation, and a vertical working range of up to 240 cm, with over six hours of battery life for extended operations. The robot’s flexible manipulation and rapid adaptation reduce deployment time and support mixed-model automotive production lines, demonstrating significant advancements in intelligent manufacturing and factory automation in China.

    robot, humanoid-robot, embodied-AI, intelligent-manufacturing, factory-automation, robotics, AI-in-robotics
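
    The deploy-log-retrain feedback loop the summary describes can be reduced to a toy cycle: the deployed robot logs factory episodes, the embodied model retrains on them, and the improved model is pushed back. Every name below is illustrative and not AI² Robotics' system.

    ```python
    class EmbodiedModel:
        def __init__(self, version: int = 0):
            self.version = version

    def run_shift(model: EmbodiedModel):
        """Stand-in for logging observations and outcomes during factory work."""
        return [{"task": "quality_inspection", "success": True}]

    def retrain(model: EmbodiedModel, episodes) -> EmbodiedModel:
        return EmbodiedModel(model.version + 1)   # toy "update" on new data

    model = EmbodiedModel()
    for _ in range(3):          # closed loop: deploy -> log -> retrain -> redeploy
        episodes = run_shift(model)
        model = retrain(model, episodes)
    print(f"model v{model.version}")
    ```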