RIEM News

Articles tagged with "neural-networks"

  • Humanoid robot masters lip-sync, predicts facial reactions with new system

    Researchers at Columbia University’s Creative Machines Lab have developed an advanced humanoid robot named Emo that can synchronize lifelike lip movements with speech audio and anticipate human facial expressions in real time. Emo features significant hardware improvements over its predecessor Eva, including 26 actuators for asymmetric facial expressions and flexible silicone skin manipulated by magnets for precise control. Equipped with high-resolution RGB cameras in its eyes, Emo uses a dual neural network framework: one model predicts its own facial movements, while another anticipates the human interlocutor’s expressions. This allows Emo to perform coexpressions—mirroring human facial reactions before they fully manifest—across multiple languages, including those not in its training data. The system’s predictive model, trained on 970 videos from 45 participants, analyzes subtle initial facial changes to forecast target expressions with high speed and accuracy, running at 650 frames per second. The inverse model executes motor commands at 8,000 fps, enabling Emo to generate facial expressions within 0.002 seconds.

    robot, humanoid-robot, facial-robotics, human-robot-interaction, motor-control, neural-networks, real-time-expression
  • New microchips mimic human nerves to boost speed and cut power waste

    Scientists at Germany’s Ilmenau University of Technology are developing a new generation of ultra-fast, energy-efficient microchips inspired by the human brain’s nerve signaling. Their neuroNODE project focuses on superconducting electronic components that process information using short electrical pulses, mimicking how signals travel along human nerve pathways. Unlike traditional silicon chips that consume power continuously, these pulse-based circuits only use energy when processing signals, potentially halving the energy consumption needed for the same computing power. This innovation aims to address the rapidly growing global energy demands driven by data traffic from smartphones, cloud services, streaming, and AI applications. The project is particularly timely given the soaring electricity consumption of modern IT infrastructure, with AI training alone, such as for ChatGPT-4, requiring tens of millions of kilowatt-hours. Traditional chips’ constant power draw, even when idle, is becoming a critical bottleneck. By leveraging quantum effects in superconducting circuits—concepts originally proposed by John von Neumann—the researchers hope to create such components.

    energy, microchips, superconducting-circuits, energy-efficiency, data-centers, neural-networks, computing-technology
  • US engineers create AI bionic hand that grips objects like a human hand

    Engineers at the University of Utah have developed an AI-enhanced bionic hand that mimics the natural grasping ability of a human hand by integrating pressure and proximity sensors with an artificial neural network trained on natural hand movements. This prosthetic hand can intuitively and securely grip objects, allowing users to perform everyday tasks such as picking up small items or drinking from a cup with greater precision and less mental effort, even without extensive practice. The sensors are sensitive enough to detect very light touches, and the AI independently adjusts each finger’s position to form an optimal grasp, resulting in a prosthetic that functions more naturally and reduces cognitive strain. A key innovation of this system is its bioinspired control scheme that balances user intent with AI assistance, allowing the prosthetic to adapt when the user wants to release an object or modify their grip. Tested on four amputee participants, the hand improved performance on standardized assessments and enabled fine motor tasks that were previously difficult, enhancing usability and user confidence.

    robotics, bionic-hand, AI-prosthetics, neural-networks, sensor-technology, human-machine-interaction, prosthetic-control
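    The grasp-shaping idea (sensors sense the gap to an object, and the controller closes each finger until a light target pressure is reached) can be sketched as a toy per-finger rule. This is an illustrative stand-in, not the Utah team's trained network; the gains, thresholds, and sensor scaling are invented for the example.

```python
def finger_targets(proximity, pressure, close_gain=0.8, grip_pressure=0.2):
    """Toy grasp shaping: one closing command per finger.

    proximity: per-finger distance-to-object readings (0 = touching).
    pressure:  per-finger contact pressure readings.
    All units and constants here are hypothetical.
    """
    targets = []
    for prox, press in zip(proximity, pressure):
        if press >= grip_pressure:
            targets.append(0.0)  # light contact achieved: hold position
        else:
            # Close proportionally to the remaining gap to the object.
            targets.append(round(close_gain * prox, 3))
    return targets

# Three fingers: far from the object, nearly touching, already gripping.
print(finger_targets([0.5, 0.1, 0.0], [0.0, 0.0, 0.3]))  # [0.4, 0.08, 0.0]
```

    The per-finger independence is the point: each finger settles at its own position, so the hand conforms to the object's shape without the user micromanaging every joint.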
  • Brain-mimicking neuron moves robots closer to human-like control

    Scientists from Loughborough University, in collaboration with the Salk Institute and the University of Southern California, have developed an artificial transneuron that closely mimics the electrical activity of neurons from different regions of the macaque brain. Unlike traditional artificial neurons that perform a single function, this transneuron can switch between roles related to vision, planning, and movement by adjusting its electrical properties in real time. It reproduces brain pulse patterns with up to 100% accuracy and responds dynamically to environmental changes such as pressure and temperature, suggesting potential applications in sensory systems and energy-efficient computing. The transneuron’s functionality is enabled by a nanoscale memristor component, which physically alters its internal structure as electricity flows, allowing it to generate diverse electrical pulses without relying on software. This hardware-based adaptability allows the device to process information similarly to biological neurons, including responding differently to multiple simultaneous inputs based on their timing—capabilities that typically require multiple artificial neurons. The researchers envision building networks of these transneurons.

    robot, artificial-neuron, brain-inspired-computing, memristor, neural-networks, robotics-control, bio-inspired-hardware
  • Roboticist Warns of Robot Bubble - CleanTechnica

    The article discusses a recent warning from Rodney Brooks, a renowned roboticist and cofounder of iRobot, about a potential "robot bubble" fueled by excessive hype and investment in humanoid robots. In his article “Why Today’s Humanoids Won’t Learn Dexterity,” Brooks argues that despite significant funding from venture capitalists and major tech companies, current humanoid robots will not achieve human-like dexterity anytime soon. He emphasizes that while he remains optimistic about the future of robotics, the ambitious timelines proposed by figures like Tesla’s Elon Musk and Figure’s Brett Adcock—predicting significant humanoid robot capabilities within a few years—are unrealistic and reflect fantasy thinking. Brooks provides a historical overview of robotics, highlighting that humanoid robots are still in the early stages of the hype cycle, while AI is transitioning from peak hype toward a period of disillusionment. He discusses the technical challenges remaining, particularly in developing safe, two-legged humanoid robots and human-like robotic dexterity.

    robotics, humanoid-robots, AI, robotic-dexterity, robot-bubble, neural-networks, technology-hype-cycle
  • Cortical Labs' CL1 turns living neurons into programmable processors

    Cortical Labs, led by neuroscientist Brett Kagan, has developed the CL1, the world’s first commercial biological computer that uses 800,000 lab-grown human neurons reprogrammed from skin or blood samples to process information. Unlike traditional silicon-based computers, the CL1’s living neurons can learn, adapt, and in some cases outperform machine learning systems. The device, which began shipping in summer 2025 at $35,000 per unit, includes a custom life-support system for the neurons and operates with significantly lower energy consumption compared to conventional data centers. Early users span various fields, including pharmaceutical research, finance, game development, and AI science. The CL1 evolved from an earlier proof-of-concept project called DishBrain, which demonstrated the feasibility of using living neurons for computation by enabling them to play the game Pong. Transitioning from DishBrain to a commercial product required extensive engineering efforts to ensure scalability, reproducibility, and robustness beyond tightly controlled laboratory conditions.

    biological-computing, synthetic-intelligence, neural-networks, brain-computer-interface, energy-efficient-computing, biocomputers, neuroscience-technology
  • Cortical Labs' CL1 turns living neurons into programmable processors

    Cortical Labs, led by neuroscientist Brett Kagan, has developed the CL1, the world’s first commercial biological computer that uses 800,000 lab-grown human neurons reprogrammed from skin or blood samples to process information. Unlike traditional silicon-based processors, these living neurons can learn, adapt, and in some cases outperform machine learning systems. The CL1, priced at $35,000 and shipping since summer 2025, includes a custom life-support system for the neurons and operates with significantly lower energy consumption compared to conventional data centers. Its early adopters span diverse fields such as pharmaceuticals, finance, gaming, and AI research. The journey from the initial scientific proof of concept, DishBrain, to the commercial CL1 product took about two and a half to three years and involved extensive engineering challenges. Moving beyond lab-scale experiments required building a scalable, reproducible system, which meant developing everything from low-level code and kernel-level software to custom hardware including FPGAs and printed circuit boards.

    biological-computing, synthetic-intelligence, neural-networks, brain-computer-interface, energy-efficient-computing, regenerative-medicine, AI-research
  • What Tesla’s Optimus robot can do in 2025 and where it still lags

    Tesla aims to produce 5,000 Optimus humanoid robots by 2025, positioning the robot as central to its future under the vision of integrating AI into the physical world. CEO Elon Musk has claimed that 80% of Tesla’s future value will derive from Optimus and related AI ventures, signaling a shift from purely an automaker to a “physical AI” platform. Demonstrations through 2024 and 2025 have shown Optimus performing basic locomotion with improved heel-to-toe walking, simple household chores like sweeping and trash removal, and basic manipulation tasks such as handling car parts. These capabilities are enabled by a unified control policy—a single neural network trained using vision-based inputs and human video data—which Tesla highlights as a scalable approach to skill acquisition. However, Optimus’s current functionality is largely limited to structured or lightly staged environments with known objects and controlled lighting, lacking robust autonomy in unstructured homes or fully operational industrial settings. While the robot shows smoother full-body coordination, significant gaps remain.

    robot, humanoid-robot, Tesla-Optimus, AI-robotics, automation, neural-networks, robotics-development
  • Scientists grow mini-brains in lab to boost energy efficiency in AI

    Researchers at Lehigh University, led by Professor Yevgeny Berdichevsky, are developing lab-grown mini-brains called brain organoids to study how the human brain processes information with remarkable energy efficiency. Supported by a $2 million grant from the National Science Foundation’s Emerging Frontiers in Research and Innovation program, the team aims to replicate the brain’s complex computations to design smarter, faster, and more energy-efficient artificial intelligence (AI). Unlike traditional hardware-based neural networks, these organoids could reveal new computational mechanisms that improve AI’s processing capacity while drastically reducing power consumption. The project involves engineering three-dimensional brain organoids by arranging neurons in an ordered structure resembling the human cortex, using 3D-printed biomaterial scaffolds developed by bioengineering expert Lesley Chow. The organoids will be stimulated with light pulses representing simple moving images, allowing researchers to observe neural responses related to motion, speed, and direction—key tasks for AI applications like self-driving cars.

    energy, artificial-intelligence, brain-organoids, energy-efficiency, bioengineering, neural-networks, 3D-printed-biomaterials
  • Agility Robotics explains how to train a whole-body control foundation model - The Robot Report

    Agility Robotics has developed a whole-body control foundation model for its Digit humanoid robot, designed to enable safe, stable, and versatile task execution in complex, human-centric environments. This model acts like a "motor cortex," integrating signals from different control layers to manage voluntary movements and fine motor skills. It is implemented as a relatively small LSTM neural network with fewer than one million parameters, trained extensively in NVIDIA’s Isaac Sim physics simulator. Remarkably, the model transfers directly from simulation to the real world without additional training, allowing Digit to perform tasks such as walking, grasping, and manipulating heavy objects with high precision and robustness to disturbances. The model can be prompted using various inputs, including dense spatial objectives and large language models, enabling Digit to execute complex behaviors like grocery shopping demonstrated at NVIDIA’s GTC event. Agility Robotics aims to provide an intuitive interface for humanoid robots similar to fixed-base robots, where users specify desired end-effector poses and the robot autonomously positions itself accordingly.

    robotics, humanoid-robots, whole-body-control, neural-networks, AI-in-robotics, robot-manipulation, Agility-Robotics
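    A quick sanity check on the "fewer than one million parameters" figure: the standard LSTM parameter count is 4·(h·x + h² + h) for input size x and hidden size h. The sizes below are illustrative guesses, not Agility's actual architecture.

```python
def lstm_params(input_size: int, hidden_size: int) -> int:
    # An LSTM has 4 gates; each has an input weight matrix (h x x),
    # a recurrent weight matrix (h x h), and a bias vector (h).
    return 4 * (hidden_size * input_size
                + hidden_size * hidden_size
                + hidden_size)

# Hypothetical whole-body controller: ~60 proprioceptive inputs and
# 256 hidden units (illustrative sizes only).
n = lstm_params(input_size=60, hidden_size=256)
print(n, n < 1_000_000)  # 324608 True
```

    Even with a much larger hidden state, the count stays small by deep-learning standards, which is what makes real-time inference on the robot plausible.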
  • Tesla’s Dojo, a timeline

    The article chronicles the development and evolution of Tesla’s Dojo supercomputer, a critical component in Elon Musk’s vision to transform Tesla from just an automaker into a leading AI company focused on full self-driving technology. First mentioned in 2019, Dojo was introduced as a custom-built supercomputer designed to train neural networks using vast amounts of video data from Tesla’s fleet. Over the years, Musk and Tesla have highlighted Dojo’s potential to significantly improve the speed and efficiency of AI training, with ambitions for it to surpass traditional GPU-based systems. Tesla officially announced Dojo in 2021, unveiling its D1 chip and plans for an AI cluster comprising thousands of these chips. By 2022, Tesla demonstrated tangible progress with Dojo, including load testing of its hardware and showcasing AI-generated imagery powered by the system. The company aimed to complete a full Exapod cluster by early 2023 and planned multiple such clusters to scale its AI capabilities.

    robot, AI, supercomputer, Tesla-Dojo, self-driving-cars, neural-networks, D1-chip
  • Ultra-fast Airy beams keep network flowing past walls and obstacles

    Researchers at Princeton University have developed a novel wireless communication system that uses ultra-fast Airy beams—curved transmission paths—to navigate around indoor obstacles and maintain uninterrupted high-speed data flow. This innovation addresses a key limitation of sub-terahertz frequency signals, which, while capable of extremely high data rates needed for applications like virtual reality and autonomous vehicles, are easily blocked by walls, furniture, or people. By combining physics-based beam shaping with machine learning, the team trained a neural network to select and adapt the optimal Airy beam in real time, allowing signals to bend around obstacles rather than relying on reflection. To enable this adaptive capability, the researchers created a simulator that models countless indoor scenarios, allowing the neural network to learn effective beam configurations without exhaustive physical testing. This approach leverages physical principles to efficiently train the system, which then rapidly adjusts to dynamic environments, maintaining strong connections even in cluttered spaces. Experimental tests mimicking real-world indoor conditions demonstrated the system’s potential.

    IoT, wireless-communication, neural-networks, sub-terahertz, Airy-beams, machine-learning, indoor-networking
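    The train-in-simulation idea (score candidate beam shapes in a modeled room, then pick the best one at run time) can be sketched with a stand-in link simulator. The functional form and the exhaustive search below are invented for illustration; the actual system uses a trained neural network and a physics-based sub-terahertz simulator.

```python
import numpy as np

def simulate_link(curvature, required_detour):
    """Stand-in indoor-link simulator (hypothetical model): received
    power peaks when the beam's lateral detour matches what is needed
    to bend around the obstacle."""
    return float(np.exp(-(curvature - required_detour) ** 2))

def best_beam(required_detour, candidates):
    # The real system learns this selection; exhaustive search over a
    # small candidate set stands in for the trained policy here.
    powers = [simulate_link(c, required_detour) for c in candidates]
    return float(candidates[int(np.argmax(powers))])

candidates = np.linspace(-1.0, 1.0, 21)  # candidate beam curvatures
print(best_beam(0.3, candidates))        # picks the ~0.3 detour beam
```

    The value of the learned policy over this brute-force search is speed: once trained, one forward pass replaces scoring every candidate, which matters when the obstacle layout changes in real time.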
  • AI decodes dusty plasma mystery and describes new forces in nature

    Scientists at Emory University developed a custom AI neural network that successfully discovered new physical laws governing dusty plasma, a complex state of matter consisting of electrically charged gas with tiny dust particles. Unlike typical AI applications that predict outcomes or clean data, this AI was trained on detailed experimental data capturing three-dimensional particle trajectories within a plasma chamber. By integrating physical principles such as gravity and drag into the model, the AI could analyze small but rich datasets and reveal precise descriptions of non-reciprocal forces—interactions where one particle’s force on another is not equally reciprocated—with over 99% accuracy. This breakthrough corrected long-standing misconceptions in plasma physics, including the nature of electric charge interactions between particles. The study demonstrated that when one particle leads, it attracts the trailing particle, while the trailing particle pushes the leader away, an asymmetric behavior previously suspected but never accurately modeled. The AI’s transparent framework not only clarifies these complex forces but also offers a universal approach applicable to other many-body systems.

    AI, dusty-plasma, physics-discovery, neural-networks, materials-science, particle-interactions, plasma-physics
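    Non-reciprocity means the pair violates Newton's third law as an effective interaction: the force the leader exerts on the trailer is not the negative of the force the trailer exerts on the leader. A toy model (invented functional form and constants, not the fitted law from the study) makes the asymmetry concrete.

```python
import numpy as np

def pair_forces(lead_pos, trail_pos, k_attract=1.0, k_repel=0.4):
    """Toy non-reciprocal wake interaction: the trailing particle is
    pulled toward the leader, while the leader is pushed away from
    the trailer (hypothetical form, for illustration only)."""
    rhat = (lead_pos - trail_pos) / np.linalg.norm(lead_pos - trail_pos)
    f_on_trailing = k_attract * rhat   # attracted toward the leader
    f_on_leading = k_repel * rhat      # repelled away from the trailer
    return f_on_leading, f_on_trailing

f_lead, f_trail = pair_forces(np.array([1.0, 0.0]), np.array([0.0, 0.0]))
# A reciprocal (Newton's third law) pair would give f_lead == -f_trail:
print(np.allclose(f_lead, -f_trail))  # False
```

    In a true pair interaction the two forces would cancel; here both point the same way, which is exactly the leader-attracts/trailer-repels asymmetry the article describes.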
  • MIT vision system teaches robots to understand their bodies

    MIT researchers at CSAIL have developed a novel robotic control system called Neural Jacobian Fields (NJF) that enables robots to learn how their bodies move in response to motor commands purely through visual observation, without relying on embedded sensors or hand-coded models. Using a single camera and random exploratory movements, NJF allows robots—ranging from soft robotic hands to rigid arms and rotating platforms—to autonomously build an internal model of their 3D geometry and control sensitivities. This approach mimics how humans learn to control their limbs by observing and adapting to their own movements, shifting robotics from traditional programming toward teaching robots through experience. NJF’s key innovation lies in decoupling robot control from hardware constraints, enabling designers to create soft, deformable, or irregularly shaped robots without embedding sensors or modifying structures for easier modeling. By leveraging a neural network inspired by neural radiance fields (NeRF), NJF reconstructs the robot’s shape and its response to control inputs solely from visual data.

    robotics, machine-learning, soft-robotics, robotic-control-systems, neural-networks, 3D-printing, computer-vision
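    The core of NJF, inferring how body points move in response to motor commands from observation alone, reduces in the linear case to estimating a Jacobian from command/displacement pairs. The sketch below uses synthetic data and plain least squares in place of the camera and the NeRF-style network, so it illustrates only the idea, not the method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown "true" sensitivity of 3 tracked body points to 2 motor commands.
J_true = np.array([[ 0.8, -0.1],
                   [ 0.2,  0.5],
                   [-0.3,  0.9]])

# Random exploratory commands and the (noisy) point displacements a
# camera would observe for each one.
commands = rng.normal(size=(200, 2))
displacements = commands @ J_true.T + 0.01 * rng.normal(size=(200, 3))

# Least-squares estimate of the Jacobian from observations alone:
# solve commands @ X = displacements, with X = J^T.
X, *_ = np.linalg.lstsq(commands, displacements, rcond=None)
J_est = X.T
print(np.round(J_est, 2))  # close to J_true
```

    Real bodies (especially soft ones) are nonlinear, which is why NJF fits a neural field rather than one constant matrix; the random exploratory motion playing the role of training data is the same, though.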
  • Stanford students build tiny AI-powered robot dog from basic kit

    Stanford University’s Computer Science 123 course offers undergraduates a hands-on introduction to robotics and AI by having them build and program a low-cost quadruped robot called “Pupper.” Over a 10-week elective, student teams receive a basic robot kit and learn to engineer the platform’s movement, sensing, and intelligence from the ground up. By the course’s end, groups demonstrated Puppers capable of navigating mazes, acting as tour guides, or simulating firefighting with a toy water cannon, showcasing practical applications of their AI and hardware skills. The course originated from a student robotics club project called “Doggo,” designed to prove that advanced legged robots need not be prohibitively expensive. Led by instructors including former Tesla executive Stuart Bowers, Stanford professor Karen Liu, and Google DeepMind researcher Jie Tan, the curriculum guides students from basic motor control and sensor calibration to training neural networks for gait refinement, object tracking, and voice command response. Students even create custom hardware extensions.

    robot, AI, robotics-education, quadruped-robot, Stanford-University, neural-networks, hardware-development
  • US startup unveils real-time tool that makes blood translucent

    US startup Ocutrx Technologies has unveiled HemoLucence, a pioneering surgical imaging technology that renders blood translucent in real time, allowing surgeons to see through pooled blood without suction or irrigation. Integrated into the OR Bot 3D Surgical Microscope, HemoLucence uses AI-powered algorithms and advanced computational physics to visualize tissue and structures obscured by blood, successfully penetrating up to three millimeters of whole human blood in lab tests. The system collects and analyzes light from multiple angles, separating scattered light from absorbed light to reconstruct a clear 3D view of hidden anatomy, including vessels, nerves, bleed sites, and tumors. This breakthrough addresses a longstanding challenge in operating room imaging by enabling surgeons to see through blood during procedures, potentially enhancing surgical precision and safety. Medical advisors from leading hospitals have praised the technology for its potential to reduce reliance on traditional blood-clearing methods, shorten surgery times, and improve outcomes. However, HemoLucence remains a prototype awaiting patent approval and must undergo clinical trials.

    robot, AI, surgical-technology, medical-imaging, computational-physics, neural-networks, 3D-visualization
  • New 3D-printed off-roading robot made from recycled materials

    A European collaboration between Lemki Robotix (Ukraine), iSCALE 3D (Germany), and Zeykan Robotics (Czech Republic) has unveiled the world’s first fully 3D-printed autonomous off-road robot made entirely from recycled materials. The robot’s body, wheels, and rims are fabricated using reinforced recycled polymers—glass fiber-reinforced recycled polypropylene for the sealed body, puncture-proof recycled polyurethane for airless wheels, and carbon fiber-reinforced nylon for rims—ensuring durability in harsh outdoor environments. Equipped with 360° cameras, LiDAR, and Starlink satellite connectivity, it supports real-time remote operation and autonomous navigation via an onboard neural network, capable of functioning even in GPS-denied areas. Designed for challenging applications such as military logistics, search and rescue, precision agriculture, and infrastructure inspection, the hermetically sealed robot can cross shallow water and operate reliably in demanding conditions. This project exemplifies the potential of large-format 3D printing.

    robot, 3D-printing, recycled-materials, autonomous-robot, off-road-robot, sustainable-robotics, neural-networks
  • Artificial Intelligence Models Improve Efficiency of Battery Diagnostics - CleanTechnica

    The National Renewable Energy Laboratory (NREL) has developed an innovative physics-informed neural network (PINN) model that significantly enhances the efficiency and accuracy of diagnosing lithium-ion battery health. Traditional battery diagnostic models, such as the Single-Particle Model (SPM) and the Pseudo-2D Model (P2D), provide detailed insights into battery degradation mechanisms but are computationally intensive and slow, limiting their practical use for real-time diagnostics. NREL’s PINN surrogate model integrates artificial intelligence with physics-based modeling to analyze complex battery data, enabling battery health predictions nearly 1,000 times faster than conventional methods. This breakthrough allows researchers and manufacturers to non-destructively monitor internal battery states, such as electrode and lithium-ion inventory changes, under various operating conditions. By training the PINN surrogate on data generated from established physics models, NREL has created a scalable tool that can quickly estimate battery aging and lifetime performance across different scenarios. This advancement promises to improve battery management, optimize design, and extend the operational lifespan of energy storage systems, which are critical for resilient and sustainable energy infrastructures.

    energy, battery-diagnostics, artificial-intelligence, neural-networks, lithium-ion-batteries, battery-health, energy-storage
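    The PINN idea is that the training loss combines a data-misfit term with the residual of a governing physics model evaluated at collocation points. The sketch below uses a toy capacity-fade ODE, dC/dt = -k*C, and a closed-form stand-in for the network; NREL's actual surrogates are trained against the far richer SPM and P2D models.

```python
import numpy as np

K_FADE = 0.05  # toy degradation rate constant (hypothetical)

def surrogate(t, w):
    """Stand-in surrogate C(t); a real PINN would be a neural network
    with w as its trainable parameters."""
    return np.exp(-w * t)

def pinn_loss(w, t_data, c_data, t_colloc):
    # Data term: mismatch with measured capacities.
    data = np.mean((surrogate(t_data, w) - c_data) ** 2)
    # Physics term: residual of dC/dt + K_FADE * C = 0 at collocation
    # points, using a central finite-difference derivative.
    h = 1e-4
    dcdt = (surrogate(t_colloc + h, w) - surrogate(t_colloc - h, w)) / (2 * h)
    physics = np.mean((dcdt + K_FADE * surrogate(t_colloc, w)) ** 2)
    return data + physics

t_obs = np.linspace(0.0, 10.0, 20)
c_obs = np.exp(-K_FADE * t_obs)  # synthetic "measurements"
t_col = np.linspace(0.0, 10.0, 50)
# The physics-consistent parameter scores lower than a wrong one:
print(pinn_loss(0.05, t_obs, c_obs, t_col)
      < pinn_loss(0.2, t_obs, c_obs, t_col))  # True
```

    The physics term is what lets the surrogate stay accurate where data is sparse, which is why the trained model can then be evaluated orders of magnitude faster than re-solving the physics each time.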
  • Solid-state battery breakthrough promises 50% more range in one charge

    Researchers from Skolkovo Institute of Science and Technology (Skoltech) and the AIRI Institute have achieved a significant breakthrough in solid-state battery technology by using machine learning to accelerate the discovery of high-performance battery materials. Their innovation could enable electric vehicles (EVs) to travel up to 50% farther on a single charge while improving safety and battery lifespan. The team employed graph neural networks to rapidly identify optimal materials for solid electrolytes and protective coatings, overcoming a major hurdle in solid-state battery development. This approach is orders of magnitude faster than traditional quantum chemistry methods, enabling quicker advancement in battery design. A key aspect of the research is the identification of protective coatings that shield the solid electrolyte from reactive lithium anodes and cathodes, which otherwise degrade battery performance and increase short-circuit risks. Using AI, the team discovered promising coating compounds such as Li3AlF6 and Li2ZnCl4 for the solid electrolyte Li10GeP2S12, a leading candidate material. This work not only enhances the durability and efficiency of solid-state batteries but also paves the way for safer, more durable, and higher-performing EVs and portable electronics, potentially reshaping the future of energy storage.

    energy, solid-state-battery, battery-materials, electric-vehicles, machine-learning, neural-networks, energy-storage