Articles tagged with "robotic-manipulation"
NVIDIA's Cosmos Policy helps robots predict what happens next
NVIDIA has introduced Cosmos Policy, a novel robot control framework that leverages large pretrained video prediction models to simplify decision-making in robotics. Unlike traditional robot policies that rely on separate perception, planning, and control modules and require extensive task-specific data, Cosmos Policy post-trains a pretrained video world model (Cosmos Predict) on robot demonstration data. This approach integrates robot actions, physical states, and task outcomes into a unified temporal representation, enabling the model to jointly predict the robot’s next actions, future states, and task success within a single architecture. This reduces architectural complexity and the need for large amounts of robot-specific training data. Benchmark tests demonstrate that Cosmos Policy achieves high success rates on multi-step robotic manipulation tasks, often matching or surpassing existing methods while using significantly fewer training demonstrations. A key advantage is its planning capability at inference time, allowing the model to generate and evaluate multiple candidate action sequences and select those with the best predicted outcomes over longer horizons. This strategic planning enables robots to perform complex tasks.
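The inference-time planning loop described above can be sketched as random-shooting search: sample candidate action sequences, score each by the world model's predicted outcome, and keep the best. A minimal sketch with toy stand-ins — the real Cosmos Policy scores rollouts with its video world model, whereas `predict_outcome` and the scalar dynamics below are illustrative assumptions:

```python
import random

def predict_outcome(state, actions):
    """Toy world model: scores how close an action sequence ends to the goal (0.0)."""
    for a in actions:
        state = state + a          # trivial scalar dynamics stand-in
    return -abs(state)             # higher is better

def plan(state, horizon=5, num_candidates=64, seed=0):
    """Random-shooting planning: sample candidate action sequences,
    score each with the model, and return the best one."""
    rng = random.Random(seed)
    best_score, best_seq = float("-inf"), None
    for _ in range(num_candidates):
        seq = [rng.uniform(-1, 1) for _ in range(horizon)]
        score = predict_outcome(state, seq)
        if score > best_score:
            best_score, best_seq = score, seq
    return best_seq, best_score

actions, score = plan(state=2.0)
```

In practice the candidate count and horizon trade planning quality against inference cost; more sophisticated planners refine the sampling distribution instead of sampling uniformly.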
Tags: robotics, robot-control, AI-in-robotics, NVIDIA-Cosmos-Policy, robot-planning, video-prediction-models, robotic-manipulation

Vine-inspired robotic gripper gently lifts heavy and fragile objects - Robohub
Researchers at MIT and Stanford have developed a vine-inspired robotic gripper that can gently and securely lift a wide range of heavy and fragile objects, such as watermelons and glass vases. The robot operates by inflating long, flexible, vine-like tubes from a pressurized box near the target object; these tubes extend, twist, and coil around the object before being clamped and mechanically wound back to lift it in a soft, sling-like grasp. This design allows the robot to navigate tight spaces and cluttered environments, offering a gentler alternative to conventional rigid grippers. The team envisions broad applications for this technology, including agricultural harvesting, cargo handling, and notably eldercare. In eldercare settings, the vine robot could assist caregivers by gently lifting patients out of bed, a task that is physically demanding and often uncomfortable for patients. This approach could reduce caregiver strain and improve patient comfort during transfers. The research, detailed in the journal Science Advances, builds on prior work in soft pneumatic robotics.
Tags: robotics, soft-robotics, robotic-gripper, vine-inspired-robot, pneumatic-actuators, robotic-manipulation, eldercare-robotics

Robots learn human touch with less data using adaptive motion system
Researchers at Keio University in Japan have developed an adaptive motion reproduction system that enables robots to perform human-like grasping and manipulation using minimal training data. Traditional robotic systems struggle to adjust when objects vary in weight, stiffness, or texture, limiting their use to controlled factory environments. The new system leverages Gaussian process regression to model complex nonlinear relationships between object properties and human-applied forces, allowing robots to infer human motion intent and adapt their movements to unfamiliar objects in dynamic, real-world settings such as homes and hospitals. Testing showed that this approach significantly outperforms conventional motion reproduction and imitation learning methods, reducing position and force errors by substantial margins both within and beyond the training data range. By requiring less data and lowering machine learning costs, the technology has broad potential applications, including life-support robots that must adapt to diverse tasks. This advancement builds on Keio University’s expertise in force-tactile feedback and haptic technologies and represents a key step toward enabling robots to operate reliably in unpredictable environments.
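Gaussian process regression predicts a target (here, grip force) from a kernel-weighted combination of training examples: the posterior mean is m(x*) = k(x*, X)(K + σ²I)⁻¹y. A minimal pure-Python sketch of that mean prediction, using made-up stiffness/force data — not Keio's actual model or dataset:

```python
import math

def rbf(a, b, length=1.0):
    """Squared-exponential kernel: similarity between two inputs."""
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, y):
    """Gauss-Jordan elimination for a small dense system A x = y."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_mean(x_star, xs, ys, noise=1e-6):
    """GP posterior mean: k(x*, X) (K + noise*I)^-1 y."""
    K = [[rbf(xi, xj) + (noise if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(rbf(x_star, xi) * a for xi, a in zip(xs, alpha))

# Toy data: object stiffness -> grip force a human demonstrator applied.
stiffness = [0.0, 0.5, 1.0, 1.5]
force = [1.0, 1.4, 2.1, 3.2]
predicted = gp_mean(0.75, stiffness, force)
```

Because the prediction interpolates smoothly between demonstrations, a handful of examples can cover a continuous range of object properties — the data-efficiency property the article highlights.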
Tags: robotics, adaptive-motion, machine-learning, Gaussian-process-regression, human-robot-interaction, robotic-manipulation, automation

ByteDance backs China’s new humanoid robot maker in funding round
Chinese robotics startup X Square Robot has secured $143.3 million (1 billion yuan) in a Series A++ funding round led by major investors including ByteDance, HSG (formerly Sequoia Capital China), and government-backed firms such as Beijing Information Industry Development Investment Fund and Shenzhen Capital Group. Founded in 2023, X Square specializes in humanoid robots and embodied AI, aiming for applications in homes, hotels, and logistics. The company is known for its Quanta X1 and X2 wheeled humanoid robots with dexterous hands, powered by its proprietary vision–language–action (VLA) model called WALL-A. This model integrates world models and causal reasoning to enhance robots’ ability to generalize and perform complex tasks in unstructured environments without prior training. X Square’s product lineup includes the Quanta X1, a wheeled bimanual robot with 20 degrees of freedom and a working range of up to 1 meter, and the more advanced Quanta X2.
Tags: robotics, humanoid-robots, embodied-AI, artificial-intelligence, robotics-startup, robotic-manipulation, autonomous-robots

Humanoid Robots Keep Slipping Into the Future, Much Like Fusion - CleanTechnica
The article from CleanTechnica discusses the recurring pattern of optimism and delay surrounding humanoid robots, comparing their elusive arrival to that of fusion energy. While each wave of robotic development brings improvements in motion, balance, and control, the promise of general-purpose humanoid robots capable of safely performing diverse everyday tasks continues to be postponed. Despite genuine scientific progress and breakthroughs, the remaining technical challenges are more profound than initially anticipated, pushing realistic deployment timelines further into the future. Significant advances have been made in robotic locomotion, with bipedal and quadrupedal robots now able to walk, run, jump, and navigate complex terrain with impressive balance and dynamic control. However, the article emphasizes that locomotion, while necessary, is not sufficient for practical usefulness. The primary challenge lies in dexterous manipulation—robots’ ability to interact safely and reliably with objects such as doors, tools, and fragile items in unstructured environments. This problem is compounded by the complexity of tactile sensing, which is inherently local.
Tags: robotics, humanoid-robots, robotic-manipulation, bipedal-robots, robotic-locomotion, robotics-research, robotic-technology

Muscular humanoid robot folds towel autonomously by watching humans
US startup Kinsi Robotics has developed the KR-1, a muscular humanoid robot capable of autonomously folding towels by observing human demonstrations. The robot uses simultaneous perception, planning, and dexterous manipulation to pick up towels from random positions and fold them neatly, mimicking human behavior. Central to this capability is kinesthetic teaching, a method where a human operator physically guides the robot through the task while the system records visual inputs and corresponding arm and gripper movements. This approach allows the robot to learn a flexible, adaptable skill rather than a fixed sequence, enabling it to handle varying towel configurations. Unlike rigid objects, soft deformable materials like towels pose a significant challenge for robots due to their continuously changing shape, which is difficult to model with traditional physics-based methods. Instead, KR-1 learns through repeated experience, internalizing how the towel responds to different manipulations by mapping visual cues directly to physical actions without explicit labeling of towel features. This experiment exemplifies a broader trend in robotics toward learning from experience rather than explicit modeling.
Tags: robotics, humanoid-robot, autonomous-robot, kinesthetic-teaching, robotic-manipulation, AI-in-robotics, soft-object-handling

Holiday prep goes robotic as Christmas machines tackle decor and meals
As the Christmas season approaches, robotics and autonomous systems are increasingly being employed to handle festive preparations, blending holiday traditions with advanced automation. HEBI Robotics demonstrated this trend with their mobile manipulator Treadward, equipped with a 7-DoF arm, which efficiently decorated a Christmas tree and staged festive scenes in just two days. The robot showcased impressive strength, coordination, and adaptability, highlighting the potential of mobile manipulators to assist in real-world holiday tasks. Meanwhile, Germany’s FZI Research Center explored the challenges of robotic meal preparation under the guidance of large language models and human teleoperation. Their staged demonstration humorously illustrated how small miscommunications between AI instructions and robotic execution can lead to chaotic outcomes, while emphasizing ongoing research in robotic manipulation, human-robot interaction, and AI decision support. Additionally, Fraunhofer IOSB presented an autonomous system that assembled and decorated a large outdoor Christmas tree using coordinated multi-robot operations, including autonomous cranes and quadruped robots.
Tags: robotics, autonomous-systems, AI, robotic-manipulation, teleoperation, human-robot-interaction, automation

Soft vine robot wraps fragile items, even lifting human bodies safely
Researchers at MIT and Stanford have developed a novel “robo-tendril” robot inspired by garden vines, capable of gently wrapping, tightening, and lifting fragile and heavy objects—including safely lifting a human body. The system uses long, inflatable tubes that grow outward by turning inside out, twisting and coiling around objects before reconnecting to their base. A built-in clamp and mechanical winch then retract the tubes, creating a soft, hammock-like cradle. This design allows the robot to navigate tight spaces, push through clutter, and stabilize loads that conventional grippers struggle to handle. A key innovation is the robot’s ability to switch between “open-loop” growth—extending and burrowing under an object—and “closed-loop” grasping, where the vine forms a continuous loop by reconnecting to its base to securely hold and lift objects. This capability addresses challenges in eldercare, such as transferring patients from beds, by reducing physical strain on caregivers and providing a gentler, more comfortable experience for patients.
Tags: robotics, soft-robotics, vine-robot, caregiving-technology, pneumatic-actuators, robotic-grippers, robotic-manipulation

NVIDIA tech helps humanoid robot beat human operators at opening doors
NVIDIA researchers have developed “DoorMan,” a robotic learning system enabling a humanoid robot—the $16,000 Unitree G1—to open doors more efficiently than human operators. Utilizing only built-in RGB cameras and trained entirely through simulation-based reinforcement learning in NVIDIA’s Isaac Lab, the system allows the robot to open various real-world doors faster and with higher success rates than humans remotely controlling it. In tests, DoorMan completed door-opening tasks up to 31% faster than expert teleoperators and achieved an 83% success rate, outperforming both expert (80%) and non-expert (60%) human operators. This advancement represents significant progress in “loco-manipulation,” where robots must simultaneously walk, perceive, coordinate limbs, and manipulate objects. The DoorMan system employs a novel pixel-to-action training approach, relying solely on raw RGB input without specialized sensors like depth cameras or motion-capture markers. To overcome common reinforcement learning challenges, the researchers introduced a “staged-reset” mechanism.
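The article does not detail the staged-reset mechanism; a common pattern it likely resembles is resetting training episodes into intermediate stages of a long-horizon task, biased toward stages the policy still fails at, so late stages get enough exposure. A hedged sketch of that idea — the stage names and weighting scheme are illustrative, not taken from DoorMan:

```python
import random

# Hypothetical stages of a door-opening task, ordered by task progress.
STAGES = ["approach", "grasp_handle", "turn_handle", "pull_open"]

def sample_reset_stage(success_rate, rng):
    """Reset more often into stages the policy still fails at.
    success_rate[stage] is the measured success starting from that stage."""
    weights = [1.0 - success_rate[s] + 0.05 for s in STAGES]  # keep every weight > 0
    return rng.choices(STAGES, weights=weights, k=1)[0]

rng = random.Random(0)
rates = {"approach": 0.9, "grasp_handle": 0.6, "turn_handle": 0.3, "pull_open": 0.2}
counts = {s: 0 for s in STAGES}
for _ in range(1000):
    counts[sample_reset_stage(rates, rng)] += 1
```

The small additive constant keeps mastered stages from vanishing entirely from the reset distribution, which guards against the policy forgetting early skills.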
Tags: robotics, humanoid-robot, reinforcement-learning, NVIDIA-Isaac-Lab, robotic-manipulation, AI-robotics, door-opening-robot

Chinese model helps humanoid robots adapt to tasks without training
Researchers from Wuhan University have developed a novel framework called the recurrent geometric-prior multimodal policy (RGMP) to enhance humanoid robots' ability to manipulate objects with human-like adaptability and minimal training. Current humanoid robots excel at specific tasks but struggle to generalize when objects change shape, lighting varies, or when encountering tasks they were not explicitly trained for. RGMP addresses these limitations by incorporating two key components: the Geometric-Prior Skill Selector (GSS), which helps the robot analyze an object's shape, size, and orientation to select the appropriate skill, and the Adaptive Recursive Gaussian Network (ARGN), which models spatial relationships and predicts movements efficiently with far fewer training examples than traditional deep learning methods. Testing showed that robots using RGMP achieved an 87% success rate on novel tasks without prior experience, demonstrating a significant improvement over existing diffusion-policy-based models, with about five times greater data efficiency. This advancement could enable humanoid robots to perform a wider range of tasks in dynamic environments.
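The skill-selection idea behind GSS — choosing a manipulation skill from coarse geometric features of the object — can be illustrated with a toy dispatch function. The thresholds, feature names, and skill labels below are hypothetical stand-ins, not RGMP's actual priors:

```python
def select_skill(width_mm, height_mm, is_cylindrical):
    """Toy geometric-prior skill selector: map coarse object geometry
    to a grasp skill (all names and thresholds are illustrative)."""
    if is_cylindrical and height_mm > width_mm:
        return "wrap_grasp"        # tall round objects, e.g. bottles
    if width_mm < 30:
        return "pinch_grasp"       # small or thin objects
    return "power_grasp"           # large boxes and bulky items

skill = select_skill(60, 200, True)   # a tall cylinder
```

In the real system the selected skill would then parameterize a learned policy rather than a fixed routine; the point of the prior is to narrow the search before any learning happens, which is where the data efficiency comes from.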
Tags: robotics, humanoid-robots, robot-learning, data-efficient-robotics, robotic-manipulation, AI-in-robotics, robotic-skill-adaptation

US firm teaching humanoid robot brains to do laundry, make coffee, light candles
Physical Intelligence (PI), a Silicon Valley robotics startup, is advancing the development of humanoid robots capable of learning and reliably performing complex physical tasks such as folding laundry, making coffee, and lighting candles. The company recently raised $400 million from investors including OpenAI and Jeff Bezos, valuing it above $2 billion. PI’s innovation centers on a new training method called Recap (Reinforcement Learning with Experience and Corrections via Advantage-conditioned Policies), which enables robots to learn more like humans—through instruction, correction, and autonomous practice—addressing a key challenge in robotics where small errors during task execution often compound and cause failure. Recap enhances robot learning by incorporating corrective human interventions when errors occur and by allowing the robot to evaluate its own actions using reinforcement learning. This approach uses a value function to assign credit or blame to specific moves, enabling the system to learn from imperfect experiences rather than discarding them. PI’s vision-language-action model, π*0.6, is trained with Recap.
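The value-function credit assignment described above can be illustrated with the standard advantage computation from reinforcement learning: each step is credited by how much the realized return exceeded what the value function expected. This is generic RL bookkeeping in the spirit of advantage conditioning, not PI's implementation:

```python
def discounted_returns(rewards, gamma=0.99):
    """Return-to-go at each step: what actually happened after that move."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))

def advantages(rewards, values, gamma=0.99):
    """Credit each step: positive advantage means the move turned out
    better than the value function expected (reinforce it);
    negative means worse (discourage it)."""
    return [g - v for g, v in zip(discounted_returns(rewards, gamma), values)]
```

Because every step gets its own signed credit, a mostly-failed episode with one good recovery move still yields useful training signal — the property that lets Recap-style methods learn from imperfect experience instead of discarding it.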
Tags: robotics, humanoid-robots, AI-training, reinforcement-learning, robotic-manipulation, physical-intelligence, automation

Video: Humanoid robot goes 'mountain-grade' while picking up litter
Flexion Robotics has released a demonstration video showcasing its humanoid robot equipped with a new autonomy framework that enables it to navigate challenging outdoor terrain and perform litter cleanup tasks independently, without prior training. The robot identifies scattered objects, picks them up, and deposits them into a trash bin, highlighting advances in real-world robotic autonomy. The company’s technology integrates reinforcement learning and sim-to-real transfer to train low-level motor skills in simulation, which are then executed reliably on physical robots. This approach addresses the scalability challenge of collecting real-world data for every possible scenario by combining core learned skills with high-level decision-making powered by large language and vision-language models. The system is structured as a three-layer modular hierarchy: at the top, a language or vision-language model handles task planning, common-sense reasoning, and breaking down goals into actionable steps; the middle layer generates safe, short-range motions based on perception and instructions; and the base layer uses reinforcement learning controllers to execute these motions robustly across different environments.
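The three-layer hierarchy can be sketched as three cooperating components: a planner that decomposes the goal, a mid-level motion generator, and a low-level controller. Every class name and string below is an illustrative stand-in, not Flexion's API:

```python
class TaskPlanner:
    """Top layer: LLM/VLM-style goal decomposition (hard-coded stand-in)."""
    def plan(self, goal):
        return ["walk_to(litter)", "pick_up(litter)", "walk_to(bin)", "drop()"]

class MotionGenerator:
    """Middle layer: turn each plan step into short-range motion commands."""
    def motions_for(self, step):
        return [f"motion<{step}#{i}>" for i in range(2)]

class RLController:
    """Base layer: a learned controller executing each motion robustly."""
    def execute(self, motion):
        return f"executed {motion}"

def run(goal):
    planner, mid, ctrl = TaskPlanner(), MotionGenerator(), RLController()
    log = []
    for step in planner.plan(goal):
        for motion in mid.motions_for(step):
            log.append(ctrl.execute(motion))
    return log
```

The design choice worth noting is the interface boundaries: the planner never emits joint torques and the controller never reasons about goals, so each layer can be swapped or retrained independently.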
Tags: robotics, humanoid-robot, reinforcement-learning, robot-autonomy, sim-to-real-transfer, machine-learning, robotic-manipulation

Video: 'Backdrivable' robot hand spins nut on bolt at incredible speed
Kyber Labs, a New York-based robotics company, has introduced a robotic hand capable of spinning a nut on a bolt at exceptionally high speeds in real time, without any video edits. This performance is enabled by the hand’s fully backdrivable and torque-transparent actuators, which allow it to naturally adapt to the nut’s movement. The backdrivability means external forces can move the robot’s fingers, and the system can infer torque from motor current, eliminating the need for complex tactile sensors. This design philosophy aims to simplify manipulation tasks, making control software and AI learning systems more reliable and efficient by offloading variability handling to the hardware itself. The robotic hand mimics human-like mechanical compliance and precision, enabling fluid and dexterous manipulation suitable for delicate tasks at scale. Kyber Labs emphasizes that general-purpose robotic hands remain a significant bottleneck in advancing robot capabilities, particularly for complex assembly and manufacturing operations. Their platform includes dual arms with human-like hands designed specifically for embodied AI.
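Inferring torque from motor current works because, in a torque-transparent actuator, output torque is roughly the motor's torque constant times winding current (τ ≈ k_t · i), so external contact shows up as excess current with no extra sensor. A toy sketch — the constant and threshold are assumed values for illustration, not Kyber Labs specifications:

```python
K_T = 0.068  # torque constant in N·m per ampere (an assumed value)

def estimated_torque(current_amps):
    """tau ≈ k_t * i: torque inferred directly from measured motor current."""
    return K_T * current_amps

def contact_detected(current_amps, threshold_nm=0.2):
    """Crude contact check: an external load on a backdrivable joint
    shows up as extra current, hence extra estimated torque."""
    return abs(estimated_torque(current_amps)) > threshold_nm
```

In a geared, non-backdrivable joint this estimate breaks down because friction and reflected inertia swamp the external load — which is exactly why torque transparency matters for sensorless force control.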
Tags: robot, robotics, robotic-hand, backdrivable-actuator, AI-based-control, robotic-manipulation, dexterous-manipulation

The quiet rise of factory humanoids
The article "The quiet rise of factory humanoids" explores the emerging role of humanoid robots in addressing labor shortages and reshoring challenges in modern factories. Unlike traditional automation designed for high-volume, repetitive tasks, humanoid robots are gaining traction for high-mix, low-volume work such as aerospace subassemblies, automotive rework, and handling awkward materials in older facilities. Experts emphasize that factories adopt humanoids not for their human-like appearance but for their ability to solve specific bottlenecks caused by labor scarcity. Success depends on meeting cycle time and uptime targets, proving their practical value before scaling. A key challenge for humanoid robots lies in manipulation rather than mobility. While many robots can walk or perform stunts, the nuanced dexterity required to handle tools and machinery—such as gripping, rotating, and applying precise force—is still difficult to achieve. This manipulation capability is critical since manual labor accounts for a significant portion of global economic value. Additionally, durability in harsh industrial environments demands advanced mechanical components.
Tags: robotics, humanoid-robots, industrial-automation, manufacturing-technology, labor-shortage-solutions, factory-automation, robotic-manipulation

PickNik expands support for Franka Research 3 robot on MoveIt Pro - The Robot Report
PickNik Robotics has announced expanded support for the Franka Research 3 (FR3) robotic arm in its MoveIt Pro software platform, aiming to accelerate AI robotics development through enhanced simulation, training data collection, and hardware-ready policy deployment. The FR3 is a force-sensitive robot known for its precision and low-level control access, widely used in research institutions. This collaboration, ongoing since 2018, combines Franka’s hardware capabilities with MoveIt Pro’s advanced planning and simulation tools to bridge the gap between AI robotics research and real-world deployment, reducing costs and risks for researchers. The new MoveIt Pro integration for the FR3 includes comprehensive robot models for single- and dual-arm setups, high-fidelity digital twin environments, tutorials for dataset collection and diffusion policy training, and detailed hardware setup guides to facilitate smooth transitions from simulation to physical systems. Additionally, PickNik released MoveIt Pro 7.0 in early 2025, featuring faster planning algorithms and expanded pro-RRT support for robots.
Tags: robot, robotics, AI-robotics, robotic-arm, MoveIt-Pro, Franka-Research-3, robotic-manipulation

Diligent Robotics adds two members to AI advisory board - The Robot Report
Diligent Robotics, known for its Moxi mobile manipulator used in hospitals, has expanded its AI advisory board by adding two prominent experts: Siddhartha Srinivasa, a robotics professor at the University of Washington, and Zhaoyin Jia, a distinguished engineer specializing in robotic perception and autonomy. The advisory board, launched in late 2023, aims to guide the company’s AI development with a focus on responsible practices and advancing embodied AI. The board includes leading academics and industry experts who provide strategic counsel as Diligent scales its Moxi robot deployments across health systems nationwide. Srinivasa brings extensive experience in robotic manipulation and human-robot interaction, having led research and development teams at Amazon Robotics and Cruise, and contributed influential algorithms and systems like HERB and ADA. Jia offers deep expertise in computer vision and large-scale autonomous systems from his leadership roles at Cruise, DiDi, and Waymo, focusing on safe and reliable AI deployment in complex environments.
Tags: robotics, AI, healthcare-robots, autonomous-robots, human-robot-interaction, robotic-manipulation, embodied-AI

Disaster-response robot cuts wooden plank with handheld saw in secs
The Korean Atomic Energy Research Institute (KAERI) has developed ARMstrong Dex, a human-scale, dual-arm hydraulic robot designed specifically for disaster-response scenarios. A recent video demonstrates the robot’s ability to cut through a thick wooden beam (40 x 90 mm) using a handheld saw within seconds, highlighting its precision, continuous control, and dexterity without relying on powered tools. This capability is crucial for operating in disaster zones where power outages and obstructive debris are common, and where robots must perform tasks like cutting, drilling, and lifting with high accuracy to avoid further harm or structural instability. ARMstrong Dex is engineered to handle extreme conditions such as unstable terrain, toxic environments, and limited visibility. It features caterpillar tracks for mobility, can lift up to 441 pounds (200 kg) across both arms, and has demonstrated strength through tests like lifting 88 pounds (40 kg) with one arm and performing weighted pull-ups. Beyond raw power, the robot also exhibits fine motor skills.
Tags: robot, disaster-response-robot, hydraulic-robot, humanoid-robot, robotic-dexterity, industrial-robot, robotic-manipulation

August 2025 issue: Motion control enables robots from the ISS to the AGT stage - The Robot Report
The August 2025 issue of The Robot Report highlights the critical role of motion control technologies in advancing robotics applications both in space and on Earth. A key feature explores PickNik Inc.’s collaboration with the Japan Aerospace Exploration Agency (JAXA) to develop a multi-arm robotic system designed for complex manipulation tasks in microgravity. This innovation aims to enhance cargo handling capabilities aboard the International Space Station (ISS) and support future crewed and uncrewed space missions. PickNik’s MoveIt Pro software, integral to this project, also finds applications in terrestrial governmental and commercial robotics. Additionally, the issue covers Boston Dynamics’ efforts to showcase its Spot quadruped robot on NBC’s America’s Got Talent (AGT). The performance combined teleoperated and autonomous control with precise choreography, demonstrating both the technical prowess of the engineering team and the expanding commercial and industrial potential of robotics. The company also turned an on-air malfunction into a memorable moment, highlighting the human side of robotic innovation.
Tags: robot, motion-control, robotics, space-robotics, Boston-Dynamics, autonomous-robots, robotic-manipulation

New robot grip twists, turns, and rolls objects in tight spaces
Yale University researchers have developed a novel robotic hand, called the Sphinx, that significantly enhances a robot’s ability to grasp and rotate objects in tight, complex spaces. Unlike traditional robotic wrists that rely on three degrees of freedom (roll, pitch, yaw) but are mechanically complex and positioned away from the object, the Sphinx integrates these motions into a single spherical mechanism. This design allows the robot to perform precise maneuvers—such as twisting open jars, turning door handles, or screwing in light bulbs—more efficiently and closer to the object without moving the entire arm. Notably, the mechanism operates without sensors or cameras, relying purely on its mechanical design to achieve smooth, multi-axis rotations. This innovation addresses a major limitation in robotics by enabling machines to work effectively in cluttered or unpredictable environments, bridging the gap between industrial robots and adaptable robots suitable for homes, hospitals, and disaster zones. The Sphinx’s ability to handle delicate and complex tasks in confined spaces represents a significant step forward.
Tags: robotics, robotic-hand, robot-grip, Yale-University, robotic-manipulation, automation, robotic-innovation

Learn at RoboBusiness how Sim2Real is training robots for the real world - The Robot Report
The article highlights the upcoming RoboBusiness 2025 event in Silicon Valley, which will focus on advances in physical AI—combining simulation, reinforcement learning, and real-world data—to enhance robot deployment and reliability in dynamic environments such as e-commerce and logistics. A key feature will be a session showcasing Ambi Robotics’ AmbiStack logistics robot, which uses the PRIME-1 foundation model trained extensively in simulation to master complex tasks like 3D item stacking, akin to playing Tetris. This simulation-driven training, coupled with physical feedback, enables the robot to make real-time decisions and handle diverse packages efficiently. The session will be co-hosted by noted experts Prof. Ken Goldberg of UC Berkeley and Jeff Mahler, CTO and co-founder of Ambi Robotics. They will discuss scalable AI training approaches that improve robotic manipulation capabilities. RoboBusiness 2025 will also introduce the Physical AI Forum track, covering topics such as multi-model decision agents, AI-enhanced robot performance, and smarter data curation.
Tags: robotics, artificial-intelligence, simulation-training, warehouse-automation, physical-AI, robotic-manipulation, logistics-robots

Robots can sense when something might slip from grip with new method
Engineers at the University of Surrey have developed a novel, bio-inspired method enabling robots to sense and prevent objects from slipping during manipulation by predicting slip events and adjusting their movements in real time. Unlike traditional robotic grip strategies that rely solely on increasing grip force—which can damage delicate items—the new approach mimics human behavior by modulating the robot’s trajectory, such as slowing down or repositioning, to maintain a secure hold without excessive squeezing. This method, demonstrated through a predictive control system powered by a learned tactile forward model, allows robots to anticipate slip risks continuously and adapt accordingly. The research, published in Nature Machine Intelligence, shows that trajectory modulation significantly outperforms conventional grip-force-based slip control in certain scenarios and generalizes well to objects and movement paths not included in training. This advancement holds promise for enhancing robotic dexterity and reliability across various applications, including healthcare (handling surgical tools), manufacturing (assembling delicate parts), logistics (sorting awkward packages), and home assistance.
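The trajectory-modulation idea — slowing down when a learned forward model predicts slip, rather than squeezing harder — can be sketched with a toy risk model. The linear risk function and its gains are invented for illustration and are not the Surrey team's learned tactile model:

```python
def slip_risk(speed, grip_force, weight):
    """Toy forward model: faster motion and heavier objects raise slip risk,
    more grip force lowers it. Clamped to [0, 1]."""
    risk = 0.4 * speed + 0.3 * weight - 0.5 * grip_force
    return max(0.0, min(1.0, risk))

def modulated_speed(target_speed, grip_force, weight, max_risk=0.3):
    """Trajectory modulation: reduce speed until predicted slip risk
    is acceptable, instead of increasing grip force."""
    speed = target_speed
    while speed > 0.05 and slip_risk(speed, grip_force, weight) > max_risk:
        speed *= 0.9
    return speed
```

The key property is that grip force stays fixed: the controller trades motion speed for security, which is what makes the approach gentle on fragile objects.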
Tags: robotics, robotic-manipulation, slip-prevention, automation, tactile-sensing, predictive-control, bio-inspired-robotics

JAXA tests PickNik's MoveIt Pro software in multi-armed robotic system for the ISS - The Robot Report
PickNik Robotics partnered with the Japan Aerospace Exploration Agency (JAXA) to test MoveIt Pro software as the planning and control backbone for JAXA’s multi-armed robotic system under the Payload Organization and Transportation Robotic System (PORTRS) initiative. The goal was to demonstrate a complex robotic system capable of performing manipulation tasks in microgravity aboard the International Space Station (ISS), such as crawling, payload swapping, and handling flexible cargo transfer bags. These tasks, often routine maintenance or cargo handling, are time-consuming for astronauts whose time is extremely costly—up to $200,000 per hour—highlighting the significant return on investment in robotic assistance to augment astronauts and free them for higher-value activities. JAXA’s robot features four arms and a reconfigurable base that can stabilize itself by grabbing onto surfaces like ISS rails, enabling it to crawl like a spider in zero gravity. Unlike terrestrial robots, which account for gravity in their control systems, the zero-gravity environment required PickNik to adapt its approach.
Tags: robotics, space-robotics, JAXA, MoveIt-Pro, multi-armed-robot, ISS-automation, robotic-manipulation

China’s ‘slim-waisted’ humanoid robot debuts with human-like skills
China’s Robotera has unveiled the Q5 humanoid robot, a slim-waisted, 1650 mm tall machine weighing 70 kg, designed for practical deployment in sectors like healthcare, retail, tourism, and education. Featuring 44 degrees of freedom (DoF), including the highly dexterous 11-DoF XHAND Lite robotic hand, Q5 excels in precise manipulation and smooth navigation within complex indoor environments. Its compact size and fused LiDAR with stereo vision enable autonomous movement with minimal human oversight. The robot supports full-body teleoperation via VR and sensor gloves and interacts through AI-powered natural dialogue, facilitating responsive, context-aware communication. Powered by the EraAI platform, Q5 integrates a complete AI lifecycle from teleoperation data collection to model training and closed-loop learning, offering over four hours of runtime on a 60V supply. Its 7-DoF robotic arms have a reach extending beyond two meters, allowing it to handle objects at various heights safely and compliantly.
Tags: robot, humanoid-robot, AI-robotics, autonomous-navigation, robotic-manipulation, teleoperation, service-robots

Black-I Robotics wins autonomous mobile robot picking challenge
Black-I Robotics won the Chewy Autonomous Mobile Picking (CHAMP) Challenge, a competition organized by Chewy and MassRobotics to develop fully autonomous robots capable of handling large, heavy, and non-rigid items in complex warehouse environments. The challenge addressed significant difficulties in warehouse automation, such as manipulating irregularly shaped, deformable items weighing over 40 pounds, which are difficult to grasp using conventional methods. Black-I Robotics’ winning system combined a mobile base with a 6-DOF industrial arm and custom multi-modal end effectors, integrating AI-driven perception, precise object detection, and pose estimation to enable reliable grasping and navigation in tight aisles alongside live warehouse operations. Their solution demonstrated full autonomy, adaptability, and seamless integration into fulfillment workflows, earning them the $30,000 first-place prize. The CHAMP Challenge emphasized not only manipulation but also system-level integration, requiring robots to navigate narrow aisles, avoid dynamic obstacles, and place items into shipping containers with mixed contents.
Tags: robotics, autonomous-robots, warehouse-automation, AI-perception, robotic-manipulation, industrial-robots, mobile-robots

Elephant trunk drone arm bends, grabs, and works in tight spaces
Researchers at the University of Hong Kong have developed the Aerial Elephant Trunk (AET), a flexible, shape-shifting robotic arm inspired by an elephant’s trunk, designed to enhance drone capabilities in complex manipulation tasks. Unlike traditional rigid drone arms with grippers, the AET uses a soft, continuum structure that can bend, twist, and wrap around objects of various sizes and shapes, enabling drones to operate effectively in tight spaces and awkward angles. This innovation addresses key limitations of existing aerial robots, such as weight constraints and limited range of motion, allowing drones to perform tasks that require both reach and finesse. The AET’s dexterity and adaptability make it particularly valuable for applications in disaster response, infrastructure maintenance, and inspections in hard-to-reach environments. It can navigate narrow pipelines, maneuver around obstacles, and handle delicate operations like clearing debris from collapsed buildings or repairing high-voltage lines and bridges. By expanding the functional roles of drones beyond observation to hands-on interaction, the AET represents a significant advance.
Tags: robotics, drones, aerial-robotics, flexible-robotic-arms, robotic-manipulation, disaster-response-technology, infrastructure-inspection

Meta V-JEPA 2 world model uses raw video to train robots
Meta has introduced V-JEPA 2, a 1.2-billion-parameter world model designed to enhance robotic understanding, prediction, and planning by training primarily on raw video data. Built on the Joint Embedding Predictive Architecture (JEPA), V-JEPA 2 undergoes a two-stage training process: first, self-supervised learning from over one million hours of video and a million images to capture physical interaction patterns; second, action-conditioned learning using about 62 hours of robot control data to incorporate agent actions for outcome prediction. This approach enables the model to support planning and closed-loop control in robots without requiring extensive domain-specific training or human annotations. In practical tests within Meta’s labs, V-JEPA 2 demonstrated strong performance on common robotic tasks such as pick-and-place, achieving success rates between 65% and 80% in previously unseen environments. The model uses vision-based goal representations, generating candidate actions for simpler tasks and employing sequences of visual subgoals for more complex tasks.
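Vision-based goal planning of this kind can be sketched as: encode the current and goal observations into embeddings, predict the embedding each candidate action would reach, and choose the action whose prediction lands closest to the goal embedding. Everything below is a toy stand-in (scalar "states", a two-number encoder), not Meta's V-JEPA 2 API:

```python
def encode(state):
    """Stand-in encoder: map an observation to a small embedding."""
    return [float(state), float(state) ** 2]

def predict(embedding, action):
    """Stand-in action-conditioned predictor: embedding after the action."""
    nxt = embedding[0] + action
    return [nxt, nxt ** 2]

def dist(a, b):
    """Squared Euclidean distance between two embeddings."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_action(state, goal_state, candidates):
    """Pick the candidate action whose predicted embedding
    is closest to the goal embedding."""
    goal = encode(goal_state)
    return min(candidates, key=lambda a: dist(predict(encode(state), a), goal))

chosen = best_action(0.0, 2.0, [-1.0, 0.5, 1.0, 2.0])
```

For longer tasks, the same loop is applied per subgoal: each visual subgoal becomes the temporary `goal_state`, and planning chains the subgoals together.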
Tags: robotics, AI, world-models, machine-learning, vision-based-control, robotic-manipulation, self-supervised-learning