Articles tagged with "AI-training"
Forget Sensors, Tesla's AI Training Costs Are Soaring - CleanTechnica
The article from CleanTechnica highlights the rapidly increasing costs Tesla is incurring for AI training infrastructure as it pushes toward full self-driving and robotaxi deployment. Tesla has long argued against using expensive sensors like lidar and radar, favoring cameras combined with AI software to reduce costs. However, recent financial disclosures reveal that Tesla’s operating expenses surged 39% in Q4 2024, largely driven by AI and R&D projects. Tesla is significantly expanding its AI training compute capacity, particularly at Gigafactory Texas, where it plans to more than double onsite compute power in the first half of 2026. This expansion is costly and is being paced carefully to avoid overbuilding capacity prematurely. Despite years of promises, Tesla has yet to deploy fully driverless robotaxis without human supervisors, and currently operates limited supervised trials in cities such as Austin and San Francisco. The rollout is progressing city by city, contrary to earlier claims that Tesla could enable robotaxi service fleet-wide simultaneously. The company faces a critical juncture.
Tags: robot, AI, Tesla, self-driving-cars, robotaxi, AI-training, autonomous-vehicles

Competitors Pull Ahead on Tesla’s Technology Tangents at CES - CleanTechnica
At CES, many competitors are advancing technologies that Tesla once pursued but has yet to bring to production, particularly in humanoid robotics. AGIBOT leads global sales in humanoid robots, offering various models including bipedal robots and more stable rolling-base units for industrial use. Numerous companies showcased robots performing diverse tasks, from dancing to industrial applications, highlighting rapid progress in the field. In contrast, Tesla has not yet started production of its humanoid robots, indicating that competitors are pulling ahead on this technology tangent. Other technology tangents at CES that relate more closely to clean technology include advanced driver-assistance systems (ADAS) and self-driving vehicle solutions. Chinese automaker Geely demonstrated intelligent driving technologies, and multiple vendors presented components essential for autonomous vehicles, such as sensors, processors, and AI training software. Additionally, home energy storage systems and scalable energy solutions were prominent, with companies like Jackery offering commercially viable solar roofs, an area Tesla had previously promoted but where it now faces strong competition.
Tags: robot, humanoid-robots, AI-training, autonomous-vehicles, EV-technology, clean-energy, robotics-industry

Video: Humanoid robot kicks teleoperator's groin in demo-gone-wrong
During a public demonstration of Unitree’s G1 humanoid robot, a teleoperator wearing a motion capture suit attempted a martial arts-style kick and was inadvertently struck by the robot. Because the robot mirrors the operator’s movements exactly and both faced the same direction, the robot lifted its leg in sync and its foot hit the operator’s groin. The operator collapsed in pain while the robot mimicked his posture, creating a viral moment that highlighted the risks of human-robot interaction when movements are mirrored without spatial adjustment. Unitree recently introduced the G1-D, a wheeled humanoid robot designed for data collection, AI training, and practical tasks in industrial and service environments. The G1 robot itself has been showcased performing advanced martial arts maneuvers, including kicks, spins, and flips, demonstrating impressive agility and balance. However, some viewers have questioned the practical applications of these demonstrations, as Unitree markets the G1 primarily as a research and education platform rather than a consumer home assistant.
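The failure mode here, copying the operator’s pose into the robot’s frame without accounting for their relative heading, is easy to see in a minimal retargeting sketch. The function, frame convention, and numbers below are illustrative assumptions, not Unitree’s software:

```python
import numpy as np

def retarget_pose(operator_target: np.ndarray, relative_yaw: float) -> np.ndarray:
    """Map an operator-frame limb target into the robot's frame.

    relative_yaw is the heading offset between operator and robot. With
    yaw = 0 (both facing the same way) the command passes through unchanged,
    which is why a kick aimed 'forward' by an operator standing in the
    robot's kicking line comes straight back at him.
    """
    c, s = np.cos(relative_yaw), np.sin(relative_yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return rot @ operator_target

kick_target = np.array([0.6, 0.0, 0.9])  # forward, centered, hip height (meters)

# Naive same-direction mirroring reproduces the kick verbatim:
print(retarget_pose(kick_target, 0.0))

# A spatially aware mapping would rotate by the actual heading offset,
# e.g. pi radians if the operator stood facing the robot:
print(retarget_pose(kick_target, np.pi))
```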
Tags: robotics, humanoid-robot, Unitree-G1, motion-capture, human-robot-interaction, AI-training, robot-agility

All the biggest news from AWS’ big tech show re:Invent 2025
At AWS re:Invent 2025, Amazon Web Services emphasized AI advancements focused on enterprise customization and autonomous AI agents. CEO Matt Garman highlighted a shift from AI assistants to AI agents capable of independently performing tasks and automating workflows, unlocking significant business value. Key announcements included expanded capabilities for AWS’s AgentCore platform, such as policy-setting features to control AI agent behavior, enhanced memory and logging functions, and 13 pre-built evaluation systems to help customers assess agent performance. AWS also introduced three new “Frontier agents” designed for coding, security reviews, and DevOps tasks, with preview versions already available. AWS unveiled its new AI training chip, Trainium3, promising up to 4x performance improvements and 40% lower energy use for AI training and inference. The company teased Trainium4, which will be compatible with Nvidia chips, signaling deeper integration with Nvidia technology. Additionally, AWS expanded its Nova AI model family with new text and multimodal models, alongside Nova Forge.
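As a concrete illustration of what policy-setting to control agent behavior can mean, here is a minimal, hypothetical policy gate around proposed agent actions; the types, fields, and checks are invented for this sketch and are not the actual AgentCore API:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    allowed_tools: set[str]   # tools the agent may invoke
    max_cost_usd: float       # per-action spending ceiling

def check_action(policy: Policy, tool: str, est_cost_usd: float) -> bool:
    """Permit an agent action only if it satisfies the policy."""
    return tool in policy.allowed_tools and est_cost_usd <= policy.max_cost_usd

policy = Policy(allowed_tools={"read_docs", "open_ticket"}, max_cost_usd=1.0)
print(check_action(policy, "open_ticket", 0.05))  # True: permitted tool, low cost
print(check_action(policy, "delete_db", 0.01))    # False: tool not on the allowlist
```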
Tags: energy, AI-chips, cloud-computing, AI-agents, Nvidia-compatibility, AI-training, AWS-re:Invent

US firm teaching humanoid robot brains to do laundry, make coffee, light candles
Physical Intelligence (PI), a Silicon Valley robotics startup, is advancing the development of humanoid robots capable of learning and reliably performing complex physical tasks such as folding laundry, making coffee, and lighting candles. The company recently raised $400 million from investors including OpenAI and Jeff Bezos, valuing it above $2 billion. PI’s innovation centers on a new training method called Recap (Reinforcement Learning with Experience and Corrections via Advantage-conditioned Policies), which enables robots to learn more like humans, through instruction, correction, and autonomous practice, addressing a key challenge in robotics: small errors during task execution often compound and cause failure. Recap enhances robot learning by incorporating corrective human interventions when errors occur and by allowing the robot to evaluate its own actions using reinforcement learning. This approach uses a value function to assign credit or blame to specific moves, enabling the system to learn from imperfect experiences rather than discarding them. PI’s vision-language-action model, π*0.6, was trained with Recap.
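A rough sketch of that credit-assignment idea, a value function scoring each step so that helpful and harmful moves in an imperfect episode can be told apart, might look like the following; the rewards, value estimates, and discount are invented for illustration, and this is not PI’s implementation:

```python
import numpy as np

def td_advantages(rewards, values, gamma=0.99):
    """Advantage of each step: r_t + gamma * V(s_{t+1}) - V(s_t).

    Positive entries mark moves that went better than the value function
    expected; negative entries mark moves that hurt.
    """
    adv = []
    for t, r in enumerate(rewards):
        v_next = values[t + 1] if t + 1 < len(values) else 0.0
        adv.append(r + gamma * v_next - values[t])
    return np.array(adv)

rewards = [0.0, 0.0, -1.0, 0.0, 1.0]   # a slip at step 2, recovery at step 4
values  = [0.2, 0.3, 0.4, 0.1, 0.5]    # toy value estimates per state
print(td_advantages(rewards, values))

# An advantage-conditioned policy is then trained to imitate the moves
# labeled with high advantage, so imperfect episodes still teach.
```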
Tags: robotics, humanoid-robots, AI-training, reinforcement-learning, robotic-manipulation, physical-intelligence, automation

1HMX introduces Nexus NX1 for full-body motion capture, teleoperation - The Robot Report
1HMX has introduced the Nexus NX1, a comprehensive full-body motion capture and teleoperation system designed to enhance training and simulation for humanoid robotics, embodied AI, and virtual reality (VR). The system integrates advanced technologies including HaptX Gloves G1 for tactile and force feedback, Virtuix Omni One’s 360-degree movement platform, and Freeaim’s motorized robotic shoes. It offers 72 degrees of freedom (DoF) of body and hand tracking with sub-millimeter precision, capturing detailed data such as skeletal and soft tissue models, tactile displacement, pressure points, center of mass, and locomotion metrics. An included software development kit (SDK) facilitates integration with VR and robotics applications, enabling realistic real-time sensory input and valuable output data for robotic control, AI training, and user performance feedback. 1HMX envisions Nexus NX1 as a transformative tool across industries including manufacturing, medical, defense, and research, supporting both single and multi-user immersive experiences with full-body tracking.
Tags: robotics, teleoperation, motion-capture, humanoid-robots, AI-training, virtual-reality, human-machine-interface

Mbodi will show how it can train a robot using AI agents at TechCrunch Disrupt 2025
Mbodi, a New York-based startup founded by former Google engineers Xavier Chi and Sebastian Peralta, has developed a cloud-to-edge hybrid computing system designed to accelerate robot training using multiple AI agents. Their software integrates with existing robotic technology stacks and allows users to train robots via natural language prompts. The system breaks down complex tasks into smaller subtasks, enabling AI agents to collaborate and gather the information needed to teach robots new skills more efficiently. Mbodi’s approach addresses the challenge of adapting robots to the near-infinite variability of real-world physical environments, where traditional robot programming is often too rigid and time-consuming. Since launching in 2024 with a focus on picking and packaging tasks, Mbodi has gained recognition by winning an ABB Robotics AI startup competition and securing a partnership with a Swiss robotics organization valued at $5.4 billion. The company is currently working on a proof of concept with a Fortune 100 consumer packaged goods (CPG) company, aiming to automate packing tasks that frequently change and are difficult to automate with conventional programming.
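A toy sketch of that decomposition pattern, a planner splitting a natural-language request into subtasks handed to specialized agents, is shown below; the agents, task split, and strings are hypothetical and do not represent Mbodi’s software:

```python
def decompose(task: str) -> list[str]:
    """Toy planner: map a packing request to an ordered list of subtasks."""
    return [
        f"locate items for: {task}",
        "plan grasp for each item",
        "execute pick",
        "place into carton and verify",
    ]

# Each subtask is routed to a stand-in "agent" keyed by its first word.
AGENTS = {
    "locate":  lambda s: f"[vision agent] {s}",
    "plan":    lambda s: f"[grasp agent] {s}",
    "execute": lambda s: f"[control agent] {s}",
    "place":   lambda s: f"[control agent] {s}",
}

for step in decompose("pack 3 bottles into a shipping box"):
    print(AGENTS[step.split()[0]](step))
```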
Tags: robotics, artificial-intelligence, AI-training, cloud-computing, edge-computing, automation, robotic-software

China’s wearable suit trains humanoid robots with high accuracy
Researchers at China’s National University of Defense Technology, in collaboration with Midea Group, have developed HumanoidExo, a wearable suit system designed to train humanoid robots with high accuracy by capturing real-time human motion. Unlike traditional training methods that rely on videos and simulations, which often cause robots to lose balance, HumanoidExo uses motion sensors and a LiDAR scanner to track seven arm joints and body movements, providing robots with precise, real-world data. The system’s AI component, HumanoidExo-VLA, pairs a Vision-Language-Action model that interprets human tasks with a reinforcement learning controller that maintains robot balance during learning. Testing on the Unitree G1 humanoid robot demonstrated significant improvements: after training with data from five teleoperated and 195 exoskeleton-recorded sessions, the robot’s success rate on a pick-and-place task rose from 5% to nearly 80%, approaching the performance of 200 human demonstrations. The robot also learned to walk effectively.
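That split, a high-level vision-language-action model proposing task motions while a low-level reinforcement-learned controller keeps the robot upright, can be sketched as a two-level control loop; both policies below are stand-in stubs, and the gain and sensor values are invented:

```python
import numpy as np

def vla_policy(instruction: str, step: int) -> np.ndarray:
    """Stub high-level policy: desired targets for seven arm joints."""
    return np.full(7, 0.1 * step)

def balance_controller(base_tilt_rad: float) -> float:
    """Stub low-level controller: corrective hip torque from body tilt."""
    kp = 5.0                        # illustrative proportional gain
    return -kp * base_tilt_rad

tilt = 0.02                         # toy tilt reading, radians
for step in range(3):
    arm_targets = vla_policy("pick up the cup", step)   # task layer
    hip_torque = balance_controller(tilt)               # stability layer
    print(f"step {step}: arm targets {arm_targets[:3]}..., hip torque {hip_torque:.2f}")
```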
Tags: robot, humanoid-robots, wearable-suit, motion-capture, AI-training, reinforcement-learning, exoskeleton

Anker offered to pay Eufy camera owners to share videos for training its AI
Anker, the maker of Eufy security cameras, launched a campaign earlier this year offering users $2 per video of package or car thefts to train its AI systems. The company encouraged users to submit both real and staged videos, even suggesting users simulate thefts to help improve detection algorithms. The initiative aimed to collect 20,000 videos each of package thefts and car door pulls, with payments made via PayPal. While the campaign reportedly attracted participation from over 120 users, Eufy did not disclose how many videos were collected, the total payments made, or whether the videos were deleted after training. Following this, Eufy continued similar programs, including an in-app Video Donation Program that rewards users with badges, gifts, or rankings for submitting videos involving humans, which the company states are used solely for AI training and not shared with third parties. Despite these efforts to monetize user data for AI development, concerns about privacy and security persist. For example, in 2023 Anker admitted that Eufy camera streams were not end-to-end encrypted as advertised and pledged security fixes.
Tags: IoT, security-cameras, AI-training, video-data-collection, user-incentives, privacy-concerns, smart-home-devices

China to build robot 'boot camps' as gyms to power next-gen humanoids
China plans to establish a network of robot "boot camps", large-scale training facilities acting as gyms or obstacle courses for humanoid robots, in major cities including Beijing and Shanghai. The largest facility, located in Beijing’s Shijingshan district, will span over 108,000 square feet and generate more than 6 million data points annually. These camps will simulate real-world environments such as factories, retail shops, elderly care centers, and smart homes, enabling robots to practice tasks and gather standardized, high-quality training data. This initiative aims to address the current bottleneck in China’s robotics industry caused by inconsistent and costly data collection methods, facilitating improved AI development and data sharing among robotics companies. The effort is part of China’s broader strategic push to lead in embodied intelligence, AI integrated into physical robots, and to compete with the United States, which currently deploys far fewer industrial robots annually (about one-tenth of China’s 300,000). The boot camps will form a national network linked across major cities.
Tags: robotics, humanoid-robots, AI-training, robotics-industry, China-technology, robot-boot-camps, embodied-intelligence

Dog crate-sized robot factory trains itself by watching human demos
MicroFactory, a San Francisco-based startup founded in 2024, has developed a compact robotic system roughly the size of a dog crate that can perform a wide range of manual tasks typically done by human hands. The system features two robotic arms capable of precise operations such as circuit board assembly, soldering, cable routing, and even delicate actions like threading a needle. It is designed to automate repetitive manual labor and can assemble real products efficiently; the company claims it is more effective than humanoid robots because its simpler, non-humanoid design is optimized for both hardware and AI. The robotic system can be trained through AI or by human demonstration, using an external robotic arm to physically guide the in-box arms through tasks. This teaching method enables the robot to replicate complex motions accurately and learn new tasks quickly. MicroFactory has also developed a user interface that breaks down tasks into smaller steps to facilitate training and operation. After building its prototype within five months, the company has received hundreds of preorders from customers.
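Teaching by physically guiding the arms and learning from the recorded motion is essentially behavior cloning; a minimal sketch with synthetic demonstration logs and a least-squares policy fit (an illustrative stand-in, not MicroFactory’s method) follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend we logged 50 guided steps: joint state (4 angles) -> action (4 deltas)
states = rng.normal(size=(50, 4))
W_true = 0.3 * rng.normal(size=(4, 4))          # unknown "demonstrator" mapping
actions = states @ W_true + 0.01 * rng.normal(size=(50, 4))  # noisy recordings

# Behavior cloning reduced to least squares: find W with actions ≈ states @ W
W_fit, *_ = np.linalg.lstsq(states, actions, rcond=None)

new_state = rng.normal(size=(1, 4))
print("predicted action:", (new_state @ W_fit).ravel())
```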
Tags: robotics, robotic-arms, AI-training, automation, electronics-assembly, MicroFactory, general-purpose-robots

Symage to spotlight future of vision model training at RoboBusiness
Symage, a company specializing in physics-based, high-fidelity synthetic image data for AI and computer vision training, will showcase its technology at RoboBusiness 2025, held October 15-16 at the Santa Clara Convention Center. Unlike generative AI approaches, Symage’s platform produces photorealistic synthetic datasets without visual artifacts or model degradation, resulting in faster training, improved accuracy, better edge-case coverage, and reduced bias. CEO Brian Geisel emphasizes that this approach enables robotics teams to develop and test vision models more efficiently and reliably, supporting smarter and safer robotics systems. At RoboBusiness, which attracts over 2,000 robotics professionals and features 100+ exhibitors and numerous educational sessions, Geisel will present on how synthetic data accelerates vision model development, particularly in warehouse automation, agriculture technology, and mobile robotics. Symage’s offerings highlight the potential of physics-accurate synthetic data to train models before hardware is available, addressing critical edge cases and improving data quality.
Tags: robotics, AI-training, synthetic-data, computer-vision, robotics-development, automation, robotics-innovation

X Square Robot debuts foundation model for robotic butler after $100M Series A - The Robot Report
X Square Robot, a Shenzhen-based startup founded in 2023, has raised $100 million in Series A+ funding and introduced Wall-OSS, an open-source foundational AI model designed for robotic platforms, alongside its Quanta X2 humanoid robot. The company aims to advance household humanoid robotics by addressing key limitations in current robotic AI, such as over-reliance on task-specific training and excessive focus on bipedal locomotion. Instead, X Square Robot emphasizes generalized training in manipulation with robotic hands and reasoning across diverse robot forms, so robots can perform unpredictable real-world tasks, like serving food, that traditional warehouse-focused training does not prepare them for. Wall-OSS is built on what X Square Robot claims is the world’s largest embodied intelligence dataset and is designed to overcome challenges like catastrophic forgetting (loss of previously learned knowledge when training on new data) and modal decoupling (misalignment of vision, language, and action). The multimodal model is trained on vision-language-action data.
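Catastrophic forgetting, as defined above, is commonly mitigated by rehearsing old data while fine-tuning on new data; the generic sketch below illustrates that textbook technique only and makes no claim about how Wall-OSS actually handles it:

```python
import random

old_data = [("fold towel", "skill_A")] * 100   # previously learned task samples
new_data = [("serve food", "skill_B")] * 100   # new fine-tuning task samples

def make_batches(new, old, rehearsal_ratio=0.3, batch_size=10, n_batches=5):
    """Yield batches that blend new samples with replayed old ones,
    so earlier skills keep appearing in the training signal."""
    n_old = int(batch_size * rehearsal_ratio)
    for _ in range(n_batches):
        batch = random.sample(new, batch_size - n_old) + random.sample(old, n_old)
        random.shuffle(batch)
        yield batch

for batch in make_batches(new_data, old_data):
    n_replayed = sum(1 for _, skill in batch if skill == "skill_A")
    print(f"{n_replayed} of {len(batch)} samples replayed from the old task")
```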
Tags: robotics, humanoid-robots, embodied-AI, foundation-model, robotic-butler, AI-training, open-source-robotics

RealMan launches robotics data training center in Beijing - The Robot Report
RealMan Robotics, a Beijing-based developer of robotic arms and mobile manipulators, has launched a new robotics data training center in Beijing. The 3,000-square-meter facility integrates core technology R&D, scenario-based application testing, operator training, and ecosystem collaboration. It features 108 diverse robots, including dual-arm mobile manipulators, wheeled semi-humanoids, drone-arms, and quadrupeds, deployed across ten real-world environments such as eldercare, rehabilitation, automotive assembly, and smart catering. These scenarios enable large-scale multimodal data generation, producing over one million high-quality data points annually to train advanced AI models via the newly unveiled RealBOT Embodied Intelligence Open Platform. The center aims to address three key challenges in robotics: lack of cross-scenario data generalization, gaps between simulation and real-world conditions, and the absence of standardized data formats and efficient closed-loop iteration. By creating a full-stack data pipeline from collection to deployment, RealMan seeks to accelerate the commercialization of embodied intelligence.
Tags: robotics, robotic-arms, AI-training, humanoid-robots, data-acquisition, mobile-manipulators, robotics-R&D

Elon Musk confirms shutdown of Tesla Dojo, ‘an evolutionary dead end’
Elon Musk has confirmed the shutdown of Tesla’s Dojo supercomputer project, describing it as “an evolutionary dead end” after the company decided to consolidate its AI chip development efforts. Tesla built the first Dojo supercomputer using a combination of Nvidia GPUs and in-house D1 chips, with plans for a second-generation Dojo 2 powered by a D2 chip. However, Tesla has shelved the D2 chip and the broader Dojo 2 project to focus resources on its AI5 and AI6 chips. The AI5 chip is designed primarily for Tesla’s Full Self-Driving (FSD) system, while the AI6 chip aims to support both onboard inference for autonomous driving and humanoid robots, as well as large-scale AI training. Musk explained that it makes more sense to integrate many AI5/AI6 chips on a single board to reduce network complexity and costs, a configuration he referred to as “Dojo 3.” This strategic pivot reflects Tesla’s consolidation of its AI chip development around a single family of chips.
Tags: robot, AI-chips, Tesla-Dojo, autonomous-vehicles, self-driving-technology, AI-training, humanoid-robots

DiffuseDrive addresses data scarcity for robot and AI training - The Robot Report
DiffuseDrive Inc., founded in 2023 by engineer Balint Pasztor and physicist Roland Pinter, addresses the critical challenge of data scarcity in training robots and AI systems by generating photorealistic synthetic data. Traditional real-world data collection is costly and slow, while simulation-based data often lacks realism, leading to a simulation-to-reality gap. DiffuseDrive’s generative AI platform analyzes existing datasets, identifies missing elements, and uses proprietary diffusion models to create highly realistic synthetic data tailored to specific operational design domains (ODDs). This approach enables the creation of relevant datasets in days rather than months or years, improving AI training outcomes by up to 40% in some cases. Unlike generic synthetic data generators, DiffuseDrive integrates a quality assurance layer that contextualizes data generation based on business logic and domain-specific requirements provided by customers, who remain in control of their data and expertise. The platform employs advanced statistical analysis, semantic segmentation, and 2D/3D labeling to verify that the generated data meets those requirements.
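The gap-analysis loop described, inventorying an existing dataset, finding under-represented conditions, and generating targeted synthetic samples for them, can be sketched as follows; the condition labels, coverage target, and stubbed generator are illustrative assumptions, not DiffuseDrive’s pipeline:

```python
from collections import Counter

dataset_labels = ["clear/day"] * 900 + ["rain/night"] * 12 + ["fog/day"] * 3

def find_gaps(labels, min_count=100):
    """Return each condition that falls short of the coverage target,
    along with how many samples it is missing."""
    counts = Counter(labels)
    return {cond: min_count - n for cond, n in counts.items() if n < min_count}

def generate_synthetic(condition: str, n: int) -> list[str]:
    """Stub standing in for a diffusion-based generator."""
    return [f"synthetic:{condition}:{i}" for i in range(n)]

for condition, deficit in find_gaps(dataset_labels).items():
    samples = generate_synthetic(condition, deficit)
    print(f"{condition}: added {len(samples)} synthetic samples")
```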
Tags: robot, artificial-intelligence, synthetic-data, autonomous-driving, data-scarcity, AI-training, simulation-to-reality-gap

99.9% reliable robot vision studio completes week-long task in hours
Apera, a Canadian company, has developed Apera Forge, a web-based, AI-powered 4D vision design studio that significantly accelerates the development of vision-guided robotic (VGR) automation projects. The browser-based platform requires no hardware and enables industrial manufacturers to simulate robotic applications, including parts, grippers, robots, and cell environments, in minutes rather than days. By training AI neural networks through extensive digital cycles, Forge achieves over 99.9% reliability in object recognition and task performance, delivering deployable vision programs within 24 to 48 hours. This drastically reduces the time and risk traditionally involved in creating robotic cells for bin picking, material handling, and de-racking. The latest upgrades to Forge enhance its flexibility and simulation capabilities, supporting advanced robotic cell design with customizable camera placement, bin positioning, and obstacle integration to better replicate real-world conditions. Notably, Forge now supports end-of-arm-tooling (EOAT) mounted camera configurations (Eye-in-Hand), allowing users to simulate applications in which the camera moves with the robot arm.
Tags: robot, AI, vision-guided-robotics, automation, industrial-manufacturing, simulation, AI-training

Apera AI updates Apera Forge design and AI training studio - The Robot Report
Apera AI Inc. has released an updated version of Apera Forge, its web-based, no-code design and AI training studio aimed at simplifying 4D vision-guided robotic projects. The latest update enhances advanced robotic cell design capabilities, supports end-of-arm-tooling (EOAT)-mounted camera configurations, and introduces full simulation and AI training for de-racking applications. These improvements enable users to simulate and validate complex robotic environments, including robot, gripper, camera, part geometry, and cell layout, within minutes, significantly reducing development time from weeks or months to hours. Trained AI models developed in Forge reportedly achieve over 99.9% reliability in object recognition and task execution, with complete vision programs ready for deployment within 24 to 48 hours. Key new features include greater flexibility in cell design, allowing arbitrary positioning of cameras and bins, integration of reference CAD files for accurate visualization, and an Obstacle Autopilot for improved robot navigation and collision avoidance.
Tags: robotics, AI-training, vision-guided-robots, robotic-simulation, industrial-automation, end-of-arm-tooling, robot-navigation

Driverless cars can now make better decisions, new technique validated
Researchers at North Carolina State University have validated a new technique to improve moral decision-making in driverless cars by applying the Agent-Deed-Consequences (ADC) model. This model assesses moral judgments based on three factors: the agent’s character or intent, the deed or action taken, and the consequences of that action. The study involved 274 professional philosophers who evaluated a range of low-stakes traffic scenarios, focusing on everyday driving decisions rather than high-profile ethical dilemmas like the trolley problem. The researchers aimed to collect quantifiable data on how people judge the morality of routine driving behaviors in order to better train autonomous vehicles (AVs) to make ethical choices. The study found that all three components of the ADC model significantly influenced moral judgments, with positive attributes in the agent, deed, and consequences leading to higher moral acceptability. Importantly, these findings were consistent across different ethical frameworks, including utilitarianism, deontology, and virtue ethics, suggesting a broad consensus on what constitutes moral behavior in traffic.
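One way to make the ADC model concrete is a toy score that sums signed agent, deed, and consequence factors; the encoding and equal weights below are this page’s illustration, not the study’s actual method:

```python
def adc_score(agent: int, deed: int, consequence: int,
              w_agent: float = 1.0, w_deed: float = 1.0, w_cons: float = 1.0) -> float:
    """Toy ADC judgment: each factor is +1 (positive) or -1 (negative);
    higher totals mean higher moral acceptability."""
    return w_agent * agent + w_deed * deed + w_cons * consequence

print(adc_score(+1, +1, +1))   #  3.0: good intent, lawful deed, good outcome
print(adc_score(+1, -1, +1))   #  1.0: good intent, rule-breaking swerve, harm avoided
print(adc_score(-1, -1, -1))   # -3.0: bad intent, bad deed, harmful outcome
```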
Tags: robot, autonomous-vehicles, AI-ethics, driverless-cars, moral-decision-making, traffic-safety, AI-training