Nvidia announces new open AI models and tools for autonomous driving research

Source: TechCrunch
Author: Rebecca Szkutak
Published: 12/1/2025
Nvidia has unveiled new AI infrastructure and models aimed at advancing physical AI applications, particularly in robotics and autonomous vehicles. At the NeurIPS AI conference, the company introduced Alpamayo-R1, described as the first vision-language-action model designed specifically for autonomous driving research. The model integrates visual and textual data so that vehicles can perceive their environment and make informed decisions, building on Nvidia's existing Cosmos reasoning model family, first launched in January 2025.

Alpamayo-R1 is intended to help autonomous vehicles reach Level 4 autonomy, meaning full self-driving capability within defined areas and conditions, by giving them "common sense" reasoning to handle complex driving scenarios more like humans do.
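To make the vision-language-action idea concrete, here is a minimal sketch of what such an interface can look like: a camera frame and a text prompt are fused into a single representation that scores candidate driving actions. Everything below (class names, the action list, the rationale field) is a hypothetical illustration written for this article, not Nvidia's Alpamayo-R1 API.

```python
# Toy vision-language-action (VLA) sketch: fuse an image and a text prompt
# into a driving decision. All names here are hypothetical, not Nvidia's API.
from dataclasses import dataclass

import torch
import torch.nn as nn


@dataclass
class DrivingDecision:
    action: str     # e.g. "brake", "yield", "proceed"
    rationale: str  # natural-language "common sense" explanation


class ToyVLADriver(nn.Module):
    """Toy model: combine a camera frame and a text prompt into an action."""

    ACTIONS = ["proceed", "slow_down", "brake", "yield"]

    def __init__(self, embed_dim: int = 64):
        super().__init__()
        # Vision branch: collapse an RGB frame into a feature vector.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=5, stride=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(8, embed_dim),
        )
        # Language branch: bag-of-token-ids embedding, a stand-in for a real LM.
        self.text = nn.EmbeddingBag(num_embeddings=1000, embedding_dim=embed_dim)
        # Fused head scores each candidate driving action.
        self.head = nn.Linear(2 * embed_dim, len(self.ACTIONS))

    def forward(self, frame: torch.Tensor, token_ids: torch.Tensor) -> DrivingDecision:
        fused = torch.cat([self.vision(frame), self.text(token_ids)], dim=-1)
        action_idx = int(self.head(fused).argmax(dim=-1))
        return DrivingDecision(
            action=self.ACTIONS[action_idx],
            rationale="(a real VLA model would emit its reasoning here)",
        )


if __name__ == "__main__":
    model = ToyVLADriver()
    frame = torch.rand(1, 3, 224, 224)        # one camera frame
    prompt = torch.randint(0, 1000, (1, 12))  # tokenized scene description
    print(model(frame, prompt))
```

The design point the article highlights is exactly this fusion: perception and language reasoning feed one decision head, rather than vision running in isolation.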
In addition to the new model, Nvidia released the Cosmos Cookbook on GitHub, a resource with step-by-step guides, inference tools, and post-training workflows to help developers customize and train Cosmos models for various applications. The toolkit covers essential processes such as data curation, synthetic data generation, and model post-training.
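The pipeline the Cookbook describes (curate data, generate synthetic samples, post-train) can be sketched as a simple three-stage skeleton. Every function below is a hypothetical placeholder written for illustration; the Cookbook's actual GitHub recipes define the real tooling.

```python
# Skeleton of the three-stage workflow the article attributes to the
# Cosmos Cookbook: data curation -> synthetic data generation -> post-training.
# All functions are hypothetical stand-ins, not the Cookbook's API.
from typing import Dict, List

Sample = Dict[str, str]  # e.g. {"video": "clip_001.mp4", "caption": "..."}


def curate(raw: List[Sample]) -> List[Sample]:
    """Curation stand-in: keep only samples that have a caption."""
    return [s for s in raw if s.get("caption")]


def generate_synthetic(seed: List[Sample], n: int) -> List[Sample]:
    """Synthetic-data stand-in: a world model would render new scenes here."""
    return [
        {"video": f"synthetic_{i}.mp4", "caption": seed[i % len(seed)]["caption"]}
        for i in range(n)
    ]


def post_train(dataset: List[Sample]) -> None:
    """Post-training stand-in: a real recipe would fine-tune a Cosmos model."""
    print(f"fine-tuning on {len(dataset)} curated + synthetic samples")


if __name__ == "__main__":
    raw = [
        {"video": "clip_001.mp4", "caption": "car yields to pedestrian"},
        {"video": "clip_002.mp4", "caption": ""},  # filtered out by curation
    ]
    curated = curate(raw)
    dataset = curated + generate_synthetic(curated, n=3)
    post_train(dataset)
```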
Tags
robot, autonomous-vehicles, AI-models, Nvidia, physical-AI, autonomous-driving, vision-language-models