At CES 2026, NVIDIA unveiled Alpamayo, a family of open-source artificial intelligence (AI) models accompanied by simulation tools and datasets designed for training physical robots and vehicles. “The ChatGPT moment for physical AI is here,” said Jensen Huang, CEO of NVIDIA. He emphasized that Alpamayo gives reasoning capabilities to autonomous vehicles, enabling them to navigate complex environments safely and explain their driving decisions.
Model details
The heart of NVIDIA’s new family is Alpamayo 1, a 10-billion-parameter chain-of-thought reasoning vision-language-action (VLA) model. The model is designed to let an autonomous vehicle reason through complex edge cases much as a human driver would. “It does this by breaking down problems into steps, reasoning through every possibility, and then selecting the safest path,” Ali Kani, NVIDIA’s vice president of automotive, said during a press briefing.
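NVIDIA has not published Alpamayo’s programming interface, but the “break down, reason, select” loop Kani describes can be sketched in the abstract. The toy example below, with invented names and a hand-assigned risk score, shows only the shape of such a decision, not Alpamayo’s actual implementation:

```python
# Illustrative sketch only: a "reason, then act" loop in the spirit of the
# chain-of-thought pattern described above. All names and risk values are
# invented for illustration; this is not Alpamayo's interface.
from dataclasses import dataclass

@dataclass
class Decision:
    reasoning: list[str]  # human-readable chain of thought
    action: str           # maneuver judged safest

def choose_maneuver(risk_by_maneuver: dict[str, float]) -> Decision:
    """Reason through each candidate maneuver, then select the lowest-risk one."""
    trace = []
    for maneuver, risk in risk_by_maneuver.items():
        trace.append(f"considered {maneuver}: estimated risk {risk:.2f}")
    safest = min(risk_by_maneuver, key=risk_by_maneuver.get)
    trace.append(f"selected {safest} as the safest path")
    return Decision(reasoning=trace, action=safest)

# Example: risks a perception stack might assign at an occluded crosswalk.
decision = choose_maneuver({"continue": 0.7, "swerve_left": 0.5, "brake_in_lane": 0.1})
print("\n".join(decision.reasoning))
print("action:", decision.action)
```

The point of the pattern is that the reasoning trace is a first-class output alongside the action, which is what makes the vehicle’s choices explainable.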
Advanced features
Alpamayo not only processes sensor input to control steering, braking, and acceleration, but also reasons about the action it is about to take. “It tells you what action it’s going to take, the reasons by which it came about that action. And then, of course, the trajectory,” Huang said during his keynote on Monday. Developers can fine-tune Alpamayo into smaller versions for in-vehicle deployment or use it to train simpler driving systems.
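NVIDIA has not detailed how those smaller versions are produced; one common approach for compressing a large teacher model into a smaller student is knowledge distillation. A minimal sketch, assuming a PyTorch-style setup with placeholder models and data (none of this is NVIDIA’s training code):

```python
# Hypothetical knowledge-distillation loop, sketching how a large driving
# model might be compressed into a smaller one. Model classes, sizes, and
# data are placeholders, not NVIDIA's published training code.
import torch
import torch.nn.functional as F

teacher = torch.nn.Linear(512, 32)  # stand-in for the 10B-parameter model
student = torch.nn.Linear(512, 32)  # stand-in for a smaller on-vehicle model
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
temperature = 2.0

for _ in range(100):                        # toy training loop
    features = torch.randn(8, 512)          # placeholder sensor features
    with torch.no_grad():
        teacher_logits = teacher(features)  # soft targets from the big model
    student_logits = student(features)
    # KL divergence between softened distributions: the standard
    # distillation objective.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```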
Additional resources
NVIDIA’s generative world models, branded as Cosmos, can generate synthetic data for training and testing an Alpamayo-based autonomous-vehicle application on a combination of real and synthetic datasets. Alongside the Alpamayo rollout, NVIDIA is releasing an open dataset with more than 1,700 hours of driving data collected across a range of geographies and conditions. The company is also launching AlpaSim, an open-source simulation framework for validating autonomous driving systems, available on GitHub.
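Neither Cosmos’s nor AlpaSim’s actual APIs are shown here; as a minimal sketch of the mixed real-and-synthetic training workflow, assuming a PyTorch data pipeline with placeholder tensors standing in for driving logs and generated scenes:

```python
# Minimal sketch of mixing real logs with synthetic scenes for training,
# the workflow described above. Datasets and shapes are hypothetical;
# this is not Cosmos's or AlpaSim's actual API.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Placeholders: camera frames paired with control targets.
real_data = TensorDataset(torch.randn(1000, 3, 64, 64), torch.randn(1000, 2))
synthetic_data = TensorDataset(torch.randn(500, 3, 64, 64), torch.randn(500, 2))

# Train on the union; rare edge cases can be oversampled on the synthetic side.
mixed = ConcatDataset([real_data, synthetic_data])
loader = DataLoader(mixed, batch_size=32, shuffle=True)

for frames, controls in loader:
    ...  # one training step of the driving model goes here
```

The appeal of the approach is that synthetic generation can cover hazardous or rare scenarios that are expensive or dangerous to capture on real roads.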