The race towards full autonomy for cars is on. In October 2025, Tesla chief Elon Musk claimed that the company’s Full Self-Driving (FSD) V14.3 update, which has yet to be released as of early January 2026, would make the car feel like a conscious being. A few weeks later, Jim Fan, Director of Robotics & Distinguished Scientist at Nvidia, tried out the latest FSD v14 update and said it was the first AI to pass the Physical Turing Test. Little did we know that Nvidia was working on its own self-driving technology, one that would supposedly enable “humanlike thinking” for cars.
At CES 2026, Nvidia revealed Alpamayo, its platform for self-driving vehicles. At its core are vision-language-action (VLA) AI models that use multi-step reasoning to handle novel or rare scenarios a self-driving car may encounter. Current self-driving systems are trained on vast datasets (read: real-world footage and simulations of cars driving), with detailed labeling to teach the system about everything it will encounter on the road. As of 2026, they do a pretty good job. But when they run into a scenario they haven’t been trained for, the systems fail, and these failures can be devastating. Safely handling these outliers has proved to be one of the biggest challenges for self-driving technologies, including Tesla’s own FSD stack.

This is where human-like thinking comes into the picture. Nvidia’s Alpamayo combines perception and planning, then feeds that data into an AI vision model that initiates a chain of thought, allowing the car to reason through unexpected navigation challenges. It’s similar to the “thinking mode” in popular chatbots such as Gemini, which relies heavily on reasoning to handle complex multi-step problems.
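Nvidia hasn’t published the model’s internals, but the basic idea, a model that emits an intermediate reasoning trace before committing to a control command, can be sketched in a few lines. The sketch below is purely illustrative: the Perception and Action classes and the hand-written rules are hypothetical stand-ins for what a learned VLA model would actually produce.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    """Simplified scene summary a perception stack might hand to the planner."""
    objects: list[str]
    lane_blocked: bool

@dataclass
class Action:
    steering: float  # radians, positive = left
    throttle: float  # 0..1
    brake: float     # 0..1

def reason_and_act(scene: Perception) -> tuple[list[str], Action]:
    """Emit intermediate reasoning steps, then a control action.

    A real VLA model would generate both with a learned vision-language
    backbone; the hand-written rules here only illustrate the output shape.
    """
    thoughts: list[str] = []
    if scene.lane_blocked:
        thoughts.append("Lane ahead is blocked by " + ", ".join(scene.objects) + ".")
        thoughts.append("Adjacent lane is clear, so a gentle lane change is the safest option.")
        action = Action(steering=0.05, throttle=0.2, brake=0.0)
    else:
        thoughts.append("Lane is clear; maintain course and speed.")
        action = Action(steering=0.0, throttle=0.3, brake=0.0)
    return thoughts, action

if __name__ == "__main__":
    thoughts, action = reason_and_act(Perception(objects=["a fallen cone"], lane_blocked=True))
    for step in thoughts:
        print("reasoning:", step)
    print("action:", action)
```

The point of the structure, whatever the real implementation looks like, is that the reasoning trace exists alongside the control output, so an unexpected scene produces an explanation rather than a silent failure.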
Alpamayo is, more broadly, a full autonomous-vehicle platform that includes open AI models, training datasets, and simulation tools for self-driving cars. Nvidia says the biggest beneficiary of such a tech stack would be robotaxis, since it would allow them to reason effectively while driving and navigate complex environments more easily. The company is, unsurprisingly, quite bullish about the tech.
“The ChatGPT moment for physical AI is here — when machines begin to understand, reason, and act in the real world,” claimed the company’s chief, Jensen Huang. The stack has three core components. The first is Alpamayo 1, touted as the industry’s first chain-of-thought reasoning VLA model. Next is AlpaSim, a fully open-source simulation framework. The third is an openly available dataset containing over 1,700 hours of autonomous driving footage, covering a wide range of environments and terrains.
Now, Nvidia is not directly offering Alpamayo as a ready-made competitor to Tesla’s FSD. Instead, it’s releasing Alpamayo as an open ecosystem that acts more like a teacher model for self-driving technologies, one that carmakers and researchers can adapt to their needs. But the way it works (or reasons) is what sets it apart: “Not only does it take sensor input and activates steering wheel, brakes, and acceleration, it also reasons about what action it is about to take,” Huang explained at CES.
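Nvidia hasn’t detailed exactly how partners will consume the teacher model, but one common pattern is distillation: a large reasoning model generates target behavior that a smaller on-vehicle policy learns to imitate. The sketch below, written with PyTorch and entirely made-up network shapes and data, is only meant to illustrate that pattern, not Nvidia’s actual workflow.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a large reasoning teacher. In practice this would
# be a pretrained VLA model; here it is a frozen random network mapping
# 64-dimensional scene features to three control targets (steer, throttle, brake).
teacher = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 3)).eval()
for p in teacher.parameters():
    p.requires_grad_(False)

# Much smaller student policy, the kind of model that could run on in-car hardware.
student = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 3))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

# Distillation loop: the student learns to imitate the teacher's control outputs.
# Random tensors stand in for encoded scenes so the example stays self-contained.
for step in range(200):
    scene_features = torch.randn(32, 64)
    with torch.no_grad():
        target_controls = teacher(scene_features)
    loss = nn.functional.mse_loss(student(scene_features), target_controls)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final imitation loss: {loss.item():.4f}")
```

In a real pipeline, the scene features would come from the car’s perception stack and the teacher’s outputs would include the reasoning traces Huang describes, but the teacher-to-student relationship is the same.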
The 2026 Mercedes-Benz CLA sedan will be the first car to deploy Alpamayo, using Nvidia’s DRIVE software for autonomous vehicles. The car will offer an enhanced Level 2 (L2) assisted-driving experience and will hit U.S. roads in 2026. Nvidia says Lucid, JLR, and Uber are among the other companies exploring Alpamayo for their self-driving cars.

