Helm.ai said its new architectural framework can enable autonomous operations with less data than typical systems. | Source: Helm.ai
Developers in the autonomous driving industry typically build massive, black-box, end-to-end models that require petabytes of data to learn driving physics from scratch. Helm.ai today unveiled its Factored Embodied AI architectural framework, which it says takes a different approach.
With the framework, the company released a benchmark demonstration of its vision-only AI Driver navigating the streets of Torrance, Calif., zero-shot, without ever having seen those specific streets before. The demonstration included lane keeping, lane changes, and turns at urban intersections.
Helm.ai said it achieved this autonomous steering capability by training the AI using simulation and only 1,000 hours of real-world driving data.
“The autonomous driving industry is hitting a point of diminishing returns. As models get better, the data required to improve them becomes exponentially rarer and more expensive to collect,” said Vladislav Voroninski, CEO and Founder of Helm.ai. “We are breaking this ‘Data Wall’ by factoring the driving task. Instead of trying to learn physics from raw, noisy pixels, our Geometric Reasoning Engine extracts the clean 3D structure of the world first. This allows us to train the vehicle’s decision-making logic in simulation with unprecedented efficiency, mimicking how a human teenager learns to drive in weeks rather than years.”
Helm.ai said the architecture enables automakers to deploy capabilities ranging from ADAS through L4 autonomy using their existing development fleets, bypassing a prohibitive data barrier to entry.
“We are moving from the era of brute force data collection to the era of Data Efficiency,” added Voroninski. “Whether on a highway in LA or a haul road in a mine, the laws of geometry remain constant. Our architecture solves this universal geometry once, allowing us to deploy autonomy everywhere.”
The company said its new architecture offers several key technological advancements. First, it bridges the simulation-to-reality gap. Helm.ai's architecture trains in "semantic space," a simplified view of the world that focuses on geometry and logic rather than graphics. By simulating the structure of the road rather than its pixels, Helm.ai said it can train on effectively unlimited simulated data that transfers directly to the real world.
Next, leveraging this geometric simulation, Helm.ai's planner achieved robust, zero-shot urban autonomous steering using only 1,000 hours of real-world fine-tuning data, which the company called a capital-efficient path to fully autonomous driving. Additionally, to tackle acceleration, braking, and complex interactions, Helm.ai is leveraging its world model capabilities to predict the intent of pedestrians and other vehicles.
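To make the factoring idea more concrete, here is a rough, hypothetical Python sketch; it is not Helm.ai's code or API, and all names in it are invented for illustration. The point it shows is the one the company describes: a perception stage reduces raw pixels to a compact geometric "semantic scene," and the planner consumes only that representation, which is why the planner can be trained almost entirely on procedurally generated scenes rather than rendered imagery.

```python
# Conceptual sketch only -- hypothetical names, NOT Helm.ai's actual API or models.
# A perception stage maps pixels to a compact semantic/geometric scene, and a
# planner is trained against that representation, mostly in simulation.

import random
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class SemanticScene:
    """Simplified 'semantic space' view of the road: geometry, not pixels."""
    lane_centerline: List[Tuple[float, float]]   # polyline in ego coordinates (m)
    obstacles: List[Tuple[float, float, float]]  # (x, y, radius) of nearby agents
    drivable: bool                               # is the area ahead drivable?


def perceive(camera_frame) -> SemanticScene:
    """Stand-in for a learned vision model mapping pixels to 3D structure."""
    raise NotImplementedError("placeholder for a trained perception network")


def simulate_scene(seed: int) -> SemanticScene:
    """Because the planner only ever sees a SemanticScene, training scenes can
    be generated procedurally in simulation, with no photorealistic rendering."""
    rng = random.Random(seed)
    lane = [(0.0, float(y)) for y in range(0, 50, 5)]
    cars = [(rng.uniform(-2, 2), rng.uniform(10, 40), 1.0)
            for _ in range(rng.randint(0, 3))]
    return SemanticScene(lane_centerline=lane, obstacles=cars, drivable=True)


def plan_steering(scene: SemanticScene) -> float:
    """Toy planner: steer toward the next lane centerline point ahead."""
    if not scene.drivable or not scene.lane_centerline:
        return 0.0
    x_ahead, _ = scene.lane_centerline[1]       # next point on the centerline
    return max(-0.5, min(0.5, -0.1 * x_ahead))  # proportional steering, clamped


if __name__ == "__main__":
    # Exercise the planner purely on simulated semantic scenes; only the
    # perception stage would need (limited) real-world data.
    for seed in range(3):
        print(seed, plan_steering(simulate_scene(seed)))
```

In this toy setup, swapping pixel-level simulation for "semantic space" means the training loop never touches rendering at all, which is the data-efficiency argument the company is making.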
Finally, to validate the robustness of its perception layer, Helm.ai deployed its automotive software in an open-pit mine. With extreme data efficiency, the system correctly identified drivable surfaces and obstacles. This, Helm.ai said, proves the architecture can adapt to any robotics environment, not just roads.
Founded in 2016, Helm.ai develops AI software for L2/L3 ADAS, L4 autonomous driving, and robotics automation. In August, the company partnered with Honda Motor Co., Ltd. The companies plan to work together to develop Honda’s self-driving capabilities, including its Navigate on Autopilot (NOA) platform.
The partnership centers on ADAS for production consumer cars, using Helm.ai’s full-stack real-time AI software and large-scale autolabeling and generative simulation foundation models for development and validation. In October, Honda made an additional investment in Helm.ai.
Honda isn't the only major automaker trying to put autonomous driving capabilities into consumer vehicles. In October, General Motors Co. announced plans to bring "eyes-off" driving to market. The company will be using technology originally developed at Cruise, its now-shuttered robotaxi developer.
Tesla has long been a frontrunner in bringing driver-assistance technology to personal vehicles. Its "Full Self-Driving" (FSD) software first came to the streets in 2020. While the company's technology has matured since then, it still requires a human driver to pay attention to the road and be ready to take over at all times.
Brianna Wessling is an Associate Editor, Robotics, WTWH Media. She joined WTWH Media in November 2021, after graduating from the University of Kansas with degrees in Journalism and English. She covers a wide range of robotics topics, but specializes in women in robotics, robotics in healthcare, and space robotics.
She can be reached at [email protected]