A 400 percent speed boost in visual processing could significantly improve safety for self-driving cars, drones, and robots.
A multinational team of researchers has unveiled a new safety system for autonomous machines that can reportedly react to danger faster than the human brain.
The study details how scientists from China, Britain, Hong Kong, Saudi Arabia, and the United States built a hardware-based “reflex” designed to speed up automated driving decisions.
The development addresses a long-standing concern in robotics and self-driving technology. Machines typically take longer than humans to interpret visual data and respond to sudden hazards.
At 50 miles per hour, an automated vehicle can take about 0.5 seconds to react to an obstacle. In that time, it may travel roughly 43 feet before braking. By comparison, the human brain reacts in about 0.15 seconds.
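Those figures follow from the basic relation that distance traveled equals speed times reaction time. A minimal sanity check in Python (the 50 mph speed and the 0.5- and 0.15-second reaction times are the article's; the unit conversion is standard) shows the scale involved:

    MPH_TO_FPS = 5280 / 3600  # 1 mph = ~1.467 feet per second

    def reaction_distance_ft(speed_mph, reaction_s):
        # Distance covered before braking even begins.
        return speed_mph * MPH_TO_FPS * reaction_s

    print(reaction_distance_ft(50, 0.5))   # ~36.7 ft for a 0.5 s machine reaction
    print(reaction_distance_ft(50, 0.15))  # ~11.0 ft for a 0.15 s human reaction

The 43-foot figure quoted above is somewhat larger than the bare 36.7 feet, which suggests it also accounts for latency beyond the 0.5-second reaction time itself.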
Even with advanced processors, analyzing a high-definition image frame by frame takes time. Computers must determine what is moving, where it is going, and whether it poses a threat. This delay has raised concerns about the safety of robots, drones, and autonomous vehicles operating in unpredictable environments.
In real traffic conditions, fractions of a second matter. A slower response can translate into longer braking distances and a higher risk of collision. Engineers have struggled to close the reaction gap between human perception and machine processing without sacrificing accuracy.
The research team focused on solving this problem at the hardware level rather than relying only on software improvements. Their goal was to enable faster decision-making without completely redesigning existing camera systems.
The scientists modeled their system on how human vision works. Instead of analyzing every detail in a scene, the human brain quickly detects sudden motion or change and reacts first. Detailed processing follows later.
At the center of the new system is a two-dimensional synaptic transistor array, described as a highly sensitive motion detection chip. It follows a “filter-then-process” approach. The chip first filters out irrelevant visual data and identifies only key changes in a scene.
The transistor can detect image changes in just 100 microseconds, much faster than human perception. It can retain motion information for more than 10,000 seconds and operate for over 8,000 cycles without performance loss.
Once a frame is captured, the chip ignores the full image and registers only moving objects. These selected signals are then passed to standard computer vision algorithms for deeper analysis. According to the study, this approach is more than 10 times faster than conventional image processing methods.
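In the study this filtering happens in the transistor hardware itself, but the filter-then-process idea can be sketched in software. The fragment below is purely illustrative (the function names and threshold are ours, not the paper's): it uses simple frame differencing to keep only the pixels that changed, and hands that sparse signal to a heavier detection step.

    import numpy as np

    MOTION_THRESHOLD = 25  # illustrative brightness-change threshold on a 0-255 scale

    def filter_then_process(prev_frame, curr_frame, detect):
        # Stage 1, the fast "reflex": flag pixels whose brightness changed noticeably.
        diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
        motion_mask = diff > MOTION_THRESHOLD
        if not motion_mask.any():
            return None  # nothing moved, so skip the expensive stage entirely
        # Stage 2: run the full detector only on the masked motion signal.
        masked = np.where(motion_mask, curr_frame, 0)
        return detect(masked)

    # Usage with a trivial stand-in detector:
    prev = np.zeros((4, 4), dtype=np.uint8)
    curr = prev.copy()
    curr[1, 2] = 200  # one "moving" pixel
    print(filter_then_process(prev, curr, detect=lambda img: int(img.sum())))  # 200

The saving comes from the early exit and the reduced data volume: the costly detector only ever sees regions where something actually changed.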
In laboratory tests, the system processed motion data four times faster than current state-of-the-art algorithms. Under ideal conditions, it even exceeded human-level reaction performance.
The researchers reported a 213.5 percent improvement in hazard detection during driving tests and a 740.9 percent increase in the object-grasping performance of robotic arms. In real-world scenarios, performance declined slightly but remained better than that of existing autonomous systems.
At 50 miles per hour, the roughly 0.2-second improvement in response time could reduce braking distance by about 14.4 feet. Gao Shuo, co-corresponding author and associate professor at Beihang University, said, “Our approach demonstrates a 400 per cent speed-up, surpassing human-level performance while maintaining or improving accuracy through temporal priors.”
“We do not completely overthrow the existing camera system; instead, by using hardware plug-ins, we enable existing computer vision algorithms to run four times faster than before, which holds greater practical value for engineering applications.”
He explained, “In a traffic accident, these 4 metres (13.1 feet) often determine whether a collision occurs or it’s just a close call.”
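As a rough cross-check on those numbers: 50 mph is about 73.3 feet per second, so a 0.2-second reduction in response time corresponds to roughly 73.3 × 0.2 ≈ 14.7 feet, broadly in line with the 14.4-foot and 4-metre (13.1-foot) figures cited above.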
For small drones, reaction time was reduced by at least one-third, improving endurance and flight performance. Gao added, “This project will undoubtedly advance in-depth collaboration with Chinese automotive and drone companies.”
“We hope to equip autonomous vehicles with this ‘hardware-level reflex’ system, enabling them to respond more sensitively than humans when handling sudden road conditions, thereby fundamentally enhancing the safety of unmanned systems.”
The study was published in the peer-reviewed journal Nature Communications.