VR for self-driving cars makes training safer, more efficient

The vehicles think they’re on public roads, when they’re really in parking lots.

Software that acts like a virtual reality (VR) headset for self-driving cars could make training autonomous vehicles faster and safer.

The challenge: Aside from freeing us from tedious commutes, self-driving cars could make our roads much safer by eliminating the accidents caused by human error.

Before that can happen, though, the AIs trained in computer simulations need to be able to safely operate actual vehicles, adapting to countless variables, including construction, pedestrians, erratic human drivers, inclement weather, and other road hazards.

Typically, developers do this by testing their self-driving cars on public roads, with safety backup drivers behind the wheel. But they need to log a lot of miles to expose the vehicles to enough “edge cases” — the kinds of rare situations that can throw the software off — before they can be sure the cars are safe enough to be fully deployed.

More importantly, if self-driving cars make mistakes during on-road training, they can potentially destroy property or even take lives.

What’s new? Researchers at the Ohio State University (OSU) have now unveiled a new method for training self-driving cars that works like virtual reality for autonomous vehicles (AVs), making the AIs “think” the car is in one place when it’s actually in another.

A developer could use the tech to make the AV believe it’s approaching a busy intersection, for example, when it’s really just driving around an empty lot. The key feature here, which makes it different from a pure simulation, is that the system is operating a real, physical car, while virtual obstacles can be safely thrown its way.

“The [Vehicle-in-Virtual-Environment (VVE)] method can work with any AV simulator and virtual environment rendering software as long as these can be run in real time and can generate the raw sensor data required by the actual AV computing system,” they write in a study, published in Sensors.
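The core idea described in the study — a real car's position drives a simulator, which renders synthetic sensor data for the actual AV software, whose control commands then steer the physical car — can be sketched as a simple closed loop. This is only an illustration of the concept, not the researchers' implementation; every class and method name below is invented for the sketch.

```python
# Hypothetical sketch of a VVE-style loop: real pose in, virtual sensors
# rendered, real control commands out. Names are illustrative, not the
# OSU researchers' actual code.

from dataclasses import dataclass


@dataclass
class Pose:
    """The physical car's measured position on the empty test lot."""
    x: float        # meters
    y: float        # meters
    heading: float  # radians


class MockSimulator:
    """Stands in for any real-time AV simulator / rendering engine."""

    def render_sensors(self, pose: Pose) -> dict:
        # A real system would return camera frames, lidar point clouds,
        # etc. Here: a fake distance to a virtual obstacle placed 50 m
        # ahead of the car's starting point.
        return {"front_lidar_m": max(0.0, 50.0 - pose.x)}


class MockAVStack:
    """Stands in for the actual AV computing system under test."""

    def decide(self, sensors: dict) -> dict:
        # Brake when the virtual obstacle appears close.
        brake = sensors["front_lidar_m"] < 10.0
        return {"throttle": 0.0 if brake else 0.3, "brake": brake}


def vve_step(pose: Pose, sim: MockSimulator, av: MockAVStack) -> dict:
    """One tick: the AV 'sees' the virtual world from its real position,
    and its commands go back to the physical vehicle."""
    sensors = sim.render_sensors(pose)  # virtual scene, real location
    return av.decide(sensors)           # actuates the real car
```

The point of the loop is that the obstacle exists only in software, so the AV stack can be stress-tested against dangerous virtual scenarios while the physical car stays on an empty lot.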

Looking ahead: The OSU researchers used a real self-driving car to demonstrate the viability of the VVE method for the Sensors study. They’ve now filed a patent for the tech behind it, which they believe could become a “staple” of the AV industry in the next 5 to 10 years. 

“With our software, we’re able to make the vehicle think that it’s driving on actual roads while actually operating on a large open, safe test area,” said study co-author Bilin Aksun-Guvenc. “This ability saves time, money, and there is no risk of fatal traffic accidents.”
