
MIT’s AI makes autonomous cars drive more like humans


Creating driverless cars capable of humanlike reasoning is a long-standing pursuit of companies like Waymo, GM’s Cruise, Uber, and others. Intel’s Mobileye proposes a mathematical model, Responsibility-Sensitive Safety (RSS), that it describes as a “common sense” approach to on-the-road decision-making, one that codifies good habits like giving other cars the right of way. For its part, Nvidia is actively developing Safety Force Field, a decision-making policy in a motion-planning stack that monitors for unsafe actions by analyzing real-time sensor data.

Now, a team of MIT scientists is investigating an approach that leverages GPS-like maps and visual data to enable autonomous cars to learn human steering patterns, and to apply that learned knowledge to complex planned routes in previously unseen environments. Their work, which will be presented at the International Conference on Robotics and Automation in Long Beach, California next month, builds on end-to-end navigation systems developed by Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

Rus and colleagues’ prior models followed roads without a destination in mind; the new model drives to a predefined destination. “With our system, you don’t need to train on every road beforehand,” said MIT graduate student Alexander Amini, the paper’s first author. “You can download a new map for the car to navigate through roads it has never seen before.”

As Amini and his fellow researchers explain, their AI system watches a human driver and learns how to steer, correlating steering wheel rotations with the road curvature it observes through its cameras and an input map. Eventually, it learns the most likely steering command for a range of driving situations, such as straight roads, four-way and T-shaped intersections, forks, and rotaries.
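
To make the idea concrete, here is a minimal sketch of that kind of imitation setup, written in PyTorch with hypothetical shapes and names rather than the authors’ actual architecture: a network encodes a camera frame and a rendered map patch, then predicts a Gaussian over the steering angle and is trained on a human driver’s recorded commands.

import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Separate convolutional encoders for the camera image and the map patch.
        self.cam_enc = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.map_enc = nn.Sequential(
            nn.Conv2d(1, 8, 5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head outputs mean and log-variance of a Gaussian over the steering
        # angle, so the model expresses a "most likely command" plus uncertainty.
        self.head = nn.Linear(32 + 16, 2)

    def forward(self, camera, map_patch):
        z = torch.cat([self.cam_enc(camera), self.map_enc(map_patch)], dim=1)
        mean, log_var = self.head(z).chunk(2, dim=1)
        return mean, log_var

def imitation_loss(mean, log_var, human_steering):
    # Negative log-likelihood of the human's steering angle under the
    # predicted Gaussian: the core of learning how a human steers.
    return (0.5 * ((human_steering - mean) ** 2 / log_var.exp() + log_var)).mean()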

In experiments, the researchers fed the machine learning model a map with a randomly chosen route. While driving, the system extracted visual features from its camera feed, which enabled it to predict road structures, such as a distant stop sign or line breaks on the side of the road. It also correlated the visual data with the map data to identify mismatches, which helped it better determine its position on the road and stay on the safest path. For example, when the car was driving on a straight road with no turns but the map indicated a right turn, it knew to keep driving straight.
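
That mismatch check might look something like the following hypothetical sketch, an illustration rather than the paper’s actual method: if the steering command implied by the map is very unlikely under the distribution predicted from the cameras, the system flags the disagreement and follows what the road visibly supports.

import math

def gaussian_logpdf(x, mean, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def choose_steering(vision_mean, vision_var, map_command, threshold=-4.0):
    """vision_mean/vision_var: steering distribution predicted from the cameras;
    map_command: steering angle implied by the routed map; angles in radians."""
    if gaussian_logpdf(map_command, vision_mean, vision_var) < threshold:
        # Map and vision disagree: flag the mismatch and follow what the
        # road actually looks like (e.g. continue straight).
        return vision_mean, "map/vision mismatch"
    return map_command, "ok"

# Example: the map says turn right (+0.5 rad) but the camera sees a straight road.
print(choose_steering(vision_mean=0.0, vision_var=0.01, map_command=0.5))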

“In the real world, sensors do fail,” said Amini. “We want to make sure that the system is robust to different failures of different sensors by building a system that can accept these noisy inputs and still navigate and localize itself correctly on the road.”
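
One standard way to train for that kind of robustness, offered here as an assumption rather than the paper’s documented procedure, is to randomly corrupt or drop sensor inputs during training so the network learns to navigate from whatever signal remains.

import torch

def corrupt_sensors(camera, map_patch, drop_prob=0.1, noise_std=0.05):
    # Additive noise simulates degraded sensors (assumed augmentation scheme).
    camera = camera + noise_std * torch.randn_like(camera)
    map_patch = map_patch + noise_std * torch.randn_like(map_patch)
    # Occasionally blank out an entire modality to simulate outright failure.
    if torch.rand(1).item() < drop_prob:
        camera = torch.zeros_like(camera)
    if torch.rand(1).item() < drop_prob:
        map_patch = torch.zeros_like(map_patch)
    return camera, map_patch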


