
Teaching Self-Driving Cars to Watch for Unpredictable Humans


“We are very much interested in how human-driven vehicles and robots can coexist,” says Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory and a coauthor of the paper. “It’s a grand challenge for the field of autonomy and a question that’s applicable not just for robots on roads but in general, for any kind of human-machine interaction.” One day, this kind of work might help humans work more smoothly with robots on, say, the factory floor or in a hospital room.

But first, game theory. The research draws on an approach being applied more frequently in robotics and machine learning: using games to “teach” machines to make decisions with imperfect knowledge. Game players—like drivers—often have to reach conclusions without fully understanding what the other players—or drivers—are doing. So more researchers are applying game theory to teach self-driving cars how to act in uncertain situations.
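
To make the idea concrete, here is a minimal, hypothetical sketch (not the MIT team’s actual model) of a decision under imperfect information: the car keeps a belief about what a nearby driver will do and picks the action with the best expected payoff. The actions, probabilities, and payoff numbers are all illustrative assumptions.

```python
# Hypothetical sketch, not the researchers' model: choose an action by
# weighing payoffs against a belief about the other driver's behavior.

BELIEF = {"yields": 0.7, "merges": 0.3}  # assumed probabilities

# PAYOFF[our_action][their_action]: higher is better (made-up numbers)
PAYOFF = {
    "go":   {"yields": 1.0, "merges": -10.0},  # collision risk if they merge
    "wait": {"yields": -0.2, "merges": 0.5},   # small cost of hesitating
}

def expected_payoff(action):
    return sum(p * PAYOFF[action][their] for their, p in BELIEF.items())

best = max(PAYOFF, key=expected_payoff)
print(best, {a: round(expected_payoff(a), 2) for a in PAYOFF})
# -> wait {'go': -2.3, 'wait': 0.01}
```

Even with a 70 percent chance the other driver yields, the small risk of a collision dominates the math—which is exactly why getting that belief right matters so much.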

Still, the uncertainty is a challenge. “Ultimately, one of the challenges of self-driving is that you’re trying to predict human behavior, and human behavior tends to not fall into rational agent models we have for game players,” says Matthew Johnson-Roberson, assistant professor of engineering at the University of Michigan and a cofounder of Refraction AI, a startup building autonomous delivery vehicles. Someone might look like they’re about to merge but see a flash of something out of the corner of their eye and stop short. It’s very hard to teach a robot to predict that kind of behavior.

Of course, driving situations could become less uncertain if the researchers were able to collect more information about human driving behavior, which is what they’re hoping to do next. Data on the speed of vehicles, where they are heading, the angle at which they’re traveling, how their position changes over time—all could help robot drivers better understand how the human mind (and personality) operates. Perhaps, the researchers say, an algorithm derived from more precise data could improve predictions about human driving behavior by 50 percent instead of 25 percent.
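
As a rough illustration, the sketch below shows how raw trajectory samples might be turned into the kinds of signals just listed—speed, heading, and how position changes over time. The function name, sampling rate, and feature choices are assumptions for illustration, not the researchers’ actual pipeline.

```python
import math

def trajectory_features(positions, dt=0.1):
    """positions: (x, y) samples in meters, taken every dt seconds."""
    speeds, headings = [], []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        dx, dy = x1 - x0, y1 - y0
        speeds.append(math.hypot(dx, dy) / dt)  # meters per second
        headings.append(math.atan2(dy, dx))     # travel angle, radians
    accels = [(b - a) / dt for a, b in zip(speeds, speeds[1:])]
    return {
        "mean_speed": sum(speeds) / len(speeds),
        "max_accel": max(accels, default=0.0),
        "heading_change": headings[-1] - headings[0],
    }

# Four samples from a car accelerating and drifting slightly left
print(trajectory_features([(0.0, 0.0), (1.5, 0.1), (3.2, 0.3), (5.1, 0.6)]))
```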

That might be really hard, says Johnson-Roberson. “One of the reasons I think it’s going to be challenging to deploy [autonomous vehicles] is because you’re going to have to get these predictions right when traveling at high speeds in dense urban areas,” he says. Being able to tell within two seconds of observation whether a driver is selfish is useful, but a car traveling at 25 mph covers nearly 75 feet in that time. A lot of unfortunate things can happen in 75 feet.
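
That distance figure checks out with a quick conversion:

```python
# Back-of-the-envelope check: a car at 25 mph over a two-second window.
mph = 25
feet_per_second = mph * 5280 / 3600   # 1 mile = 5,280 ft; 1 hour = 3,600 s
print(feet_per_second * 2)            # ~73.3 feet, i.e., "nearly 75"
```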

The fact is, even humans don’t understand humans all the time. “People are just the way they are, and sometimes they’re not focused on driving, and make decisions we can’t completely explain,” says Wilko Schwarting, an MIT graduate student who led the research. Good luck out there, robots.

