Autonomous Cars Could Determine Your Driving Style by Gently Probing You

By Evan Ackerman

When every car on the road is an autonomous car, we won’t have to worry about what kind of driver everyone else is. Before that happens, though, there’s going to be a very long and messy period during which autonomous cars share the road with human drivers. It’ll be important for autonomous cars to understand and predict what the humans around them are trying to do, which is a very difficult problem, because humans are notoriously unpredictable: we all have different intentions, goals, preferences, and driving styles, and we may or may not be looking at our cell phones.

At UC Berkeley, researchers have come up with a way for autonomous cars to actively gather information about the human drivers around them. All it takes is a little gentle probing.

Generally, robots gather information about humans passively: they watch what humans do, take notes, and try to use those data to predict what humans will do next. This approach is somewhat limited in its effectiveness, because humans don’t always do the things that would provide the most useful information.
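
To make that concrete, here’s a minimal sketch of what passive estimation might look like: the robot watches the human’s actions and does a Bayesian update over a handful of candidate driving styles. The styles, the likelihood model, and every number below are invented for illustration; this is just the general idea, not the paper’s actual formulation.

```python
from math import exp

# Hypothetical sketch of passive estimation: watch the human's observed
# acceleration and update a belief over a few discrete driving styles.
STYLES = ["timid", "neutral", "aggressive"]

def likelihood(observed_accel, style):
    """P(observation | style): assume each style prefers a different
    acceleration, with Gaussian-shaped noise around that preference.
    The preferred values here are made up for illustration."""
    preferred = {"timid": -0.5, "neutral": 0.0, "aggressive": 1.0}[style]
    return exp(-0.5 * (observed_accel - preferred) ** 2)

def update_belief(belief, observed_accel):
    """One Bayes-rule update: multiply prior by likelihood, renormalize."""
    posterior = {s: belief[s] * likelihood(observed_accel, s) for s in STYLES}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

# Start with a uniform prior; after watching the human accelerate hard,
# the belief shifts toward "aggressive".
belief = {s: 1.0 / len(STYLES) for s in STYLES}
belief = update_belief(belief, observed_accel=0.9)
print(belief)
```

The catch, as noted above, is that if the human never does anything distinctive, updates like this barely move the belief.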

A more active approach would involve trying to find ways to get the humans to take actions that would generate the information that the robots need. That sounds a bit complicated, but it’s something that we human drivers do all the time. For example, if you’re at a four-way stop and it’s technically not your turn to go but you’re not sure if the other driver is paying attention, you might inch forward a bit to see how they react.

At the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) earlier this month, Dorsa Sadigh, S. Shankar Sastry, Sanjit A. Seshia, and Anca Dragan presented a paper on an algorithm that can plan robot actions to gain information about humans. In other words, it gives robots ideas on how to use little nudges to get a better sense of what humans are thinking.

From the paper: “We explored planning for an autonomous vehicle that actively probes a human’s driving style, by braking or nudging in and expecting to cause reactions from the human driver that would be different depending on their style.”
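
Here’s a rough sketch of that active version of the idea, under the same caveats: the reaction categories, the response model, and all the probabilities below are invented for illustration, not taken from the paper. The robot predicts how each driver type would react to each candidate probe, then picks the probe whose expected reaction most reduces its uncertainty (the entropy of its belief):

```python
from math import log

# Hypothetical response model: RESPONSE[action][style][reaction] is the
# probability that a driver of that style reacts that way to the probe.
# All numbers are invented for illustration.
STYLES = ["timid", "aggressive", "distracted"]
REACTIONS = ["slow", "same", "speed_up"]
RESPONSE = {
    "keep_lane": {"timid":      {"slow": 0.2, "same": 0.7,  "speed_up": 0.1},
                  "aggressive": {"slow": 0.1, "same": 0.6,  "speed_up": 0.3},
                  "distracted": {"slow": 0.1, "same": 0.8,  "speed_up": 0.1}},
    "nudge_in":  {"timid":      {"slow": 0.8, "same": 0.1,  "speed_up": 0.1},
                  "aggressive": {"slow": 0.1, "same": 0.2,  "speed_up": 0.7},
                  "distracted": {"slow": 0.1, "same": 0.8,  "speed_up": 0.1}},
    "brake":     {"timid":      {"slow": 0.9, "same": 0.05, "speed_up": 0.05},
                  "aggressive": {"slow": 0.6, "same": 0.3,  "speed_up": 0.1},
                  "distracted": {"slow": 0.2, "same": 0.7,  "speed_up": 0.1}},
}

def entropy(dist):
    """Shannon entropy of a belief: how uncertain the robot still is."""
    return -sum(p * log(p) for p in dist.values() if p > 0)

def updated_belief(belief, action, reaction):
    """Bayes update of the style belief after seeing a reaction."""
    post = {s: belief[s] * RESPONSE[action][s][reaction] for s in STYLES}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

def expected_entropy(belief, action):
    """Average post-update uncertainty if the robot takes this action."""
    total = 0.0
    for r in REACTIONS:
        p_r = sum(belief[s] * RESPONSE[action][s][r] for s in STYLES)
        if p_r > 0:
            total += p_r * entropy(updated_belief(belief, action, r))
    return total

# Pick the probe that leaves the robot least uncertain, on average.
belief = {s: 1.0 / len(STYLES) for s in STYLES}
probe = min(RESPONSE, key=lambda a: expected_entropy(belief, a))
print(probe)  # "nudge_in": it separates the three styles best
```

With these made-up numbers, nudging in wins because it elicits a clearly different reaction from each driver type, while simply keeping the lane is nearly uninformative.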

Here are some examples of the kinds of actions the algorithm plans for autonomous cars, to determine whether the human drivers around them are timid, aggressive, or paying attention:

Scenario 1: Nudging In to Explore on a Highway

The autonomous car actively probes the human by nudging into her lane in order to infer her driving style. An attentive human significantly slows down (timid driver) or speeds up (aggressive driver) to avoid the vehicle, while a distracted driver might not notice the maneuver at all and will maintain the same velocity, drifting closer to the autonomous vehicle.

Scenario 2: Braking to Explore on a Highway

The robot slows down to actively probe the human and infer her driving style. An attentive human would slow down and avoid a collision, while a distracted human would have a harder time keeping a safe distance between the two cars.

Scenario 3: Nudging In to Explore at an Intersection

In the active condition, the autonomous car nudges into the intersection to probe the driving style of the human. An attentive human would slow down to stay safe at the intersection, while a distracted human would not.

Once an autonomous car has collected these data, it can then adjust its behavior to compensate for whatever the humans around it are doing. It’s interesting to think about how data like these could be used beyond just the scenarios in which they’re collected. For example, if autonomous cars consistently notice that particular humans drive aggressively, perhaps they could label them as such and share that information with other autonomous cars, or even with other human drivers. Or insurance agencies. And, you know, maybe suggest that they get counseling.

The results of studies with humans in driving simulators “suggest that robots are indeed able to construct a more accurate belief over the human’s driving style with active exploration than with passive estimation.” The authors readily admit that striking the right balance between exploration and exploitation is still a challenge, and that it will be important to restrict probing to actions that are safe. Perhaps the biggest issue, as the authors point out, is that “people might not always react positively to being probed.”
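
One common way to express that exploration-exploitation trade-off, again as a hypothetical sketch rather than the paper’s formulation, is to score each candidate action by its ordinary driving reward plus a weighted information-gain bonus, so the car only probes when the information is worth the cost:

```python
# Hypothetical sketch: trade off task performance against information
# gain with a single weight. Both scoring functions and all numbers
# below are placeholders, not the paper's formulation.

def combined_score(action, belief, task_reward, info_gain, lam=0.5):
    """Score an action as driving reward plus a weighted info-gain bonus.

    task_reward(action): progress/comfort value of the action.
    info_gain(action, belief): expected reduction in uncertainty about
        the human's driving style (e.g., expected entropy drop).
    lam: how much the robot values probing; lam = 0 is purely passive.
    """
    return task_reward(action) + lam * info_gain(action, belief)

# Toy usage: braking costs a little progress but is very informative,
# so with lam = 0.5 it edges out staying in the lane.
task_reward = lambda a: {"keep_lane": 1.0, "brake": 0.8}[a]
info_gain = lambda a, b: {"keep_lane": 0.05, "brake": 0.6}[a]
belief = {"timid": 1 / 3, "aggressive": 1 / 3, "distracted": 1 / 3}

best = max(["keep_lane", "brake"],
           key=lambda a: combined_score(a, belief, task_reward, info_gain))
print(best)  # "brake": 0.8 + 0.5 * 0.6 = 1.1 beats 1.0 + 0.5 * 0.05 = 1.025
```

Setting the weight to zero recovers purely passive driving; cranking it up makes the car probe more often, which is exactly where the safety concerns, and the “people might not always react positively” problem, come in.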

“Information Gathering Actions over Human Internal State,” by Dorsa Sadigh, S. Shankar Sastry, Sanjit A. Seshia, and Anca Dragan of the University of California, Berkeley, was presented this month at IROS 2016 in Daejeon, Korea.