
Self-driving cars could change traffic behaviour

Driving in traffic with robotic cars won't remind you of science fiction movies just yet. (Photo: Michael Shick [CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons)
Published May 24, 2016

When self-driving cars start appearing on your commute, they're not likely to remind you of the swift Lexus vehicles in Minority Report.

Think driving behind your grandpa, instead.

“They will drive safely and slowly and we’ll experience them as sluggish,” says Peter Georén, director of the Integrated Transport Research Laboratory at KTH Royal Institute of Technology. “So for people who take risks in their own driving, it will be frustrating to deal with these safely driven vehicles. It will be like driving behind my grandpa.”

But people will gradually get used to them, he says. And their patience will pay off. “Ultimately they will probably make traffic safer,” he says.

Self-driving cars and buses are much more risk-averse than human beings, Georén explains. The robots are programmed to obey the traffic laws to a fault, and even come to a stop when a pedestrian leans out to look down the street, as has been the case in Palo Alto and Mountain View in California, where internet giant Google’s self-driving cars have been plying the streets for more than a year.

“I’ve read that kids make a game of jumping out in front of them in Mountain View,” he says. In a bus demo that ITRL participated in recently in Stockholm, the same sort of caution was on display. “The demo buses did stop a lot because of curious pedestrians walking close by, and it can get aggravating for the bus passengers.”

However, this diligence could establish new, safer norms for driver behavior. Georén says that self-driving cars could influence drivers in the same way police cars do.

Yet there remain problems to work out. One reason self-driving vehicles can be so maddening is that robots lack a key capacity that human vehicle operators bring to the streets: the art of negotiation. As motorists, we continuously communicate with each other on the road, deciding between us who goes next and who has to wait. Humans do this not only with turn signals, but with subtle cues conveyed by how we position ourselves in traffic or edge forward. We also establish eye contact when possible, a particularly critical form of communication between drivers and pedestrians.

Robots can’t do any of this.

In Palo Alto, for example, self-driving cars that arrive at four-way stops are prone to getting stuck waiting for their turn to proceed, unaware that everyone else is taking turns in the order that they arrived at the intersection, Georén says.

“Google's robot cars are unable to make the crossing because there's always another car coming to the intersection. So they're stuck there,” he says. “The program says you can't do anything until it's safe enough, and that situation never occurs.”
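That “wait until it’s safe enough” rule is easy to caricature in a few lines of code. The sketch below is a toy illustration built on assumptions of my own (the policy names, the arrival queue and the made-up traffic pattern are all invented here), not Google’s actual control logic; it only shows how a rule that demands a completely clear crossing can sit at a busy four-way stop indefinitely, while the human turn-taking convention gets through in seconds.

```python
# Toy sketch of the four-way-stop deadlock described above. The policy names,
# traffic pattern and arrival queue are invented for illustration and are not
# Google's actual control code.

from collections import deque
from typing import Optional


def other_cars_nearby(t: int) -> int:
    """Other cars near the crossing at second t (on a busy street, always >= 1)."""
    return 1 + (t % 2)


def wait_until_clear(t: int, queue: deque) -> bool:
    """The robot's rule: move only when the crossing is completely clear."""
    return other_cars_nearby(t) == 0


def take_your_turn(t: int, queue: deque) -> bool:
    """The human convention at a four-way stop: go when you are first in arrival order."""
    return queue[0] == "robot"


def seconds_until_crossing(policy, max_seconds: int = 60) -> Optional[int]:
    queue = deque(["car_ahead", "robot", "car_behind"])  # order of arrival at the stop
    for t in range(max_seconds):
        if queue[0] != "robot" and t >= 2:
            queue.popleft()          # the car ahead takes its turn and clears the stop
        if policy(t, queue):
            return t                 # the robot crosses at second t
    return None                      # still waiting when time runs out


if __name__ == "__main__":
    for name, policy in [("take your turn", take_your_turn),
                         ("wait until completely clear", wait_until_clear)]:
        t = seconds_until_crossing(policy)
        print(name + ": " + (f"crosses after {t} s" if t is not None else "still stuck after 60 s"))
```

Under those assumptions the turn-taking policy crosses within a few seconds, while the clear-crossing rule never finds a moment it considers safe.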

The result of all this caution will be lowered capacity on the roadways, he says. That is, until the numbers of robots increase to the point where they can drive in synchronization with one another — a coordinated activity commonly referred to as "platooning".

At that point we’ll be getting closer to the traffic scenes from science fiction movies. “People can’t drive in synchronization with each other; there’s always a delayed chain reaction. But when robots do it, simulations have shown that coordinated driving could potentially double roadway capacity,” he says.
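A rough way to see where that gain comes from: a lane’s throughput is roughly its speed divided by the road space each vehicle occupies, which is its own length plus the gap it keeps behind the car ahead. The sketch below works through that arithmetic with assumed, illustrative numbers (a 1.8-second human following gap versus a 0.8-second platoon gap at 90 km/h); the figures are not taken from the simulations Georén mentions.

```python
# Back-of-envelope illustration of the capacity argument. The headway and
# vehicle-length figures are rough assumptions for illustration only, not
# numbers from the simulations referred to in the article.

def lane_capacity(speed_m_s: float, headway_s: float, car_length_m: float = 5.0) -> float:
    """Vehicles per hour one lane carries at a steady speed.

    Each vehicle needs its own length plus the gap it keeps, and that gap is
    speed * headway, so the spacing per vehicle is car_length + speed * headway.
    """
    spacing_m = car_length_m + speed_m_s * headway_s
    return 3600 * speed_m_s / spacing_m


if __name__ == "__main__":
    speed = 90 / 3.6                                   # 90 km/h in metres per second
    human = lane_capacity(speed, headway_s=1.8)        # typical human following gap
    platoon = lane_capacity(speed, headway_s=0.8)      # assumed synchronized platoon gap
    print(f"human drivers: ~{human:.0f} vehicles per hour")
    print(f"platooning:    ~{platoon:.0f} vehicles per hour ({platoon / human:.1f}x)")
```

With those assumed gaps, halving the following distance roughly doubles the throughput of a single lane, which is the order of improvement the simulations point to.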

But that alone won’t eliminate all the problems with self-driving cars. An ongoing study at MIT is looking into how robots should make decisions when they encounter two hazards at once, he says.

“Self-driving vehicles have to be a lot safer than people, which places high demands on the robot. But if it comes to a situation where you have to choose between hitting a child in the street and swerving into an elderly couple on the sidewalk, that’s an ethical decision that has to be made.

“How do we make a robot make a kill decision if necessary?” he asks. “There are questions for which the programming is not so easy.”

David Callahan