A recent autonomous vehicle boondoggle in Chandler, Arizona, has people scratching their heads over how to design cars that truly drive as well as humans; even with all of our foibles and distractions, we have decision-making powers that are hard to emulate with AI.
The 41-mile trip that is making its way through tech media today reportedly left the passenger 20 minutes late to his destination after the car, confusingly, tried to drive away from the Waymo employee coming to the rescue.
Eventually, a set of traffic cones confused the algorithms enough to bring the car to a halt, allowing a person to climb in and take over.
“Waymo said in a statement that the situation was ‘not ideal,’ and the self-driving car had received incorrect guidance, which made it challenging for the autonomous vehicle to resume its intended route,” writes Matt McFarland at CNN Business.
The passenger in this debacle has taken more than 140 trips with the technology, and only one of them went this badly wrong, so it's not that self-driving cars can't handle a wide range of driving situations.
However, the Keystone Kops-style episode chronicled on YouTube illustrates what experts call the value learning problem: learning computer systems still can't really grasp some of the more intuitive aspects of human cognitive behavior.
To put it another way, computers are great at following basic traffic rules, which is most of what we do when we're behind the wheel. The important exceptions are the scenarios that demand critical thinking, such as traffic cones placed in a roadway or various kinds of moving obstacles.
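To make that distinction concrete, here is a minimal, purely illustrative Python sketch. It is not Waymo's actual software, and the Observation, Action, and decide names are invented for this example; it simply shows how a rule-based planner sails through routine traffic rules but falls back to a conservative "stop and call for help" branch when it meets something it can't classify, like a row of cones.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Action(Enum):
    PROCEED = auto()
    STOP_AT_LINE = auto()
    YIELD = auto()
    STOP_AND_REQUEST_HELP = auto()  # hand off to remote or roadside assistance


@dataclass
class Observation:
    signal: str                     # "green", "yellow", "red", or "none"
    obstacle: Optional[str] = None  # e.g. "pedestrian", "vehicle", "traffic_cones"


def decide(obs: Observation) -> Action:
    # The routine rules: easy to encode, and they cover most of everyday driving.
    if obs.signal == "red":
        return Action.STOP_AT_LINE
    if obs.obstacle in ("pedestrian", "vehicle"):
        return Action.YIELD

    # The hard part: an obstacle the rule book never anticipated. A human
    # driver reasons about why the cones are there and eases around them;
    # a conservative rule-based system stops and phones home instead.
    if obs.obstacle is not None:
        return Action.STOP_AND_REQUEST_HELP

    return Action.PROCEED


print(decide(Observation(signal="green")))                            # Action.PROCEED
print(decide(Observation(signal="green", obstacle="traffic_cones")))  # Action.STOP_AND_REQUEST_HELP
```

The point of the toy example is the final branch: everything the rules cover is trivial, and the unclassified case is exactly where the intuition a human driver supplies for free is still missing from the software.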
That’s where self-driving car technology still comes up short in a big way. If you hold stock in emerging autonomous vehicle systems, research the good and the bad together to form a realistic expectation of what these technologies will deliver over a one-year, five-year, or ten-year horizon.