Does Vehicle Automation Need to Overcome the Uncanny Valley to Succeed?
In the world of digital animation, there is a concept known as the uncanny valley: the sense of unease a viewer feels when something meant to replicate a human comes extremely close to looking real, yet subtle errors reveal that it is not. Automated driving systems are now approaching something similar in their development cycle. If the electronic control systems expected to drive our future vehicles can't reach a sufficient level of reliability and robustness to cross this valley, it's possible that consumers will never accept the technology.
Accident statistics indicate that up to 94% of all crashes are caused by human error; there is no doubt that human decision-making is deeply flawed. Nonetheless, human perception and visual processing have some unique qualities that let us detect incredibly subtle nuances. The 2004 film The Polar Express was poorly received by audiences because its characters fell into the uncanny valley: the microexpressions that are such an important part of human communication were missing, leaving the characters with what appeared to be dead eyes.
(Dis)trusting the System
In my role as a transportation analyst, I have the opportunity to drive many new vehicles and evaluate the latest technologies. Although I usually know where I'm going, I make a point of using the navigation, voice recognition, human-machine interface, and advanced driver assistance (ADAS) features to understand what works and, more importantly, what doesn't.
Having spent more than 17 years developing electronic control systems, including anti-lock brakes and electronic stability control, I'm constantly impressed by how far these systems have advanced. Nonetheless, I have yet to encounter a system I completely trust, including Tesla's Autopilot, arguably the most advanced ADAS on the market today. For the most part, Autopilot and other ADAS features work well within their control domains. Using radar, they can track a vehicle ahead at a safe following distance and automatically slow down or speed up in response. Lane keeping systems detect road markings and provide alerts, or even adjust the steering, to keep the vehicle from drifting out of its lane.
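To make that control logic concrete, the sketch below shows the kind of gap-and-speed regulation an adaptive cruise controller performs: hold the driver's set speed when the road is clear, and back off toward a safe following distance when radar reports a slower vehicle ahead. It is only an illustration under simple assumptions; the proportional gains, the two-second time gap, and the RadarReading and acc_acceleration names are hypothetical and are not drawn from Autopilot or any production system.

```python
# Minimal adaptive-cruise-control sketch. Gains, thresholds, and interfaces
# are hypothetical, not taken from any production ADAS.
from dataclasses import dataclass
from typing import Optional


@dataclass
class RadarReading:
    lead_distance_m: float   # range to the vehicle ahead, in metres
    lead_speed_mps: float    # absolute speed of the vehicle ahead, in m/s


def desired_gap(ego_speed_mps: float, time_gap_s: float = 2.0,
                standstill_gap_m: float = 5.0) -> float:
    """Safe following distance: a fixed standstill margin plus a time gap."""
    return standstill_gap_m + time_gap_s * ego_speed_mps


def acc_acceleration(ego_speed_mps: float, set_speed_mps: float,
                     radar: Optional[RadarReading],
                     k_gap: float = 0.3, k_speed: float = 0.5,
                     max_accel: float = 1.5, max_decel: float = -3.0) -> float:
    """Return a commanded acceleration in m/s^2.

    With no lead vehicle, track the driver's set speed. With a lead vehicle,
    blend a gap-error term and a relative-speed term, and never command more
    acceleration than plain speed control would.
    """
    # Cruise term: close the difference to the driver's set speed.
    cruise_cmd = k_speed * (set_speed_mps - ego_speed_mps)

    if radar is not None:
        gap_error = radar.lead_distance_m - desired_gap(ego_speed_mps)
        rel_speed = radar.lead_speed_mps - ego_speed_mps
        follow_cmd = k_gap * gap_error + k_speed * rel_speed
        cmd = min(cruise_cmd, follow_cmd)  # don't accelerate into the lead car
    else:
        cmd = cruise_cmd

    # Clamp to comfortable acceleration and braking limits.
    return max(max_decel, min(max_accel, cmd))


if __name__ == "__main__":
    # Example: ego at 25 m/s with a 30 m/s set speed, slower car 40 m ahead.
    lead = RadarReading(lead_distance_m=40.0, lead_speed_mps=22.0)
    print(acc_acceleration(ego_speed_mps=25.0, set_speed_mps=30.0, radar=lead))
```

The min() between the cruise and following commands is what keeps the controller from accelerating into a slower lead vehicle; real systems layer far more elaborate sensor fusion, state machines, and fault handling on top of this basic idea, which is exactly where the trust problem described next comes in.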
Unfortunately, the sensors don't always detect what's around the vehicle consistently, so drivers must remain alert and ready to take control. There are enough control errors in normal operation that it's impossible to trust the system completely. Even the far more advanced fully autonomous systems currently being tested by many automakers, suppliers, and technology companies aren't perfect. They have little or no ability to operate in areas that lack high-definition 3D maps or clearly visible signage and road markings, or in poor weather.
Consumer Pushback
A recently released study from the University of Michigan Transportation Research Institute revealed that only 15.5% of respondents wanted fully autonomous vehicles, and nearly half wanted no self-driving capability at all. Navigant Research’s Autonomous Vehicles report forecasts that fewer than 5% of new vehicles sold in 2025 will have fully autonomous capability.
This low consumer interest comes despite the fact that almost no one besides the engineers working on the technology has actually experienced a self-driving car. If those engineers cannot find a way to cross the uncanny valley of automation and convince people to trust the technology completely, it will be very difficult for it to gain traction in the marketplace.