Navigant Research Blog


Perception vs. Reality: CES and the North American International Auto Show

If there is any one lesson that we should all take away from 2016, it’s the confirmation that perception does not necessarily equal reality. What people perceive to be the truth is often the most important part of their decision-making, a concept now shown in the auto industry’s seemingly increasing participation in the International CES and apparently declining interest in Detroit’s North American International Auto Show (NAIAS).

There has been a lot of consternation in Michigan recently about the impact that CES has had on the Detroit show over the past decade. The two events tend to run back-to-back over the first 2 weeks of January. I was on hand in 2008 when then-General Motors CEO Rick Wagoner became the first major auto executive to keynote at CES, demonstrating the autonomous Chevrolet Tahoe that had won the DARPA Urban Challenge the prior year. While more automakers and suppliers than ever took part in CES this year, GM actually took a pass for the first time since Wagoner’s speech.

While the Detroit Auto Dealers Association, which organizes the NAIAS, is concerned that manufacturers are increasingly favoring CES, the issues of the auto show are largely unrelated to what’s happening in Vegas. Auto shows are consumer events designed to showcase all of the latest products available for sale, and media previews show what is arriving in the coming months.

With rare exceptions (like 2016, when Chevrolet unveiled the production version of the Bolt EV), new production vehicles are almost never shown at CES. The electronics show is a business-to-business event that isn’t open to the public; instead, the industry flocks to Las Vegas to talk up technology.

NAIAS Is About Reality; CES Is About Perception

For many years, the financial market’s perception of the auto industry has been that of old-school manufacturers of commodity widgets. The view of Silicon Valley and technology companies is that of innovators on the bleeding edge that are poised for explosive growth. Thus, investors pour billions of dollars into startups every year, even though most of the companies receiving that investment burn through the cash and fail without ever producing anything noteworthy.

Meanwhile, the modern car is one of the most complicated and technologically sophisticated devices ever created and is produced by the latest cutting-edge processes. The industry that produces them employs tens of millions of people globally directly and indirectly, generating trillions of dollars in revenue and tens of billions in profit. Yet the industry gets little respect and low market values.

The presence of the auto industry at CES is designed to reach a group of media that cover companies like Apple, Google, Microsoft, Amazon, and Facebook alongside countless startups, the same media that investors follow. The goal is to change the perception of the auto business from one that looks like it came from the dawn of the industrial revolution to one that innovates on a daily basis.

That’s not a message you can get across by showing off the refreshed Ford F-150, even though it may be packed with far more technology than anything from Silicon Valley. That’s a message you communicate by demonstrating automated cars in Las Vegas traffic jams; partnership announcements with chip designers like Nvidia won’t reach their intended audience at auto shows in Detroit, Frankfurt, or Geneva. Those shows have issues to address, but the fault doesn’t lie in Las Vegas. It’s all about perception.


With Self-Driving Cars, We’re All Cartographers

Mapmaking used to be the domain of a select group of cartographers that would gather, review, and plot out data onto sheets of paper. The chances that you actually knew a cartographer in the past were probably pretty slim—but not anymore. Today and in the future, virtually everyone is or will be a contributor to the increasingly detailed maps that represent the world we live in.

As our vehicles become increasingly automated, they need ever more detailed maps, and not just the maps we get from Google or Apple on our smartphones. The self-driving car will need much more information. The basics of street names, directions, and building numbers are just the beginning, enough to determine a basic route from where a car is to where its user has asked it to go. This data set already exists in every vehicle with a navigation system and a GPS receiver.

Limits of GPS

However, if you’ve ever tried to navigate around urban canyons in places like Manhattan or Chicago, you’ve no doubt experienced the limitations of GPS as signals from satellites orbiting more than 12,000 miles above the Earth’s surface bounce between skyscrapers. Looking at the navigation display and realizing that the car thinks it is several city blocks away from your actual location is not exactly confidence-inspiring.

Even when it works correctly, GPS is only accurate to several feet, not nearly precise enough to safely locate where a car is on the road. Then there’s the problem of navigating around on streets when you can’t actually see the road, such as when it snows. If you can’t rely on GPS for precise positioning and you can’t see lane markers, you need other data to calculate location.

Crowdsourced Maps

That’s where the future of crowdsourced mapping comes in. If you use smartphone-based navigation apps like Waze, Here, TomTom, Google Maps, or Apple Maps, you are already contributing to augmenting the map data that is also collected by fleets of sensor-equipped vehicles that drive the world’s roads.

In the near future, the cameras and other sensors that power lane keeping systems and other driver assist features will feed information to data centers, where it is aggregated with information from other drivers. In addition to real-time traffic and road conditions, these systems will look for landmarks such as bridges, signs, and buildings; anything that isn’t already in the high-definition map will be uploaded.
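As a rough illustration of that detect-and-upload loop, here is a minimal sketch in Python. The Landmark type, the map tile, and the matching tolerance are hypothetical placeholders, not any vendor’s actual data model.

```python
# Hypothetical data structures (Landmark, map tile, tolerance); this is only
# an illustration of the detect-and-upload idea, not any vendor's actual API.

from dataclasses import dataclass

@dataclass(frozen=True)
class Landmark:
    kind: str        # e.g. "sign", "bridge", "building"
    lat: float
    lon: float

def find_new_landmarks(detected, hd_map_tile, tolerance_deg=1e-4):
    """Return detected landmarks with no close match of the same kind in the tile."""
    new = []
    for lm in detected:
        known = any(
            lm.kind == m.kind
            and abs(lm.lat - m.lat) < tolerance_deg
            and abs(lm.lon - m.lon) < tolerance_deg
            for m in hd_map_tile
        )
        if not known:
            new.append(lm)
    return new

# Anything returned here would be batched and sent to the data center for
# aggregation with observations from other vehicles.
```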

Mobileye is the leading maker of image processing and recognition systems used by automakers for driver assist. In January 2016, the company announced a new product called Road Experience Management that processes images captured by car cameras and sorts out new information. This data is then transmitted and collected in order to update maps. Earlier this year, Ford invested in a startup called Civil Maps that is developing a similar system using cameras and any other sensors on the vehicle that can provide relevant data.

Even when the vehicle sensors can’t see the road, if they can see landmarks, they can triangulate and calculate position to within a few inches. Last winter, Ford demonstrated the ability to do precisely this with its autonomous prototype, relying on a high-definition map generated from LIDAR scans. The future ability of autonomous vehicles to successfully operate in varied conditions will depend in large part on the contributions that we all make toward improving the quality of maps.
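For a sense of the math involved, the following sketch solves the textbook trilateration problem: given measured ranges to a few landmarks with known map positions (in local planar coordinates), estimate where the vehicle is. It is a simplified illustration, not Ford’s localization pipeline, which fuses LIDAR, odometry, and much more.

```python
# Least-squares trilateration sketch; landmarks and ranges are in a local
# planar frame (meters). Illustrative only, not a production localizer.

import numpy as np

def locate(landmarks, ranges):
    """landmarks: (N, 2) known positions; ranges: N measured distances."""
    landmarks = np.asarray(landmarks, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x1, y1 = landmarks[0]
    r1 = ranges[0]
    # Linearize by subtracting the first range equation from the others.
    A = 2.0 * (landmarks[1:] - landmarks[0])
    b = (r1**2 - ranges[1:]**2
         + landmarks[1:, 0]**2 - x1**2
         + landmarks[1:, 1]**2 - y1**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos  # estimated (x, y)

# Example: three landmarks and noiseless ranges measured from (12.0, 7.5)
lm = [(0, 0), (100, 0), (0, 80)]
true = np.array([12.0, 7.5])
r = [np.hypot(*(true - l)) for l in np.asarray(lm, float)]
print(locate(lm, r))  # ~[12.0, 7.5]
```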


Automakers Doing More Rigorous Safety Analysis for Vehicle Automation

Back in September 2014 as the ITS World Congress gathered in Detroit, General Motors (GM) CEO Mary Barra announced that in 2016, a new Cadillac model would become available with the semi-autonomous Super Cruise system. With only a handful of weeks left in 2016, we now know that Super Cruise will debut on Cadillac’s flagship CT6 sedan, but it won’t be arriving until sometime in 2017.

A lot has happened since that announcement, and GM has put a much greater emphasis on ensuring safety as a result of the massive ignition switch recall that began early in 2014. Those process changes have led to some significant upgrades to Super Cruise in an effort to avoid the issues caused by human interactions with Tesla’s similar Autopilot driver assist system. Navigant Research’s Autonomous Vehicles report projects that by 2020, approximately 13 million vehicles with these so-called Level 2 automation systems will be sold annually.

Geofencing

In the process of evaluating the safety of Super Cruise, one of the key differences that GM has implemented is geofencing. Since Super Cruise is designed primarily as an advanced highway driving assist system for use on limited access roadways, GM is not relying on customers to understand where it does and does not function. Instead, the system will check the navigation map; if the vehicle isn’t on a suitable road, the driver will not be able to activate it. In contrast, Tesla’s operating instructions state that Autopilot should only be used on divided, limited access roads, but there is nothing in the system to actively prevent a driver from using it in an urban area or on any other roadway it isn’t designed for.
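Conceptually, the geofence check GM describes is simple. The sketch below only illustrates the idea, with made-up road classes and data structures; GM has not published how Super Cruise actually encodes its map whitelist.

```python
# Hypothetical road classes and data structures; GM has not published how
# Super Cruise encodes its map whitelist, so this only illustrates the idea.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Road:
    name: str
    road_class: str   # e.g. "limited_access_divided_highway", "urban_street"

ALLOWED_ROAD_CLASSES = {"limited_access_divided_highway"}

def can_activate(current_road: Optional[Road]) -> bool:
    """Only allow engagement when the map says the road is inside the geofence."""
    return current_road is not None and current_road.road_class in ALLOWED_ROAD_CLASSES

print(can_activate(Road("I-75", "limited_access_divided_highway")))   # True
print(can_activate(Road("Woodward Ave", "urban_street")))             # False
```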

Similarly, Tesla doesn’t really take measures to prevent operators from taking their attention away from the road. Countless videos have been posted by Tesla drivers as they take a nap, read, or even climb in the back seat while using Autopilot. The research conducted by Bryan Reimer and the Advanced Vehicle Technology Consortium at the Massachusetts Institute of Technology reinforces the idea that even informed drivers will get distracted while using systems like Autopilot or Volvo’s Pilot Assist.

Improving Safety

Cadillac is installing an active driver monitoring system in the CT6, which will include more prominent alerts if the operator does not remain engaged while using Super Cruise. If the driver does not respond, the car will pull to the side of the road and come to a safe stop.
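A driver monitoring policy of this kind amounts to an escalation ladder. The sketch below shows the general shape of such a policy; the thresholds and responses are illustrative guesses, not Cadillac’s published calibration.

```python
# Illustrative escalation policy only; the thresholds and responses are
# invented for this sketch and are not Cadillac's published calibration.

def escalation_step(seconds_inattentive):
    """Map how long the driver has been disengaged to a response level."""
    if seconds_inattentive < 4:
        return "no_action"
    if seconds_inattentive < 8:
        return "visual_alert"                       # e.g. steering wheel light bar
    if seconds_inattentive < 12:
        return "audible_alert_and_seat_vibration"
    return "bring_vehicle_to_safe_stop"             # driver never responded

print(escalation_step(10))   # -> "audible_alert_and_seat_vibration"
```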

GM safety engineers have also addressed the issue of the inevitable mechanical failure. When fully autonomous vehicles arrive, they will require systems that can maintain control during a failure mode until the vehicle is safely stopped. One of the key safety failure modes for a system like Super Cruise is the electrically assisted steering.

One of the optional features on the currently available CT6 without Super Cruise is the Active Chassis Package, which includes a rear-wheel steering system to aid low-speed maneuverability and high-speed stability. This rear steering system will be included on the CT6 with Super Cruise. While the rear steering is not designed to provide the same full maneuvering capability as the normal front steering, it will be sufficient to safely steer the car to the side of the road in the event of a front steering failure.

We won’t have an opportunity to fully evaluate the capabilities of Super Cruise until sometime next year, but it does inspire some confidence that GM is at least thinking about and trying to address both human and mechanical failure modes before putting the system into customer hands.


Cyber Security Is Imperative Before Deploying Autonomous Vehicles

August 2016 brought a flurry of autonomous driving announcements from Delphi, nuTonomy, Ford, Velodyne, Volvo, Uber, Quanergy, and others. News about developments and deployment plans for self-driving vehicles came almost daily. A common thread was that the vehicles will be used as part of autonomous mobility on-demand (AMOD) services that require connectivity in addition to onboard sensing to function. However, something equally (if not more) important to implement before deploying any of these vehicles is beefing up the cyber security.

As the automotive world has raced over the last few years to transform itself into a mobility business, cyber security experts of both the white and black hat variety have also been advancing their own capabilities. In parallel with that, we’ve seen the launch of numerous startups focused on securing increasingly sophisticated vehicles from bad actors, including several based in Israel. Among them are Karamba Security, Argus Cyber Security, and TowerSec.

Hardened Telematics

With external connection points through telematics being the obvious starting point for any malicious attacker trying to infiltrate a vehicle, that’s also the first surface that needs to be hardened. “To provide protection, we have to think like hackers,” said David Barzilai, chairman and co-founder of Karamba. “There are two primary ways to hack a system like this, dropping malicious binary code into the electronic control unit [ECU] or in-memory attacks while the system is running.”

The so-called code-dropper approach involves rewriting some of the code that resides in the flash storage of an ECU with malicious code designed to do something never intended by the manufacturer. Karamba has devised an approach to prevent this that is very straightforward for the software engineers at an automaker to implement without having to change any of their own code.

When building the binary files that ultimately get loaded into the ECU, the build scripts include calls to the Karamba system to automatically include some of that company’s code. Karamba generates hashes (cryptographic digests that uniquely represent the contents of a file) of all the factory binary files included in the build. If someone tries to reprogram an ECU with a binary that doesn’t match a stored hash, it will be rejected.
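The underlying technique is a whitelist-by-hash check. The snippet below sketches it with standard SHA-256 hashing; it is a generic illustration of the approach described here, not Karamba’s actual implementation or API.

```python
# Generic whitelist-by-hash sketch using SHA-256; an illustration of the
# approach described in the article, not Karamba's actual implementation.

import hashlib

def sha256_of(path):
    """Digest a binary image file in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# At build time, record the digests of the factory images, e.g.:
# factory_hashes = {"engine_ecu": sha256_of("engine_ecu.bin")}

def allow_reflash(candidate_path, ecu_name, factory_hashes):
    """Reject any image whose digest isn't on the factory whitelist."""
    return sha256_of(candidate_path) == factory_hashes.get(ecu_name)
```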

In-Memory Attacks

Even if the original programming remains intact, in-memory attacks remain the most common attack vector. Control instructions and data get moved from the static flash storage to dynamic memory in order to run in real time. If an attacker manages to inject deliberately corrupted data into a memory address, it is possible to send the control flow off to an instruction never intended by the designers of the system. This is the sort of attack that can enable someone connecting through a vehicle’s telematics system to take control of safety-critical systems like the throttle, brakes, or steering.

Some security providers use heuristic analysis to look for anomalous behavior in real time and stop the activity. This approach creates rules with weighting and probability to detect anomalies based on previously unknown attacks and is utilized by most computer anti-malware programs. Since the in-vehicle electronics should never be running random unknown programs like a computer or smartphone, Karamba has taken a deterministic approach. During the software build, they analyze and map every possible instruction control flow. In the vehicle, any instruction call that doesn’t match the flow map immediately gets discarded, an approach that should not result in any false positives.
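The contrast with heuristic scanning is easiest to see in miniature. The sketch below models the deterministic approach as a whitelist of call edges produced at build time; the edge names and structure are invented for illustration and do not represent Karamba’s product.

```python
# Invented call edges for illustration only; not Karamba's product or data.
# The build step would emit this whitelist from the firmware's control flow graph.

ALLOWED_CALL_EDGES = {
    ("can_rx_handler", "parse_speed_frame"),
    ("parse_speed_frame", "update_speed_signal"),
    ("diag_session", "read_data_by_id"),
}

def check_call(caller, callee):
    """Allow only call edges that were seen in the build-time flow map."""
    return (caller, callee) in ALLOWED_CALL_EDGES

print(check_call("can_rx_handler", "parse_speed_frame"))    # True: legitimate flow
print(check_call("telematics_rx", "update_speed_signal"))   # False: blocked and logged
```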

Navigant Research’s Autonomous Vehicles report projects that nearly 5 million autonomous vehicles will be sold in 2025, growing to more than 40 million in 2030. Harnessing the safety benefits of this technology requires every vehicle to be secure and resilient against cyber attacks.


Automakers Need to Start Being More Candid About the Limits of Autonomous Technology

When was the last time you ever actually read an end-user license agreement or terms of service before clicking “Accept” to install a piece of software or join the latest social network? Odds are that unless you are a lawyer, the answer is never. The technology companies that make these products would probably like it to stay that way. However, in the world of the self-driving car, that is not an acceptable policy. The tragic death of a Tesla Model S driver in Florida highlights the need for all automakers to be more open and transparent about the limitations of autonomous technology.

Revolutionary (When It Works)

It seems that barely a day goes by when we don’t get a breathless press release from an automaker, supplier, technology company, or Silicon Valley startup about the amazing progress that they are making on self-driving technology. You can already go out today and purchase vehicles from a number of brands that promise at least partial autonomous capability, and full autonomy is being targeted by the end of this decade. While Tesla Autopilot, Volvo Pilot Assist, and other similar systems seem truly magical when they work as advertised, there are far more scenarios where these systems do not function at all.

Unfortunately, we have not seen Tesla CEO Elon Musk stand on a stage and tell people not to use Autopilot in the city, on curving rural roads, or in the snow. GM CEO Mary Barra stood on the stage at the 2014 ITS World Congress in Detroit and promised a Cadillac with hands-off Super Cruise capability in 2016. I’ve experienced prototype systems from Toyota and Honda and driven production systems from Tesla and Volvo, and when they work, they are incredibly impressive.

I am an engineer by training and technology analyst by trade, and I have a much greater understanding than the average consumer about how these systems work. As a result, I can never truly relax with these systems because I’m always on the lookout for the failure modes, and they are numerous. Unless very explicitly told otherwise, the average consumer will be so excited by the prospect of turning over control to a computer that they will pay no attention to the warning that Autopilot is very much a beta feature before enabling it. Volvo doesn’t even give that warning before allowing Pilot Assist to be engaged.

Mainstream Customers

Tesla is fortunate that many of its existing customers are early adopters that expect technology to be imperfect, although most of them probably don’t expect to be at risk of injury when it fails. When the Model 3 arrives and mainstream consumers try Autopilot and find its limitations, they aren’t likely to be as forgiving, and the same is true for every other automaker offering autonomous features. Navigant Research’s Autonomous Vehicles report projects more than 4 million autonomous-capable vehicles to be sold by 2025. Those customers need to know what the systems can do—and, more importantly, what they cannot.

We don’t yet know all the details of what happened in the tragic crash in Florida. Similar accidents where one vehicle crosses a highway divider happen all the time, and fatalities occur when humans are in control. What we do know is that we are far from a time when we can just sit back and relax and let the computer do the driving. Every company involved in this space needs to be far more upfront with consumers about what this technology can do, or risk poisoning the market.


Initial Quality Study Highlights the Commercial Risks of Vehicle Automation

For many years after J.D. Power and Associates began conducting its Initial Quality Study (IQS) 3 decades ago, most problems reported by customers in the first 90 days of vehicle ownership were either defects or non-functional features. However, in the past decade, the nature of reported problems has shifted toward what J.D. Power calls design-related issues. This could pose a serious problem for manufacturers as they rush to introduce autonomous driving technology.

At a recent meeting of the Automotive Press Association in Detroit, J.D. Power vice president Renee Stephens presented the 2016 IQS results. The industry as a whole improved by 6% in 2016 to just 105 problems per 100 vehicles, the best improvement in 7 years. Among the reported problems, those that fall into the audio, connectivity, electronics, and navigation areas continue to represent the largest category of complaints.

Voice recognition and connected devices still befuddle consumers. Numerous manufacturers including Ford have seen ratings decline in past years as a result of difficulties using infotainment systems. “Expected reliability remains the most important consideration when purchasing a new vehicle, cited by 49% of owners,” said Stephens. “It’s critical that technology be implemented correctly or consumers lose trust.”

Potential Problems

An increasing number of new vehicles now include advanced driver assist systems (ADAS) such as adaptive cruise control and lane keeping aids. However, if features don’t work as expected by the consumer, they often get turned off after a few false positives or surprises. This highlights a potentially serious problem for the auto industry in the coming decade as semi- and fully autonomous systems are increasingly rolled out in the marketplace. Navigant Research’s Autonomous Vehicles report forecasts that nearly 5 million autonomous vehicles will be sold in 2025, a volume expected to grow to more than 40 million in 2030.

Regardless of whether current ADAS features and future autonomous systems work as their engineers intend, it is absolutely imperative that they work as consumers expect. Autonomous capability will add significant cost to vehicles, and until there is a shift toward on-demand mobility services, consumers will have to absorb that cost. If their experience with the stepping stone technologies is excessively negative, the market will reject these technologies.

Contradictory Views

This will be particularly true if consumers realize that autonomous systems don’t work at all in the scenarios where they are most likely to want to hand over control, such as in poor weather. A major market force for automated driving is improving safety. Related to the general functionality of these systems is the problem of ethics, where, as is often the case, the public has contradictory views. A new study by MIT professor Iyad Rahwan shows that consumers want autonomous vehicles to minimize casualties in the event of unavoidable crashes. However, that preference only holds if the respondent is not the potential casualty. It comes down to: protect everyone, but protect me first.

If society as a whole is ever going to benefit from the potential of autonomous vehicles to reduce collisions, congestion, and energy use, much will have to change. Consumers will have to be educated in how these systems work so that expectations can be set appropriately. If the bar is not adjusted, consumer complaints in IQS and other studies will skyrocket, and this technology could die on the vine.


Does Vehicle Automation Need to Overcome the Uncanny Valley to Succeed?

In the world of digital animation, there is a concept known as the uncanny valley, which refers to a sense of unease generated in a viewer when something meant to replicate a human appears extremely close to being real, but subtle errors indicate that it is not. Automated driving systems are now approaching something similar in their development cycle. If the electronic control systems that are expected to drive our future vehicles can’t reach a sufficient level of reliability and robustness to cross this valley, it’s possible that consumers will never accept the technology.

Accident statistics indicate that up to 94% of all crashes are caused by human error; there is no doubt that the human decision-making process is deeply flawed. Nonetheless, human perception and visual processing have some unique qualities that make us able to detect incredibly subtle nuances. The 2004 film The Polar Express was not well received by audiences largely because its characters fell into the uncanny valley. The microexpressions that are such an important part of human communication were missing, leaving the characters with what appeared to be dead eyes.

(Dis)trusting the System

In my role as a transportation analyst, I have the opportunity to drive many new vehicles to evaluate the latest technologies. Despite usually knowing where I’m going, I try to use the navigation systems along with voice recognition, human-machine interfaces, and advanced driver assistance (ADAS) features to aid my understanding of what works and, more importantly, what doesn’t.

Having spent more than 17 years developing electronic control systems including anti-lock brakes and electronic stability control, I’m constantly impressed at how far these systems have advanced. Nonetheless, I have yet to encounter a system that I completely trust, including Tesla’s Autopilot, which is arguably the most advanced ADAS system on the market today. For the most part, Autopilot and other ADAS features work well within their control domains. Using radar, they can track a vehicle ahead at a safe distance and automatically slow down or speed up in response. Lane keeping systems detect road markings and provide alerts or even adjust the steering to keep the vehicle from drifting out of the lane.
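To make concrete what that following behavior involves, here is a toy time-gap controller. The gains, gap target, and speed-command formulation are invented for illustration; production adaptive cruise systems use far more elaborate control logic and sensor fusion.

```python
# Toy time-gap controller; gains, gap target, and the speed-command formulation
# are invented for illustration and are not any production ACC calibration.

def acc_speed_command(own_speed, gap_m, lead_speed, set_speed,
                      desired_time_gap_s=1.8, k_gap=0.3, k_speed=0.5):
    """Speeds in m/s, gap in meters; returns the commanded speed."""
    desired_gap = own_speed * desired_time_gap_s
    gap_error = gap_m - desired_gap            # positive: more room than needed
    speed_error = lead_speed - own_speed       # positive: lead pulling away
    command = own_speed + k_gap * gap_error + k_speed * speed_error
    return min(command, set_speed)             # never exceed the driver's set speed

# Closing in on a slower lead vehicle: the command drops below the set speed.
print(acc_speed_command(own_speed=30.0, gap_m=40.0, lead_speed=28.0, set_speed=33.0))
```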

Unfortunately, the sensors don’t always detect what’s around the vehicle consistently, so drivers must remain alert and be ready to take control. There are enough control errors in normal operation that it’s impossible to completely trust the system. Even the far more advanced fully autonomous systems currently being tested by many automakers, suppliers, and technology companies aren’t perfect. They have little or no ability to operate in areas that don’t have high-definition 3D maps or clearly visible signage and road markings, or in poor weather.

Consumer Pushback

A recently released study from the University of Michigan Transportation Research Institute revealed that only 15.5% of respondents wanted fully autonomous vehicles, and nearly half wanted no self-driving capability at all. Navigant Research’s Autonomous Vehicles report forecasts that fewer than 5% of new vehicles sold in 2025 will have fully autonomous capability.

This low consumer interest comes despite the fact that almost no one besides the engineers working on the technology has actually experienced a self-driving car. If those engineers cannot find a way to cross the uncanny valley of automation and convince people to completely trust the technology, it will be very difficult for it to gain traction in the marketplace.


Toyota, Microsoft, and an Army of Software Bots to Deliver Contextual Driving

A new Toyota subsidiary aims to provide drivers with autonomous contextual help via the assistance of software bot technology just announced by Microsoft. Skynet isn’t here just yet, but Toyota Connected Inc. represents just the beginning of where transportation is heading in the coming decades as we transition from personally owned vehicles to mobility as a service.

Bots, as they have become known in recent years, are basically just a relatively new type of app that usually runs on a server somewhere in the cloud. What makes bots special is their ability to tap into huge databases and take advantage of sophisticated machine learning to understand the meaning of a query. Those queries can come from either a human or another bot. One bot may collect information from any number of other bots, merging and presenting it to a human or vehicle interface at the edge of the cloud.

Cascade of Queries

A contemporary example might be a driver telling the car that they are hungry. This could trigger a cascade of queries that takes the driver’s current location and stored data about their favorite kinds of food, finds a restaurant with an available table at a time based on how long it will take to arrive there, and returns a response of “Would you like a reservation at restaurant X at 6:45 p.m.?” All of this could stem automatically from that one original question with no further input from the driver.
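The cascade itself is just a chain of dependent lookups. The sketch below stands in plain Python callables for the individual bots; none of the names or interfaces correspond to Microsoft’s Bot Framework or Toyota Connected’s actual services.

```python
# Minimal sketch of the cascade described above. Each "bot" is a plain Python
# callable standing in for a cloud service; the names and interfaces are
# hypothetical, not Microsoft Bot Framework or Toyota Connected APIs.

def handle_hungry(get_location, get_preferences, get_drive_time, find_table):
    location = get_location()                              # location bot
    cuisines = get_preferences()                           # preference bot
    eta_min = get_drive_time(location)                     # routing/traffic bot
    name, time = find_table(location, cuisines, eta_min)   # restaurant bot
    return f"Would you like a reservation at {name} at {time}?"

# Stubbed bots so the cascade can run end to end:
print(handle_hungry(
    get_location=lambda: (42.33, -83.05),
    get_preferences=lambda: ["italian", "thai"],
    get_drive_time=lambda loc: 25,
    find_table=lambda loc, cuisines, eta: ("Restaurant X", "6:45 p.m."),
))
```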

Now imagine extending this concept 20 years into the future when we will have fleets of on-demand autonomous vehicles moving around cities, as projected in Navigant Research’s Transportation Outlook: 2025-2050 white paper. Today, if you are leaving one appointment and heading to another, you pull out a phone, open the Uber or Lyft app, and request a ride.

In 2035, the mobile communicator that has replaced your phone reads your calendar, sees you have an appointment 20 minutes away, knows your current meeting will end in 5 minutes, and automatically summons a vehicle to your location so that it pulls up as you step out onto the sidewalk with no intervention. Several bots have contributed to this function, including one that provides weather data, another with real-time and historical traffic information, one to monitor your calendar, and another to handle billing for the mobility service of your choice, all without any direct input from the rider.

Bot Creation

At its Build 2016 developer conference on March 30, Microsoft announced the release of a bot software development framework to simplify the task of creating bots. Toyota Connected plans to utilize the Microsoft Azure cloud platform to provide services to its customers using data from telematics and vehicle-to-everything (V2X) communications systems. These communications pathways can provide drivers with real-time alerts about slippery roads when a vehicle ahead triggers an automated braking system or stability control, and can also enable automatic re-routing to avoid congestion or reduce energy consumption.

Navigant Research’s Connected Vehicles report projects that more than 80 million vehicles will be sold with V2X capability in 2025. Contextual data moving through the air between bots in vehicles and in the cloud is expected to reduce energy use, improve road safety, and generally make life more convenient for everyone.


Smartphone-Based Car Connectivity Is Likely Only an Interim Solution

I’ve been an advocate of smartphone projection infotainment solutions in cars ever since Ford introduced SYNC AppLink back in 2010. That appreciation has grown recently since the rollout of Apple CarPlay and Android Auto. Despite the vastly superior user experiences provided by Google and Apple compared to OEM designs, the coming of autonomous vehicle control systems means these almost certainly won’t be long-term solutions.

Built-in GPS navigation systems have been an expensive but useful option since their debut in the 1990s. Unfortunately, the maps and especially the points-of-interest database can become rapidly outdated, and each entry typically has only one name, so if a driver doesn’t get the spelling exactly right, they’ll be out of luck. The ability to draw information from the ever-changing data stores of Google, Bing, and other search engines is a key advantage of smartphone navigation. Combine that with cloud-based voice recognition that provides more natural language search and recognizes multiple name variations, and you have a much more robust user experience.
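The difference that multiple name variations make is easy to demonstrate with nothing more than the Python standard library. This is only a toy similarity lookup; the cloud search services behind CarPlay and Android Auto are vastly more capable.

```python
# Toy similarity lookup using only the standard library; the cloud search
# behind CarPlay and Android Auto is vastly more capable than this.

import difflib

POI_NAMES = ["Comerica Park", "Cobo Center", "Detroit Institute of Arts"]

def lookup(query):
    """Return the closest POI name, or None if nothing is similar enough."""
    matches = difflib.get_close_matches(query, POI_NAMES, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(lookup("Commerica Park"))   # -> "Comerica Park" despite the misspelling
```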

Reliable Data

Such reliable and detailed navigational data will be a crucial component of making self-driving vehicles work reliably, especially if they are moving around without occupants as they park themselves or go to pick up passengers. Navigant Research’s Autonomous Vehicles report projects that there could be as many as 85 million vehicles capable of some degree of autonomy on the world’s roads in the next 2 decades.

True self-driving vehicles, especially those that are operated as part of mobility as a service fleets, will need connectivity and built-in maps that don’t rely on the presence of an occupant’s phone. OEMs are rapidly increasing the deployment of telematics systems into new vehicles. Every vehicle built by General Motors (GM) for sale in most major markets comes with OnStar built in, and Ford will be offering SYNC Connect on most of its fleet beginning this year. Within the next few years, these cars will be capable of searching both embedded and cloud-based navigational databases for near real-time information.

When Ford recently began testing its prototype autonomous Fusion in winter weather conditions, one key to the car’s ability to get around on snow-covered roads was the detailed 3D maps that were available onboard. The car was able to find its way around by using LIDAR to scan the surroundings for landmarks, something that wouldn’t be possible using smartphone projection.

Powertrain electrification can also benefit greatly from built-in 3D maps. In 2014, the Mercedes-Benz S500 plug-in hybrid was one of the first vehicles to use knowledge of the road topography ahead to manage the balance between using battery and internal combustion power. The Kia Niro and Hyundai Ioniq hybrids going on sale this year are utilizing a similar strategy to achieve fuel efficiency improvements of approximately 1%.
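As a simple illustration of how topography can feed the energy management strategy, the heuristic below prefers battery power when a long descent is coming up so that regenerative braking isn’t wasted on a full pack. The thresholds are invented; this is not the calibration Mercedes-Benz, Kia, or Hyundai actually uses.

```python
# Invented thresholds; not the strategy Mercedes-Benz, Kia, or Hyundai actually
# uses, just an illustration of routing topography into the energy management.

def prefer_battery_now(upcoming_grades_pct, descent_threshold_pct=-3.0):
    """upcoming_grades_pct: road-grade samples (%) over the next few kilometers."""
    long_descent_ahead = any(g <= descent_threshold_pct for g in upcoming_grades_pct)
    # Deplete the battery before the descent so regenerative braking isn't
    # wasted on an already-full pack.
    return long_descent_ahead

print(prefer_battery_now([1.0, 0.5, -4.0, -5.0]))   # True: a descent is coming
```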

Different Roles

Smartphone projection systems can certainly utilize topographical data to provide more economical routing decisions for drivers of the hundreds of millions of existing cars that will continue to operate for decades to come, and they will likely play a major role in reaching critical mass for vehicles capable of V2X communications. CarPlay and Android Auto will also continue to play a part in delivering news and entertainment to drivers, but even this will likely be supplanted by the telematics systems.

This doesn’t mean Apple and Google won’t have a part to play in future vehicles. In addition to the autonomous control systems that Google is offering to existing OEMs, the technology companies will probably be pushing for greater integration of their software directly into vehicle infotainment without the need for a connected phone.