Mapping


HD Maps Might Help Teslas Stop Running into Fire Trucks

Recently, a Tesla in Utah ran into the back of a stationary fire truck at high speed. This is the second such crash this year, and the National Transportation Safety Board is already investigating the earlier one. Incidents involving Teslas get news coverage because of the strident safety claims Elon Musk makes for his company’s Autopilot driver assist system, but such crashes can happen with many vehicle brands. Relying on a single sensor for active safety control is often inadequate, and high definition (HD) maps may actually turn out to be part of the solution.

Teslas, and many millions of other vehicles, are equipped with forward-looking radar sensors that are used for adaptive cruise control (ACC). While ACC is active, the radar detects a moving vehicle ahead and measures the gap to it. If the lead vehicle slows down, the ACC vehicle automatically slows to maintain a safe gap.
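To make that gap-keeping behavior concrete, here is a minimal sketch in Python of how an ACC controller might pick a target speed from the radar-measured gap. The function name, the time-gap constant, and the gain are hypothetical simplifications, not taken from any production system.

```python
# Minimal sketch of adaptive cruise control (ACC) gap keeping.
# Names and constants are illustrative, not from any production system.

def acc_speed_command(ego_speed_mps, set_speed_mps, gap_m, lead_speed_mps,
                      desired_time_gap_s=1.8, gain=0.5):
    """Return a target speed for the ACC vehicle.

    If no lead vehicle is detected (gap_m is None), cruise at the set speed.
    Otherwise, slow down or speed up to hold roughly desired_time_gap_s
    seconds of following distance behind the lead vehicle.
    """
    if gap_m is None:                      # nothing detected ahead
        return set_speed_mps

    desired_gap_m = desired_time_gap_s * ego_speed_mps
    gap_error_m = gap_m - desired_gap_m    # positive = more room than needed

    # Track the lead vehicle's speed, nudged by the gap error,
    # but never exceed the driver's set speed.
    target = lead_speed_mps + gain * gap_error_m / desired_time_gap_s
    return max(0.0, min(target, set_speed_mps))


# Example: the lead car slows to 20 m/s while we cruise at 30 m/s with a 45 m gap.
print(acc_speed_command(ego_speed_mps=30.0, set_speed_mps=30.0,
                        gap_m=45.0, lead_speed_mps=20.0))  # -> 17.5 m/s
```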

Forward-Looking Sensors Not Seeing Everything

You might think that if ACC detects a stopped vehicle it would automatically slow to a stop, but as the two recent crashes indicate, this isn’t always true. When ACC is used at highway speed, the assumption is that the other vehicles on the road will also be moving. To prevent false positives that would cause the brakes to erroneously engage, these systems are designed to ignore static objects like road signs, light poles, etc.

When a stationary vehicle that was beyond radar range comes into view while the ACC-equipped car is traveling at highway speed (as both Teslas in these crashes were), the system does not classify it as a vehicle and ignores it. Many vehicles also include automatic emergency braking and forward collision warning systems intended to prevent crashes, but those systems are not optimized to identify stationary vehicles in the roadway when approaching at highway speed. Refinements in the coordination between these systems will continue.
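A minimal sketch of the filtering logic described above, in Python. The thresholds and the way the radar track is represented are assumptions for illustration; the point is only that an object whose ground speed is near zero gets discarded as clutter when the ego car is at highway speed.

```python
# Illustrative sketch (hypothetical thresholds) of why a stopped vehicle
# can be discarded: radar tracks whose ground speed is near zero are
# treated as roadside clutter when the ego car is at highway speed.

def should_ignore(ego_speed_mps, relative_speed_mps,
                  highway_speed_mps=22.0, stationary_tol_mps=1.0):
    """Return True if a radar track would be filtered out as a static object.

    relative_speed_mps is the closing rate reported by the radar
    (negative means the gap is shrinking).
    """
    object_speed = ego_speed_mps + relative_speed_mps   # object's ground speed
    at_highway_speed = ego_speed_mps >= highway_speed_mps
    looks_static = abs(object_speed) <= stationary_tol_mps
    return at_highway_speed and looks_static


# A fire truck stopped in the lane while we travel at 29 m/s (~65 mph):
# the closing rate equals our own speed, so the track looks "static".
print(should_ignore(ego_speed_mps=29.0, relative_speed_mps=-29.0))  # True
```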

How Does Mapping Fit into This?

Today, increasingly detailed maps are being used not just for routing but also as inputs to hybrid propulsion systems and long-range sensors in partially automated vehicles from GM and Mercedes-Benz. In the coming years, HD maps with detailed locations of static objects will be used for precision localization. If a vehicle has HD maps with the locations of fixed roadside objects, it may be possible to fuse that data with the real-time radar returns to better understand which objects can safely be ignored. Add image data from the camera used for lane keeping assist, and it should be possible to recognize legitimately stopped vehicles and respond accordingly.
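Here is a hedged sketch of that fusion idea: instead of discarding every static radar return, discard only those that match a fixed object already in the HD map, and escalate the rest. The map data structure, the match radius, and the escalation labels are assumptions made for illustration.

```python
# Hedged sketch of map-aided filtering: instead of discarding every static
# radar return, only discard those that match a fixed object in an HD map.
# Data structures and the match radius are assumptions for illustration.

import math

def classify_static_return(radar_xy, mapped_static_objects, match_radius_m=2.0):
    """Decide what to do with a static radar detection.

    radar_xy: (x, y) position of the detection in map coordinates.
    mapped_static_objects: list of (x, y) positions of known fixed objects
    (signs, poles, overhead structures) from the HD map.
    """
    for obj_xy in mapped_static_objects:
        if math.dist(radar_xy, obj_xy) <= match_radius_m:
            return "ignore: matches mapped roadside object"
    # Unmapped static object in the driving path: hand it to the camera /
    # classifier and, if confirmed, brake for it.
    return "escalate: possible stopped vehicle, confirm with camera"


hd_map = [(105.0, 3.5), (160.0, -3.8)]                 # known sign and pole
print(classify_static_return((104.6, 3.3), hd_map))    # mapped sign -> ignore
print(classify_static_return((132.0, 0.2), hd_map))    # unknown -> escalate
```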

Companies such as San Francisco startup Mapper and incumbent map providers like HERE and TomTom have begun building HD maps. Mapper has developed a low-cost, multi-camera data collection system that can be installed in vehicles driven for ride-hailing services or in other fleets. By the end of 2018, up to 2 million vehicles from Volkswagen, BMW, and Nissan are expected to be on the road globally with Mobileye’s latest EyeQ4 image processor. These vehicles will also collect data that feeds into Mobileye’s Road Experience Management system and then into maps from providers including HERE.

The sooner we start augmenting existing driver assist systems with new data sources such as HD maps, or fusing them with the other sensors already in the vehicle, the sooner object classification should improve and help prevent more crashes. The Tesla crashes are getting the attention, but these are problems that afflict virtually every manufacturer, and the technology needs to be improved in order to save more lives.


With Self-Driving Cars, We’re All Cartographers

Mapmaking used to be the domain of a select group of cartographers who would gather, review, and plot data onto sheets of paper. The chances that you actually knew a cartographer in the past were probably pretty slim—but not anymore. Today and in the future, virtually everyone is or will be a contributor to the increasingly detailed maps that represent the world we live in.

As our vehicles become increasingly automated, they need ever more detailed maps, and not just the maps we get from Google or Apple on our smartphones. The self-driving car will need much more information. The basics of street names, directions, and building numbers are just the beginning: enough to determine a basic route from where a car is to where its user has asked it to go. That data set already exists in every vehicle with a navigation system and a GPS receiver.

Limits of GPS

However, if you’ve ever tried to navigate around urban canyons in places like Manhattan or Chicago, you’ve no doubt experienced the limitations of GPS as signals from satellites orbiting more than 12,000 miles above the Earth’s surface bounce between skyscrapers. Looking at the navigation display and realizing that the car thinks it is several city blocks away from your actual location is not exactly confidence-inspiring.

Even when it works correctly, GPS is only accurate to several feet, not nearly precise enough to safely determine where a car is on the road. Then there’s the problem of navigating streets when you can’t actually see the road, such as when it snows. If you can’t rely on GPS for precise positioning and you can’t see lane markers, you need other data to calculate location.

Crowdsourced Maps

That’s where the future of crowdsourced mapping comes in. If you use smartphone-based navigation apps like Waze, HERE, TomTom, Google Maps, or Apple Maps, you are already helping to augment the map data that is also collected by fleets of sensor-equipped vehicles driving the world’s roads.

In the near future, the cameras and other sensors that power lane keeping and other driver assist features will feed information to data centers, where it is aggregated with information from other drivers. In addition to real-time traffic and road conditions, these systems will look for landmarks like bridges, signs, and buildings, and anything that isn’t already in the high-definition map will be uploaded.
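A small Python sketch of that “upload only what’s new” idea: compare the landmarks a car detects against the HD map tile it is driving through and report only the delta. The tile format, landmark fields, and deduplication radius are assumptions for illustration, not any provider’s actual schema.

```python
# Illustrative sketch of the "upload only what's new" idea: compare landmarks
# detected by the car's camera against the HD map tile and report the rest.
# Tile format, landmark fields, and the dedup radius are assumptions.

import math

def new_landmarks(detected, map_tile, dedup_radius_m=1.5):
    """Return detected landmarks that are not already in the map tile.

    Each landmark is a dict like {"type": "sign", "x": ..., "y": ...}.
    """
    fresh = []
    for det in detected:
        already_mapped = any(
            det["type"] == known["type"]
            and math.dist((det["x"], det["y"]),
                          (known["x"], known["y"])) <= dedup_radius_m
            for known in map_tile
        )
        if not already_mapped:
            fresh.append(det)
    return fresh          # only this small delta would be uploaded


tile = [{"type": "sign", "x": 10.0, "y": 2.0}]
seen = [{"type": "sign", "x": 10.3, "y": 2.1},      # already known
        {"type": "bridge", "x": 50.0, "y": 0.0}]    # new -> upload
print(new_landmarks(seen, tile))
```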

Mobileye is the leading maker of image processing and recognition systems used by automakers for driver assist. In January 2016, the company announced a new product called Road Experience Management that processes images captured by car cameras and sorts out new information. This data is then transmitted and collected in order to update maps. Earlier this year, Ford invested in a startup called Civil Maps that is developing a similar system using cameras and any other sensors on the vehicle that can provide relevant data.

Even when the vehicle sensors can’t see the road, if they can see landmarks, they can triangulate and calculate position to within a few inches. Last winter, Ford demonstrated the ability to do precisely this with its autonomous prototype, using a high-definition map generated with LIDAR. The future ability of autonomous vehicles to successfully operate in varied conditions will depend in large part on the contributions that we all make toward improving the quality of maps.
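As a final illustration of landmark-based positioning, here is a minimal Python sketch that estimates a position from range measurements to mapped landmarks using a standard trilateration least-squares trick. It is not Ford’s actual method; a real localizer fuses bearings, odometry, and many more landmarks, and the numbers below are made up.

```python
# Minimal sketch of landmark-based positioning: given ranges to mapped
# landmarks, solve for the car's position by least squares. A real system
# fuses bearings, odometry, and many more landmarks; this is illustrative.

import numpy as np

def locate(landmarks_xy, ranges_m):
    """Estimate (x, y) from ranges to known landmarks.

    Linearizes the range equations against the first landmark (a standard
    trilateration trick) and solves the resulting linear system.
    """
    lx, ly = landmarks_xy[0]
    r0 = ranges_m[0]
    A, b = [], []
    for (x, y), r in zip(landmarks_xy[1:], ranges_m[1:]):
        A.append([2 * (x - lx), 2 * (y - ly)])
        b.append(r0**2 - r**2 + x**2 - lx**2 + y**2 - ly**2)
    (px, py), *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return px, py


# Three mapped landmarks and the measured distances to each of them.
landmarks = [(0.0, 0.0), (100.0, 0.0), (0.0, 80.0)]
print(locate(landmarks, ranges_m=[50.0, 80.62, 50.0]))  # roughly (30, 40)
```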