With Self-Driving Cars, We’re All Cartographers


Mapmaking used to be the domain of a select group of cartographers who would gather, review, and plot data onto sheets of paper. In the past, the chances that you actually knew a cartographer were slim. Not anymore: today, virtually everyone is, or soon will be, a contributor to the increasingly detailed maps that represent the world we live in.

As our vehicles become increasingly automated, they need ever more detailed maps, and not just the ones we get from Google or Apple on our smartphones. A self-driving car will need much more information. Street names, directions, and building numbers are just the beginning: enough to determine a basic route from where a car is to where its user has asked it to go. That data set already exists in every vehicle with a navigation system and a GPS receiver.

Limits of GPS

However, if you’ve ever tried to navigate the urban canyons of Manhattan or Chicago, you’ve no doubt experienced the limitations of GPS as signals from satellites orbiting more than 12,000 miles above the Earth’s surface bounce between skyscrapers. Looking at the navigation display and realizing that the car thinks it is several city blocks away from your actual location is not exactly confidence-inspiring.

Even when it works correctly, GPS is only accurate to within several feet, not nearly precise enough to safely determine where a car sits on the road. Then there’s the problem of navigating streets when you can’t actually see the road surface, such as when it snows. If you can’t rely on GPS for precise positioning and you can’t see lane markers, you need other data to calculate your location.

Crowdsourced Maps

That’s where the future of crowdsourced mapping comes in. If you use smartphone-based navigation apps like Waze, Here, TomTom, or Google or Apple Maps, you are already helping to augment the map data that fleets of sensor-equipped vehicles also collect as they drive the world’s roads.

In the near future, the cameras and other sensors that power lane-keeping systems and other driver-assist features will feed information to data centers, where it will be aggregated with information from other drivers. In addition to real-time traffic and road conditions, these systems will look for landmarks like bridges, signs, and buildings, and anything that isn’t already in the high-definition map will be uploaded, as sketched below.
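As a rough illustration of that filtering step, here is a minimal Python sketch of a client-side check that compares detected landmarks against the on-board high-definition map and keeps only the ones the map doesn’t already contain. The data structures, the 2-meter matching radius, and the function names are hypothetical simplifications, not any automaker’s or supplier’s actual pipeline.

```python
# Minimal sketch of a client-side "map delta" filter: detected landmarks
# that already exist in the on-board high-definition map are ignored,
# and only unknown ones are queued for upload to the data center.
# All names and thresholds here are hypothetical, not a vendor's API.

from dataclasses import dataclass
from math import hypot

@dataclass
class Landmark:
    kind: str   # e.g. "sign", "bridge", "building"
    x: float    # local map coordinates, in meters
    y: float

# Landmarks the HD map already knows about (normally loaded from a map tile).
hd_map = [
    Landmark("sign", 120.0, 4.5),
    Landmark("bridge", 480.0, -2.0),
]

MATCH_RADIUS_M = 2.0  # assume detections within 2 m of a known landmark match it

def is_known(detection: Landmark, known: list) -> bool:
    """Return True if a detection matches a landmark already in the map."""
    return any(
        lm.kind == detection.kind
        and hypot(lm.x - detection.x, lm.y - detection.y) < MATCH_RADIUS_M
        for lm in known
    )

def deltas_to_upload(detections: list, known: list) -> list:
    """Keep only the detections the HD map doesn't have yet."""
    return [d for d in detections if not is_known(d, known)]

# Example: the camera saw a known sign plus a new sign not in the map.
seen = [Landmark("sign", 120.5, 4.0), Landmark("sign", 310.0, 3.8)]
print(deltas_to_upload(seen, hd_map))  # only the new sign at x=310 is uploaded
```

One likely reason to filter on the vehicle like this is bandwidth: only small deltas, rather than raw camera footage, need to travel to the data center.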

Mobileye is the leading maker of the image processing and recognition systems automakers use for driver assistance. In January 2016, the company announced a product called Road Experience Management that processes images captured by car cameras and identifies what is new. That data is then transmitted and collected in order to update maps. Earlier this year, Ford invested in a startup called Civil Maps that is developing a similar system using cameras and any other sensors on the vehicle that can provide relevant data.

Even when a vehicle’s sensors can’t see the road, if they can see landmarks, they can triangulate and calculate its position to within a few inches. Last winter, Ford demonstrated precisely this with its autonomous prototype, using a high-definition map generated with LIDAR. The future ability of autonomous vehicles to operate successfully in varied conditions will depend in large part on the contributions we all make toward improving the quality of those maps.
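To make the triangulation idea concrete, the short Python sketch below estimates a 2-D position from measured distances to three landmarks whose map coordinates are known. The numbers are invented and the closed-form solution is a toy, not Ford’s or Mobileye’s actual localization method.

```python
# Toy 2-D trilateration: given the surveyed positions of three mapped
# landmarks and the measured distances to them (e.g. from LIDAR or camera
# range estimates), solve for the vehicle's position. Illustrative only;
# real systems fuse many more measurements and handle noise statistically.

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Return (x, y) that lies r1, r2, r3 meters from landmarks p1, p2, p3."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise yields two linear equations.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1  # landmarks must not be collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Example: three landmarks at known map coordinates (meters) and the
# distances the vehicle measured to each of them.
print(trilaterate((0.0, 0.0), 5.0, (10.0, 0.0), 8.0623, (0.0, 10.0), 6.7082))
# ≈ (3.0, 4.0): about 3 m east and 4 m north of the first landmark
```

A production localizer would run a least-squares fit over many landmarks and fuse the result with wheel odometry and inertial data, but the underlying geometry is the same.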
