My first exposure to the Google driverless car came in an email from a colleague who had spotted one of the test cars while driving down California Highway 101. According to my colleague, the car was driving itself while its occupant thumbed through a copy of People magazine.
Since then, Google has continued to work on the project, progressing from automated navigation of freeways to the intricacies of navigating city streets.
Just two years ago, Google shot a video of Steve Mahan, CEO of the Santa Clara Valley Blind Center, driving one of the test cars to pick up fast food and some clothing at the dry cleaners. Mahan, who is legally blind, had not driven a car in seven years. "It's a paradigm shift for transportation in general," he said. "But for those of us that have lost access to transportation, it's a life shift."
There are many technology elements that make driverless cars go, but few are more important than the light detection and ranging (LiDAR) devices mounted on the roofs of these vehicles. These devices can scan more than 200 feet in all directions, generating a precise three-dimensional map of the car's surroundings.
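To see how such a map comes together, here is a minimal sketch of the underlying geometry: each LiDAR return is a distance measured along a known beam direction, and converting those polar readings to Cartesian coordinates yields the 3-D points of the surrounding map. The tuple format and the 200-foot cutoff are illustrative assumptions, not details of Google's actual sensor.

```python
import math

def lidar_returns_to_points(returns, max_range_ft=200.0):
    """Convert (azimuth_deg, elevation_deg, range_ft) LiDAR returns
    into 3-D points. Returns beyond the sensor's ~200-foot horizon
    are discarded. Input format is a simplifying assumption.
    """
    points = []
    for az_deg, el_deg, r in returns:
        if r > max_range_ft:
            continue
        az, el = math.radians(az_deg), math.radians(el_deg)
        # Standard spherical-to-Cartesian conversion.
        x = r * math.cos(el) * math.cos(az)
        y = r * math.cos(el) * math.sin(az)
        z = r * math.sin(el)
        points.append((x, y, z))
    return points

# A return straight ahead (azimuth 0, elevation 0) at 100 ft maps to
# (100, 0, 0); the 250-ft return is beyond range and is dropped.
print(lidar_returns_to_points([(0.0, 0.0, 100.0), (90.0, 0.0, 250.0)]))
```

Sweeping this conversion over millions of returns per second is what produces the dense point cloud the car's software treats as its map.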
To begin the driverless car experience, the car's user enters an address into Google Maps. The car's software then collects information from Google Street View for reference by the vehicle's artificial intelligence (AI) software. The AI handles navigation and driving using combined input from in-car video cameras, radar sensors at the front of the car, a position sensor attached to one of the car's rear wheels that helps pinpoint the car's location on a map, and a LiDAR sensor on top of the vehicle that monitors the car's immediate surroundings. Together, these technologies navigate the car from start to destination. They also help the car avoid collisions and obstacles, thanks to the LiDAR's detection of potential hazards within a 360-degree, 200-foot range of the car.
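The division of labor described above can be sketched in a few lines: the wheel-mounted sensor advances the car's estimated position along the mapped route, while the LiDAR's obstacle ranges decide whether it is safe to proceed. Everything here is an illustrative assumption (the function name, the one-dimensional position, the 30-foot stop margin); the real system fuses far richer data.

```python
def plan_step(position_ft, wheel_delta_ft, lidar_obstacles_ft, stop_margin_ft=30.0):
    """One illustrative control step.

    position_ft        -- last known position along the route (from the map)
    wheel_delta_ft     -- distance travelled since the last step (rear-wheel sensor)
    lidar_obstacles_ft -- distances to obstacles reported by the roof LiDAR
    Returns the updated position estimate and the action to take.
    """
    # Dead-reckon forward using the wheel position sensor.
    position = position_ft + wheel_delta_ft
    # Only obstacles inside the LiDAR's ~200-foot horizon are visible.
    nearby = [d for d in lidar_obstacles_ft if d <= 200.0]
    if nearby and min(nearby) < stop_margin_ft:
        return position, "brake"
    return position, "proceed"

# An obstacle 25 ft ahead is inside the stop margin, so the car brakes.
print(plan_step(1000.0, 12.0, [180.0, 25.0]))
```

Looping this step, with the destination from Google Maps as the endpoint, is the skeleton of the navigate-and-avoid behavior the article describes.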
LiDAR is nothing new. While a top-of-the-line spinning unit costs $70,000 and continuously collects and maps vital data, LiDAR has also appeared in more moderately priced applications, such as cars that employ adaptive cruise control (ACC) systems. LiDAR has been used in "ordinary" (non-hybrid) driverless car experiments at Cornell University; in factory automation using robots and robotic vehicles; and even in household robotic vacuum cleaners.
For driverless cars, the key to commercializing LiDAR now is bringing the cost of the technology down.
The Eno Center for Transportation, a non-partisan think tank based in Washington, D.C., predicts it will take at least a decade to bring the cost of the electronics in driverless cars, including the LiDAR device, down to $10,000. The Eno Center also predicts that driverless cars would save 1,100 lives per year, and that there would be 211,000 fewer car crashes annually, if only 10 percent of vehicles on U.S. roads (or 12.7 million cars) were driverless, a benefit that can hardly be measured in dollars.
Although some industry pundits believe that by 2030 it will be virtually impossible to buy a new car you can drive manually, David Alexander, a senior analyst with Navigant Research, does not project that self-driving cars will be commercially available until 2025. "I think the Google technology is great stuff. But I just don't see a quick pathway to the market," said Alexander.
Nevertheless, the use of LiDAR and other automation technology in cars is quietly expanding.
Audi, Acura, Subaru and Mercedes all use Advanced Pre-Collision Systems (A-PCS) that work with millimeter-wave radar, front-facing infrared projectors and a front-mounted stereo camera. The technology helps avoid low-speed collisions by scanning for vehicles at near and far ranges and emitting audiovisual warnings when danger is present. Meanwhile, other in-car technology advances for LiDAR wait in the wings.
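The core of a pre-collision warning like those above is a time-to-collision calculation: divide the range to the vehicle ahead by the speed at which the gap is closing, and alert the driver when the result drops below a safety threshold. The two-second threshold, units, and function name here are illustrative assumptions, not any manufacturer's actual parameters.

```python
def precollision_warning(range_ft, closing_speed_ftps, warn_ttc_s=2.0):
    """Return "alert" when estimated time-to-collision falls below
    warn_ttc_s seconds, else None. A simplified sketch of A-PCS-style
    logic; real systems also weigh braking, steering, and sensor noise.
    """
    if closing_speed_ftps <= 0:  # pulling away or holding distance: no danger
        return None
    ttc = range_ft / closing_speed_ftps
    return "alert" if ttc < warn_ttc_s else None

print(precollision_warning(60.0, 44.0))   # gap closes in ~1.4 s -> "alert"
print(precollision_warning(300.0, 44.0))  # ~6.8 s of margin -> None
```

Radar, infrared, and stereo-camera inputs all feed estimates of that same range and closing speed; the sensors differ, but the warning logic reduces to this comparison.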