In November 2015, Ford was testing a fully self-driving car at Mcity, a 32-acre simulated city designed by the University of Michigan. The goal was to operate a self-driving Ford Fusion on its streets, which include traffic lights, stop signs, crosswalks and other urban roadway features.

"Testing Ford's autonomous vehicle fleet at Mcity provides another challenging, yet safe, urban environment to repeatedly check and hone these new technologies," says Raj Najir, Ford's vice president for global product development. The company says it has been working on cars that can do without humans at the wheel in one form or another for more than a decade.

In December 2013, for example, Ford unveiled an autonomous Ford Fusion, and the company is now in an "advanced engineering" phase with its driverless vehicles. With these efforts, Ford joins Google, Apple, Honda, GM and others actively developing driverless cars.

But testing also brings the realization that some conditions are difficult to simulate or anticipate, such as the unpredictable habits of human drivers.

In some cases, driverless cars are actually over-engineered for safety, as Google discovered last year in real-world testing among human drivers who don’t drive by the book. “The real problem is that the car is too safe,” says Donald Norman, director of the Design Lab at the University of California, San Diego, who studies autonomous vehicles. “They have to learn to be aggressive in the right amount, and the right amount depends on the culture.”

Google cars have been in 16 crashes since 2009, and the company reports that in every case a human was at fault. In many of those incidents, the car behaved correctly by following a safety protocol, but it lacked the flexibility to respond to the other driver. That kind of capricious human behavior is difficult to program for.
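
What Norman's "right amount" of assertiveness might look like in code can be suggested with a minimal sketch. The Python below is purely illustrative; the gap thresholds and the assertiveness parameter are invented for this example and are not drawn from Google's or Ford's actual software.

```python
# Illustrative sketch only: a hypothetical gap-acceptance rule showing how a
# single "assertiveness" parameter might tune how aggressively an autonomous
# car merges into traffic. Names and thresholds are invented for illustration.

def accept_gap(gap_seconds: float, assertiveness: float) -> bool:
    """Decide whether a gap in traffic is large enough to merge into.

    gap_seconds   -- time gap to the approaching vehicle, in seconds
    assertiveness -- 0.0 (very cautious) to 1.0 (very aggressive); in practice
                     this would be tuned per region or driving culture
    """
    # A cautious car demands a large gap; an assertive one accepts a smaller gap.
    required_gap = 6.0 - 4.0 * assertiveness   # from 6 s down to 2 s
    return gap_seconds >= required_gap


# A 3-second gap is rejected by a cautious setting but accepted by a more
# assertive one -- the "right amount" is a tuning choice, not a fixed rule.
print(accept_gap(3.0, assertiveness=0.2))  # False
print(accept_gap(3.0, assertiveness=0.9))  # True
```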

Reliable software is a second challenge: the software has to operate non-stop without crashing, freezing up or encountering an irresolvable error condition. "There is no current process to efficiently develop safe software," says Steven Shladover, a researcher at the Partners for Advanced Transportation Technology at the University of California, Berkeley. Equally important is the accuracy of the maps the software relies on.
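
One common pattern for meeting the "never freeze" requirement is a supervised control loop that catches unexpected errors and degrades to a safe fallback instead of hanging. The sketch below is a simplified illustration with placeholder function names, not any automaker's actual software.

```python
# Illustrative sketch only: a supervised control loop that degrades to a safe
# fallback on any unhandled error. All function names are hypothetical placeholders.
import logging
import time

def plan_next_move(sensor_data):
    """Placeholder for the real planning software."""
    raise NotImplementedError

def safe_fallback():
    """Placeholder for a degraded-mode behavior, e.g. slow down and pull over."""
    logging.warning("Planner unavailable; entering fallback mode")

def control_loop(read_sensors, cycle_hz=20):
    """Run the planner at a fixed rate; on any irresolvable error, degrade safely."""
    period = 1.0 / cycle_hz
    while True:
        start = time.monotonic()
        try:
            command = plan_next_move(read_sensors())
            # ...send `command` to the actuators here...
        except Exception:
            # An irresolvable error must never freeze the vehicle: log it,
            # hand control to the fallback behavior and stop the loop.
            logging.exception("Planning step failed")
            safe_fallback()
            break
        # Hold the loop to its cycle time; chronic overruns are themselves a fault.
        time.sleep(max(0.0, period - (time.monotonic() - start)))
```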

Third, there are limits to the sensors the vehicles employ. Early testing revealed problems keeping sensors working in adverse conditions such as snow. Sensor systems also need enough discernment to distinguish dangerous situations from harmless ones.
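
One simple way such degradation can be spotted is sketched below: a rough health check that flags a LIDAR scan returning far fewer points than expected, as tends to happen when snow scatters the laser. The threshold and parameter names are assumptions made for illustration only.

```python
# Illustrative sketch only: a simplified health check that flags a degraded
# LIDAR scan so the system can lean on other sensors. Thresholds are invented.

def lidar_scan_healthy(points_returned: int, expected_points: int,
                       min_return_ratio: float = 0.6) -> bool:
    """Return True if the LIDAR scan looks healthy.

    Heavy snow or rain scatters the laser, so far fewer points come back than
    expected; a low return ratio is a cheap signal of degradation.
    """
    if expected_points <= 0:
        return False
    return (points_returned / expected_points) >= min_return_ratio


# A clear-weather scan passes; a snow-degraded scan does not.
print(lidar_scan_healthy(points_returned=58000, expected_points=60000))  # True
print(lidar_scan_healthy(points_returned=21000, expected_points=60000))  # False
```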

To be sure, important advances in driverless car technology are on the way. Researchers at the University of California, Berkeley recently said they have developed new laser technology with the potential to significantly reduce the size, weight, cost and power consumption of LIDAR, which measures distances and plays a pivotal role in driverless car sensing.
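
The distance measurement itself rests on a simple time-of-flight calculation: a laser pulse's round-trip time, multiplied by the speed of light and halved, gives the range. The short sketch below shows that arithmetic.

```python
# The basic time-of-flight calculation behind LIDAR ranging:
# distance = (speed of light x round-trip time) / 2.

SPEED_OF_LIGHT_M_PER_S = 299_792_458  # metres per second

def tof_distance_m(round_trip_seconds: float) -> float:
    """Convert a laser pulse's round-trip time into a one-way distance in metres."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2


# A pulse that returns after 200 nanoseconds came from roughly 30 metres away.
print(round(tof_distance_m(200e-9), 1))  # ~30.0
```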

Hacking, attacks that take down the grid, and failures to switch over to alternate networks when primary networks go down also present critical questions that researchers must answer before driverless cars can become a reality.
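
In its simplest form, the network question is a failover problem: try the primary link, then fall back to alternates when it is unreachable. The sketch below illustrates that pattern with hypothetical network names and a placeholder connect() helper, not any real vehicle's communication stack.

```python
# Illustrative sketch only: a simple failover pattern across several networks.
# The connect() helper and the network names are hypothetical placeholders.

def connect(network_name: str) -> bool:
    """Placeholder for a real connectivity check; returns True on success."""
    raise NotImplementedError

def connect_with_failover(networks):
    """Return the first network that accepts a connection, or None if all fail."""
    for name in networks:
        try:
            if connect(name):
                return name
        except Exception:
            # Treat any error as "this network is down" and try the next one.
            continue
    return None  # Total loss: the caller must degrade, e.g. to on-board-only operation


# Usage: primary cellular link first, then a secondary carrier, then V2X radio.
# chosen = connect_with_failover(["cellular_primary", "cellular_secondary", "dsrc"])
```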

The good news is that these issues are now squarely on the radar of driverless technology developers and of LIDAR and sensor manufacturers. Now it is a matter of taking on each issue and developing the technologies and human-machine interfaces that can make driverless cars not only autonomous, but also responsive to the human foibles of driving in heavy traffic, and able to fall back on alternative systems when primary ones fail.