In a world that is becoming increasingly interconnected, we can’t help but witness large-scale convergence. The very term “Internet of Things” describes an all-encompassing concept in which nearly everything is connected – though I’m not sure what my refrigerator would have to say to my car.

On the geospatial side, there is also growing interest in knowing more about where things are. I guess you could call it the “Internet of Where.” When my refrigerator tells me I’m out of milk and I drive to the grocery store, my in-vehicle navigation system can prompt me to turn right into the parking lot when it knows I am near the store. I can supplement the navigation system’s plus-or-minus-10-foot accuracy by spotting the driveway myself and making a safe turn.

If my vehicle were autonomous, it would require a bit more precision. To achieve that, it needs an accurate map and onboard sensors working together to tell it when it is approaching the destination and where, exactly, it needs to turn. If something in the real-time world has changed and creates a conflict, the onboard sensors need to detect it and either call on me to intervene or redirect the vehicle around the new obstacle.
 

The Importance of Mapping

When LiDAR maker Velodyne released its VLS-128, founder and CEO David Hall commented:

“While we were building our business, some map makers came to us and said they wanted to use our sensors to map the world in 3D. … They used to go out with laser range finders and survey crews and spend days mapping the height of an overpass and mundane things like that. With our sensors, they could just drive underneath and create a more detailed map with substantially less time and effort. Our instincts said follow that trend and wait for the autonomous revolution to catch up.”

Parallel developments focused on geographic information systems (GIS) and the role they could play in smart management of city assets. But first, the cities had to identify and locate those assets. Every utility pole, road sign, street light, and fire hydrant needed to be part of the database, and it all had to be accessible to a variety of stakeholders. Under the banner of Smart Cities, smartphone apps were even developed that let citizens flag a specific streetlight that wasn’t working and submit a report that would land on the appropriate desk to have the problem resolved.
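To make the inventory idea concrete, here is a minimal sketch in Python. The StreetAsset record and report_problem routine are hypothetical illustrations of such a database and reporting workflow, not any particular city’s schema or API.

```python
from dataclasses import dataclass

# Hypothetical, minimal model of a Smart City asset inventory entry.
# Field names are illustrative, not any particular city's schema.
@dataclass
class StreetAsset:
    asset_id: str
    kind: str        # "streetlight", "fire hydrant", "road sign", ...
    lat: float       # WGS84 latitude
    lon: float       # WGS84 longitude
    owner_dept: str  # department responsible for maintenance

def report_problem(assets, asset_id, note):
    """Route a citizen report to the department that owns the asset."""
    asset = next(a for a in assets if a.asset_id == asset_id)
    return {"to": asset.owner_dept, "asset": asset_id, "note": note}

inventory = [StreetAsset("SL-1042", "streetlight", 41.8781, -87.6298,
                         "Public Works")]
print(report_problem(inventory, "SL-1042", "lamp dark after 9 p.m."))
```

The point is less the code than the prerequisite it exposes: a report can only be routed if the asset already exists in the database with a location and an owner.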
 

Some Simple Convergence

Hall talked about waiting for the autonomous revolution to catch up, but “waiting” is a bit of an understatement. A great deal of work has already gone into bringing together the results of mapping efforts, asset management, and the onboard systems managing autonomous vehicles. Vehicle navigation systems employing precise location information from global navigation satellite systems (GNSS) get us onto the correct street, where the GIS asset mapping helps the vehicle systems know when to expect a crosswalk and identify the roadside object near the intersection as a utility pole. The onboard systems scan the real-time view and identify changes or conflicts. This is all common practice in as-built and BIM surveys, just on a different timeline.
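As a rough sketch of that map-side lookup – with hypothetical names and a flat-earth distance approximation that is adequate at street scale – the query might look like this:

```python
import math

def meters_between(lat1, lon1, lat2, lon2):
    # Equirectangular approximation; fine over tens of meters.
    dx = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    dy = math.radians(lat2 - lat1)
    return 6_371_000 * math.hypot(dx, dy)  # mean Earth radius in meters

def expected_assets(asset_map, fix_lat, fix_lon, radius_m=30.0):
    """Inventoried assets within radius_m of the current GNSS fix."""
    return [a for a in asset_map
            if meters_between(fix_lat, fix_lon, a["lat"], a["lon"]) <= radius_m]
```

A production system would use a projected coordinate system and a spatial index rather than a linear scan, but the idea is the same: the GNSS fix selects the slice of the asset map the vehicle should expect to see.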

Object recognition is critical to the onboard systems. The LiDAR scanners detecting an object near the upcoming crosswalk need to determine whether it is a fixed object – street sign, light pole, etc. – or a possible pedestrian. This all takes place very quickly in a human brain, but an autonomous vehicle’s sensory system must observe the object and compare it to its database of expected and unexpected objects. The GIS asset database says there should be a road sign at this intersection. Is that object the road sign? If not, does it fit the profile of another asset? Meanwhile, the vehicle should prepare to react, perhaps slowing down and watching for movement.
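Continuing the sketch above, that decision flow reduces to something like the following. The 2-meter match radius and 0.5-meter height tolerance are illustrative assumptions, not anyone’s production matcher.

```python
import math

def classify_detection(det_xy, det_height_m, expected):
    """Match a LiDAR detection against the assets the map expects here."""
    for asset in expected:
        dx, dy = det_xy[0] - asset["x"], det_xy[1] - asset["y"]
        near = math.hypot(dx, dy) <= 2.0                     # position match
        fits = abs(det_height_m - asset["height_m"]) <= 0.5  # profile match
        if near and fits:
            return ("fixed", asset["kind"])  # the mapped sign, pole, etc.
    # No mapped asset explains this object: assume it could move.
    return ("unknown", "slow down and watch for movement")

signs = [{"x": 4.0, "y": 1.0, "height_m": 2.2, "kind": "road sign"}]
print(classify_detection((3.5, 1.2), 2.1, signs))  # ('fixed', 'road sign')
print(classify_detection((0.0, 0.0), 1.7, signs))  # ('unknown', ...)
```

Note the asymmetry in the fallback: failing to match a mapped asset doesn’t classify the object, it simply tells the vehicle to treat it as potentially mobile.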
 

Complexities of Safety

The question for transportation safety officials and others is, “Has the autonomous revolution caught up?” It sounds like the technologies have continued their rapid advancement towards providing a safe operating environment, at least on urban streets.

Once again, here are some thoughts from Hall:

“We think the biggest unsolved problem for autonomous driving at highway speeds is avoiding road debris. That’s tough, because you have to see way out ahead. The self-driving car needs to change lanes, if possible, and do so safely. On top of that, most road debris is shredded truck tire—all black material on a dark surface. Especially at night, that type of object recognition is challenging, even for the LiDAR sensors we’ve previously built. The autonomous car needs to see further out, with denser point clouds and higher laser repetitions.”

Surveyors know the problem. LiDAR is a line-of-sight system: the laser light must reach the target and be reflected back to be collected. We’ve all seen point clouds with anomalies created by simple things like puddles on black asphalt surfaces. In a survey, you can supplement the laser scan with other tools to fill the gap. For a vehicle – even one traveling 35 miles per hour – additional sensors need to fill the gap quickly, or the onboard control logic needs to respond with an alert or evasive action.

Hall talks about increasing the number of sensors, speed, and redundancy of scans, indicating progress continues on the technology side. But there is an inherent issue in the physics of light that continues to plague LiDAR. In a post on Velodyne’s support page, the discussion notes, “The maximum range limit of a LiDAR sensor is dependent upon many factors, including the target object’s reflectivity.” A table demonstrates the point: on the sensor’s 0-255 reflectivity scale, a white, matte target measures roughly 90-100; a clean, reflective road sign, 245-255; and a black, matte target, just 0-10. The post adds, “Black, matte objects may not be consistently visible until 40 meters.”
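A back-of-the-envelope calculation shows why that 40-meter figure matters. The visibility distance comes from the Velodyne note above; the speeds below are my own illustrative choices.

```python
# How many seconds elapse between first possible detection of a
# black, matte object at 40 m and the vehicle reaching it?
MPH_TO_MPS = 0.44704  # exact conversion factor

def reaction_budget_s(visibility_m, speed_mph):
    """Seconds between first possible detection and reaching the object."""
    return visibility_m / (speed_mph * MPH_TO_MPS)

for mph in (35, 65):
    print(f"{mph} mph: {reaction_budget_s(40.0, mph):.1f} s to react")
# 35 mph: 2.6 s to react
# 65 mph: 1.4 s to react
```

On an urban street that budget is workable; at highway speed, detection, classification, and a safe lane change all have to fit inside well under a second and a half.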
 

“Where?” Is the Question

We started this discussion by talking about how we know where objects are located. The question now shifts to, “Where do we have to go next?”

Clearly, the potential operating environment for autonomous vehicles is not yet mapped to the level of detail needed. Discussion continues about what level of precision is required: for some applications, survey grade is definitely called for; for others, GIS grade is adequate.

Data collection is one of the challenges. For data to be useful, some level of control must be established when the initial collection occurs. Further updates, as long as they can be tied to that control, can draw on a variety of resources. This enables a form of “continuous surveying” that gathers and refreshes information somewhat opportunistically. A related crowdsourcing approach suggests that at least some data can be gathered from non-survey sources to provide a more accurate current view. The scanners used by the autonomous vehicles themselves can be one of those sources. It’s a bit like doing an as-built scan every time a vehicle travels down a road.
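In code, the core of that idea can be sketched simply: tie each opportunistic observation to the survey control frame, then blend it into the running map. The single-point shift and fixed blend weight below stand in for a real least-squares control adjustment; all names are illustrative assumptions.

```python
# A minimal sketch of "continuous surveying": an observation from a
# vehicle's own scanner is tied to control, then nudges the map.
def tie_to_control(obs_xy, obs_control_xy, surveyed_control_xy):
    """Shift an observation so its control point matches the surveyed one."""
    dx = surveyed_control_xy[0] - obs_control_xy[0]
    dy = surveyed_control_xy[1] - obs_control_xy[1]
    return (obs_xy[0] + dx, obs_xy[1] + dy)

def update_position(map_xy, tied_obs_xy, weight=0.1):
    """Blend a tied observation into the current mapped position."""
    return tuple(m + weight * (o - m) for m, o in zip(map_xy, tied_obs_xy))

# One pass of a vehicle past a mapped hydrant, in local meters:
tied = tie_to_control((12.4, 7.9), (0.3, -0.2), (0.0, 0.0))
print(update_position((12.0, 8.0), tied))  # map nudged toward the new scan
```

The small weight reflects the crowdsourcing premise: no single drive-by is trusted much, but many of them, all tied to the same control, keep the map current.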