The 2007 conference attracted a notable number of international attendees (more than 30 percent) and vendors (more than 20 percent). Roland Mangold, conference organizer and editor of The Spatial Resources E-Letter, reported, “This enthusiastic crowd was composed of equal parts practitioners who provide LiDAR services and users of data derived from laser scanning.” Hardware and software developers and manufacturers complemented the conference with exhibits of their offerings.
The first day of the conference covered LiDAR processing and data fusion, both topics of keen interest to attendees. The technology is fast-moving: improvements to the basic laser systems, and to the ability to interpret and generate useful information from point clouds, continually add depth and breadth to this important geomatics tool. Because thousands of laser “shots” are fired every second, users flying LiDAR even during “leaves-on” periods know that some shots are returned by the canopy, some by the understory layers and some by the forest floor. Increasingly sophisticated processing techniques are being used to automate the separation of these layers, recognizing that many more data users can be served if information on all of them can be extracted and reported.
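The layer separation described above can be sketched very simply as a height-above-ground classification. This is an illustrative sketch only, not any vendor's algorithm: the function name, the height thresholds (0.5 m and 2 m) and the assumption that a ground-elevation model is already available are all choices made here for clarity.

```python
# Minimal sketch of separating LiDAR returns into vegetation layers by
# height above ground. Thresholds are illustrative, not an industry standard.

def classify_returns(points, ground_elev):
    """Bin (x, y, z) returns into ground, understory and canopy layers.

    points      -- iterable of (x, y, z) tuples
    ground_elev -- callable giving ground elevation at (x, y); in practice
                   this comes from a prior ground-filtering step
    """
    layers = {"ground": [], "understory": [], "canopy": []}
    for x, y, z in points:
        height = z - ground_elev(x, y)
        if height < 0.5:        # near the forest floor
            layers["ground"].append((x, y, z))
        elif height < 2.0:      # shrub/understory band
            layers["understory"].append((x, y, z))
        else:                   # upper canopy
            layers["canopy"].append((x, y, z))
    return layers
```

Real production workflows use far more elaborate filters (return number, intensity, neighborhood shape), but the principle of binning returns by height above a derived ground surface is the same.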
Data fusion, in the context of LiDAR, principally means merging LiDAR data with imagery from visible-light and infrared cameras, as well as thermal, multispectral and hyperspectral scanners. Fusion involves the geometric registration of the data, even when all the sensors are carried in a single aircraft; it also involves techniques for combining the sensed bands so that the result yields more information than the sum of its parts.
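One familiar example of a band combination that yields more than either band alone is the normalized difference vegetation index (NDVI), computed per pixel from co-registered red and near-infrared bands. The sketch below assumes the bands are already registered and supplied as equal-length lists of reflectance values; the small epsilon is added here only to avoid division by zero.

```python
def ndvi(nir, red, eps=1e-9):
    """Per-pixel NDVI = (NIR - R) / (NIR + R) for co-registered band lists.

    Healthy vegetation reflects strongly in NIR and absorbs red light,
    so higher values indicate denser, healthier vegetation.
    """
    return [(n - r) / (n + r + eps) for n, r in zip(nir, red)]
```

This is the kind of derived product that depends on accurate geometric registration: if the bands are misaligned, per-pixel ratios like this become meaningless.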
Attendees also heard about advancements in using LiDAR simultaneously with nadir and low-oblique cameras, where flight paths are intentionally designed to cover the area of interest twice, the second pass in a direction normal to the first. This yields side views of vertical structures from all directions. Basic photogrammetric principles can then be used to measure building heights. If “fly-through” models of the terrain and objects are to be created, the faces of buildings can be rendered with far less ground surveying and photography than would be needed without these oblique images. In these situations, the point clouds still define the basic shape of the ground and the objects on it, while the oblique images are used to interpret the raw data so that a more accurate model can be created.
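One of the basic photogrammetric principles mentioned above is relief displacement: in a vertical photograph, the top of a building is displaced radially outward from its base, and the displacement is proportional to the building's height. A minimal sketch, using the standard relief-displacement relation h = H · d / r (the function name and inputs are chosen here for illustration):

```python
def building_height(flying_height, radial_top, radial_base):
    """Estimate building height from relief displacement in a vertical photo.

    h = H * d / r, where
      H = flying height above the building's base (ground units),
      d = radial_top - radial_base, the image displacement between the
          building's top and base (any consistent image units),
      r = radial distance of the building's top from the nadir point
          (same image units as d).
    """
    d = radial_top - radial_base
    return flying_height * d / radial_top
```

For example, a building whose top is displaced 5 units over a radial distance of 100 units, photographed from 1,000 m, measures 50 m tall.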
On the second day, attendees were informed about a number of technical and practical developments in LiDAR use. Users reported applying LiDAR in a wide variety of applications, with shot densities ranging from less than 1 to more than 25 points per square meter. Depending on the technology (including sensors flown alongside the LiDAR) and the flying height, vertical (Z-direction) information can be generated with an accuracy of between 2 and 10 cm.
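The reported shot densities follow directly from flight planning arithmetic: the pulses emitted per second are spread over the strip of ground swept out per second. A rough back-of-the-envelope sketch (ignoring scan pattern, overlap and multiple returns, which all modify the real figure):

```python
def point_density(pulse_rate_hz, ground_speed_mps, swath_width_m):
    """Approximate average points per square meter for a single pass.

    Each second, pulse_rate_hz pulses are distributed over a ground strip
    of area ground_speed_mps * swath_width_m square meters.
    """
    return pulse_rate_hz / (ground_speed_mps * swath_width_m)
```

A 100 kHz system at 60 m/s with a 400 m swath, for instance, averages roughly 4 points per square meter on a single pass, which sits comfortably inside the range attendees reported.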
Of particular interest on the second day was a session devoted to using LiDAR for bathymetric applications (measuring water depths or, more correctly, the elevation of the bed of the water body). Some users reported that improvements in hardware and software technology allow depth measurements of up to 50 m, depending on water clarity and the stillness of the surface.
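Bathymetric LiDAR works by timing two returns from each pulse: one from the water surface and one from the bottom. Because light travels more slowly in water, the depth follows from the time lag between the two returns divided by the round trip and scaled by water's refractive index. A minimal sketch of that conversion, ignoring beam refraction at the surface and assuming a near-vertical beam:

```python
C = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33      # approximate refractive index of water

def depth_from_returns(dt_seconds, n=N_WATER):
    """Water depth from the time lag between surface and bottom returns.

    The pulse traverses the water column twice at speed c/n, so
        depth = (c / n) * dt / 2.
    Beam bending at the air-water interface is neglected in this sketch.
    """
    return (C / n) * dt_seconds / 2.0
```

The water-clarity limit attendees mentioned enters here indirectly: in turbid water the bottom return is absorbed or scattered before it can be timed at all, regardless of how precisely dt is measured.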
Sponsors for the 2007 ILMF included: 3001: the geospatial company, Airborne 1, Dynamic Aviation, John Chance Land Surveys, GeoCue Corporation, Leica Geosystems, Merrick & Company, Riegl USA, Spectrum Mapping, TopoSys Topographische Systemdaten GmbH, Tuck Mapping, The American Surveyor, GISUser.com and POB magazine. For more on the ILMF, visit www.lidarmap.org.
Special reporting by Joseph V.R. Paiva, PhD, PS, PE.