Land surveyors and geospatial professionals use photogrammetry and LiDAR (light detection and ranging) in a variety of visual mapping and land survey documentation projects. The two technologies can often produce similar output, yet the methodologies they employ are quite different. For any land surveyor or geomatics practitioner, understanding what each technology is and when to use it improves accuracy and efficiency and helps ensure a project achieves the desired results.

When tasked with documenting, mapping and/or surveying objects in a visual medium, both photogrammetry and LiDAR can be excellent choices. The key is to understand the benefits and challenges of both technologies so that the best technical approach can be chosen for any given project or situation.

Photogrammetry is the science of using photographs to make reliable measurements of objects and the distances between them. Using a series of photogrammetric photos, users can recreate geometric representations of the photographed objects.

Aerial photography is the most common way to obtain photogrammetric images, although photogrammetry can also be applied to interior structures. In photogrammetry applications, overlapping photos taken from two or more vantage points help establish depth and perspective. The data is then converted into a point or a set of data points.

LiDAR uses lasers to accomplish many of the same tasks as photogrammetry, and the technology behind LiDAR is similar to radar, except that it uses laser light instead of radio waves. In LiDAR applications, rapid pulses of laser light are fired at a surface, in some cases at a rate of 150,000 pulses per second. A LiDAR sensor measures the time it takes for each pulse to bounce off the earth’s surface and return to the instrument.

This process is repeated in rapid succession until the LiDAR system has captured the land area being measured at the level of detail the surveyor requires. A point cloud is then generated.

This article covers the history and mechanics of photogrammetry and LiDAR technologies. It also examines use cases where photogrammetry, LiDAR, or a combination of the two will produce the best result.


Photogrammetry History And Development

The earliest known mention of photogrammetry’s concepts came in 1480, when Leonardo da Vinci wrote, “Perspective is nothing else than the seeing of an object behind a sheet of glass, smooth and quite transparent, on the surface of which all the things may be marked that are behind this glass. All things transmit their images to the eye by pyramidal lines, and these pyramids are cut by said glass. The nearer to the eye these are intersected, the smaller the image of their cause will appear.”

Much later, in the 1850s, French scientist Aimé Laussedat pointed out the potential for mapping using photography. Decades later, various photogrammetry experiments took place during World War I and World War II. However, it was not until 1984 that Professor Ian J. Dowman of University College London proposed that photogrammetry could be used as a digital means of mapping the topography of terrain from satellite imagery. This was the point at which commercial uses for photogrammetry began to emerge.


How Photogrammetry Works

Photogrammetry is a combination of photography and software. 

The first stage of photogrammetry is photographing the subject area and the objects within it. This step entails taking a series of photographs from different angles and ensuring that there is enough overlap between photographs so that no portion of the area delineated for study is missed. Angles and alignments must be carefully planned to ensure a complete pictorial representation of the area being photographed.
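To make the overlap requirement concrete, here is a minimal flight-planning sketch in Python. It assumes a nadir-pointing camera and the simple pinhole scale relationship; the sensor size, focal length, altitude, and overlap figures are illustrative values only, not recommendations.

```python
# Rough flight-planning arithmetic for overlapping photos (illustrative values only).
# Assumes a nadir-pointing camera and the simple pinhole scale H / f.

def ground_footprint(sensor_w_mm, sensor_h_mm, focal_length_mm, altitude_m):
    """Ground area covered by a single photo, in metres."""
    scale = altitude_m / focal_length_mm        # metres on the ground per mm on the sensor
    return sensor_w_mm * scale, sensor_h_mm * scale

def exposure_spacing(footprint_along_track_m, forward_overlap):
    """Distance between exposures that still leaves the requested forward overlap."""
    return footprint_along_track_m * (1.0 - forward_overlap)

footprint_w, footprint_h = ground_footprint(36.0, 24.0, 50.0, 120.0)  # full-frame camera at 120 m
print(f"footprint: {footprint_w:.0f} m x {footprint_h:.0f} m")
print(f"spacing for 80% forward overlap: {exposure_spacing(footprint_h, 0.80):.1f} m")
```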

Photographs should also be sharp and of high resolution. Why? Because every pixel in each photograph defines a light ray in 3D space that starts at the camera and extends out to the corresponding point in the subject area being measured. Each photograph is then imported into photogrammetry software, which requires not only the photographs themselves but also the position and angle of the camera for each photograph, along with the camera’s focal length, pixel size, and lens distortion.
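The sketch below shows, under simplified assumptions, how focal length and pixel size translate into the camera model that photogrammetry software works with, and how a known camera pose projects a 3D point into pixel coordinates. All values are hypothetical, and lens distortion is deliberately omitted.

```python
import numpy as np

# Sketch of the pinhole camera model: focal length and pixel size give the intrinsic
# matrix K, and a known pose (R, t) projects a 3D point into pixel coordinates.
# Values are hypothetical; lens distortion is omitted.

focal_length_mm = 50.0
pixel_size_mm = 0.005                  # 5-micron pixels
image_w, image_h = 8000, 6000          # image size in pixels

f_px = focal_length_mm / pixel_size_mm
K = np.array([[f_px, 0.0, image_w / 2.0],
              [0.0, f_px, image_h / 2.0],
              [0.0, 0.0, 1.0]])

R = np.eye(3)                          # camera orientation (level, looking along +Z)
t = np.zeros(3)                        # camera at the world origin

def project(point_3d):
    """Project a world point into pixel coordinates."""
    uvw = K @ (R @ point_3d + t)
    return uvw[:2] / uvw[2]

print(project(np.array([10.0, 5.0, 100.0])))   # pixel location of a point 100 m away
```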

With this information and a point identified in two or more photos, the photogrammetry software finds the geometric intersection of the light rays and determines where that point is located in 3D space. The key is finding where the photographs overlap.

This method of using multiple photos to solve for points is called “triangulation.” Points are matched in a “ray intersection” where two different photographs overlap. The photogrammetry software also uses mathematical algorithms to work out camera locations, angles, and characteristics, which it can do with just a few point matches. The end result is the creation of lines, surfaces, texture maps, and full 3D models derived from the 3D point locations in the photographs.
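As a rough illustration of ray intersection, the sketch below triangulates a single point seen in two photos using a standard linear (direct linear transform) solution. The camera matrices and pixel coordinates are contrived test values; real photogrammetry software solves many thousands of such points and refines them further.

```python
import numpy as np

# Minimal "ray intersection" sketch: a direct linear transform (DLT) triangulation of
# one point seen in two photos.  Camera matrices and pixel coordinates are test values.

def triangulate(P1, P2, uv1, uv2):
    """Solve for the 3D point whose projections best match uv1 and uv2."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                      # back from homogeneous coordinates

# Two cameras one metre apart, both looking along +Z, plus a known test point.
K = np.array([[1000.0, 0.0, 500.0], [0.0, 1000.0, 500.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([2.0, 1.0, 10.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, uv1, uv2))         # recovers approximately [2.0, 1.0, 10.0]
```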


LiDAR History And Development

LiDAR’s history began much later than that of photogrammetry. 

The first attempts to measure distance by light beams were made in the 1930s with searchlights that were used to study the structure of the atmosphere and with light pulses that were used to determine the heights of clouds.

Then in 1961, just after the invention of the laser, the first LiDAR-like system was developed under the direction of Malcolm Stitch, who headed the laser development program at Hughes Aircraft Company. This early system was designed to track satellites. It combined laser-focused imaging with the ability to calculate distances by measuring the time it took for a laser signal to bounce off an object and return to its source. Sensors and data acquisition electronics were used as part of this process.

In 1963, the term “LiDAR” was formally introduced. LiDAR’s first common use was in meteorology, where it was used to measure clouds and pollution.

By the time of the 1971 Apollo 15 mission, astronauts were using an altimeter equipped with LiDAR to map the moon's surface.

However, it was in the 1980s that the need for LiDAR grew alongside an equally important need for an effective global positioning system (GPS). LiDAR sensors capable of emitting 2,000 to 25,000 pulses per second were on the market by the 1990s. As in photogrammetry, these systems could deliver dense data sets. Unfortunately, the new LiDAR systems were also extremely expensive.

In the early days of LiDAR, users were primarily interested in mapping the earth's surface and in extracting features from these maps such as roads, buildings and forest canopy characterizations.


How LiDAR Works

For large areas, an aerial LiDAR system is deployed to collect data. A device installed in an airplane emits infrared laser pulses as the plane flies back and forth across the landscape. The system records how long it takes for each pulse to travel to the Earth and back. On-board computer systems can then calculate the location and height of the spot where the laser beam hits an object or the ground. The laser pulses strike above-ground objects, but many also make it through to the ground. A secondary step removes the above-ground material (trees, houses, etc.), and the resulting data represents the bare earth. This makes it possible to "see through the trees" to ground that is usually obscured in aerial images.

The principle of LiDAR is to shine a small light (laser) at a surface and measure the time that the light takes to return to its source (i.e., distance = speed of light x time of flight divided by 2). When it executes this process, a LiDAR instrument can fire up to 150,000 pulses of light per second.
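A quick worked example of that relationship, using a purely illustrative return time:

```python
# Worked example of the time-of-flight relationship above.  The 2-microsecond
# round-trip time is purely illustrative.

SPEED_OF_LIGHT = 299_792_458.0              # metres per second

def range_from_return_time(round_trip_s):
    """Distance to the surface: half the round trip travelled at the speed of light."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

print(range_from_return_time(2e-6))         # a pulse returning after 2 us travelled ~300 m one way
```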

The laser pulses and their measurements are repeated in rapid succession in order to develop a “map” of a given area and its objects. As part of the exercise, the height, location and orientation of the LiDAR instrument must also be known for every laser pulse that is recorded.
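The sketch below shows, in simplified form, why the instrument’s position and orientation matter: a range measurement only becomes a ground coordinate once it is combined with the sensor pose for that pulse. The pose, scan angle, and range values are made up for illustration.

```python
import numpy as np

# Simplified georeferencing of a single pulse: combine the measured range with the
# sensor's position and orientation for that pulse.  All values are made up.

def pulse_to_point(sensor_xyz, rotation, scan_angle_rad, measured_range):
    """Rotate the beam direction into world coordinates and scale it by the range."""
    beam = np.array([np.sin(scan_angle_rad), 0.0, -np.cos(scan_angle_rad)])  # downward-looking beam
    return np.asarray(sensor_xyz) + measured_range * (rotation @ beam)

sensor_position = np.array([500_000.0, 4_650_000.0, 1_200.0])  # e.g. easting, northing, height (m)
attitude = np.eye(3)                                           # level flight, no rotation
print(pulse_to_point(sensor_position, attitude, np.deg2rad(10.0), 1_180.0))
```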

LiDAR systems consist of four main components:

  • Lasers with wavelengths of 600-1,000 nm that are used in measurement;
  • Photodetector and receiver electronics that read and record signals as they enter the system;
  • Scanners and optics for image capture; and
  • Navigation and positioning systems.

Once LiDAR data is captured, it must be processed.

There is an initial pre-processing step where the LiDAR data is pulled from the LiDAR instrument in a specific sequence.

First pulled is the laser data, followed by the positional raw data, then ground base station data and, finally, the raw GPS and inertial measurement unit (IMU) data, which captures the movement of the LiDAR instrument. The totality of this data is then consolidated and processed by LiDAR software into what is known as a trajectory file, a large binary file that contains the time-varying coordinates for each image frame in the system.

After this pre-processing is completed, calibrations are applied to the data to reconcile differences in the sampling of the different types of data collected. The resulting uniform file is then converted into the LAS format, an industry-standard binary format for storing airborne LiDAR data that also enables the exchange of LiDAR 3D point cloud data between users. From the LAS format, the data can be moved into formats used by commercial engineering and mapping software.
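As a small illustration of what working with a finished LAS file can look like, the sketch below uses the open-source laspy library to load a point cloud and keep only the returns classified as ground (ASPRS class 2). The file name is hypothetical.

```python
import numpy as np
import laspy   # open-source Python reader/writer for the LAS format

# Load a (hypothetical) LAS file and keep only returns classified as ground
# (ASPRS class 2) to approximate a bare-earth subset.

las = laspy.read("survey_area.las")
points = np.column_stack([las.x, las.y, las.z])       # scaled, real-world coordinates
print(f"{len(points)} points, elevation {points[:, 2].min():.2f}-{points[:, 2].max():.2f} m")

ground = points[np.asarray(las.classification) == 2]  # bare-earth subset
print(f"{len(ground)} ground points")
```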

Both photogrammetry and LiDAR are indispensable surveying and measurement technologies that can be used in a plethora of use cases. In many ways, the two technologies overlap in the data that they capture. However, the demands of the project being surveyed or documented, along with the costs involved, often determine which technology is chosen.

The cost of LiDAR has fallen, but it is still more expensive than traditional photogrammetry. On the other hand, there are limits to what photogrammetry can do. It cannot capture sub-surface elements of a structure, for example, or deliver a clear picture of the ground when working through smoke or cloud cover.

There are also cases where the best-of-breed solution for a given project is a combination of photogrammetry and LiDAR. In these cases, the output from each technology can be consolidated with the help of geospatial mapping software.

Regardless of the type of technology employed, the most important takeaway for practitioners is that they have a choice of technologies.