Unless their educational specialty was photogrammetry, few practitioners among the surveying rank and file have a working knowledge of the science and process of photogrammetry.
Although aerial photogrammetry became a valuable tool during World War II and has grown in application in the years since, it has primarily been used by government agencies and by owners or mappers of projects covering large areas. Terrestrial photogrammetry--that is, similar techniques using cameras operated on the ground--has been more popular outside the U.S., where its primary applications have been the mapping of building façades and the detailed preparation of 3D plans of entire structures, facilities and even some landforms.
In the past, photogrammetry has required knowledge of the precise location and orientation of each camera. With aerial photogrammetry, the processes, whether the original analog or the later digital ones, required either accurate ground control to tie adjoining images together or accurate positioning of the camera using real-time kinematic GNSS technology. In all cases, accuracy and speed of processing were also aided by accurate measurement of the camera orientation (hence the market for systems that integrate inertial navigation systems, or INS, built around inertial measurement units, or IMUs).
Recent innovations have broadened surveyors' access to photogrammetric techniques. Some total stations are now equipped with cameras collimated with the telescope (in some cases imaging through the lens) and with software that provides a variety of measurement and mapping functions. For those who use LiDAR, airborne and terrestrial cameras provide imagery that can be draped over the scene to aid interpretation of point clouds and to facilitate feature extraction.
Unlike traditional aerial photogrammetry, where only a handful of matching points might be identified in adjacent images, vision software uses pattern-recognition techniques to identify matching points automatically--sometimes at the single-pixel level, something humans would be hard pressed to do. Thus, feature points between any pair of adjoining images may number in the hundreds or thousands, and a particular point may be imaged in upwards of 20 or 30 photographs.
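The kind of automated point matching described above can be illustrated with a minimal sketch. The code below uses normalized cross-correlation, one of the simplest matching criteria (production vision software relies on far more sophisticated feature descriptors); the array sizes and the template offset are purely illustrative.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equally sized image patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(template, image):
    """Slide the template over the image; return the (row, col) offset
    of the best-scoring position."""
    th, tw = template.shape
    ih, iw = image.shape
    scores = np.array([[ncc(template, image[r:r + th, c:c + tw])
                        for c in range(iw - tw + 1)]
                       for r in range(ih - th + 1)])
    return np.unravel_index(np.argmax(scores), scores.shape)

# Example: a template cut from a synthetic image is recovered at its
# true offset by exhaustive correlation search.
rng = np.random.default_rng(1)
image = rng.uniform(size=(40, 40))
template = image[5:14, 7:16].copy()
offset = best_match(template, image)
```

In this toy setup the search is exhaustive; real vision pipelines prune the search with image pyramids and distinctive feature detectors so that thousands of points can be matched per image pair.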
From this triangulation process (the rays passing from each feature point on the ground, through each camera lens, to its image on the digital sensor) come the initial unknowns. A massive bundle adjustment, run in blocks, then resolves each camera's position and orientation. Remarkably, this process can also determine what is known in the business as the interior orientation--that is, the parameters normally determined when a camera is calibrated to map its distortion patterns--so that the final results can be corrected for the systematic errors found in every image.
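As a minimal illustration of what a bundle adjustment minimizes, the sketch below evaluates collinearity (reprojection) residuals for simulated tie points seen from two camera stations. All coordinates, the omega-phi-kappa angle convention, and the 50 mm focal length are illustrative assumptions; a real adjustment would iteratively solve for the camera parameters (and interior orientation) rather than merely evaluate the residuals.

```python
import numpy as np

def rot(omega, phi, kappa):
    """Rotation matrix from omega-phi-kappa angles (radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(X, cam_xyz, angles, f):
    """Collinearity equations: image (x, y) of object point X."""
    p = rot(*angles).T @ (X - cam_xyz)
    return -f * p[:2] / p[2]

# Simulated tie points and two camera stations (position, angles).
rng = np.random.default_rng(0)
tie_points = rng.uniform(-20, 20, size=(50, 3))
cams = [(np.array([0.0, 0.0, 100.0]), (0.0, 0.0, 0.0)),
        (np.array([30.0, 0.0, 100.0]), (0.0, 0.02, 0.0))]
f = 0.05  # 50 mm focal length, in metres

# "Observations": exact projections (a real survey adds measurement noise).
obs = [[project(X, c, a, f) for X in tie_points] for c, a in cams]

def rms_residual(cams_guess):
    """RMS reprojection residual of the observations for guessed cameras."""
    r = [project(X, c, a, f) - o
         for (c, a), view in zip(cams_guess, obs)
         for X, o in zip(tie_points, view)]
    return float(np.sqrt(np.mean(np.square(r))))
```

At the true camera parameters the residuals vanish; perturbing a camera position makes them grow, and it is this quantity that the block bundle adjustment drives to a minimum over all camera positions, orientations and tie-point coordinates simultaneously.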
“Photogrammetric” total stations are being adopted first by surveyors, some of whom have already dipped their toes in the waters of mobile mapping and unmanned airborne systems, who want to experience the benefits of vision technology in these (relatively) close-range environments. Some innovators in surveying are also trying out homemade systems using a variety of consumer, semi-professional and professional cameras--handheld, mounted on ground- or water-based vehicles, or carried in aircraft--and are processing the images with a wide variety of vision software, some of it so fresh that it is constantly being updated.
We are a long way from seeing where this newest entrant in the photogrammetry field will come to rest in terms of capabilities and operational performance. But it is likely to challenge, and at some point begin to replace, what people now call “conventional” photogrammetry. The debate is vigorous and animated, since many traditional practitioners have not been schooled in vision techniques and therefore find it difficult to accept the quality claims of vision practitioners. For their part, vision practitioners are sometimes so focused on how well the system works that they ignore the practical considerations of using it at high altitude (even unmanned vehicles, which fly up to 1,500 feet above ground level, are considered to take “close range” photographs) and over large areas.
But as with total stations, GPS and even LiDAR, it is inevitable that vision technology will come of age as it is teamed with other technologies to produce results that push the frontiers of accuracy, speed and cost. As with other new “black art” technologies, there is a tendency to rely on the tool without understanding its scientific underpinnings. That will suffice for many users, who will become the majority in areas heretofore not addressed by photogrammetry at all, but surveying and mapping professionals are cautioned to understand the technology before using it. The art and science of this new technology should be practiced with the same understanding and maxims that geomatics professionals should apply to any technology.