Point of Beginning

Virtual Warfare

April 1, 2010
Photo Credit: Sgt. 1st Class Clinton Wood, courtesy of U.S. Army


The modeling and simulation (M&S) community creates virtual terrain models for training warfighters, providing them with the best possible understanding of the conditions and environment in which they will be deployed. The source data for these synthetic terrain models have historically come from the National Geospatial-Intelligence Agency (NGA), the federal agency that provides geospatial intelligence in support of national security objectives. While these data are adequate for most virtual and constructive trainers, their resolution is typically coarse. That lower resolution does not satisfy interoperability requirements with live training, in which trainees are positioned in the real environment.

The U.S. Army Research, Development and Engineering Command (RDECOM) Simulation and Training Technology Center (STTC) is currently conducting research into live training interoperability. Part of that research involves constructing compact, high-resolution terrain representations that trainees can use on embedded hardware and handheld devices with strict or limited resources (e.g., memory, power, security). The effort is headed by Applied Research Associates Inc. (ARA), an engineering firm headquartered in Albuquerque, N.M. High-resolution LiDAR data and hyperspectral imagery provided the foundation for recent analysis.

Urban terrain models serve as combat training simulations for the warfighter community. Shown here is a 2D urban terrain map.

The high-resolution dataset for this project was collected by Merrick & Company using airborne LiDAR and hyperspectral systems during a simultaneous acquisition flight over two study areas near Orlando, Fla. Site A covered 9.51 square miles dominated by urban terrain. Site B covered 9.65 square miles of mostly rural terrain mixed with suburban features along its southwestern margin. These two datasets provided rich terrain representations that are being used by ARA and RDECOM as one of their base digital-terrain datasets for generating reality-based training simulations for the warfighter.

The collection consisted of acquiring LiDAR data at 0.5-meter point spacing, from which a detailed feature-based digital terrain model would be extracted. Specifically, this included an elevation model of the bare earth terrain, vegetation canopy and ground cover, buildings, and the major utilities found within the study area. The hyperspectral data consisted of 0.6-meter pixel resolution imagery composed of 128 visible and near-infrared (VNIR) spectral bands in the wavelength range of 397.80 to 997.96 nanometers. The hyperspectral imagery served as the source information for deriving land-cover classes and their associated material compositions. The processing, fusion and analysis of these two high-resolution datasets provided ARA and RDECOM a robust set of attributed land-use and land-cover classes for sites A and B. The LiDAR data and hyperspectral imagery were acquired simultaneously on the same aircraft and were boresighted and co-registered to one another using data derived from an onboard airborne global positioning system (AGPS) and an inertial measurement unit (IMU).

Urban terrain models serve as combat training simulations for the warfighter community. Shown here is a 3D urban terrain simulation.

LiDAR Data

A Leica ALS50-II laser scanner, which emits 150,000 pulses of light per second, was used to derive sensor returns that, when processed, generated terrain elevation points at 0.5-meter spacing on the surface. The stored data consist of a massive elevation point cloud (more than 8 GB in LAS file format) containing an elevation for each Earth surface feature that reflected a light pulse back to the sensor. Each acquired LiDAR flight line was post-processed to remove system noise and was positionally geo-corrected to real-world coordinates (UTM WGS84, zone 17, meters) using the AGPS and IMU data. Each flight line was then mosaicked and tiled into manageable file sizes for later processing and analysis. Using its proprietary in-house Merrick Advanced Remote Sensing (MARS) software suite, Merrick classified the point cloud data into seven distinct elevation classes (a minimal height-binning sketch in Python follows the list):

• Bare earth surface (digital terrain model of the ground surface);

• Vegetation (low < 1.0 meter, medium 1.0 to 2.0 meters, and high > 2.0 meters), with elevations greater than 2.0 meters representing tree canopy;

• Buildings; and

• Utilities (representative of major utility lines and towers).
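As an illustration of the height-based binning above, the following Python sketch labels an already geo-corrected LAS tile by height above a bare-earth DTM. This is not Merrick's MARS workflow; the file names are hypothetical, and laspy and rasterio stand in for whatever tools were actually used.

import laspy
import numpy as np
import rasterio

# Read one geo-corrected LiDAR tile (hypothetical file name).
las = laspy.read("site_a_tile.las")
coords = np.column_stack((las.x, las.y))

# Sample a bare-earth DTM raster (hypothetical file name) at each point location.
with rasterio.open("site_a_bare_earth_dtm.tif") as dtm:
    ground_z = np.array([v[0] for v in dtm.sample(coords)])

# Height above ground, in meters, then the vegetation thresholds from the list above.
# (Building and utility points would need separate treatment; only the vegetation
# thresholds are illustrated here.)
hag = np.asarray(las.z) - ground_z
labels = np.full(hag.shape, "bare_earth", dtype=object)
labels[(hag > 0.0) & (hag < 1.0)] = "low_vegetation"
labels[(hag >= 1.0) & (hag <= 2.0)] = "medium_vegetation"
labels[hag > 2.0] = "high_vegetation"   # elevations above 2.0 meters treated as tree canopy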

The project used digital hyperspectral imagery.

These LiDAR data served as the primary source for all feature elevations and derived height information used by ARA and RDECOM for their 3D terrain simulations.

Hyperspectral Data

The AISA Eagle sensor (manufactured by Spectral Imaging) was used to collect the 128 bands of 0.6-meter pixel resolution imagery from the blue (397-nanometer) to the near-infrared (997-nanometer) spectral wavelengths. Hyperspectral preprocessing included radiometric and atmospheric correction of the 128 spectral bands on a per-flight-line basis. The imagery was radiometrically corrected using in-flight derived gain and offset values per band. Atmospheric effects were removed from the data using a MODTRAN 4-derived radiative transfer model, which corrects for atmospheric transmittance due to H2O, O2 and CO2 absorption, atmospheric molecular and particulate scattering, and solar irradiance effects due to time of year and solar position in the sky (solar azimuth, solar elevation). The resultant dataset is in reflectance values, which permits the user to exploit several existing spectral libraries in the feature classification process.
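The radiometric step described above amounts to a per-band linear correction. Below is a minimal sketch, assuming the raw digital numbers and the in-flight gain and offset tables are available as NumPy arrays (the function name is illustrative); the MODTRAN 4 atmospheric correction is a separate step not reproduced here.

import numpy as np

def dn_to_radiance(dn_cube: np.ndarray, gains: np.ndarray, offsets: np.ndarray) -> np.ndarray:
    """Per-band radiometric calibration: radiance = DN * gain + offset.

    dn_cube:        raw digital numbers, shape (bands, rows, cols)
    gains, offsets: in-flight calibration values, shape (bands,)
    """
    return dn_cube * gains[:, None, None] + offsets[:, None, None]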

LiDAR data

The analysis of the hyperspectral imagery served as the main discriminator between land-cover classes found on the surface. Hyperspectral processing tools included ENVI 4.7 from ITT Visual Information Solutions (ITT) and ERDAS Imagine 9.2 from Leica Geosystems. Earth surface features were identified based on their characteristic spectral responses as a function of wavelength and were differentiated by land-cover class type and material composition. Identified classes included the following (an illustrative classification sketch follows the list):

• Buildings: building footprint, building material composition;

• Paved roads: road centerline, road edge of pavement, material composition;

• Paved surfaces: parking lots, concrete surfaces;

• Vegetation classes: tree canopy, grass, brush, wetland vegetation;

• Surface hydrology: lakes, ponds, rivers, streams; and

• Soils classes: primarily sandy soils.
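One standard way to match pixel spectra against library or reference spectra is the spectral angle mapper, one of the classifiers ENVI provides. The sketch below shows the core computation, assuming a reflectance cube and a set of reference spectra as NumPy arrays; it is an illustration, not necessarily the exact workflow used on this project.

import numpy as np

def spectral_angles(cube: np.ndarray, references: np.ndarray) -> np.ndarray:
    """Angle (radians) between every pixel spectrum and every reference spectrum.

    cube:       reflectance values, shape (bands, rows, cols)
    references: reference spectra, shape (n_classes, bands)
    returns:    angles, shape (n_classes, rows, cols)
    """
    bands, rows, cols = cube.shape
    pixels = cube.reshape(bands, -1)                   # (bands, n_pixels)
    dots = references @ pixels                         # (n_classes, n_pixels)
    norms = np.linalg.norm(references, axis=1)[:, None] * np.linalg.norm(pixels, axis=0)
    angles = np.arccos(np.clip(dots / norms, -1.0, 1.0))
    return angles.reshape(-1, rows, cols)

# Each pixel is assigned the class whose reference spectrum it matches most closely
# (smallest angle):  class_map = spectral_angles(cube, refs).argmin(axis=0)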

Urban terrain classification.

Data Integration and Analysis

The premise of using both LiDAR and hyperspectral imagery is to create an intelligent digital terrain model that expressly uses true surface feature information (as opposed to synthetic simulated data) in the creation of 3D visualizations for training purposes. Though the derived LiDAR and hyperspectral data are independently valuable, the fusion of the two datasets offers greater access to the unique spectral and topographic characteristics of each surface feature to be extracted and classified.

An example of this process is the extraction of the individual footprint and elevations associated with each building's point cloud from the LiDAR data and the integration of building rooftop material composition (asphalt, metal, shingles, clay tiles, concrete) derived from analysis of the hyperspectral imagery. This fusion generated a 3D model of each building that contains its location and areal extent (x, y, perimeter, area) as well as its individual height and gross rooftop material composition. Merrick has generated an urban terrain database that contains both intelligent infrastructure (buildings, paved roads, paved lots and utilities) and natural terrain information (terrain elevation, soils, vegetation cover and surface hydrology). All data were delivered by Merrick to ARA and RDECOM in formats suitable for use within a GIS and portable to a simulation-visualization terrain database.
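A minimal sketch of that attribute fusion, assuming the building footprints are already available as a shapefile along with height-above-ground and rooftop-material rasters; the file names are hypothetical, and geopandas and rasterstats stand in for the GIS tooling actually used.

import geopandas as gpd
from rasterstats import zonal_stats

buildings = gpd.read_file("site_a_building_footprints.shp")   # LiDAR-derived footprints

# Median LiDAR height above ground inside each footprint becomes the building height.
heights = zonal_stats(buildings, "site_a_height_above_ground.tif", stats=["median"])
buildings["height_m"] = [s["median"] for s in heights]

# The most frequent hyperspectral material class inside each footprint becomes the roof material.
materials = zonal_stats(buildings, "site_a_roof_material_classes.tif", stats=["majority"])
buildings["roof_material"] = [s["majority"] for s in materials]

# Areal extent attributes come straight from the footprint geometry.
buildings["area_m2"] = buildings.geometry.area
buildings["perimeter_m"] = buildings.geometry.length
buildings.to_file("site_a_buildings_attributed.shp")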

Synthetic urban terrain features as visualized within a modeling and simulation environment, including buildings and surrounding terrain.

The airborne data analysis was conducted using a suite of tools: Merrick's MARS 6.0 software for the LiDAR processing, and ITT's ENVI 4.7 and Leica Geosystems' ERDAS Imagine 9.2 for the processing and analysis of the hyperspectral imagery. ESRI's ArcMap 9.2 software was used to create shapefiles and associated attribute databases. Data deliverables included several formats: LiDAR LAS files, GeoTIFF image files, ENVI .dat files and ESRI shapefiles.

Under STTC funding, ARA developed several automated capabilities to turn the processed and classified LiDAR data and hyperspectral imagery into synthetic terrain for training. These capabilities import all of the delivered data, a series of GeoTIFF files (imagery and elevation data) and ESRI shapefiles (terrain features such as road segments, tree locations, building outlines and surface materials), into the simulation database.

These thematic data layers form the foundational database for generating 3D visualizations for urban and natural terrain training simulations. For targeted live training, the synthetic terrain only needs to contain elevations and any terrain features that can affect the flight of simulated projectiles. The trainees are physically located in the natural environment, so the locations of roads, buildings, canopy cover and the like are clearly evident to them. The synthetic terrain is used to resolve interactions between live trainees (i.e., Did I hit the object I was aiming at?). The simple nature of the format makes generating the synthetic terrain a matter of extruding 3D geometry from the 2D descriptive-numeric attributions found within the base data and then organizing and indexing the results for efficient storage and query operations.
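The extrusion-and-query idea can be illustrated with a small data structure: a 2D footprint plus a height attribute becomes a prism that a live-training server can test hit points against. The structure and names below are illustrative, not the STTC terrain format.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class BuildingPrism:
    footprint: List[Tuple[float, float]]   # 2D polygon vertices (x, y), meters
    base_z: float                          # ground elevation beneath the footprint
    height: float                          # LiDAR-derived building height

    def contains(self, x: float, y: float, z: float) -> bool:
        """True if a point (e.g., a simulated projectile impact) lies inside the extruded volume."""
        if not (self.base_z <= z <= self.base_z + self.height):
            return False
        # Standard ray-casting point-in-polygon test on the 2D footprint.
        inside = False
        j = len(self.footprint) - 1
        for i, (xi, yi) in enumerate(self.footprint):
            xj, yj = self.footprint[j]
            if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
                inside = not inside
            j = i
        return inside

In practice such prisms would also be spatially indexed (for example, by tile or grid cell) so that line-of-fire queries touch only nearby geometry.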

For virtual training, much more transformation occurs in the generation process than for live training. All of the detail in the environment is exposed so that all of the elements extracted from the original source data are represented and textures are applied. For the Semi-Automated Forces (SAF) entities in the simulated world to reason effectively, topology is established and represented in the synthetic terrain, which makes the process much more complex. The goal is to present enough detail for a convincing scene with enough relationships to enable fast SAF performance while maintaining accurate correlation to the original source data to support live training interoperability.

Hyperspectral derived classification included building material composition, paved roads and lots, lakes and ponds, grass, brush, and tree canopy vegetation.

Virtual Reality

The LiDAR data and hyperspectral imagery collected and processed under this effort are critical to producing accurate, realistic synthetic terrain for the modeling and simulation training community. They provide true Earth surface feature occurrences (including urban infrastructure and natural terrain features) at actual locations along with their descriptive-numeric physical characteristics, which are used to generate simulated, complex real-world visualization experiences. The inherent cartographic accuracy of these high-resolution datasets, the ability to identify and classify both man-made and natural urban terrain features in a quantitative manner, and the use of real-world locations (as with the Orlando, Fla., dataset) can provide a realistic 3D visualization experience for the M&S training community and, ultimately, the individual warfighter. These data provide the foundation for many research efforts while supporting the U.S. Army and other agencies in their continuous efforts to become more efficient and effective.

Author’s acknowledgments: Thanks to the following individuals for their assistance with this article: Chuck Campbell, project manager for Applied Research Associates (www.ara.com); and Julio de la Cruz, SNE chief engineer at the Embedded Training Technology Division at the RDECOM SFC Paul Ray Smith STTC in Orlando.