“You can’t model, protect or preserve a ‘what’ if you don’t know what ‘what’ is and ‘where’ it is.” This is Curt Storlazzi’s philosophy in a nutshell when it comes to understanding the characteristics of coral reefs.
A researcher and geological oceanographer with the U.S. Geological Survey (USGS), Storlazzi is part of a team of research scientists, operational scientists, electronics technicians, mechanical technicians, underwater camera system designers and safety officers who work collectively to better understand and safeguard the coral reefs of the U.S.
According to USGS, coral reefs serve as useful indicators of the health of marine environments, but they are declining in many parts of the world, with an estimated 30 percent expected to be destroyed or seriously degraded in the next decade. In 1998, the United States Coral Reef Task Force (USCRTF) was established by presidential executive order. Made up of 12 federal agencies; seven U.S. states, territories and commonwealths; and three freely associated states, the USCRTF has a mission to foster on-the-ground action to conserve coral reefs.
One facet of the initiative, the USGS Pacific Coral Reefs Project, aims to assist decision makers whose judgment affects coral reefs by establishing a basic knowledge of how the ecosystems are structured and how they function.
“Whenever we do a project, the first thing you need to know is what is there. If we’re going to make measurements of currents or temperature or biology, you want to know, where is that in this environment?” Storlazzi says. “Mapping is a fundamental data layer upon which the process-based measurements and modeling sit.”
The goal of the USGS Pacific Coral Reefs Project is to determine the spatial variability of the following parameters at high resolution:
- Reef tract structure, including overall morphology, rugosity and complexity
- Benthic habitat, including bottom type (e.g. limestone or volcanic pavement, sand, mud), bottom cover (e.g. coral, algae, macroalgae) and biodiversity
- Transitions between colonized coral reef habitat and adjacent, less developed environments
Reaching conclusions on these attributes requires the collection of field measurements in addition to laboratory research. On the field measurement side, a variety of mapping techniques are used to help build the most comprehensive coral reef models possible, including high-resolution bathymetry from airborne LiDAR, space-based multispectral remote sensing, underwater digital photo/video mapping systems and swath acoustic seabed mapping systems.
One form of geospatial data the USGS project does not generally acquire itself, but benefits from, is bathymetric LiDAR. The systems, mounted on manned aircraft operated by the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Army Corps of Engineers, produce 3D data that both groups make freely available.
NOAA describes bathymetric LiDAR as the measurement of the time delay between the transmission of a laser pulse and its return signal, which ultimately yields water depth and shoreline elevation. A lower-frequency infrared pulse is reflected off the water’s surface, and a higher-frequency green laser is reflected off the seafloor. NOAA says bathymetric LiDAR systems can reach water depths of up to 50 meters in cases of low turbidity. Storlazzi says his team generally works off the coast in depths of less than 150 feet, where bathymetric LiDAR is most often used, in part because of the challenges of operating large boats in shallow, rugged ocean environments.
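The relationship NOAA describes can be sketched in a few lines. This is a simplified, vertical-incidence version; real systems also correct for beam angle, refraction at the air-water interface and waveform effects, and the 443-nanosecond delay used in the example is an illustrative value.

```python
# Sketch of the depth calculation behind the bathymetric LiDAR return pair,
# assuming simple vertical incidence.

C_VACUUM = 299_792_458.0      # speed of light in vacuum, m/s
N_WATER = 1.33                # approximate refractive index of seawater
C_WATER = C_VACUUM / N_WATER  # light travels slower through water

def depth_from_returns(t_surface_s: float, t_bottom_s: float) -> float:
    """Water depth from the delay between the infrared surface return
    and the green seafloor return (a two-way travel time)."""
    dt = t_bottom_s - t_surface_s
    return dt * C_WATER / 2.0  # halved: the pulse travels down and back

# A 443-nanosecond delay between the two returns corresponds to roughly
# the 50-meter clear-water limit NOAA cites.
print(round(depth_from_returns(0.0, 443e-9), 1))  # → 49.9
```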
In many coral reef areas of the U.S., bathymetric LiDAR data collection simply has not yet been conducted. In a number of these locations, Storlazzi says, he and his USGS colleagues have worked with NOAA to generate what they call “pseudo bathymetry.”
“So if you know that carbonate sand is white, and real close to the surface, in shallow water, it will look white. As you go deeper and deeper, it will look more and more blue. So you can model water depth based on that light transition,” Storlazzi says. “An example of that is, in the Republic of the Marshall Islands the U.S. government hasn’t used LiDAR because LiDAR’s expensive. So we used existing satellite imagery like DigitalGlobe WorldView-2 and some single-beam bathymetry in the area to calibrate and validate this pseudo bathymetric model.”
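The light-attenuation idea Storlazzi describes underlies standard pseudo-bathymetry methods in the remote sensing literature, such as the log band-ratio technique sketched below. This is a sketch of the general approach, not the exact USGS/NOAA model; the gain `m1` and offset `m0` are hypothetical placeholders that in practice would be fit against known depths such as the single-beam soundings he mentions.

```python
import math

# Pseudo-bathymetry via the log band-ratio technique: green light
# attenuates with depth faster than blue, so the ratio of log
# reflectances tracks water depth.

def pseudo_depth(blue_reflectance: float, green_reflectance: float,
                 m1: float = 60.0, m0: float = 55.0) -> float:
    """Estimate water depth in meters from two spectral bands."""
    ratio = math.log(1000.0 * blue_reflectance) / math.log(1000.0 * green_reflectance)
    return m1 * ratio - m0

# Over deeper water the green band dims relative to blue, raising the
# ratio and the estimated depth.
print(round(pseudo_depth(0.12, 0.10), 1))  # → 7.4
```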
In other instances, the USGS Pacific Coral Reefs Project has collected data using interferometric sidescan sonar. Storlazzi says there are three levels of ship-borne sonar systems: the “old” single-beam system, where one ping goes down and one comes up; the multi-beam system, usually composed of a few hundred beams spread out perpendicular to the ship; and interferometric sidescan sonars capable of sending out 4,096 pings. Storlazzi notes important differences in the data the systems produce. He considers single-beam the typical boat’s fathometer, and multi-beam the most precise, useful for measuring harbor depths and the like. He says multi-beam sonars may also collect acoustic backscatter data, which indicates how hard or soft the surface is and helps in interpreting sand, mud, rock or rippled sand. However, he notes that users may not capture the full waveform of the energy bounced off the seafloor.
“Now, these new interferometric systems are capturing that and you can use some high-end processing techniques and get some information on what the distance to that target is so that you can start to generate bathymetry.”
With interferometric sidescan sonar, then, the seafloor is mapped in terms of both its shape and its character. The finer spatial resolution suits geology, and it is useful for coral reef habitat research that calls for extensive detail.
“The exact elevation of that point — the z value, in an x, y, z — may not be as precise as multibeam, but you’ve got an order of magnitude more of those spatially. So you can say: Is the seabed rippled here? Just how irregular is it? With all of those acoustic tools, the closer you are to the seabed, the higher resolution you get. It’s like shining a flashlight. Farther away it’s broad, but not as intense. As you get closer, it illuminates a smaller area, but much brighter.”
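The interferometric processing Storlazzi alludes to rests on a simple phase-difference relation: two receive staves a distance apart compare the phase of the same seafloor echo, and the phase difference gives the echo’s arrival angle. The sketch below illustrates that relation with purely illustrative values, not the specifications of any real sonar.

```python
import math

# Toy illustration of the interferometric principle.

def arrival_angle_deg(phase_diff_rad: float, wavelength_m: float,
                      stave_separation_m: float) -> float:
    """Angle of arrival: sin(theta) = delta_phi * lambda / (2 * pi * d)."""
    sin_theta = phase_diff_rad * wavelength_m / (2.0 * math.pi * stave_separation_m)
    return math.degrees(math.asin(sin_theta))

# A 234 kHz ping in ~1,500 m/s seawater has a ~6.4 mm wavelength; a
# quarter-cycle phase difference across a 1 cm stave spacing places the
# echo about 9 degrees off the array's normal.
wavelength = 1500.0 / 234_000.0
print(round(arrival_angle_deg(math.pi / 2.0, wavelength, 0.01), 1))  # → 9.2
```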
Underwater Video Footage
The sonar toolkit USGS uses for coral reef mapping is often accompanied by the use of underwater imagery to help further characterize the data acquired. The imagery is collected using towed cameras, often with two parallel lasers so the imagery has scale. All of it is georeferenced so the team can go back, say 10 or 20 years later, and see what change has occurred.
An important development that has come up is structure-from-motion, Storlazzi says. “People for a while used two-camera systems and were making what we call stereo pairs, which is how your eye works at any one time. If you’ve got both of your eyes open, you’re working in stereo and you develop a three-dimensional model in your brain. … So instead of having stereo vision at one time, if you move one eye, you’ve now been able to determine structure-from-motion. So instead of two images at once, you take that same shot from two different positions by moving the camera and you can develop the same stereo model.”
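The stereo principle in Storlazzi’s eye analogy reduces to one classic relation: a point’s depth follows from the focal length, the baseline between the two camera positions, and the point’s shift (disparity) between the images. The numbers below are illustrative, not taken from the USGS systems.

```python
# The pinhole-stereo relation underlying both stereo pairs and
# structure-from-motion: depth Z = f * B / d, where f is the focal
# length in pixels, B the baseline between camera positions and d the
# disparity (how far a feature shifts between the two images).

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Distance to a feature seen from two camera positions."""
    return focal_px * baseline_m / disparity_px

# Moving the camera 0.5 m between shots, a feature that shifts 100 pixels
# under a 1,000-pixel focal length lies 5 m away.
print(depth_from_disparity(1000.0, 0.5, 100.0))  # → 5.0
```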
With the help of improved computer processing, Storlazzi says the structure-from-motion approach has exploded. In a case where the acquisition technique is paired with acoustics, Storlazzi says the acoustics might indicate that a given seafloor spot is high up or rugose, while the video camera system identifies that the area is covered with coral and how dense the coral coverage is and what species of coral is present there.
Most of the high-definition cameras are 1,000 by 1,000 pixels, so if a camera is being towed 2 meters above the seabed, for example, Storlazzi points out that pixel size is on the order of centimeters or high millimeters. Since the video consists of numerous frames, the team can extract individual frames.
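That pixel-size figure can be checked with a back-of-envelope calculation. The article gives only the sensor width (about 1,000 pixels) and the 2-meter tow altitude, so the 60-degree field of view below is an assumption.

```python
import math

# Ground footprint of one pixel for a downward-looking towed camera.

def pixel_footprint_m(altitude_m: float, fov_deg: float = 60.0,
                      pixels_across: int = 1000) -> float:
    """Ground size of one pixel from altitude and field of view."""
    swath_m = 2.0 * altitude_m * math.tan(math.radians(fov_deg / 2.0))
    return swath_m / pixels_across

# At 2 m altitude each pixel covers a patch a couple of millimeters
# across, consistent with the "centimeters or high millimeters" figure.
print(round(pixel_footprint_m(2.0) * 1000.0, 1))  # → 2.3 (millimeters)
```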
“Say you drop a quarter on the seabed and you’re towing a video camera over it and you’re shooting at 30 frames per second. You might see that quarter in 80 frames and you’re seeing it from all these different angles as the camera’s being towed over it. So now you have 80 solutions to that point and, using computer processing, it can tell you where that point is relative to its adjacent points.”
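The 80-frame figure is easy to reproduce: the number of frames that see one seabed point is the time the point stays in view multiplied by the frame rate. The along-track footprint and tow speed used below are assumed values, not from the article.

```python
# Reproducing the roughly-80-overlapping-views arithmetic from the
# quarter-on-the-seabed example.

def frames_seeing_point(footprint_along_track_m: float,
                        tow_speed_m_s: float, fps: float = 30.0) -> int:
    """Count of frames (i.e. viewpoints) that observe one seabed point."""
    time_in_view_s = footprint_along_track_m / tow_speed_m_s
    return round(time_in_view_s * fps)

# An assumed 2 m along-track footprint towed at 0.75 m/s, shot at 30 fps:
print(frames_seeing_point(2.0, 0.75))  # → 80 views, i.e. 80 solutions
```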
USGS is able to take the underwater video and build 3D models from it at two or three orders of magnitude higher resolution than airborne LiDAR data, and one or two orders of magnitude higher than sonar systems such as multi-beam or interferometric sidescan. A smaller area is covered, but at far higher resolution.
He compares the video footage to the still image cameras that often accompany terrestrial laser scanners, helping operators not only identify points in a point cloud, but also helping them refer to real images to be sure of what the point cloud represents, like a tree, for example. “We still have a terrestrial LiDAR scanner,” Storlazzi says. “The thing’s $300,000. What we can do with a $1,000 digital SLR camera is get about 95 percent the precision for one three-hundredth of the cost.”
Data Acquisition Challenges
Choosing which data acquisition technique to apply all depends on scale, Storlazzi says. “Finer scale, finer scaled pixels. So you trade off area coverage with resolution.”
When trying to understand the general characteristics of a coral reef tract that may extend 15 to 20 kilometers (e.g. for hydrodynamic modeling of waves, currents and larval dispersal), airborne bathymetric LiDAR is the best way to go, so long as the water is clear, because it can cover wide areas with little time and effort. In one plane passover, Storlazzi says, 700-meter-wide swaths can be mapped at 2- to 4-meter resolution. Sonar, by contrast, often has a swath-width-to-depth ratio of around five-to-one, meaning mapped areas may be significantly smaller than with LiDAR and the same coverage more expensive to carry out.
At the other end of the spectrum from LiDAR are newer optical methods like structure-from-motion with video, which offer greater detail but require more time and manpower than aerial techniques. Such an approach is useful when researchers want to get down to relationships between species and the seafloor, how rough the seafloor is, and whether coral is alive or dead.
While LiDAR is great, Storlazzi says there are big costs associated with it, and the problem is that most of the coral reefs are in very remote areas. “If you look at the globe in the Pacific, there are a lot of islands out there and it’s very costly to mobilize those aircraft and get them out there and do that.”
For this reason, he says, making LiDAR more affordable and accessible is going to be key moving forward. Another notable challenge the USGS coral reef mapping team faces, and the biggest one according to Storlazzi, is acquisition itself. Because the team usually maps in shallower water, where waves and weather are much bigger factors, time and safety are concerns.
Bringing a 300-foot boat into 30 feet of water, for example, is not realistic. Storlazzi says these factors are a part of the reason why these shallow areas are some of the most poorly mapped areas of the seafloor. While he acknowledges the vast swaths of open ocean that aren’t necessarily accounted for in mapping, he says the seafloor there is not as variable as near the coast and it does not come into nearly as much contact with humans.
As for time, he says sonar and camera systems have a small field of view, requiring the boat to run many closely spaced lines over a given spot to obtain the data needed for thorough analysis. Storlazzi gives an example comparing coverage in two different water depths, assuming a one-to-one ratio: for every meter of water depth, one meter across the seafloor can be seen. This means in 100 meters of water, a 100-meter-wide strip can be covered in one pass; in one meter of water, just one meter across, requiring many more passes of the vessel for seamless coverage. “The computing and the processing power and the data storage, that’s not the limitation,” Storlazzi says. “It’s the dangerous environments and having to do a lot of it in the shallow waters.”
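The arithmetic of that example can be written out directly, assuming the swath width equals the water depth (the one-to-one ratio) and ignoring overlap between adjacent survey lines:

```python
import math

# Shallow-water survey arithmetic: passes needed to cover a strip of
# seafloor when swath width scales with water depth.

def passes_needed(strip_width_m: float, depth_m: float,
                  swath_to_depth: float = 1.0) -> int:
    """Number of boat passes to cover a strip of seafloor."""
    swath_m = swath_to_depth * depth_m
    return math.ceil(strip_width_m / swath_m)

# The same 100-m-wide strip: one pass in deep water, 100 in the shallows.
print(passes_needed(100.0, 100.0), passes_needed(100.0, 1.0))  # → 1 100
```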
Looking ahead, he says he can see the human element being gradually removed from the data acquisition process through the advancement of GPS-guided autonomous surface vehicles. Early models are already available and in use by colleagues in low-energy shallow environments, and he says he’s looking forward to their applicability in rougher waters. “Because it gets boring. The term we use is ‘mowing the lawn,’ back and forth, back and forth.”
In the meantime, Storlazzi expects the USGS Pacific Coral Reefs Project to continue mapping U.S. coral reefs, mostly in the Pacific Ocean but also in the Atlantic from time to time. He places great value on the geospatial technologies and methods used to characterize coral reefs: as a scientist concerned with how humans affect these ecosystems, how climate and weather events impact them, and how they interact with the species that inhabit them, he believes models that can be thoroughly analyzed help foster initiatives that keep coral reefs thriving.
For more information on USGS coral reef mapping, visit coralreefs.wr.usgs.gov/mapping.html.