Playing by the Rules

In eCognition, rule sets rule. They are the lifeblood of the software and the key to successfully and accurately classifying datasets. 
Here’s how it works:

Rules are user-defined algorithms that instruct the software to automatically analyze a data stack and produce a classified dataset. Collectively, those step-by-step rules are called a rule set.

Each rule has a specific role within the workflow. The first key rule is segmentation, where the software examines all the pixels in an image and merges similar pixels into image objects based on parameters such as spectral (color) signature, shape and scale.
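To make the idea concrete, here is a minimal sketch of spectral-similarity segmentation in Python. It is not eCognition's multiresolution algorithm, which also weighs shape and scale and is configured through the GUI rather than code; this toy version simply grows regions by merging 4-connected pixels whose values are close to the region's seed pixel:

```python
from collections import deque

import numpy as np

def segment(image, threshold):
    """Toy segmentation: grow regions of 4-connected pixels whose
    values differ from the region's seed pixel by less than
    `threshold`. Returns an array of integer object labels."""
    labels = np.zeros(image.shape, dtype=int)
    next_label = 0
    # Visit every pixel; start a new image object at each unlabeled one.
    for seed in np.ndindex(image.shape):
        if labels[seed]:
            continue
        next_label += 1
        labels[seed] = next_label
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                        and labels[nr, nc] == 0
                        and abs(int(image[nr, nc]) - int(image[seed])) < threshold):
                    labels[nr, nc] = next_label
                    queue.append((nr, nc))
    return labels
```

Run on a single-band raster, this yields one label per image object; real segmentations operate on all bands at once and balance spectral homogeneity against object shape.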

The next key rule is classification, where the software follows defined if/then scenarios to classify image objects based on a targeted class description. The export rule then creates specified data products that can be integrated into GIS or CAD software.
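An if/then class description of this kind amounts to a small decision rule over object features. The feature names and thresholds below are hypothetical, and in eCognition such rules are built in the GUI rather than written as code:

```python
def classify(obj):
    """Toy if/then class description over per-object features.
    `ndvi` and `mean_height` are assumed to have been computed for
    each image object during earlier steps of the rule set."""
    if obj["ndvi"] > 0.3:             # spectrally green?
        if obj["mean_height"] > 2.0:  # tall enough to be a tree?
            return "tree"
        return "low vegetation"
    return "non-vegetation"
```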

Creating rules is a completely interactive experience driven by the software's graphical user interface (GUI), so users do not need to be programmers or know a specific programming language. Users can choose from a large variety of rules and configure them to solve their specific image analysis problem. Rule sets can be applied to a single image or to a group of images; the level of automation and transferability of the rule set will depend on the specific project needs and the intentions of the user.

A primary strength of eCognition is that it can integrate multiple dataset types such as raster images, vector files and point clouds into a single project environment. Data stacks can then be segmented and, more importantly, objects can be identified and extracted based on the different features of these datasets. This means that as the software analyzes data stacks, it uses all of the rule parameters to distinguish an object accurately, for example, by its spectral values as well as its height and proximity to an existing vector layer, a process similar to how the human brain recognizes and categorizes things.

“The automatic segmentation and identification of objects is why eCognition is such a powerful approach,” says Whitney Broussard, a senior scientist at JESCO, an environmental and geotechnical company. “Because it can handle multiple different data layers, it has that ability to bring in elevation, for example, and find tall trees on a prairie. On the imagery alone, it may all look green. But because it can analyze height values, it can determine the green of a tree canopy versus the green of low vegetation. And once those object boundaries are drawn correctly, your classification will be spot on.”

Another example of how eCognition differentiates vegetation, combining color and near-infrared values with elevation values, is identifying vegetation encroachment within a power line corridor. Let’s assume the user has 4-band imagery (RGB + NIR), a point cloud and a vector file for the power line (i.e. the centerline). In this case, the vector file can be buffered to generate the extents of the power line corridor in order to focus the analysis purely on that area. Parameters can then be set to extract vegetation that is encroaching on the power line: trees greater than a specified height and within a specified distance of the line. A combination of spectral and elevation values can describe such tree objects; an nDSM (normalized digital surface model) can isolate “elevated” objects, and an NDVI (normalized difference vegetation index, derived from the 4-band input image) can differentiate vegetation from non-vegetation.

Finally, objects classified as elevated vegetation (i.e. trees) within the corridor can be further discriminated based on their proximity to the power line vector and flagged for removal. With eCognition, users have an objective, automated image analysis process that is repeatable and customizable. They just need to set the rules.
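The corridor workflow above can be sketched in a few lines of Python. The object features here (class, height from the nDSM, centroid) are assumed to have been produced by earlier segmentation and classification steps, and the buffer is approximated by a simple point-to-segment distance test rather than a true GIS buffer:

```python
import math

def dist_to_segment(p, a, b):
    """Distance from point p to line segment a-b (all (x, y) tuples)."""
    px, py = p
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def flag_encroachment(objects, centerline, corridor_width, min_height):
    """Return the tree objects inside the buffered corridor that
    exceed the height threshold and should be flagged for removal."""
    flagged = []
    for obj in objects:
        # Distance from the object's centroid to the nearest centerline segment.
        d = min(dist_to_segment(obj["centroid"], a, b)
                for a, b in zip(centerline, centerline[1:]))
        if (obj["class"] == "tree"
                and obj["height"] >= min_height
                and d <= corridor_width / 2):
            flagged.append(obj)
    return flagged
```

In eCognition all of this is expressed as configurable rules in the GUI rather than code, but the logic, buffer the vector, isolate elevated vegetation, then filter by proximity, is the same.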