GEOG 883
Remote Sensing Image Analysis and Applications

Change Detection Workflows


Your textbook does an excellent job of describing various change detection methods. As with feature extraction, there are change detection techniques that operate at the pixel level and others that operate at the object level. The videos below provide examples of two different approaches to change detection, one using a pixel-based approach and another using an object-based approach. There is no single optimal approach to change detection; the most successful change detection projects often employ a combination of techniques. As with any remote sensing project, mapping change requires that you have a thorough understanding of your data and that you develop a well-designed remote sensing workflow.

Both videos use the 2013 Yosemite Rim Fire as the example with Landsat 8 pre- and post-fire data as the source imagery.

Video: GEOG 883 Change Detection Yosemite Rim Fire Pixel-based (7:15)

Video Transcript

JARLATH O'NEIL-DUNNE: In this video, we'll take a look at pixel-based image differencing change detection techniques in ArcGIS, using the Yosemite Rim Fire as an example.

The Yosemite Rim Fire started on August 17, 2013. It wasn't fully contained until late October of that year. It burned over 250,000 acres in the Sierra Nevada mountain range in California.

Landsat satellite imagery, with its 30-meter resolution and multispectral capabilities, is an excellent data source for change detection. The USGS Global Visualization Viewer (GloVis) makes it easy to browse through Landsat scenes. We see that a Landsat 8 image was acquired pre-fire on August 15, and that we have another image from September 16 that shows the majority of the burned area. We'll download both of these images and use them in our example.

After creating composite band images for both Landsat scenes, I loaded them into ArcGIS. In both cases, I've displayed the images as 654 band composites, meaning that shortwave infrared, near-infrared, and red light are assigned to the red, green, and blue color guns, respectively. This band combination is excellent for vegetation analysis.

Using our image analysis tools and the swipe capability, we can swipe away the August 15 scene and clearly see the effects of the fire. To better illustrate the change that's occurred between these two scenes, we use the difference tool, also in the Image Analysis window.

The first step is to select both scenes in the Image Analysis window. Then scroll down and click on the button for Differencing. The output of the Image Difference function is a new image that contains the band-by-band differences between the pixel values from the August 15 scene and the September 16 scene.
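For readers who want to reproduce the differencing step outside of ArcGIS, here is a minimal sketch using rasterio and NumPy; the file names are hypothetical stand-ins for the two composite scenes.

```python
import numpy as np
import rasterio

# Hypothetical file names for the pre- and post-fire composite images.
with rasterio.open("landsat8_2013aug15.tif") as pre, \
        rasterio.open("landsat8_2013sep16.tif") as post:
    pre_bands = pre.read().astype(np.float32)    # shape: (bands, rows, cols)
    post_bands = post.read().astype(np.float32)
    profile = pre.profile

# Band-by-band difference, mirroring the Image Analysis Difference function.
diff = pre_bands - post_bands

profile.update(dtype="float32")
with rasterio.open("difference.tif", "w", **profile) as dst:
    dst.write(diff)
```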

I'll rename this new image, Difference, in the layer tree, and then go in and adjust the symbology, so that it, too, has a 654 band combination. It's important to note that the output of Image Analysis functions, such as the Difference tool, exists only within the ArcMap document; you'll need to save the image if you want to make it permanent.

The output of the Difference function clearly displays the area that's burned between August 15 and September 16, 2013. In order to get an estimate of the burned area, we'll need to classify the image. We will do so using an unsupervised pixel-based technique.

We'll start off by going over to our toolbox and opening the ISO Cluster Unsupervised Classification tool. This tool applies the ISODATA unsupervised classification to the input image. The output raster layer will contain a specified number of classes. We're going to set this as 10. This means that each of the 10 classes groups spectrally similar pixels, based on the difference image.
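ISODATA itself ships with ArcGIS Spatial Analyst; as a rough open-source stand-in, k-means clustering of the difference bands produces a comparable unsupervised result. A minimal sketch with scikit-learn, continuing the arrays from the differencing sketch above (KMeans here is a substitute for ISODATA, which additionally splits and merges clusters):

```python
import numpy as np
from sklearn.cluster import KMeans

# diff: the (bands, rows, cols) difference array from the previous sketch.
bands, rows, cols = diff.shape
pixels = diff.reshape(bands, -1).T              # one row per pixel

# Partition the difference image into 10 spectral clusters.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(pixels)
classified = kmeans.labels_.reshape(rows, cols) + 1   # classes numbered 1-10
```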

Next, we'll give our output raster layer a name, using the .img extension to specify ERDAS IMAGINE format. And then click OK to run the tool.

As expected, our output raster layer consists of 10 classes. Once again, each of these classes is spectrally distinct from the other nine, based on the ISODATA algorithm. In order to isolate those classes that actually reflect the burned area, we'll have to do some data exploration. As we've done before, we'll make use of the Image Analysis tools and the swipe function to compare our output classified image to our input difference layer, and also our original Landsat scenes.

It appears that most of the change is contained within the first two classes, Classes 1 and 2, so we'll use the Raster Calculator to create a new image containing only these two classes.

The expression we enter into the Raster Calculator selects cells where the ISODATA classification equals 1 or equals 2, producing a new raster image. The output raster will consist of cells with values of either 0 or 1. A cell value of 0 means the criteria were not met. A cell value of 1 means they were met, that is, the original cell value was either 1 or 2.
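In NumPy terms, continuing the sketch above, the raster calculator expression reduces to a single Boolean test on the classified array:

```python
# Cells whose class is 1 or 2 become 1 (change); all others become 0.
change = np.where((classified == 1) | (classified == 2), 1, 0).astype(np.uint8)
```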

In our new output raster layer, those areas that correspond to change, Classes 1 and 2, have a cell value of 1. So we'll make the cell values of 0 transparent.

Using our Swipe tools, we see that we've done a fairly decent job of capturing change through the combination of image differencing and unsupervised classification.

We can now use the Reclassify tool to create a new raster layer that removes the cells that have a value of zero, that is, all 0 cells are set to NoData, retaining only the change class, Class 1.

Converting the output of the Reclassify tool into Polygon format will allow us to do two things. First of all, we could manually edit the polygon layer, to deal with errors and inconsistencies. Second, it will allow us to more easily compute the actual area of change so that we can quantify the effect of the Yosemite Rim Fire.

Once the raster layer has been converted to polygon format, we will adjust the symbology so that we can more easily view those areas of change due to the fire. Because this particular feature class is stored in a geodatabase, we can go into the attribute table and use the Shape Area field to get an estimate of the burned area.
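A rough open-source equivalent of the polygon conversion and area calculation, assuming rasterio and shapely and continuing the arrays above:

```python
from rasterio import features
from shapely.geometry import shape

# Polygonize only the change cells (value 1). The transform comes from the
# difference raster's profile, so polygon areas are in map units
# (meters for Landsat's UTM projection).
polygons = [
    shape(geom)
    for geom, value in features.shapes(change, transform=profile["transform"])
    if value == 1
]
burned_m2 = sum(p.area for p in polygons)
print(f"Estimated burned area: {burned_m2 / 4046.86:,.0f} acres")
```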

In this video, we shared an example of a simple and straightforward approach to change detection, using image differencing and unsupervised classification. It's important to note that change detection can be a very complex process. It may be important to account for radiometric differences between your input scenes and to apply a host of post-processing techniques in order to improve the results of your classification.

Video: GEOG 883 Change Detection Yosemite Rim Fire Object-based (7:13)

Video Transcript

JARLATH O'NEIL-DUNNE: This video will take a look at change detection using Landsat data from the Yosemite Rim Fire as an example. We will employ object-based post-classification change detection to map the extent of vegetation loss from the fire.

The Yosemite Rim Fire was a major fire in the Sierra Nevada in California. It burned over 250,000 acres and started as the result of a hunter's illegal campfire. The fire was clearly visible in satellite imagery. We used August Landsat imagery acquired prior to the fire and September Landsat imagery acquired after most of the damage had occurred to map the vegetation loss from the Yosemite Rim Fire.

I've created an eCognition project containing the pre- and post-fire Landsat scenes. If we go in and view the project properties, we see that I have a very simple project set up.

I've used the red and near-infrared bands from the August scene, which was pre-fire, and the red and near-infrared bands from the September scene, which was acquired after most of the damage had occurred. One could develop a more robust approach using all Landsat bands, but we'll keep this example simple.

I'm displaying the data in two different ways. In the top view, I'm looking at only the near-infrared and red bands from the August or pre-fire image. In the bottom image, I'm displaying a combination of the data pre- and post-fire. The bottom image helps to highlight the burned area. The top image shows the conditions prior to the onset of the Yosemite Rim Fire.

Now we'll move step by step through the rule set. The first algorithm in the rule set is a segmentation algorithm. The pre- and post-fire Landsat scenes have been weighted equally, and we're using a scale parameter of 100 and weighting the spectral properties of the data over the shape properties.
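eCognition's multiresolution segmentation is proprietary, but for readers experimenting outside eCognition, scikit-image's SLIC offers a loose analogue. The parameters below are illustrative, not equivalents of the scale parameter of 100, and the four band arrays are assumed to have been loaded already:

```python
import numpy as np
from skimage.segmentation import slic

# red_aug, nir_aug, red_sep, nir_sep: 2-D arrays holding the red and
# near-infrared bands from the pre- and post-fire scenes (assumed loaded).
stack = np.dstack([red_aug, nir_aug, red_sep, nir_sep]).astype(np.float32)

# SLIC stands in for eCognition's multiresolution segmentation here;
# n_segments and compactness are illustrative values only.
segments = slic(stack, n_segments=2000, compactness=0.1,
                channel_axis=-1, start_label=1)
```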

I'll now execute the segmentation algorithm, and we'll zoom in to explore some of the resulting object properties. Selecting an object reveals those properties in the Image Object Information window. And you can see that we've got three different NDVI calculations, NDVI for August, September, and then also the difference. NDVI values are not automatically calculated in eCognition. They're customized features, which is to say that I entered these calculations.

Let's explore the NDVI calculation for August. A similar NDVI calculation was developed using values from the September imagery. Finally, I used both NDVI values in a customized feature called NDVI Diff, which subtracts the September NDVI values from the August values.

Customized features are computed at the Image Object level, and thus each and every object in our project has NDVI values for August and September, along with the NDVI difference. We can use these NDVI values in our classification algorithms to identify those areas that have experienced vegetation loss due to the fire.
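The customized features amount to the standard NDVI formula applied to each date, plus their difference. A minimal per-pixel NumPy version (eCognition computes these per object from mean band values; the array names follow the segmentation sketch above):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + 1e-10)   # epsilon avoids divide-by-zero

ndvi_aug = ndvi(nir_aug, red_aug)
ndvi_sep = ndvi(nir_sep, red_sep)
ndvi_diff = ndvi_aug - ndvi_sep   # September subtracted from August, per the rule set
```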

To obtain these values, we'll need to explore our data both by clicking on the Image Objects and using other tools such as Feature View. Using Feature View, we can symbolize our objects based on their attributes. In this case, I'm looking at the NDVI difference values displayed as grayscale for the entire area.

Now I'll move through the rest of the rule set. After segmentation, we have a rule called remove classification. This simply deletes the classification and allows us to start with a clean slate after segmentation.

Following that, we have a simple assign class algorithm. This simply says if you're an unclassified Image Object and your NDVI value from August is greater than 0.2, you're assigned to the Veg-August class. Thus, in this rule set, identifying vegetation in the pre-fire image is the starting point for our whole change detection classification. And we see that this simple rule and NDVI threshold does an excellent job of identifying areas in August that were vegetated.

Our next algorithm, which is also an assign class algorithm, uses the August vegetation classification as the starting point. The algorithm says if you're vegetation in August and your NDVI difference is greater than 0.1, you're assigned to the Vegetation Loss class.
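Expressed as array logic, the two assign class rules reduce to the thresholds quoted above (a per-pixel sketch; eCognition applies them to object-level NDVI means):

```python
# Rule 1: an area is August vegetation if its pre-fire NDVI exceeds 0.2.
veg_august = ndvi_aug > 0.2

# Rule 2: August vegetation whose NDVI dropped by more than 0.1 is loss.
veg_loss = veg_august & (ndvi_diff > 0.1)
```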

When we execute this algorithm, we see that the area of vegetation loss corresponds very well with our change detection composite image on the lower pane. Now that I've identified areas of vegetation loss, I can use the assign class algorithm to remove the August Vegetation classification and merge both the Vegetation Loss and unclassified objects.

Although the classification seems to be largely successful, it's not the most cartographically pleasing representation of the extent of the Yosemite Rim Fire. We can now make use of the spatial properties contained within our objects to clean up the classification. One of the great advantages of object-based approaches is that we can incorporate information such as the geometric characteristics and spatial relationships.

To remove some of the isolated areas classified as Vegetation Loss that are clearly outside of the Rim Fire, we'll employ a simple rule that says if the number of pixels is less than 3,000, reassign the Vegetation Loss objects to unclassified. When we execute this algorithm, small patches of isolated vegetation loss are removed.
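A raster analogue of this minimum-size rule, sketched with SciPy's connected-component labeling; the 3,000-pixel threshold comes from the rule set, while the function name is illustrative:

```python
import numpy as np
from scipy import ndimage

def remove_small_patches(mask, min_pixels=3000):
    """Drop connected patches of True cells smaller than min_pixels."""
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = [i + 1 for i, size in enumerate(sizes) if size >= min_pixels]
    return np.isin(labels, keep)

veg_loss = remove_small_patches(veg_loss, min_pixels=3000)
```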

Given that we just assigned those small isolated vegetation loss objects to the unclassified category, we'll follow this up with a merge region algorithm to break down the boundaries. Zooming into the area affected by the Rim Fire, we see that we have a lot of small patches of unclassified objects. These objects were likely affected by the Rim Fire, but probably didn't experience the severe vegetation loss that other areas did.

We'll employ a simple rule that says all unclassified objects that have a relative border to vegetation loss of 1, meaning they're completely surrounded by vegetation loss, get reassigned to the Vegetation Loss category. Finally, we'll clean things up by merging all the objects in the Vegetation Loss category. Zooming out, you can see that we have an excellent representation of the extent of the Rim Fire that's both accurate and cartographically pleasing.
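The relative border rule can be approximated on a raster by checking whether each unclassified patch's one-pixel neighbor ring falls entirely within the Vegetation Loss class. A sketch, continuing the arrays above:

```python
from scipy import ndimage

def fill_enclosed(veg_loss):
    """Reassign unclassified patches completely surrounded by vegetation loss."""
    labels, n = ndimage.label(~veg_loss)
    filled = veg_loss.copy()
    for region_id in range(1, n + 1):
        region = labels == region_id
        ring = ndimage.binary_dilation(region) & ~region   # one-pixel border
        if ring.any() and veg_loss[ring].all():            # relative border == 1
            filled[region] = True                          # merge into loss
    return filled

veg_loss = fill_enclosed(veg_loss)
```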

In this video, we looked at an example of post-classification change detection using object-based techniques. We started by identifying areas in the pre-fire image that were healthy vegetation, then used that classification as the baseline for identifying areas of change based on differences in NDVI between the pre- and post-fire images. Finally, we employed geometric characteristics and spatial properties to clean up the classification and make it more cartographically appealing.