Detecting changes in land-use/land-cover is one of the most fundamental and common applications of remote sensing image analysis. The most rudimentary form of change detection is visual comparison of two images by a trained interpreter. Given a display system large enough to show both images simultaneously, with a cursor that tracks to the same ground location in each, an interpreter can quickly digitize valuable GIS-compatible data locally while streaming the images themselves over a relatively low-bandwidth Internet connection.
Digital algorithms also exist for change detection. Unclassified images can be compared on a pixel-by-pixel or patch-by-patch basis; classified images can be compared with the results indicating changes in specific classes over time. Either way, the concept seems quite simple; in practice, there are a great number of influences that must be monitored and controlled to achieve valid change detection results.
- Georeferencing of each image must be relatively precise. If there are spatial offsets due to sensor positioning, or relief displacement due to an inadequate DEM, the pixels representing a particular object or region on the ground will not coincide in the two images, and spurious change will be reported. How severely this affects the usefulness of the results depends on the magnitude of the georeferencing errors relative to the size of the objects or regions of interest.
- Atmospheric effects must also be accounted for in a change detection analysis. In a supervised classification, one may assume that the atmosphere affects the training sets in the same way it affects the rest of the image. Atmospheric effects are in this way normalized, and atmospheric correction of the individual images may not be required. If classification is based on an existing library of spectral signatures created from a reference image or in a laboratory, then all of the images (reference and target) must be atmospherically corrected to produce accurate results.
- Resolution of all types should be as similar as possible in the two images to be compared. It is most desirable to use two images acquired with the same sensor so that spatial, spectral, and radiometric resolution are the same. In terms of temporal resolution, it is desirable that the two scenes to be compared are obtained with the same sun angle (influenced both by time of day and date) to control shadows and incident light. If possible, it is also desirable that the two images be taken at the same time of year to eliminate differences due to the amount of foliage.
Ideally, two images being compared should meet the following criteria (Campbell, 2011):
- Acquired from the same sensor, or two sensors that have been rigorously inter-calibrated (such as two individual sensors from the same system of sensors)
- Acquired at the same time of day using the same field of view and look angle
- If of different years, acquired during the same season to minimize differences due to normal plant life cycles
- Co-registered to within two-tenths of a pixel or less
- Atmospherically corrected to surface reflectance
- Free of other differences that are not part of the signal of interest (e.g., soil moisture content could be a distraction or a relevant signal depending on the application; normal forest harvest could be confused with trees downed in a storm).
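One simple, widely used approximation of the atmospheric correction called for above is dark-object subtraction (DOS): the darkest pixels in a scene (deep shadow, clear deep water) are assumed to have near-zero reflectance, so any value they carry is attributed to atmospheric path radiance and subtracted from the entire band. A minimal NumPy sketch; the percentile parameter and the toy array are illustrative assumptions, not a prescribed workflow:

```python
import numpy as np

def dark_object_subtraction(band, percentile=0.01):
    """First-order atmospheric correction for a single band.

    Assumes the darkest pixels in the scene should be near zero, so
    the offset they carry is treated as atmospheric path radiance and
    subtracted from every pixel in the band.
    """
    dark_value = np.percentile(band, percentile)   # near-minimum value
    corrected = band.astype(np.float64) - dark_value
    return np.clip(corrected, 0, None)             # no negative values

# Toy band with a uniform atmospheric offset of 12 added to every pixel
band = np.array([[12, 40], [95, 12]], dtype=np.float64)
corrected = dark_object_subtraction(band)  # offset removed: [[0, 28], [83, 0]]
print(corrected)
```

DOS is only a first-order correction; full radiative-transfer methods model scattering and absorption more completely, but even this simple step puts two images on a more comparable footing before differencing.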
Visually comparing co-registered images from two dates is always the first place to start, even if the ultimate goal is to use an automated algorithm for classification or change detection (Campbell, 2011). Most image processing packages include tools to swipe one image over the other, flicker between images, and view images side-by-side. In some cases, heads-up digitizing may be used to identify and classify change; in other cases, visual inspection is used to help select the most appropriate automated change detection technique.
Post-Classification (Thematic) Change Detection
One method of change detection is to first create two independent thematic rasters using supervised classification and a common set of classes. Change detection is then a simple matter of comparing the before class and the after class of each pixel. For example, if the classification scheme consisted of three classes (grass, sand, and water), the change detection results would be expressed as follows:
- Was grass, is now grass (no change)
- Was grass, is now sand
- Was grass, is now water
- Was sand, is now grass
- Was sand, is now sand (no change)
- Was sand, is now water
- Was water, is now grass
- Was water, is now sand
- Was water, is now water (no change)
Three initial classes become nine change classes. With a larger number of input classes, the results quickly become complex to interpret visually; GIS tools can be used to simplify them for presentation to decision makers. Thematic, or post-classification, change detection results are typically of low accuracy because they are contingent on the accuracy of both input classifications: an error in either classified image appears as spurious change or masks real change (Campbell, 2011).
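The pairing of before and after classes described above can be sketched numerically: if each classified raster stores an integer class code per pixel, the nine change classes fall out of simple arithmetic. The class codes and the tiny example rasters below are assumptions for illustration:

```python
import numpy as np

# Hypothetical class codes for illustration: 0 = grass, 1 = sand, 2 = water
N_CLASSES = 3

before = np.array([[0, 0], [1, 2]])   # classified "before" raster
after  = np.array([[0, 2], [1, 0]])   # classified "after" raster

# Encode each (before, after) pair as a unique change class:
# change code = before * N_CLASSES + after, giving 3 x 3 = 9 classes,
# e.g. code 2 means "was grass, is now water".
change = before * N_CLASSES + after

# Pixels where before == after carry the "no change" codes (0, 4, 8 here).
no_change = before == after
print(change)     # change codes 0, 2, 4, 6 for the four pixels
print(no_change)  # True where the class did not change
```

The same encoding scales to any number of input classes; with `N_CLASSES` classes it yields `N_CLASSES ** 2` change classes, which is why the results grow complex so quickly.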
Pre-Classification Change Detection
It is also possible to simply subtract the value of a pixel in one image from the value at the same location in the second image. While conceptually and computationally this seems extremely simple and quick, direct differencing alone is unlikely to produce useful results. First, remotely sensed images are actually composites of a number of individual co-registered spectral bands. A typical multispectral digital camera has four bands (near-IR, red, green, and blue); the Landsat 7 ETM+ sensor has eight. Band differencing can only be applied to one band at a time; there is no way to difference based on the “color” one sees in the composite image. Atmospheric effects are contained in each of the spectral bands, so for band differencing to show anything other than changes in the atmosphere from one image to the next, both images should be atmospherically corrected.
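A single-band differencing sketch in NumPy, assuming the two bands are co-registered and already atmospherically corrected; the arrays and the change threshold are illustrative assumptions, not values from the text:

```python
import numpy as np

# Two co-registered single-band images (e.g., the red band from each date);
# both are assumed to be atmospherically corrected already.
band_t1 = np.array([[50, 60], [70, 80]], dtype=np.float64)
band_t2 = np.array([[52, 90], [40, 81]], dtype=np.float64)

# Signed per-pixel change in this one band only
diff = band_t2 - band_t1

# Flag pixels whose change magnitude exceeds a threshold. The value
# here is arbitrary; in practice it is tuned per scene and application.
THRESHOLD = 20
changed = np.abs(diff) > THRESHOLD

print(diff)     # small residuals (2, 1) vs. large changes (30, -30)
print(changed)  # True only where |diff| exceeds the threshold
```

Repeating this per band, one band at a time, is exactly the limitation noted above: the composite "color" change is never differenced directly, and any uncorrected atmospheric offset between dates shows up in `diff` alongside real surface change.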