GEOG 480
Exploring Imagery and Elevation Data in GIS Applications



Defining characteristics of remote sensing instruments, platforms, and data were discussed in Lessons 1 and 2. Any remotely-sensed image or dataset can be defined in these terms and evaluated against the end-user application requirements to determine potential suitability. A raw scene can be produced in real-time or near-real-time from most digital sensors, and can be distributed as fast as the technology infrastructure allows. Simple visual interpretation can be quite useful for general situational awareness and decision-making. Most of us saw daily satellite images over the New Orleans Superdome after Hurricane Katrina and can appreciate the positive impact of these lightly-processed datasets.

Additional preparation and processing are often required for more complex analysis. If the end-user application requires the overlay of multiple remotely sensed images or detailed GIS data, such as road centerlines and property boundaries, georeferencing must be performed. If spectral information is to be used to classify pixels or areas in the image based on their content, then the effects of the atmosphere must be accounted for. To detect change between multiple images, both georeferencing and atmospheric correction of all individual images may be required.

Georeferencing: The degree of accuracy and rigor required for the georeferencing depends on the desired accuracy of the result. More error can be tolerated in an image backdrop intended for visual interpretation, where a human interpreter can use judgment to work around some geographic misalignments. If the intent is to use automated processing to intersect, combine, or subtract one data layer from the other using mathematical algorithms, then the spatial overlay must be much more accurate in order to produce meaningful results. Higher accuracy is achieved only with better ground control, accurate elevation data, and thorough quality assurance. Most remotely-sensed data is delivered with some level of georeferencing information, which locates the image in a ground coordinate system. There are generally three levels of georeferencing, each corresponding to a different geometric accuracy. 
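To make the idea of locating an image in a ground coordinate system concrete, here is a minimal sketch of the six-parameter affine geotransform convention (the one used by GDAL) that maps a pixel's column and row to ground coordinates. The origin, pixel size, and coordinate values below are illustrative, not real scene metadata.

```python
# Sketch: applying an affine geotransform to place pixels on the ground.
# gt = (x_origin, pixel_width, x_rotation, y_origin, y_rotation, pixel_height)
# follows the GDAL GeoTransform convention; values here are hypothetical.

def pixel_to_ground(col, row, gt):
    """Map a pixel (col, row) to ground (x, y) with an affine geotransform."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Hypothetical north-up scene: 30 m pixels, upper-left corner at
# (500000 E, 4600000 N) in a projected coordinate system.
gt = (500000.0, 30.0, 0.0, 4600000.0, 0.0, -30.0)

x, y = pixel_to_ground(100, 50, gt)
print(x, y)  # → 503000.0 4598500.0
```

Note that a single affine transform like this cannot model terrain-induced distortions, which is exactly why the higher processing levels described below bring in a DEM and ground control.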

  • Level 1: uses positioning information obtained directly from the sensor and platform to roughly geolocate the remotely-sensed scene on the ground. This level of georeferencing is sufficient to provide geographic context and support visual interpretation of the data. It is often not accurate enough to support robust image or GIS analysis that requires combining the remotely-sensed dataset with other layers.
  • Level 2: uses a Digital Elevation Model (DEM) to remove relief displacement caused by variation in the height of the terrain. This improves the relative spatial accuracy of the data; distances measured between points within the geo-corrected image will be more accurate, particularly in scenes containing significant elevation changes. The DEM is usually obtained from another source, and the spatial accuracy of the Level 2 image will depend on the accuracy of the DEM.
  • Level 3: uses a DEM and ground control points to georeference the image on the ground most accurately. In addition to the DEM, ground control points must be obtained from another source, and the accuracy of the Level 3 image will depend on the accuracy of the ground control points. Level 3 processing is usually required in order to provide the most accurate overlays of remotely-sensed data sets and other relevant GIS data.
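The relief displacement that Level 2 processing removes can be illustrated with the classic formula for a vertical image: a point at height h above the datum is displaced radially from nadir by d = r·h/H, where r is the point's radial distance in the image and H is the platform's flying height above the datum. This is a simplified sketch with illustrative numbers, not a production orthorectification routine.

```python
# Sketch: relief displacement for a vertical image, d = r * h / H.
# All values below are hypothetical and chosen for easy arithmetic.

def relief_displacement(r, h, flying_height):
    """Radial displacement (same units as r) of a point at height h
    above the datum, for a platform at flying_height above the datum."""
    return r * h / flying_height

# Hypothetical case: a point 80 mm from nadir in the image, terrain
# 500 m above the datum, platform 5000 m above the datum.
d = relief_displacement(80.0, 500.0, 5000.0)
print(d)  # → 8.0 (mm of radial displacement)
```

Because the displacement grows with terrain height, a DEM is needed to compute and remove it pixel by pixel, and errors in the DEM propagate directly into the corrected image.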

Most satellite imagery is distributed with Level 1 georeferencing, which is often sufficient for making quick visual assessments of conditions on the ground. Additional processing to Level 2 or 3 (involving additional time and expense) is usually needed to analytically compare multiple scenes over the same location or precisely overlay other types of geographic information, such as property boundaries or building footprints. End-users may choose to perform this additional processing themselves, if they have the requisite control materials, expertise, and time.

Atmospheric Correction: If the end-user application intends to make use of spectral information contained in the image pixels to identify and separate different types of material or surfaces based on sample spectral libraries, then contributions to those pixel values made by the atmosphere must be removed. Atmospheric correction is a complex process utilizing control measurements, information about the atmospheric content, and assumptions about the uniformity of the atmosphere across the project area. The process is automated, but requires sophisticated software, highly skilled technicians, and again, time. Furthermore, atmospheric correction parameters used on one dataset cannot be summarily applied to a dataset collected on another day.
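One of the simplest image-based approaches to this problem, dark-object subtraction (DOS), illustrates the general idea: it assumes the darkest pixels in a band should have near-zero reflectance, attributes any residual signal there to atmospheric scattering, and subtracts that value from every pixel. The digital numbers below are hypothetical, and real correction workflows use far more rigorous radiative transfer models.

```python
# Sketch: dark-object subtraction, a simple image-based atmospheric
# correction. The band values are hypothetical digital numbers.

def dark_object_subtract(band):
    """Subtract the band's minimum ('dark object') value from every
    pixel, clamping at zero, as a first-order haze removal."""
    dark = min(band)
    return [max(value - dark, 0) for value in band]

# The darkest pixel (12) is taken as the atmospheric path-radiance
# contribution common to all pixels in the band.
band = [12, 45, 87, 30, 12, 150]
print(dark_object_subtract(band))  # → [0, 33, 75, 18, 0, 138]
```

The per-scene dark value is exactly the kind of parameter that cannot be reused across acquisition dates: atmospheric conditions change, so the correction must be re-derived for each dataset.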