GEOG 480
Exploring Imagery and Elevation Data in GIS Applications

Lesson 7 Introduction


Image analysis is a broad term encompassing a wide variety of human and computer-based methods used to extract useful information from images. Three major categories of image analysis are relevant to this discussion. Image interpretation is performed by a human, using technology to facilitate speedy viewing and recording of observations. Digital image classification techniques are used to delineate areas within an image based on a classification scheme relevant to the application. Change detection applies to two or more images acquired over the same location; change detection results may also be classified in terms of the type of change and quantified in terms of the amount or degree of change.
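
To make the change-detection idea concrete, the sketch below is a minimal illustration of pixel-wise image differencing: it assumes two co-registered, radiometrically comparable single-band arrays and flags pixels whose difference exceeds a chosen threshold. NumPy, the function name difference_change_map, and the threshold value are illustrative assumptions, not part of the course materials; operational change detection must also address registration error, radiometric normalization, and labeling the type of change.

```python
import numpy as np

def difference_change_map(before, after, threshold=2.0):
    """Flag pixels whose standardized difference exceeds `threshold`.

    `before` and `after` are assumed to be co-registered, radiometrically
    comparable single-band arrays of the same shape.
    """
    diff = after.astype(np.float64) - before.astype(np.float64)
    z = (diff - diff.mean()) / diff.std()   # standardized differences

    change = np.zeros(diff.shape, dtype=np.int8)
    change[z > threshold] = 1               # significant brightness increase
    change[z < -threshold] = -1             # significant brightness decrease
    return diff, change

# Tiny synthetic example: one pixel changes between the two dates
rng = np.random.default_rng(0)
before = rng.integers(50, 200, size=(4, 4))
after = before.copy()
after[1, 2] += 120                          # simulate a changed pixel
diff, change = difference_change_map(before, after)
print(change)                               # nonzero only at the changed pixel
```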

Lesson 1 introduced the physical basis for spectral analysis of remotely sensed imagery. In the lessons since, we departed from that fundamental background to cover topics central to mapping and map products. In Lesson 7, we return to those fundamental physical concepts for a deeper look at state-of-the-art sensors, platforms, and methods for extracting information content from multispectral data.

Until very recently, spaceborne sensors produced the majority of multispectral data. Federal agencies distribute some of their data at no cost. Commercial data providers (SPOT, GeoEye, DigitalGlobe, and others) license imagery to end users for a fee, with limits on further distribution. Airborne multispectral data, from sensors such as the Leica ADS-40, the Intergraph DMC, and the Vexcel UltraCam, is becoming available in the public domain through the US Department of Agriculture (USDA) National Agriculture Imagery Program (NAIP).

Multispectral satellite imagery is often georeferenced but not orthorectified; further processing is required before the data can be accurately overlaid with vector layers in a GIS. The imagery you will be using in the Lesson 7 activity has already been rigorously orthorectified; if it were not, you have the tools and skills from earlier lessons to orthorectify it yourself, using ArcMap and Geomatica.

Lesson Objectives

At the end of this lesson, you will be able to:

  • describe and perform image enhancement techniques to improve interpretability of imagery;
  • describe common image interpretation tasks;
  • describe eight elements of image interpretation;
  • perform simple supervised and unsupervised classification using automated methods (a minimal sketch of the unsupervised case follows this list).
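
As a preview of the unsupervised case named in the last objective, the sketch below clusters pixel spectra with k-means, one common automated approach (the ISODATA algorithm used in many remote-sensing packages is a close relative). NumPy, scikit-learn, and the function name unsupervised_classify are illustrative assumptions rather than the specific tools used in the Lesson 7 activity.

```python
import numpy as np
from sklearn.cluster import KMeans

def unsupervised_classify(bands, n_classes=5, random_state=0):
    """Cluster a multispectral image into `n_classes` spectral classes.

    `bands` is assumed to be an array of shape (rows, cols, n_bands),
    e.g. stacked red, green, blue, and near-infrared bands.
    """
    rows, cols, n_bands = bands.shape
    pixels = bands.reshape(-1, n_bands).astype(np.float64)

    kmeans = KMeans(n_clusters=n_classes, n_init=10, random_state=random_state)
    labels = kmeans.fit_predict(pixels)     # one spectral class label per pixel

    return labels.reshape(rows, cols)       # class map with the image footprint

# Tiny synthetic 3-band example
rng = np.random.default_rng(1)
image = rng.random((20, 20, 3))
class_map = unsupervised_classify(image, n_classes=3)
print(np.unique(class_map, return_counts=True))
```

The result assigns each pixel a spectral class number; attaching meaning to those classes (water, forest, urban, and so on) still requires the kind of interpretive judgment discussed at the start of this lesson.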

Questions?

If you have any questions now or at any point during this week, please feel free to post them to the Lesson 7 Questions and Comments Discussion Forum in Canvas.