L3.04: Remote Sensing


The collection, processing, and exploration (analysis) of imagery in GEOINT is more complex than can be presented completely in this brief summary. That said, the goal of this section is only to provide an overview of a few of the critical factors to consider when choosing an appropriate remote sensing platform and sensor. Much of the following content related to remote sensing was developed for GEOG 480 - Exploring Imagery and Elevation Data in GIS Applications by Penn State's College of Earth and Mineral Sciences. It is licensed for use under CC BY 3.0.

Remote Sensing: A Cornerstone of GEOINT

Remote sensing is a cornerstone of GEOINT. Remote sensing—a major driver of Geographic Information Science (GIScience) and Geographic Information Technology (GIT)—is the acquisition of information about an object or phenomenon without making physical contact with it. Remote sensing stands in contrast to direct onsite observation and makes possible data collection over large, dangerous, and inaccessible areas. As such, it is an essential tool in intelligence work. In GEOINT, remote sensing historically referred to the use of satellite and airborne sensors to detect and classify objects related to human activity on Earth. However, this definition has been expanding to include newer types of collection systems (LiDAR, RADAR) as well as terrestrial collection systems.

The history of remote sensing as a governmental activity, a commercial industry, and an academic field provides a perspective on the development of the technology and its emergence as a key technology in GEOINT. Accounts of remote sensing history generally begin in the 1800s, following the development of photography. Many of the early advancements of remote sensing can be tied to military applications, which continue to drive most remote sensing technology development even today. However, after World War II, the use of remote sensing in science and infrastructure extended its reach into many areas of academia and civilian life. The recent revolution in geospatial technology and applications, linked to the explosion of computing and internet access, brings remote sensing technology and applications into the everyday lives of most people on the planet. It is important to understand the motivation behind technology development and to see how technology contributes to the broader societal, political, and economic framework of geospatial systems, science, and intelligence, whether the application is military, business, social, or environmental intelligence.

Remote sensing in GEOINT is evolving to include multiple layers of integrated sensing systems—some of which are not the traditional systems found on satellite or aircraft platforms. The layers are characterized by an integration of sensors, infrastructure, and exploitation capabilities. Such layered sensing provides decision makers with redundant, timely, trusted, and relevant information. Layered sensing might be divided along the vertical dimension:

  • Spaceborne layer. Sensors operated by large national organizations. Although the number of these large organizations is small, they collect a vast amount of sensor data. An example of this might be imagery produced by a governmental entity such as the United States National Reconnaissance Office or a commercial entity like DigitalGlobe.
  • Airborne layer. This layer includes a moderate number of sensors, perhaps focused on a region, and information tailored to a specific need. Examples are a fleet of Unmanned Aerial Vehicles (UAVs) gathering full motion video or aircraft flying missions with a suite of sensors.
  • Terrestrial layer. This layer includes a large number of sensors focused on a small area or the individual. These sensors need not be optical, that is, based on visible light. An example sensor in this layer could be an individual’s cell phone. In contrast to the other two layers, which consist of a small number of high-quality (and expensive) sensors, this layer is built from many inexpensive ones. Trends in personal computing devices and consumer electronics have made possible dense sensor networks at no direct cost to the collecting organization.

An underwater layer can be added to this multi-dimensional view. While underwater sensors are different, their applications are similar to those of terrestrial sensors. Industrial applications relate to oil and mineral extraction, pipelines, and commercial fisheries. Military and homeland security applications include autonomous vehicles, securing port facilities, and de-mining.

Role of the Human

Remote sensing and the exploitation of remotely sensed data are human activities aided by technology. Misinterpretation and ill-informed decision making can easily occur if the individuals involved do not understand the operating principles of the remote sensing system used to create the data, which is in turn used to derive information. Humans select the remote sensing system to collect the data, specify the various resolutions of the remote sensor data, calibrate the sensor, select the platform that will deliver the sensor, determine when the data will be collected, and specify how the data are processed.

In GEOINT, image analysis is the act of examining images for the purpose of identifying objects and judging their significance. John Jensen (2007) describes factors that distinguish a superior image analyst. He says, "It is a fact that some image analysts are superior to other image analysts because they: 1) understand the scientific principles better, 2) are more widely traveled and have seen many landscape objects and geographic areas, and/or 3) they can synthesize scientific principles and real-world knowledge to reach logical and correct conclusions."

Key Definitions

  • Image analysis: Image analysis is a broad term encompassing a wide variety of human and computer-based methods used to extract useful information from images. In GEOINT, image analysis emphasizes understanding the relationship between the extracted information and the human activity it reflects.
  • Image interpretation: Image interpretation is the extraction of qualitative and quantitative information about the shape, location, structure, function, quality, condition, and relationships of and between objects by using knowledge and experience. Generally, image interpretation requires some ground investigation to check the accuracy of the data. Image interpretation in satellite remote sensing is often made using a single scene of a satellite image, while photo-interpretation usually uses a pair of stereoscopic aerial photographs to provide stereoscopic vision.
  • Image mensuration: Image mensuration is the extraction of physical quantities, such as length, location, height, density, temperature and so on, by using reference data or calibration data deductively or inductively.
  • Image reading: Image reading is a basic form of image interpretation. It is the identification of objects using such elements as shape, size, pattern, tone, texture, color, shadow and other associated relationships.
  • Sensor Footprint: A remote sensing system comprises two basic components: a sensor and a platform. The sensor is the instrument used to record data; the platform is the vehicle used to deploy the sensor. Every sensor is designed with a unique field of view, which defines the size of the area instantaneously imaged on the ground. The sensor field of view combined with the height of the sensor platform above the ground determines the sensor footprint (see the sketch following this list). A sensor with a very wide field of view on a high-altitude platform may have an instantaneous footprint of hundreds of square kilometers; a sensor with a narrow field of view at a lower altitude may have an instantaneous footprint of tens of square kilometers.
  • Resolution: Resolution, as a general term, refers to the degree of fineness with which an image can be produced and the degree of detail that can be discerned. In remote sensing, there are four types of resolution:
    • Spatial resolution is a measure of the finest detail distinguishable in an image. Spatial resolution depends on the sensor design and is often inversely related to the size of the image footprint. Sensors with very large footprints tend to have low spatial resolution, and sensors with very high spatial resolution tend to have small footprints. Spatial resolution will determine whether individual houses can be distinguished in a scene and to what degree detailed features of a house, or damage to it, can be seen. Across imaging satellites, spatial resolution varies from tens of kilometers per pixel to sub-meter. Spatial resolution is closely tied to Ground Sample Distance (GSD), which is the nominal dimension of a single side of a square pixel in ground units. The grid cells in high resolution data, such as DigitalGlobe's WorldView-3, correspond to ground areas of approximately 31 centimeters on a side. Remotely sensed data whose grid cells range from roughly 15 to 80 meters on a side, such as those produced by the Landsat sensors, are considered medium resolution. The cells in low resolution data, such as those produced by NOAA's AVHRR sensor, are measured in kilometers.
      Figure 3.2: Spatial resolution is a measure of the coarseness or fineness of a raster grid.
    • Temporal resolution refers to the frequency at which data are captured for a specific place on the Earth. The more frequently data are captured by a particular sensor, the better or finer is the temporal resolution of that sensor. Temporal resolution is often quoted as a “revisit time” or “repeat cycle.” Temporal resolution is relevant when using imagery or elevation datasets captured successively over time to detect changes to the landscape.
    • Spectral resolution describes the way an optical sensor responds to various wavelengths of light. High spectral resolution means that the sensor distinguishes between very narrow bands of wavelength; a “hyperspectral” sensor can discern and distinguish among many shades of a color, recording hundreds of narrow, contiguous bands across the infrared, visible, and ultraviolet wavelengths. Low spectral resolution means the sensor records the energy in a wide band of wavelengths as a single measurement; the most common “multispectral” sensors divide the electromagnetic spectrum from infrared to visible wavelengths into four generalized bands: infrared, red, green, and blue. The way a particular object or surface reflects incoming light can be characterized as a spectral signature and can be used to classify objects or surfaces within a remotely sensed scene. Panchromatic film, for example, is sensitive to a broad range of wavelengths: an object that reflects a lot of energy in the green portion of the visible band would be indistinguishable in a panchromatic photo from an object that reflected the same amount of energy in the red band. A sensing system with higher spectral resolution would make it easier to tell the two objects apart.
      Figure 3.3: Spectral resolution. The area under the curve represents the magnitude of electromagnetic energy emitted by the sun at various wavelengths. Low resolution sensors record energy within relatively wide wavelength bands (represented by the lighter and thicker purple band). High-resolution sensors record energy within narrow bands (represented by the darker and thinner band).
    • Radiometric resolution refers to the ability of a sensor to detect differences in energy magnitude. Sensors with low radiometric resolution are able to detect only relatively large differences in the amount of energy received; sensors with high radiometric resolution are able to detect relatively small differences. The greater the bit depth (number of data bits per pixel) of the images that a sensor records, the higher its radiometric resolution. The AVHRR sensor, for example, stores 10 bits per pixel (1,024 levels), as opposed to the 8 bits (256 levels) that the Landsat sensors record; the sketch following this list works out these level counts. Thus, although its spatial resolution is very coarse (~4 km), the Advanced Very High Resolution Radiometer takes its name from its high radiometric resolution.
    Figure 3.4: Radiometric resolution. The area under the curve represents the magnitude of electromagnetic energy emitted by the sun at various wavelengths. Sensors with low radiometric resolution are able to detect only relatively large differences in energy magnitude (as represented by the lighter and thicker purple band). Sensors with high radiometric resolution are able to detect relatively small differences (represented by the darker and thinner band).
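To make the footprint and radiometric definitions concrete, here is a minimal Python sketch. It computes a ground footprint width from a field of view and platform altitude (using a flat-earth approximation), and the number of energy levels implied by a sensor's bit depth. The field-of-view and altitude values are illustrative stand-ins, roughly Landsat-like, not the specifications of any particular system.

```python
import math

# Illustrative values only; real sensors publish these in their specifications.
field_of_view_deg = 15.0   # full angular field of view of the sensor
altitude_m = 705_000       # platform height above the ground

# Footprint width: the half-angle tangent, doubled, scaled by altitude
# (a flat-earth approximation that ignores Earth curvature and pointing).
swath_m = 2 * altitude_m * math.tan(math.radians(field_of_view_deg / 2))
print(f"Instantaneous footprint width: {swath_m / 1000:.0f} km")  # ~186 km

# Radiometric resolution: bit depth sets the number of distinguishable levels.
for name, bits in [("AVHRR", 10), ("Landsat TM", 8)]:
    print(f"{name}: {bits}-bit data -> {2 ** bits} levels per pixel")
```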

Basic Concepts of Information Extraction

Information extraction in remote sensing can be divided into general activities that use spectral, spatial, and temporal information. These are:

  • Identification: specifying the identity of an object with enough confidence to assign it to a very specific class.
  • Classification: assigning objects, features, or areas to classes.
  • Detection: determining the presence or absence of a feature.
  • Recognition: assigning an object or feature to a general class or category.
  • Enumeration: listing or counting discrete items visible on an image.
  • Mensuration: measurement of objects and features in terms of distance, height, volume, or area.
  • Delineation: drawing boundaries around distinct regions of the image, characterized by specific tones or textures.

Such information extraction can be carried out by human or computer methods. The two typically complement each other and often produce better results when used together. For example, in disaster management, a computer may detect damage from a hurricane, which a human analyst then recognizes as a specific type of damage to the property. A combination of human and computer techniques is frequently used because human interpretation, while necessary for accuracy, is time consuming and expensive.

From a technology perspective, the simplest way to extract information from remotely sensed data is human interpretation. However, significant training and experience are needed to produce a skilled image interpreter. A well-trained image analyst uses many of these elements without really thinking about them. The beginner may not only have to force himself or herself to consciously evaluate an unknown object with respect to these elements, but also analyze its significance in relation to the other objects or phenomena in the photo or image. Eight elements of image interpretation employed by human image interpreters are:

  • Image tone: the lightness or darkness of a region within an image.
  • Image texture: the apparent roughness or smoothness of a region within an image.
  • Shadow: may reveal information about the size and shape of an object that cannot be discerned from an overhead view alone.
  • Pattern: the arrangement of individual objects in distinctive recurring patterns, such as buildings in an industrial complex or fruit trees in an orchard.
  • Association: the occurrence of one type of object may imply the presence of another commonly associated object nearby.
  • Shape: man-made and natural features often have shapes so distinctive that this characteristic alone provides clear identification.
  • Size: the relative size of an object related to other familiar objects gives the interpreter a sense of scale, which can aid in the recognition of objects less easily recognized.
  • Site: refers to topographic position. For example, certain crops are commonly grown on hillsides or near large water bodies.

The results of image interpretation are most often delivered as a set of attributed points, lines, and/or polygons in any one of a variety of CAD or GIS data formats. The classification scheme or interpretation criteria must be agreed upon with the end user before the analysis begins.

Electromagnetic Radiation

Most remote sensing instruments measure the same thing—electromagnetic radiation. Electromagnetic radiation is a form of energy emitted by all matter above absolute zero temperature (0 Kelvin or -273° Celsius). X-rays, ultraviolet rays, visible light, infrared light, heat, microwaves, and radio and television waves are all examples of electromagnetic energy. If you have studied an engineering or physical science discipline, much of this may be familiar to you. Electromagnetic energy is described in terms of:

  • wavelength (the distance between successive wave crests),
  • frequency (the number of wave crests passing a fixed point in a given period of time), and
  • amplitude (the height of each wave peak).

Frequency and wavelength are inversely related. This is important because some fields choose to represent system performance using wavelength, and some using frequency.
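A short Python sketch makes the inverse relationship concrete; the conversion below reproduces the 550 nm green row of the color table that follows.

```python
C = 2.998e8  # speed of light in meters per second

def wavelength_to_frequency(wavelength_m: float) -> float:
    """Frequency and wavelength are tied together by c = wavelength * frequency."""
    return C / wavelength_m

# Green light at 550 nm (0.55 micrometers):
freq = wavelength_to_frequency(550e-9)
print(f"{freq / 1e14:.2f} x 10^14 Hz")  # 5.45, matching Table 3.2
```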

The visible and infrared portions of the electromagnetic spectrum are the most important for the types of remote sensing described in this section. Table 3.2 illustrates the relationship between named colors and wavelength/frequency bands.

Table 3.2: Methods of Defining the Color Spectrum. (Source: Jensen 2007)
  Color             Angstrom (Å)   Nanometer (nm)   Micrometer (µm)   Frequency (Hz × 10^14)
  Ultraviolet, sw    2,537          254              0.254             11.82
  Ultraviolet, lw    3,660          366              0.366             8.19
  Violet (limit)     4,000          400              0.40              7.50
  Blue               4,500          450              0.45              6.66
  Green              5,000          500              0.50              6.00
  Green              5,500          550              0.55              5.45
  Yellow             5,800          580              0.58              5.17
  Orange             6,000          600              0.60              5.00
  Red                6,500          650              0.65              4.62
  Red (limit)        7,000          700              0.70              4.29
  Infrared, near     10,000         1,000            1.0               3.00
  Infrared, far      300,000        30,000           30.00             0.10

Understanding the interactions of electromagnetic energy with the atmosphere and the Earth's surface is critical to the interpretation and analysis of remotely sensed imagery. Radiation is scattered, refracted, and absorbed by the atmosphere, and these effects must be accounted for and corrected in order to determine what is happening at the ground. The Earth's surface can reflect, absorb, transmit, and emit electromagnetic energy, and in fact is doing all of these at the same time, in varying fractions across the entire spectrum, as a function of wavelength. The spectral signature that is recorded for each pixel in a remotely sensed image is unique based on the characteristics of the target surface and the effects of the intervening atmosphere. In remote sensing analysis, similarities and differences among the spectral signatures of individual pixels are used to establish a set of more general classes that describe the landscape or help identify objects of particular interest in a scene.
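As a minimal illustration of classification by spectral signature, the sketch below assigns a pixel to the class whose reference signature is nearest in spectral space. The signatures and band values here are hypothetical; operational classifiers draw on measured spectral libraries or training samples and use more sophisticated statistics.

```python
import numpy as np

# Hypothetical reflectance signatures in four bands: blue, green, red, near-IR.
signatures = {
    "water":      np.array([0.08, 0.06, 0.04, 0.01]),
    "vegetation": np.array([0.04, 0.08, 0.05, 0.50]),
    "bare soil":  np.array([0.15, 0.20, 0.25, 0.30]),
}

def classify(pixel: np.ndarray) -> str:
    """Assign the pixel to the class with the nearest signature (Euclidean distance)."""
    return min(signatures, key=lambda name: np.linalg.norm(pixel - signatures[name]))

print(classify(np.array([0.05, 0.09, 0.06, 0.45])))  # -> vegetation
```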

Figure 3.5: A portion of the electromagnetic spectrum, ranging from wavelengths of 0.1 micrometer (a micrometer is one millionth of a meter) to one meter, within which most remote sensing systems operate.
Source: Adapted from Lillesand & Kiefer, 1994

The graph above shows the relative amounts of electromagnetic energy emitted by the sun and the Earth across the range of wavelengths called the electromagnetic spectrum. Values along the horizontal axis of the graph range from very short wavelengths (one ten-millionth of a meter) to long wavelengths (meters). Note that the horizontal axis is logarithmically scaled, so that each increment represents a ten-fold increase in wavelength. The axis has been interrupted three times at the long wave end of the scale to make the diagram compact enough to fit on your screen. The vertical axis of the graph represents the magnitude of radiation emitted at each wavelength.

Hotter objects radiate more electromagnetic energy than cooler objects. Hotter objects also radiate energy at shorter wavelengths than cooler objects. Thus, as the graph shows, the sun emits more energy than the Earth, and the sun's radiation peaks at shorter wavelengths. The portion of the electromagnetic spectrum at the peak of the Sun's radiation is called the visible band because the human visual perception system is sensitive to those wavelengths. Human vision is a powerful means of sensing electromagnetic energy within the visual band. Remote sensing technologies extend our ability to sense electromagnetic energy beyond the visible band, allowing us to see the Earth's surface in new ways, which, in turn, reveals patterns that are normally invisible.

Figure 3.6: The electromagnetic spectrum divided into five wavelength bands
Source: Adapted from Lillesand & Kiefer, 1994

The graph above names several regions of the electromagnetic spectrum. Remote sensing systems have been developed to measure reflected or emitted energy at various wavelengths for different purposes. This section highlights systems designed to record radiation in the bands commonly used for land use and land cover mapping: the visible, infrared, and microwave bands.

Figure 3.7: The transmissivity of the atmosphere across a range of wavelengths. Black areas indicate wavelengths at which the atmosphere is partially or wholly opaque.
Source: Adapted from Lillesand & Kiefer, 1994

At certain wavelengths, the atmosphere poses an obstacle to satellite remote sensing by absorbing electromagnetic energy. Sensing systems are therefore designed to measure wavelengths within the windows where the transmissivity of the atmosphere is greatest.

Platforms

Remote sensing can be done from space (using satellite platforms), from the air (using aircraft platforms), and from the ground (using static and vehicle-based systems). The same type of sensor, such as a multispectral digital frame camera, may be deployed on all three types of platforms for different applications. Each type of platform has unique advantages and disadvantages in terms of spatial coverage, access, and flexibility.

Since the launch of the first remote sensing satellites, satellite-based mapping has grown steadily. Interestingly enough, even as more satellites are launched, the demand for data acquired from airborne platforms continues to grow. The historic and growth trends for both airborne and spaceborne remote sensing are well-documented in the ASPRS Ten-Year Industry Forecast. The well-versed geospatial intelligence professional should be able to discuss the advantages and disadvantages of each type of platform. He/she should also be able to recommend the appropriate data acquisition platform for a particular application and problem set. While the number of satellite platforms is quite low compared to the number of airborne platforms, the optical capabilities of satellite imaging sensors are approaching those of airborne digital cameras. However, there will always be important differences, strictly related to characteristics of the platform, in the effectiveness of satellites and aircraft for acquiring remote sensing data.

Spaceborne

Since the 1967 inception of the Earth Resources Technology Satellite (ERTS) program (later renamed Landsat), mid-resolution spaceborne sensors have provided the vast majority of multispectral datasets to image analysts studying land use/land cover change, vegetation and agricultural production trends and cycles, water and environmental quality, soils, geology, and other earth resource and science problems. Landsat has been one of the most important sources of mid-resolution multispectral data globally.

The French SPOT satellites have been another important source of high-quality, mid-resolution multispectral data. The imagery is sold commercially and is significantly more expensive than Landsat. SPOT can also collect stereo pairs; images in the pair are captured on successive days by the same satellite viewing off-nadir. Collection of stereo pairs requires special control of the satellite; therefore, the availability of stereo imagery is limited. Both traditional photogrammetric terrain extraction techniques and automatic correlation can be used to create topographic data in inaccessible areas of the world, especially where a digital surface model may be an acceptable alternative to a bare-earth elevation model.

DigitalGlobe, a commercial company, collects high-resolution multispectral imagery, which is sold commercially to users throughout the world. US Department of Defense users and partners have access to these datasets through commercial procurement contracts; therefore, these satellites are quickly becoming a critical source of multispectral imagery for the geospatial intelligence community. Bear in mind that the trade-off for high spatial resolution is limited geographic coverage. For vast areas, it is difficult to obtain seamless, cloud-free, high-resolution multispectral imagery within a single season or at the particular moment of the phenological cycle of interest to the researcher.

One obvious advantage satellites have over aircraft is global accessibility; there are numerous governmental restrictions that deny access to airspace over sensitive areas or over foreign countries. Satellite orbits are not subject to these restrictions, although there may well be legal agreements to limit distribution of imagery over particular areas.

The design of a sensor destined for a satellite platform begins many years before launch and cannot be easily changed to reflect advances in technology that may evolve during the interim period. While all systems are rigorously tested before launch, there is always the possibility that one or more will fail after the spacecraft reaches orbit. The sensor could be working perfectly, but a component of the spacecraft bus (attitude determination system, power subsystem, temperature control system, or communications system) could fail, rendering a very expensive sensor effectively useless. The financial risk involved in building and operating a satellite sensor and platform is considerable, presenting a significant obstacle to the commercialization of space-based remote sensing.

Figure 3.8: Artist's rendition of the GeoEye-1 high-resolution commercial imaging satellite in orbit.
Source: GeoEye.

Satellites are placed at various heights and orbits to achieve desired coverage of the Earth's surface. When the orbital period exactly matches that of the Earth's rotation and the orbit lies above the equator, the satellite stays above the same point at all times, in a geostationary orbit. This is useful for communications and weather monitoring satellites. Satellite platforms for electro-optical (E/O) imaging systems are usually placed in a sun-synchronous, low-earth orbit (LEO) so that images of a given place are always acquired at the same local time (Figure 3.9). The revisit time for a particular location is a function of the individual platform and sensor, but generally it is on the order of several days to several weeks. While orbits are optimized for time of day, the satellite track may not always coincide with cloud-free conditions or the specific vegetation conditions of interest to the end-user of the imagery. Therefore, it is not a given that usable imagery will be collected on every sensor pass over a given site.
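The geostationary case can be worked out directly from Kepler's third law; the sketch below solves for the orbital radius whose period matches one rotation of the Earth.

```python
import math

GM = 3.986004e14         # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY_S = 86_164  # one full rotation of the Earth, in seconds
EQ_RADIUS_M = 6.378e6    # Earth's equatorial radius, in meters

# Kepler's third law: T = 2*pi*sqrt(a^3 / GM). Solve for the orbital
# radius a whose period T equals one sidereal day.
a = (GM * (SIDEREAL_DAY_S / (2 * math.pi)) ** 2) ** (1 / 3)
print(f"Geostationary altitude: {(a - EQ_RADIUS_M) / 1000:,.0f} km")  # ~35,786 km
```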

Figure 3.9: Satellite orbits: definition of terms (left), sun-synchronous orbit (right).
Source: Campbell, 2007.

Airborne

Aircraft often have a definite advantage because of their flexibility. They can be deployed wherever and whenever weather conditions are favorable. Clouds often appear and dissipate over a target over a period of several hours during a given day. Aircraft on site can respond at a moment's notice to take advantage of clear conditions, while satellites are locked into a schedule dictated by orbital parameters. Aircraft can also be deployed in small or large numbers, making it possible to collect imagery seamlessly over an entire county or state in a matter of days or weeks simply by having many planes in the air at the same time.

Aircraft platforms range from the very small, slow, and low flying (Figure 3.10) to twin-engine turboprops and small jets capable of flying at altitudes up to 35,000 feet. Unmanned platforms (UAVs) are becoming increasingly important, particularly in military and emergency response applications, both international and domestic. Flying height, airspeed, and range are critical factors in choosing an appropriate remote sensing platform. Modifications to the fuselage and power system to accommodate a remote sensing instrument and data storage system are often far more expensive than the cost of the aircraft itself. While the planes themselves are fairly common, choosing the right aircraft to invest in requires a firm understanding of the applications for which that aircraft is likely to be used over its lifetime.

Figure 3.10: Helio Courier.
Source: EarthData Fugro
Figure 3.11: Cessna Conquest.
Source: EarthData Fugro

The scale and footprint of an aerial image is determined by the distance of the sensor from the ground; this distance is commonly referred to as the altitude above the mean terrain (AMT). The operating ceiling for an aircraft is defined in terms of altitude above mean sea level. It is important to remember this distinction when planning for a project in mountainous terrain. For example, the National Aerial Photography Program (NAPP) and the National Agricultural Imagery Program (NAIP) both call for imagery to be acquired from 20,000 feet AMT. In the western United States, this often requires flying much higher than 20,000 feet above mean sea level. A pressurized platform such as the Cessna Conquest (Figure 3.11) would be suitable for meeting these requirements.

With airborne systems, the flying height is determined on a project-by-project basis depending on the requirements for spatial resolution, GSD, and accuracy. The altitude of a satellite platform is fixed by the orbital considerations described above; scale and resolution of the imagery are determined by the sensor design. Medium resolution satellites, such as Landsat, and high-resolution satellites, such as GeoEye, orbit at nearly the same altitude but collect imagery at very different ground sample distances.
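For a digital frame camera, the relationship between flying height and GSD reduces to projecting the detector size through the lens: GSD = pixel size × flying height ÷ focal length. A brief sketch with hypothetical but plausible camera parameters:

```python
# Hypothetical large-format aerial camera parameters.
pixel_size_m = 6.0e-6    # 6-micron detector pitch
focal_length_m = 0.10    # 100 mm lens
flying_height_m = 2_000  # height above mean terrain (AMT)

# Project the detector size through the lens to the ground.
gsd_m = pixel_size_m * flying_height_m / focal_length_m
print(f"GSD: {gsd_m * 100:.0f} cm")  # 12 cm at this flying height
```

Doubling the flying height doubles the GSD, which is why flying height is negotiated project by project against the required resolution and accuracy.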

Terrestrial

Terrestrial sensors include seismic, acoustic, magnetic, and pyroelectric transducers, as well as optical and passive imaging sensors, used to detect the presence of persons or vehicles. Terrestrial sensors may be part of a wireless sensor network (WSN) of spatially distributed sensors intended to monitor conditions and report the data through a network to a main location. Think about all of the electronic sensors surrounding you right now. There are GPS sensors and motion detectors in your smartphone. Terrestrial sensors have become abundant because they keep getting smaller and cheaper, and network connectivity has increased. With new microelectronics design, a microchip that costs less than a dollar can now link an array of sensors to a low-power wireless communications network.

Typical Space and Airborne Sensors

As you can see, there are innumerable types of platforms upon which to deploy an instrument. Satellites and aircraft collect the majority of base map data and imagery; the sensors typically deployed on these platforms include film and digital cameras, light-detection and ranging (lidar) systems, synthetic aperture radar (SAR) systems, and multispectral and hyperspectral scanners. Many of these instruments can also be mounted on land-based platforms, such as vans, trucks, tractors, and tanks. In the future, it is likely that a significant percentage of GIS and mapping data will originate from land-based sources.

Optical

You will be introduced to three types of optical sensors: airborne film mapping cameras, airborne digital mapping cameras, and satellite imaging. Each has particular characteristics, advantages, and disadvantages, but the principles of image acquisition and processing are largely the same regardless of the sensor type.

The size, or scale, of objects in a remotely sensed image varies with terrain elevation and with the tilt of the sensor with respect to the ground, as shown in Figure 3.12. Accurate measurements cannot be made from an image without rectification, the process of removing tilt and relief displacement. In order to use a rectified image as a map, it must also be georeferenced to a ground coordinate system.

Figure 3.12: Camera orientation and scale effects for vertical and oblique aerial photographs.
Source: Wolf and Dewitt, 2000. Page 8.

If remotely sensed images are acquired such that there is overlap between them, then objects can be seen from multiple perspectives, creating a stereoscopic view, or stereomodel. A familiar application of this principle is the View-Master toy many of us played with as children. The apparent shift of an object against a background due to a change in the observer's position is called parallax. Following the same principle as depth perception in human binocular vision, heights of objects and distances between them can be measured precisely from the degree of parallax in image space, provided the overlapping photos can be properly oriented with respect to each other, in other words, if the relative orientation is known (Figure 3.13). (Before corresponding points in images taken with two cameras can be used to recover distances to objects in a scene, one must determine the position and orientation of one camera relative to the other. This is the classic photogrammetric problem of relative orientation, central to the interpretation of binocular stereo information.)

Figure 3.13: Photogrammetry uses multiple views of the same point on the ground from two perspectives to create a three-dimensional image.
Source: David Maune, Dewberry and Davis.
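The standard relationship behind Figure 3.13 for vertical photos is h = H × dP / (p + dP), where H is the flying height above the object's base, p is the absolute parallax measured at the base, and dP is the differential parallax between the object's top and base. A small sketch with hypothetical measurements:

```python
# Hypothetical stereo-pair measurements.
flying_height_m = 1_500.0  # H: camera height above the object's base
parallax_base_mm = 90.0    # p: absolute parallax at the object's base
diff_parallax_mm = 2.1     # dP: extra parallax at the top versus the base

# h = H * dP / (p + dP)
height_m = flying_height_m * diff_parallax_mm / (parallax_base_mm + diff_parallax_mm)
print(f"Object height: {height_m:.1f} m")  # ~34.2 m
```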

Airborne film cameras have been in use for decades. Black and white (panchromatic), natural color, and false color infrared aerial film can be chosen based on the intended use of the imagery: panchromatic provides the sharpest detail for precision mapping; natural color is the most popular for interpretation and general viewing; false color infrared is used for environmental applications. High-precision manufacturing of camera elements such as the lens, body, and focal plane; rigorous camera calibration techniques; and continuous improvements in electronic controls have resulted in a mature technology capable of producing stable, geometrically well-defined, high-accuracy image products. Lens distortion can be measured precisely and modeled; image motion compensation mechanisms remove the blur caused by aircraft motion during exposure. Aerial film is developed using chemical processes and then scanned at resolutions as high as 3,000 dots per inch. In today's photogrammetric production environment, virtually all aerotriangulation, elevation, and feature extraction are performed in an all-digital workflow.

Airborne digital mapping cameras have evolved over the past few years from prototype designs to mass-produced operationally stable systems. In many aspects, they provide superior performance to film cameras, dramatically reducing production time with increased spectral and radiometric resolution. Detail in shadows can be seen and mapped more accurately. Panchromatic, red, green, blue, and infrared bands are captured simultaneously so that multiple image products can be made from a single acquisition (Figure 3.14).

Figure 3.14: With an airborne digital camera, images can be captured simultaneously in grayscale (also called panchromatic), true color (RGB), and false-color infrared (CIR).
Source: EarthData Fugro.

High-resolution satellite imagery is now available from a number of commercial sources, both foreign and domestic. The federal government regulates the minimum allowable GSD for commercial distribution, based largely on national security concerns; 0.6-meter GSD is currently available, and higher-resolution sensors are planned for the near future (McGlone, 2007). The image sensors are based on a linear push-broom design. Each sensor model is unique and contains proprietary design information; therefore, the sensor models are not distributed to commercial purchasers or users of the data. Through commercial contracts, these satellites provide imagery to NGA in support of geospatial intelligence activities around the globe.

As digital aerial photography has matured, it has become integrated into many consumer-level, web-based applications, such as Google Earth and numerous navigation and routing packages. Microsoft has recently deployed a large number of aerial survey planes equipped with the Vexcel UltraCam sensor in an ambitious Global Ortho program. Their goal is to provide very high resolution color imagery over the entire land surface of the Earth, made publicly available through the Bing Maps platform.

Multispectral

Until very recently, spaceborne sensors produced the majority of multispectral data. Commercial data providers (SPOT, DigitalGlobe, and others) license imagery to end-users for a fee, with limits on further distribution. The origins of commercial multispectral remote sensing can be traced to interpretation of natural color and color infrared (CIR) aerial photography in the early 20th century. CIR film was developed during World War II as an aid in camouflage detection (Jensen, 2007). It also proved to be of significant value in locating and monitoring the condition of vegetation. Healthy green vegetation shows up in shades of red; deep, clear water appears dark or almost black; concrete and gravel appear in shades of grey. CIR photography captured under the USGS National Aerial Photography Program was manually interpreted to produce National Wetlands Inventory (NWI) maps for much of the United States. While film is quickly being replaced by direct digital acquisition, most digital aerial cameras today are designed to replicate these familiar natural color or color-infrared multispectral images.

Computer monitors are designed to simultaneously display three color bands. Natural color image data consists of red, green, and blue bands. Color infrared data consists of infrared, red, and green bands. For multispectral data containing more than three spectral bands, the user must choose a subset of three bands to display at any given time, and furthermore must map those three bands to the computer display in such a way as to render an interpretable image.
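A minimal sketch of this band-to-display mapping, using a random stand-in array in place of a real scene (which would typically be read from file with a library such as rasterio):

```python
import numpy as np

# Stand-in 4-band scene, shape (rows, cols, bands): blue, green, red, near-IR.
scene = np.random.rand(512, 512, 4)
BLUE, GREEN, RED, NIR = 0, 1, 2, 3

# Natural color: map the red, green, and blue bands to the display's R, G, B.
natural_color = scene[:, :, [RED, GREEN, BLUE]]

# Color infrared: map near-IR, red, and green to R, G, B, so healthy
# vegetation renders in shades of red, as described above.
color_infrared = scene[:, :, [NIR, RED, GREEN]]
```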

Processing for Analysis

Simple visual interpretation can be quite useful for general situational awareness and decision making. Additional preparation and processing is often required for any more complex analysis. If the end-user application requires the overlay of multiple remotely sensed images or detailed geospatial data, such as road centerlines or building outlines, georeferencing must be performed. If spectral information is to be used to classify pixels or areas in the image based on their content, then effects of the atmosphere must be accounted for. To detect change between multiple images, both georeferencing and atmospheric correction of all individual images may be required.

Georeferencing

Digital images are clearly very useful—a picture is worth a thousand words—in many applications; however, their usefulness is greatly enhanced when the image is accurately georeferenced. The ability to locate objects and make measurements makes almost every remotely sensed image far more useful. Georeferencing of images must be accomplished using either some form of technology (such as GPS) or method (such as warping to known control points or more rigorous aerotriangulation). Geometric distortions due to the sensor optics, atmosphere and earth curvature, perspective, and terrain displacement must all be taken into account. Furthermore, a reference system must be established in order to assign real-world coordinates to pixels or features in the image. Georeferencing is relatively simple in concept but quickly becomes more complex in practice due to the intricacies of both technology and coordinate systems.

Georeferencing an analog or digital photograph depends on the interior geometry of the sensor as well as the spatial relationship between the sensor platform and the ground. The single vertical aerial photograph is the simplest case; we can use the internal camera model and six parameters of exterior orientation (X, Y, Z, roll, pitch, and yaw) to extrapolate a ground coordinate for each identifiable point in the image. We can either compute the exterior orientation parameters from a minimum of three ground control points using space resection equations, or we can use direct measurements of the exterior orientation parameters obtained from GPS and an inertial measurement unit (IMU).
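For the perfectly vertical photo described above, with roll, pitch, and yaw all zero, the extrapolation reduces to scaling image coordinates by the height above the terrain. The sketch below uses hypothetical values; a general solution must apply the full rotation matrix from the collinearity equations.

```python
# Hypothetical vertical-photo parameters.
focal_length_m = 0.153                          # 153 mm mapping camera
camera_xyz = (500_000.0, 4_100_000.0, 2_500.0)  # exterior X, Y, Z in ground units
ground_elevation_m = 300.0                      # assumed terrain height at the point

def image_to_ground(x_img_m: float, y_img_m: float) -> tuple[float, float]:
    """Project image-plane coordinates (in meters) to ground coordinates."""
    scale = (camera_xyz[2] - ground_elevation_m) / focal_length_m
    return (camera_xyz[0] + scale * x_img_m, camera_xyz[1] + scale * y_img_m)

print(image_to_ground(0.010, -0.005))  # a point 10 mm right and 5 mm below center
```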

Direct georeferencing solves a large part of the image rectification problem, but not all of it. We can only extrapolate an accurate coordinate on the ground when we actually know where the ground is in relationship to the sensor and platform. We need some way to control the scale of the image. Either we need stereo pairs to generate intersecting light rays, or we need some known points on the ground. A georeferenced satellite image can be orthorectified if an appropriate elevation model is available. The effects of relief displacement are often less pronounced in satellite imagery than in aerial photography, due to the great distance between the sensor and the ground. It is not uncommon for scientists and image analysts to make use of satellite imagery that has been registered or rectified, but not orthorectified. If one is attempting to identify objects or detect change, the additional effort and expense of orthorectification may not be necessary. If precise distance or area measurements are to be made, or if the analysis results are to be used in further GIS analysis, then orthorectification may be important. It is important for the analyst to be aware of the effects of each form of georeferencing on the spatial accuracy of his/her analysis results and the implications of this spatial accuracy in the decision-making process.

The degree of accuracy and rigor required for the georeferencing depends on the desired accuracy of the result. More error can be tolerated in an image backdrop intended for visual interpretation, where a human interpreter can use judgment to work around some geographic misalignments. If the intent is to use automated processing to intersect, combine, or subtract one data layer from others using mathematical algorithms, then the spatial overlay must be much more accurate in order to produce meaningful results. Higher accuracy is achieved only with better ground control, accurate elevation data, and thorough quality assurance. Most remotely-sensed data is delivered with some level of georeferencing information, which locates the image in a ground coordinate system. There are generally three levels of georeferencing, each corresponding to a different geometric accuracy:

  • Level 1: uses positioning information obtained directly from the sensor and platform to roughly geo-locate the remotely-sensed scene on the ground. This level of georeferencing is sufficient to provide geographic context and support visual interpretation of the data. It is often not accurate enough to support robust image or GIS analysis that requires combining the remotely-sensed dataset with other layers.
  • Level 2: uses a Digital Elevation Model (DEM) to remove relief displacement caused by variation in the height of the terrain. This improves the relative spatial accuracy of the data; distances measured between points within the geo-corrected image will be more accurate, particularly in scenes containing significant elevation changes. The DEM is usually obtained from another source, and the spatial accuracy of the Level 2 image will depend on the accuracy of the DEM.
  • Level 3: uses a DEM and ground control points to most accurately georeference the image on the ground. In addition to the DEM, ground control points must be obtained from another source, and the accuracy of the Level 3 image will depend on the accuracy of the ground control points. Level 3 processing is usually required in order to provide the most accurate overlays of remotely-sensed data sets and other relevant GIS data.

Atmospheric Correction

If the end-user application intends to make use of spectral information contained in the image pixels to identify and separate different types of material or surfaces based on sample spectral libraries, then the contributions to those pixel values made by the atmosphere must be removed. Atmospheric correction is a complex process utilizing control measurements, information about the atmospheric content, and assumptions about the uniformity of the atmosphere across the project area. The process is automated, but requires sophisticated software, highly skilled technicians, and, again, time. Furthermore, atmospheric correction parameters used on one dataset cannot be summarily applied to a dataset collected on another day.
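Rigorous atmospheric correction is beyond a short example, but dark-object subtraction, a simple first-order technique, conveys the idea: the darkest pixels in a band (deep clear water, deep shadow) are assumed to have near-zero reflectance, so whatever signal they do contain is attributed to atmospheric scattering and removed from the whole band.

```python
import numpy as np

def dark_object_subtraction(band: np.ndarray, percentile: float = 0.1) -> np.ndarray:
    """Subtract an estimate of atmospheric path radiance from one band."""
    haze = np.percentile(band, percentile)  # value of the darkest pixels
    return np.clip(band - haze, 0, None)    # remove it, never going below zero

# Stand-in 8-bit band; a real band would come from a calibrated image.
band = np.random.randint(40, 255, size=(512, 512)).astype(float)
corrected = dark_object_subtraction(band)
```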