Welcome to Lesson 3! This lesson introduces GEOINT Data Sources and Collection Strategies and focuses on the third GEOINT principle: that geospatial intelligence source and collection strategies are about getting enough information to answer a question about place and time. GEOINT data is collected from a variety of sources using various collection strategies. In general, "sources" are the means or systems used to observe and record data, while a "collection strategy" conveys a high-level approach to GEOINT data collection. The nature of access to the data is important and determines the source. In general, GEOINT data can be collected openly or covertly (secretly). Data collected openly and maintained transparently is considered "open source"; data collected covertly and maintained in secret is considered "closed source."
Throughout this lesson, we will address the concepts of GEOINT data sources and collection strategies. By the end of this lesson, you will be able to:
Please view my video lecture (7:56) concerning GEOINT data sources and collection strategies.
The purpose of GEOINT is to supply decision makers with timely geospatial insights that allow for informed, knowledgeable decision making. In order to fulfill this purpose, we prioritize the intelligence requirements of the decision maker. These requirements define the mission, functions, and structure of the GEOINT; they also should drive GEOINT data collection and analysis. The Intelligence Cycle, depicted in Figure 3.1, starts with requirements. The requirements are sorted and prioritized, and are then used to drive the collection activities. Once information has been collected, it is initially evaluated, processed, and reported to consumers. The cycle is then repeated until the intelligence requirements have been satisfied.
Based on the requirements, collection systems are given specific tasks to execute. Requirements are often difficult to define because they require a focus on long-term needs while most decision makers do not know what information they want until they actually need it.
Collection includes acquiring information and providing that information for processing and production. Collection management is the formal process within an intelligence organization of converting intelligence requirements into collection requirements, establishing priorities, tasking, coordinating with the collection sources, monitoring results, and re-tasking as required. The collection process encompasses the management of various activities including developing collection guidelines that ensure the best use of resources. There are four management criteria:
A collection strategy seeks to determine if sources can satisfy the requirements. Three key goals of a collection strategy are to:
Detecting deception requires information from a variety of sources so that one source can be verified. Multiple collection sources enable collection managers to cross-cue between different sources. Collection may require redundancy so that the loss or failure of one source can be compensated by another source.
GEOINT data is collected by computer systems, automated sensors, and humans. In general, "intelligence sources [2]" are the means used to observe and record information relating to the condition, situation, or activities of a targeted location, organization, or individual. GEOINT data sources have traditionally included imagery and geospatial data. While imagery is still the dominant and most important source, GEOINT is evolving to integrate forms of intelligence and information beyond the traditional sources of geospatial information. This lesson recognizes this transition and structures the discussion in broad terms. The collection strategy addresses the approach pursued for GEOINT data collection, which I divide into a continuum of strategies spanning the categories of "persistent collection" and "discontinuous collection."
GEOINT data sources can be categorized from discontinuous to persistent and from closed to open, as shown in Table 3.1 below:

| Source Access | Collection Strategy: Discontinuous | Collection Strategy: Persistent |
|---|---|---|
| Open | Example: The home location of the students taking this course. | Example: A continuously monitored video camera along a street. |
| Closed | Example: The Coca-Cola Company's Coca-Cola recipe. | Example: A UAV equipped with a camera tracking a military target for 24 hours. |
A strategy is a plan to achieve goals; it provides direction and scope for an effort. In this lesson, the term "collection strategy" refers to the way GEOINT data collection is pursued. A collection strategy is important because the resources available to achieve goals are usually limited. We have divided a continuum of strategies into two categories: "persistent collection" and "discontinuous collection." The collection strategy determines the frequency at which data are captured for a specific place on the earth. This frequency is termed temporal resolution. The more frequently data are captured by a particular sensor, the better or finer the temporal resolution of that sensor. Temporal resolution is relevant when using imagery or elevation datasets captured successively over time to detect changes to the landscape.
Discontinuous collection is not a full record of activity during a time period. It also might be termed non-persistent. Discontinuous collection is not a "permanent stare" at a target from one or more systems. Such a "permanent stare" might not be available, technically feasible, or an efficient use of resources. The practice of discontinuous data collection currently dominates the discipline.
Traditional remote sensing is an example of a discontinuous collection strategy. Here, sensors are mounted on board aircraft or on spacecraft orbiting the earth. At present, several remote sensing satellites provide imagery for research and operational applications. Spaceborne remote sensing provides repetitive coverage of an area at a relatively low cost per unit area.
Discontinuous collection can also occur when a device sporadically provides data. For example, a roaming cellphone locates itself with signals from multiple antennas or through its GPS. There are numerous other examples of discontinuous GEOINT data. For instance, toll highway devices record your point of entry and point of exit to compute a toll. Twitter is another example. Tweets can be geotagged to record information about the location of the device that created the tweet. Images taken with a phone or a GPS-enabled camera can also contain location data.
Persistent collection is defined as a strategy that emphasizes the ability to linger for a period of time to detect, locate, characterize, identify, track, target, and possibly provide near- or real-time data. Persistent collection facilitates the collection and prediction of human behavior, for example, pattern of life. The goal is to achieve near-perfect knowledge by increasing the rate of data collection and, therefore, understanding about the target. This enables a faster decision cycle from more detailed information. The collection goal is that a target will be unable to evade information collection. The purpose is to improve decision making while reducing risk. In different domains, Persistent Collection is called Persistent Surveillance, Intelligence, Surveillance, Reconnaissance (ISR), Persistent Stare, or Pervasive Knowledge. Unmanned Aerial Vehicles (UAVs) are frequently associated with persistent collection.
The collection, processing, and exploration (analysis) of imagery in GEOINT is more complex than what can be presented completely in this brief summary. Having said this, the goal of this section is only to provide you an overview of a few of the critical factors to consider when choosing an appropriate remote sensing platform and sensor. Much of the following content related to remote sensing was developed for GEOG 480 - Exploring Imagery and Elevation Data in GIS Applications [3] by Penn State's College of Earth and Mineral Sciences. It is licensed for use under CC BY 3.0 [4].
Remote sensing is a cornerstone of GEOINT. Remote sensing, a major driver of Geographic Information Science (GIScience) and Geographic Information Technology (GIT), is the acquisition of information about an object or phenomenon without making physical contact with the object. Remote sensing stands in contrast to direct onsite observation and makes possible data collection over large, dangerous, and inaccessible areas. As such, it is an essential tool in intelligence work. In GEOINT, remote sensing historically referred to the use of satellite and airborne sensors to detect and classify objects related to human activity on Earth. However, this definition has been expanding to include newer types of collection systems (lidar, radar) as well as terrestrial collection systems.
The history of remote sensing as a governmental activity, a commercial industry, and an academic field provides a perspective on development of the technology and emergence of remote sensing as a key technology in GEOINT. Accounts of remote sensing history generally begin in the 1800s, following the development of photography. Many of the early advancements of remote sensing can be tied to military applications, which continue to drive most of remote sensing technology development even today. However, after World War II, the use of remote sensing in science and infrastructure extended its reach into many areas of academia and civilian life. The recent revolution of geospatial technology and applications, linked to the explosion of computing and internet access, brings remote sensing technology and applications into the everyday lives of most people on the planet. It is important to understand the motivation behind technology development and to see how technology contributes to the broader societal, political, and economic framework of geospatial systems, science, and intelligence, be the application military, business, social, or environmental intelligence.
Remote sensing in GEOINT is evolving to include multiple layers of integrated sensing systems [5]—some of these are not what one would think of as the traditional systems found on space or airplane platforms. The layers of sensors are characterized by an integration of sensors, infrastructure, and exploitation capabilities. Such layered sensing provides decision makers with redundant, timely, trusted, and relevant information. Layered sensing might be divided by the vertical dimension:
An underwater layer can be added to this multi-dimensional view. While the underwater sensors are different, the applications of underwater sensors are similar to terrestrial sensors. Industrial applications are related to oil or mineral extraction, pipelines, and commercial fisheries. Military and homeland security applications include autonomous vehicles, securing port facilities, and de-mining.
Remote sensing and the exploitation of remotely sensed data are human activities aided by technology. Misinterpretation and ill-informed decision making can easily occur if the individuals involved do not understand the operating principles of the remote sensing system used to create the data, which is in turn used to derive information. Humans select the remote sensing system to collect the data, specify the various resolutions of the remote sensor data, calibrate the sensor, select the platform that will deliver the sensor, determine when the data will be collected, and specify how the data are processed.
In GEOINT, image analysis is the act of examining images for the purpose of identifying objects and judging their significance. John Jensen (2007) describes factors that distinguish a superior image analyst. He says, "It is a fact that some image analysts are superior to other image analysts because they: 1) understand the scientific principles better, 2) are more widely traveled and have seen many landscape objects and geographic areas, and/or 3) they can synthesize scientific principles and real-world knowledge to reach logical and correct conclusions."
The nature of the information extraction in remote sensing can be divided into general activities using spectral, spatial, and temporal information. These are:
Such information extraction can be performed by human or computer methods. However, human and computer methods typically supplement each other, since together they often produce better results than either alone. For example, in disaster management, computers can detect damage from a hurricane, while human analysts recognize the specific type of damage to a property. A combination of human and computer techniques is frequently used, since human interpretation is time consuming and expensive but necessary for accuracy.
From a technology perspective, the simplest way to extract information from remotely sensed data is human interpretation. However, significant training and experience are needed to produce a skilled image interpreter. A well-trained image analyst uses many of these elements without really thinking about them. The beginner may not only have to force himself or herself to consciously evaluate an unknown object with respect to these elements, but also analyze its significance in relation to the other objects or phenomena in the photo or image. Eight elements of image interpretation employed by human image interpreters are:
The results of image interpretation are most often delivered as a set of attributed points, lines, and/or polygons in any one of a variety of CAD or GIS data formats. The classification scheme or interpretation criteria must be agreed upon with the end user before the analysis begins.
Most remote sensing instruments measure the same thing—electromagnetic radiation. Electromagnetic radiation is a form of energy emitted by all matter above absolute zero temperature (0 Kelvin or -273° Celsius). X-rays, ultraviolet rays, visible light, infrared light, heat, microwaves, and radio and television waves are all examples of electromagnetic energy. If you have studied an engineering or physical science discipline, much of this may be familiar to you. Electromagnetic energy is described in terms of:
Frequency and wavelength are inversely related. This is important because some fields choose to represent system performance using wavelength, and some using frequency.
The visible and infrared portions of the electromagnetic spectrum are the most important for the types of remote sensing discussed in this lesson. Table 3.2 illustrates the relationship between named colors and wavelength/frequency bands.
**Wavelength Descriptions**

| Color | Angstrom (Å) | Nanometer (nm) | Micrometer (µm) | Frequency (Hz × 10¹⁴) |
|---|---|---|---|---|
| Ultraviolet, sw | 2,537 | 254 | 0.254 | 11.82 |
| Ultraviolet, lw | 3,660 | 366 | 0.366 | 8.19 |
| Violet (limit) | 4,000 | 400 | 0.40 | 7.50 |
| Blue | 4,500 | 450 | 0.45 | 6.66 |
| Green | 5,000 | 500 | 0.50 | 6.00 |
| Green | 5,500 | 550 | 0.55 | 5.45 |
| Yellow | 5,800 | 580 | 0.58 | 5.17 |
| Orange | 6,000 | 600 | 0.60 | 5.00 |
| Red | 6,500 | 650 | 0.65 | 4.62 |
| Red (limit) | 7,000 | 700 | 0.70 | 4.29 |
| Infrared, near | 10,000 | 1,000 | 1.0 | 3.00 |
| Infrared, far | 300,000 | 30,000 | 30.00 | 0.10 |
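The inverse relation between wavelength and frequency (frequency = c / wavelength) can be checked against the values in Table 3.2. A minimal Python sketch (the function name is my own):

```python
# Convert wavelength to frequency using frequency = c / wavelength.
C = 2.998e8  # speed of light, m/s

def wavelength_to_frequency_hz(wavelength_nm: float) -> float:
    """Return frequency in Hz for a wavelength given in nanometers."""
    return C / (wavelength_nm * 1e-9)

# Blue light at 450 nm -> about 6.66e14 Hz, matching Table 3.2.
freq = wavelength_to_frequency_hz(450)
print(f"{freq / 1e14:.2f} x 10^14 Hz")
```

This is why a band can be specified equally well by wavelength or by frequency: each uniquely determines the other.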
Understanding the interactions of electromagnetic energy with the atmosphere and the Earth's surface is critical to the interpretation and analysis of remotely sensed imagery. Radiation is scattered, refracted, and absorbed by the atmosphere, and these effects must be accounted for and corrected in order to determine what is happening at the ground. The Earth's surface can reflect, absorb, transmit, and emit electromagnetic energy, and in fact is doing all of these at the same time, in varying fractions across the entire spectrum, as a function of wavelength. The spectral signature that is recorded for each pixel in a remotely sensed image is unique based on the characteristics of the target surface and the effects of the intervening atmosphere. In remote sensing analysis, similarities and differences among the spectral signatures of individual pixels are used to establish a set of more general classes that describe the landscape or help identify objects of particular interest in a scene.
The graph above shows the relative amounts of electromagnetic energy emitted by the sun and the Earth across the range of wavelengths called the electromagnetic spectrum. Values along the horizontal axis of the graph range from very short wavelengths (ten millionths of a meter) to long wavelengths (meters). Note that the horizontal axis is logarithmically scaled, so that each increment represents a ten-fold increase in wavelength. The axis has been interrupted three times at the long wave end of the scale to make the diagram compact enough to fit on your screen. The vertical axis of the graph represents the magnitude of radiation emitted at each wavelength.
Hotter objects radiate more electromagnetic energy than cooler objects. Hotter objects also radiate energy at shorter wavelengths than cooler objects. Thus, as the graph shows, the sun emits more energy than the Earth, and the sun's radiation peaks at shorter wavelengths. The portion of the electromagnetic spectrum at the peak of the Sun's radiation is called the visible band because the human visual perception system is sensitive to those wavelengths. Human vision is a powerful means of sensing electromagnetic energy within the visual band. Remote sensing technologies extend our ability to sense electromagnetic energy beyond the visible band, allowing us to see the Earth's surface in new ways, which, in turn, reveals patterns that are normally invisible.
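The claim that hotter objects peak at shorter wavelengths is quantified by Wien's displacement law, peak wavelength = b / T. A short sketch, assuming approximate blackbody temperatures of about 5800 K for the Sun and 288 K for the Earth's surface (illustrative values, not from this lesson):

```python
# Wien's displacement law: wavelength of peak emission is b / T.
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_um(temperature_k: float) -> float:
    """Return the peak emission wavelength in micrometers for a blackbody."""
    return WIEN_B / temperature_k * 1e6

print(f"Sun  (~5800 K): {peak_wavelength_um(5800):.2f} um")  # ~0.50 um, visible band
print(f"Earth (~288 K): {peak_wavelength_um(288):.1f} um")   # ~10 um, thermal infrared
```

The two results reproduce the shape of the graph described above: solar radiation peaks in the visible band, terrestrial emission in the thermal infrared.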
The graph above names several regions of the electromagnetic spectrum. Remote sensing systems have been developed to measure reflected or emitted energy at various wavelengths for different purposes. This section highlights systems designed to record radiation in the bands commonly used for land use and land cover mapping: the visible, infrared, and microwave bands.
At certain wavelengths, the atmosphere poses an obstacle to satellite remote sensing by absorbing electromagnetic energy. Sensing systems are therefore designed to measure wavelengths within the windows where the transmissivity of the atmosphere is greatest.
Remote sensing can be done from space (using satellite platforms), from the air (using aircraft platforms), and from the ground (using static and vehicle-based systems). The same type of sensor, such as a multispectral digital frame camera, may be deployed on all three types of platforms for different applications. Each type of platform has unique advantages and disadvantages in terms of spatial coverage, access, and flexibility.
Since the launch of the first remote sensing satellites, satellite-based mapping has grown. Interestingly enough, even as more satellites are launched, the demand for data acquired from airborne platforms continues to grow. The historic and growth trends for both airborne and spaceborne remote sensing are well-documented in the ASPRS Ten-Year Industry Forecast [12]. The well-versed geospatial intelligence professional should be able to discuss the advantages and disadvantages of each type of platform. He/she should also be able to recommend the appropriate data acquisition platform for a particular application and problem set. While the number of satellite platforms is quite low compared to the number of airborne platforms, the optical capabilities of satellite imaging sensors are approaching those of airborne digital cameras. However, there will always be important differences, strictly related to characteristics of the platform, in the effectiveness of satellites and aircraft for acquiring remote sensing data.
Since the 1967 inception of the Earth Resource Technology Satellite (ERTS) program (later renamed Landsat), mid-resolution spaceborne sensors have provided the vast majority of multispectral datasets to image analysts studying land use/land cover change, vegetation and agricultural production trends and cycles, water and environmental quality, soils, geology, and other earth resource and science problems. Landsat has been one of the most important sources of mid-resolution multispectral data globally.
The French SPOT satellites have been another important source of high-quality, mid-resolution multispectral data. The imagery is sold commercially, and is significantly more expensive than Landsat. SPOT can also collect stereo pairs; images in the pair are captured on successive days by the same satellite viewing off-nadir. [13] Collection of stereo pairs requires special control of the satellite; therefore, the availability of stereo imagery is limited. Both traditional photogrammetric terrain extraction techniques, as well as automatic correlation, can be used to create topographic data in inaccessible areas of the world, especially where a digital surface model may be an acceptable alternative to a bare-earth elevation model.
DigitalGlobe, a commercial company, collects high-resolution multispectral imagery, which is sold commercially to users throughout the world. US Department of Defense users and partners have access to these datasets through commercial procurement contracts; therefore, these satellites are quickly becoming a critical source of multispectral imagery for the geospatial intelligence community. Bear in mind that the trade-off for high spatial resolution is limited geographic coverage. For vast areas, it is difficult to obtain seamless, cloud-free, high-resolution multispectral imagery within a single season or at the particular moment of the phenological cycle of interest to the researcher.
One obvious advantage satellites have over aircraft is global accessibility; there are numerous governmental restrictions that deny access to airspace over sensitive areas or over foreign countries. Satellite orbits are not subject to these restrictions, although there may well be legal agreements to limit distribution of imagery over particular areas.
The design of a sensor destined for a satellite platform begins many years before launch and cannot be easily changed to reflect advances in technology that may evolve during the interim period. While all systems are rigorously tested before launch, there is always the possibility that one or more will fail after the spacecraft reaches orbit. The sensor could be working perfectly, but a component of the spacecraft bus (attitude determination system, power subsystem, temperature control system, or communications system) could fail, rendering a very expensive sensor effectively useless. The financial risk involved in building and operating a satellite sensor and platform is considerable, presenting a significant obstacle to the commercialization of space-based remote sensing.
Satellites are placed at various heights and orbits to achieve desired coverage of the Earth's surface [14]. When the orbital speed exactly matches that of the Earth's rotation, the satellite stays above the same point at all times, in a geostationary [15] orbit. This is useful for communications and weather monitoring satellites. Satellite platforms for electro-optical (E/O) imaging systems are usually placed in a sun-synchronous [16], low-earth orbit (LEO) so that images of a given place are always acquired at the same local time (Figure 3.9). The revisit time for a particular location is a function of the individual platform and sensor, but generally it is on the order of several days to several weeks. While orbits are optimized for time of day, the satellite track may not always coincide with cloud-free conditions or specific vegetation conditions of interest to the end-user of the imagery. Therefore, it is not a given that usable imagery will be collected on every sensor pass over a given site.
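The link between orbital altitude and revisit behavior follows from Kepler's third law: a circular orbit's period is T = 2π√(a³/GM). A sketch, assuming a roughly 705 km sun-synchronous altitude (typical of Landsat-class platforms, used here only for illustration):

```python
import math

# Period of a circular orbit: T = 2*pi*sqrt(a^3 / GM).
GM_EARTH = 3.986e14       # Earth's gravitational parameter, m^3/s^2
EARTH_RADIUS_M = 6.371e6  # mean Earth radius, m

def orbital_period_minutes(altitude_km: float) -> float:
    """Return the period (minutes) of a circular orbit at the given altitude."""
    a = EARTH_RADIUS_M + altitude_km * 1e3  # semi-major axis
    return 2 * math.pi * math.sqrt(a**3 / GM_EARTH) / 60

# A sun-synchronous LEO near 705 km circles the Earth in roughly 99 minutes,
# i.e., about 14-15 orbits per day; ground-track geometry then sets the
# multi-day revisit interval for any particular site.
print(f"{orbital_period_minutes(705):.1f} min")
```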
Aircraft often have a definite advantage because of their flexibility. They can be deployed wherever and whenever weather conditions are favorable. Clouds often appear and dissipate over a target over a period of several hours during a given day. Aircraft on site can respond with a moment's notice to take advantage of clear conditions, while satellites are locked into a schedule dictated by orbital parameters. Aircraft can also be deployed in small or large numbers, making it possible to collect imagery seamlessly over an entire county or state in a matter of days or weeks simply by having lots of planes in the air at the same time.
Aircraft platforms range from the very small, slow, and low flying (Figure 3.10), to twin-engine turboprop and small jets capable of flying at altitudes up to 35,000 feet. Unmanned platforms (UAVs) are becoming increasingly important, particularly in military and emergency response applications, both international and domestic. Flying height, airspeed, and range are critical factors in choosing an appropriate remote sensing platform. Modifications to the fuselage and power system to accommodate a remote sensing instrument and data storage system are often far more expensive than the cost of the aircraft itself. While the planes themselves are fairly common, choosing the right aircraft to invest in requires a firm understanding of the applications for which that aircraft is likely to be used over its lifetime.
The scale and footprint of an aerial image is determined by the distance of the sensor from the ground; this distance is commonly referred to as the altitude above the mean terrain (AMT). The operating ceiling for an aircraft is defined in terms of altitude above mean sea level. It is important to remember this distinction when planning for a project in mountainous terrain. For example, the National Aerial Photography Program [17] (NAPP) and the National Agricultural Imagery Program [18] (NAIP) both call for imagery to be acquired from 20,000 feet AMT. In the western United States, this often requires flying much higher than 20,000 feet above mean sea level. A pressurized platform such as the Cessna Conquest (Figure 3.11) would be suitable for meeting these requirements.
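The AMT versus mean-sea-level distinction can be worked through with hypothetical numbers (the 8,000 ft terrain elevation below is illustrative, not taken from the NAPP/NAIP specifications):

```python
def required_msl_altitude_ft(amt_requirement_ft: float, mean_terrain_ft: float) -> float:
    """Flying height above mean sea level needed to meet an AMT requirement."""
    return amt_requirement_ft + mean_terrain_ft

# A 20,000 ft AMT requirement over terrain averaging 8,000 ft elevation
# means flying at 28,000 ft MSL, above the operating ceiling of many
# unpressurized aircraft.
print(required_msl_altitude_ft(20_000, 8_000))
```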
With airborne systems, the flying height is determined on a project-by-project basis depending on the requirements for spatial resolution, GSD, and accuracy. The altitude of a satellite platform is fixed by the orbital considerations described above; scale and resolution of the imagery are determined by the sensor design. Medium resolution satellites, such as Landsat, and high-resolution satellites, such as GeoEye, orbit at nearly the same altitude, but collect imagery at very different ground sample distance (GSD).
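For a simple frame sensor, GSD can be estimated from pixel size, focal length, and flying height as GSD = pixel size × height / focal length. A sketch with hypothetical camera parameters:

```python
def ground_sample_distance_m(pixel_size_um: float, focal_length_mm: float,
                             flying_height_m: float) -> float:
    """GSD = pixel size * flying height / focal length (units converted to meters)."""
    return (pixel_size_um * 1e-6) * flying_height_m / (focal_length_mm * 1e-3)

# Hypothetical airborne camera: 6 um pixels, 120 mm lens, flown at 3,000 m AMT.
gsd = ground_sample_distance_m(6, 120, 3000)
print(f"{gsd:.2f} m")  # 0.15 m
```

The same relation explains the satellite case: with altitude fixed by the orbit, GSD is set entirely by the sensor's focal length and detector size.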
Terrestrial sensors include seismic, acoustic, magnetic, and pyroelectric transducers, as well as optical and passive imaging sensors that detect the presence of persons or vehicles. Terrestrial sensors may be part of a wireless sensor network (WSN) of spatially distributed sensors intended to monitor conditions and report the data through a network to a main location. Think about all of the electronic sensors surrounding you right now. There are GPS sensors and motion detectors in your smartphone. Terrestrial sensors have become abundant because they keep getting smaller and cheaper, and network connectivity has increased. With new microelectronics design, a microchip that costs less than a dollar can now link an array of sensors to a low-power wireless communications network.
As you can see, there are innumerable types of platforms upon which to deploy an instrument. Satellites and aircraft collect the majority of base map data and imagery; the sensors typically deployed on these platforms include film and digital cameras, light-detection and ranging (lidar) systems, synthetic aperture radar (SAR) systems, and multispectral and hyperspectral scanners. Many of these instruments can also be mounted on land-based platforms, such as vans, trucks, tractors, and tanks. In the future, it is likely that a significant percentage of GIS and mapping data will originate from land-based sources.
You will be introduced to three types of optical sensors: airborne film mapping cameras, airborne digital mapping cameras, and satellite imaging. Each has particular characteristics, advantages, and disadvantages, but the principles of image acquisition and processing are largely the same regardless of the sensor type.
The size, or scale, of objects in a remotely sensed image varies with terrain elevation and with the tilt of the sensor with respect to the ground, as shown in Figure 3.12. Accurate measurements cannot be made from an image without rectification, the process of removing tilt and relief displacement. In order to use a rectified image as a map, it must also be georeferenced to a ground coordinate system.
If remotely sensed images are acquired such that there is overlap between them, then objects can be seen from multiple perspectives, creating a stereoscopic view, or stereomodel. A familiar application of this principle is the View-Master [19] toy many of us played with as children. The apparent shift of an object against a background due to a change in the observer's position is called parallax [20]. Following the same principle as depth perception in human binocular vision, heights of objects and distances between them can be measured precisely from the degree of parallax in image space, provided the overlapping photos can be properly oriented with respect to each other, in other words, if the relative orientation is known (Figure 3.13). (Before corresponding points in images taken from two camera positions can be used to recover distances to objects in a scene, the position and orientation of one camera relative to the other must be determined. This is the classic photogrammetric problem of relative orientation, central to the interpretation of binocular stereo information.)
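A standard photogrammetric expression for height from parallax is h = H × dP / (P + dP), where H is the flying height above the object's base, P the absolute stereoscopic parallax at the base, and dP the differential parallax between top and base. A sketch with hypothetical measurements:

```python
def object_height(flying_height: float, base_parallax: float,
                  differential_parallax: float) -> float:
    """Height from parallax: h = H * dP / (P + dP).

    flying_height is measured above the object's base; the two parallax
    values must be in the same (image-space) units.
    """
    return flying_height * differential_parallax / (base_parallax + differential_parallax)

# Hypothetical values: 1,200 m flying height, 90.0 mm absolute parallax,
# 2.3 mm differential parallax between an object's top and base.
h = object_height(1200, 90.0, 2.3)
print(f"{h:.1f} m")  # ~29.9 m
```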
Airborne film cameras have been in use for decades. Black and white (panchromatic), natural color, and false color infrared aerial film can be chosen based on the intended use of the imagery; panchromatic provides the sharpest detail for precision mapping; natural color is the most popular for interpretation and general viewing; false color infrared is used for environmental applications. High-precision manufacturing of camera elements such as lens, body, and focal plane; rigorous camera calibration techniques; and continuous improvements in electronic controls have resulted in a mature technology capable of producing stable, geometrically well-defined, high-accuracy image products. Lens distortion can be measured precisely and modeled; image motion compensation mechanisms remove the blur caused by aircraft motion during exposure. Aerial film is developed using chemical processes and then scanned at resolutions as high as 3,000 dots per inch. In today's photogrammetric production environment, virtually all aerotriangulation, elevation, and feature extraction are performed in an all-digital work flow.
Airborne digital mapping cameras have evolved over the past few years from prototype designs to mass-produced operationally stable systems. In many aspects, they provide superior performance to film cameras, dramatically reducing production time with increased spectral and radiometric resolution. Detail in shadows can be seen and mapped more accurately. Panchromatic, red, green, blue, and infrared bands are captured simultaneously so that multiple image products can be made from a single acquisition (Figure 3.14).
High-resolution satellite imagery is now available from a number of commercial sources, both foreign and domestic. The federal government regulates the minimum allowable GSD for commercial distribution, based largely on national security concerns; 0.6-meter GSD is currently available, and higher-resolution sensors are being planned for the near future (McGlone, 2007). The image sensors are based on a linear push-broom design. Each sensor model is unique and contains proprietary design information; therefore, the sensor models are not distributed to commercial purchasers or users of the data. Through commercial contracts, these satellites provide imagery to NGA in support of geospatial intelligence activities around the globe.
As digital aerial photography has matured, it has become integrated into many consumer-level, web-based applications, such as Google Earth and numerous navigation and routing packages. Microsoft has recently deployed a large number of aerial survey planes equipped with the Vexcel UltraCam sensor in an ambitious Global Ortho [21] program. Their goal is to provide very high resolution color imagery over the entire land surface of the Earth, made publicly available through the Bing Maps platform.
Until very recently, spaceborne sensors produced the majority of multispectral data. Commercial data providers (SPOT, Digital Globe, and others) license imagery to end-users for a fee, with limits on further distribution. The origins of commercial multispectral remote sensing can be traced to interpretation of natural color and color infrared (CIR) aerial photography in the early 20th century. CIR film was developed during World War II as an aid in camouflage detection (Jensen, 2007). It also proved to be of significant value in locating and monitoring the condition of vegetation. Healthy green vegetation shows up in shades of red; deep, clear water appears dark or almost black; concrete and gravel appear in shades of grey. CIR photography captured under the USGS National Aerial Photography Program [17] was manually interpreted to produce National Wetlands Inventory (NWI) maps for much of the United States. While film is quickly being replaced by direct digital acquisition, most digital aerial cameras today are designed to replicate these familiar natural color or color-infrared multispectral images.
Computer monitors are designed to simultaneously display three color bands. A natural color image comprises red, green, and blue bands; a color infrared image comprises infrared, red, and green bands. For multispectral data containing more than three spectral bands, the user must choose a subset of three bands to display at any given time, and must map those three bands to the computer display in such a way as to render an interpretable image.
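The band-to-channel mapping described above can be sketched in a few lines of NumPy. The six-band layout, band order, and random pixel values below are assumptions for illustration, not any specific sensor's format:

```python
import numpy as np

np.random.seed(42)
# Hypothetical 6-band multispectral scene: (bands, rows, cols).
# Assumed band order for illustration:
# 0=coastal, 1=blue, 2=green, 3=red, 4=near-infrared, 5=shortwave-infrared.
scene = np.random.randint(0, 4096, size=(6, 100, 100), dtype=np.uint16)

def to_display(scene, band_indices):
    """Map three chosen bands to the red, green, and blue channels of a
    display image, linearly stretching each band to the 0-255 range."""
    rgb = np.stack([scene[b].astype(float) for b in band_indices], axis=-1)
    lo = rgb.min(axis=(0, 1))
    hi = rgb.max(axis=(0, 1))
    return ((rgb - lo) / (hi - lo) * 255).astype(np.uint8)

# Natural color: red, green, blue bands -> R, G, B channels.
natural_color = to_display(scene, (3, 2, 1))
# Color infrared: NIR, red, green bands -> R, G, B channels.
color_infrared = to_display(scene, (4, 3, 2))
```

Swapping the three indices is all it takes to move between a natural color and a color infrared rendering of the same acquisition.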
Simple visual interpretation can be quite useful for general situational awareness and decision making, but additional preparation and processing are often required for more complex analysis. If the end-user application requires the overlay of multiple remotely sensed images or detailed geospatial data, such as road centerlines or building outlines, georeferencing must be performed. If spectral information is to be used to classify pixels or areas in the image based on their content, then the effects of the atmosphere must be accounted for. To detect change between multiple images, both georeferencing and atmospheric correction of all individual images may be required.
Digital images are clearly very useful—a picture is worth a thousand words—in many applications; however, the usefulness is greatly enhanced when the image is accurately georeferenced [22]. The ability to locate objects and make measurements makes almost every remotely sensed image far more useful. Georeferencing of images must be accomplished using some form of technology (such as GPS) or method (such as warping to known control points or more rigorous aerotriangulation). Geometric distortions due to the sensor optics, atmosphere and earth curvature, perspective, and terrain displacement must all be taken into account. Furthermore, a reference system must be established in order to assign real-world coordinates to pixels or features in the image. Georeferencing is relatively simple in concept, but quickly becomes more complex in practice due to the intricacies of both technology and coordinate systems.
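As a minimal illustration of warping to known control points, the sketch below fits a 2D affine transform (six parameters) to hypothetical ground control points by least squares. The pixel and ground coordinates are invented for illustration; production workflows use more control points and more rigorous models:

```python
import numpy as np

# Hypothetical ground control points: pixel (col, row) -> ground (E, N).
pixel = np.array([[0, 0], [1000, 0], [0, 1000], [1000, 1000]], dtype=float)
ground = np.array([[500000, 4100000], [500500, 4100000],
                   [500000, 4099500], [500500, 4099500]], dtype=float)

# Solve for the six affine parameters by least squares:
#   E = a*col + b*row + c ;  N = d*col + e*row + f
A = np.column_stack([pixel, np.ones(len(pixel))])
params_E, *_ = np.linalg.lstsq(A, ground[:, 0], rcond=None)
params_N, *_ = np.linalg.lstsq(A, ground[:, 1], rcond=None)

def pixel_to_ground(col, row):
    """Apply the fitted affine transform to one pixel location."""
    p = np.array([col, row, 1.0])
    return p @ params_E, p @ params_N

E, N = pixel_to_ground(500, 500)  # center of the hypothetical image
```

With more than the minimum number of control points, the least-squares residuals give a first check on how well the simple affine model fits the image geometry.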
Georeferencing an analog or digital photograph depends on the interior geometry of the sensor as well as the spatial relationship between the sensor platform and the ground. The single vertical aerial photograph is the simplest case: we can use the internal camera model and six parameters of exterior orientation (X, Y, Z, roll, pitch, and yaw) to extrapolate a ground coordinate for each identifiable point in the image. We can either compute the exterior orientation parameters from a minimum of three ground control points using space resection equations, or we can use direct measurements of the exterior orientation parameters obtained from GPS and an IMU.
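For the idealized case of a perfectly vertical photograph over flat terrain, the extrapolation reduces to a simple scale relationship between flying height and focal length. The sketch below uses that simplification (real workflows apply the full collinearity equations, including the roll, pitch, and yaw rotations); all of the numbers are hypothetical:

```python
def image_to_ground(x_mm, y_mm, focal_mm, X0, Y0, Z0, Z_ground):
    """Project an image point to the ground for a perfectly vertical
    photo over flat terrain (an idealization of the collinearity model).
    x_mm, y_mm: image coordinates relative to the principal point;
    X0, Y0, Z0: camera position; Z_ground: terrain elevation."""
    scale = (Z0 - Z_ground) / focal_mm  # flying height over focal length
    return X0 + x_mm * scale, Y0 + y_mm * scale

# Hypothetical 153 mm lens flown 1,530 m above the ground gives a
# 1:10,000 photo scale: 10 mm on the image is 100 m on the ground.
X, Y = image_to_ground(10.0, -5.0, 153.0, 500000.0, 4100000.0, 1830.0, 300.0)
```

Note that the result depends directly on `Z_ground`, which anticipates the point made next: without knowing where the ground actually is, the scale of the image cannot be controlled.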
Direct georeferencing solves a large part of the image rectification problem, but not all of it. We can only extrapolate an accurate coordinate on the ground when we actually know where the ground is in relationship to the sensor and platform. We need some way to control the scale of the image: either we need stereo pairs to generate intersecting light rays, or we need some known points on the ground. A georeferenced satellite image can be orthorectified if an appropriate elevation model is available. The effects of relief displacement are often less pronounced in satellite imagery than in aerial photography, due to the great distance between the sensor and the ground. It is not uncommon for scientists and image analysts to make use of satellite imagery that has been registered or rectified, but not orthorectified. If one is attempting to identify objects or detect change, the additional effort and expense of orthorectification may not be necessary. If precise distance or area measurements are to be made, or if the analysis results are to be used in further GIS analysis, then orthorectification may be important. It is important for the analyst to be aware of the effects of each form of georeferencing on the spatial accuracy of the analysis results and the implications of this spatial accuracy in the decision-making process.
The degree of accuracy and rigor required for georeferencing depends on the desired accuracy of the result. More error can be tolerated in an image backdrop intended for visual interpretation, where a human interpreter can use judgment to work around some geographic misalignments. If the intent is to use automated processing to intersect, combine, or subtract one data layer from others using mathematical algorithms, then the spatial overlay must be much more accurate in order to produce meaningful results. Higher accuracy is achieved only with better ground control, accurate elevation data, and thorough quality assurance. Most remotely-sensed data is delivered with some level of georeferencing information, which locates the image in a ground coordinate system. There are generally three levels of georeferencing, each corresponding to a different geometric accuracy.

Level 1: uses positioning information obtained directly from the sensor and platform to roughly geo-locate the remotely-sensed scene on the ground. This level of georeferencing is sufficient to provide geographic context and support visual interpretation of the data. It is often not accurate enough to support robust image or GIS analysis that requires combining the remotely-sensed dataset with other layers.

Level 2: uses a Digital Elevation Model (DEM) to remove relief displacement caused by variation in the height of the terrain. This improves the relative spatial accuracy of the data; distances measured between points within the geo-corrected image will be more accurate, particularly in scenes containing significant elevation changes. The DEM is usually obtained from another source, and the spatial accuracy of the Level 2 image will depend on the accuracy of the DEM.

Level 3: uses a DEM and ground control points to most accurately georeference the image on the ground.
In addition to the DEM, ground control points must be obtained from another source, and the accuracy of the Level 3 image will depend on the accuracy of the ground control points. Level 3 processing is usually required in order to provide the most accurate overlays of remotely-sensed data sets and other relevant GIS data.
If the end-user application intends to make use of spectral information contained in the image pixels to identify and separate different types of material or surfaces based on sample spectral libraries, then contributions to those pixel values made by the atmosphere must be removed. Atmospheric correction is a complex process utilizing control measurements, information about the atmospheric content, and assumptions about the uniformity of the atmosphere across the project area. The process is automated but requires sophisticated software, highly skilled technicians, and, again, time. Furthermore, atmospheric correction parameters used on one dataset cannot be summarily applied to a dataset collected on another day.
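To make the idea concrete, one of the simplest first-order corrections (not named in the text above) is dark-object subtraction: it assumes the darkest pixels in a scene should have near-zero reflectance, treats their observed value as additive atmospheric path radiance, and subtracts it from the whole band. The data and percentile below are invented for illustration; rigorous correction uses radiative transfer models:

```python
import numpy as np

np.random.seed(0)
# Hypothetical single-band digital numbers with additive atmospheric haze
# (even the darkest surfaces record values well above zero).
band = np.random.randint(40, 255, size=(200, 200)).astype(float)

def dark_object_subtract(band, percentile=0.1):
    """Estimate path radiance from the darkest observed values and
    subtract it from the band, clamping the result at zero."""
    dark = np.percentile(band, percentile)
    return np.clip(band - dark, 0.0, None)

corrected = dark_object_subtract(band)
```

This is a scene-specific estimate: as the text notes, the offset derived from one dataset cannot simply be reused on imagery collected under a different atmosphere.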
A drone is an unmanned aerial vehicle (UAV) and also referred to as an unpiloted aerial vehicle (UPV) or a remotely piloted aircraft (RPA). The following discussion was developed as part of GEOG 597G, Geospatial Applications for Unmanned Aerial System (UAS) [23] by Penn State's College of Earth and Mineral Sciences [24] and is licensed under CC BY 3.0 [4].
Here you will learn about the history of UAS development and its introduction to civilian and military applications. The history of flying objects, or the unmanned aerial vehicle in its rudimentary forms, extends way back to ancient civilizations. The Chinese, around 200 AD, used paper balloons (equipped with oil lamps to heat the air) to fly over their enemies after dark, which caused fear among the enemy soldiers who believed that there was divine power involved in the flight.
The idea of unmanned aerial objects came long before manned flights. This was for the obvious reason of removing the risk of loss of life in conjunction with these experimental objects. In modern times, the idea of unmanned flying objects developed to mean flying aerial vehicles, or aircraft without pilots on board. Thanks to advancements in technology, the maneuvering and control of piloted flight can be sufficiently mimicked. Names like aerial torpedo, radio controlled vehicle, remotely piloted vehicle (RPV), remote controlled vehicle, autonomous controlled vehicle, pilotless vehicle, unmanned aerial vehicle (UAV), unmanned aircraft system (UAS), and drone are names that may be used to describe a flying object or machine without a pilot on board.
The main challenge that faced early aerospace pioneers of piloted and pilotless airplanes alike was the issue of controlling flight once the flying object was up in the air. The Wright Brothers (1903), and at about the same time, Dr. Samuel Pierpont Langley, taught the aviation world a lot about the secrets of controlled flight. Afterwards, the war machine of WWI put intense pressure on inventors and scientists to come up with innovations in all aspects of flight design including power plants, fuselage structures, lifting wing configurations and control surface arrangements. By the time WWI ended, modern day aviation had been born.
In late 1916, the US Navy funded the Sperry Gyroscope Company (later the Sperry Corporation) to develop an unmanned torpedo that could fly a guided distance of 1,000 yards to detonate its warhead close enough to an enemy warship. Almost two years later, on March 6, 1918, after a series of failures, Sperry's efforts succeeded in launching an unmanned torpedo that flew a 1,000-yard course in stable guided flight. It dived onto its target at the desired time and place, and was later recovered and landed. With this successful flight, the world's first unmanned aircraft system, the Curtiss N-9, was born.
In the late 1930s, the U.S. Navy returned to the development of drones. This was highlighted by the Navy Research Lab's development of the Curtiss N2C-2 drone (see Figure 3.15). The 2,500-lb biplane was instrumental in testing the accuracy and efficiency of the Navy's anti-aircraft defense system.
Penn State Extension is testing drones to determine possible uses for such purposes as observing pest control and fertilizer application patterns. Monitoring these types of things from the air can help growers with crop management decisions. Click on the image below to watch a short video (0:36) about using drones in crop management.
If you are interested in reading the accompanying story, you can use the following link to access the "Penn State crop educator explores drone-driven crop management" [26] article.
The way a pilotless aircraft is controlled determines its categorization. In general, there are three main names for pilotless aircraft:
Whether it is named a UAV, an RPV, or a drone, at a minimum, the pilotless aircraft should include the following elements:
Naming the different missions for UAVs is a difficult task, as there are so many possibilities and there have never been enough systems in use to explore all the possibilities. However, the two main classifications for UAV missions are the following:
As of today, civilian missions include various applications such as:
Military and civilian UAV missions overlap in many areas. Both use UAVs for reconnaissance and surveillance. In addition, both use the UAV as a stationary platform over a point on the ground from which to perform many of the functions of communications or remote sensing satellites at a fraction of the cost.
The following content was derived from the Foundations of Geographic Information and Spatial Analysis Boot Camp(No link is currently available) by Penn State's College of Earth and Mineral Sciences [24] and is licensed under CC BY 3.0 [4].
As I said on the previous page, both military and civilian missions use drones or UAVs for reconnaissance and surveillance. These systems often deliver real-time video. Real-time video capabilities, referred to as "motion imagery" or "full motion video" (FMV), are expanding the role of remote sensing and GEOINT in high-tempo military operations and civilian applications. The conventional sensors and platforms presented in previous sections provide critically important, spatially accurate, base map layers for geographic information systems and traditional image analysis. FMV sensors and platforms open the door for persistent surveillance of pinpointed targets on the ground, tracking them as they move and fusing intelligence from other sources to support immediate action. FMV presents significant new challenges to geospatial infrastructure—hardware, software, and analysts.
According to the Motion Imagery Standards Board (MISB) a motion imagery system is "any imaging system that collects at a rate of one frame per second (1 Hz) or faster, over a common field of regard." While MISB makes no formal distinction between motion imagery and full motion video, FMV is generally regarded as "that subset of motion imagery at television-like frame rates (24 - 60 Hz)."
The key phrase "persistent surveillance" provides the important distinction between FMV and traditional analysis of discontinuous imagery. It connotes "constant stare," the ability to watch a point on the ground or to follow a moving target for a long period of time, without interruption. Contrast this need with the typical field of view and revisit time for traditional airborne or spaceborne remote sensing systems and you will begin to appreciate the paradigm shift in geospatial intelligence being stimulated by FMV technology. Traditional imagery analysis tends to be feature-based, focusing on structural features of buildings or identification of known objects of interest, such as tank formations and fleets of ships or aircraft. FMV, on the other hand, is activity-based, focusing on capturing the movements of individual people and vehicles or the "patterns of life" observed by small groups (Copeland, 2009).
FMV technologies provide the capability to monitor high-interest activities, including "tracking moving, fleeting, and emerging targets as well as observation of rapidly developing events." (ASPRS, 2009) The phrase "find, fix, and finish" neatly describes the tactical advantage imparted to those who possess this powerful intelligence tool.
Terms commonly used when performing or discussing persistent surveillance are defined by Copeland (2009) as:
Digital video cameras, infrared, multispectral, and hyperspectral systems can all be adapted for an FMV application. Because these systems are primarily intended for human interpretation in surveillance applications, where limiting the size of the dataset is needed to facilitate real-time streaming and processing, they tend to have lower spatial resolution and smaller fields of view than their traditional remote sensing counterparts. As with conventional systems, the instantaneous field of view and scale of the resulting imagery will be determined by the operational altitude above ground level (AGL) and the focal length of the camera lens.
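The relationship between operating altitude, focal length, and the instantaneous ground footprint of a pixel can be sketched as a simple ratio. The sensor numbers below are hypothetical, and the formula assumes a nadir-pointing camera over flat terrain:

```python
def ground_sample_distance(altitude_agl_m, focal_length_mm, pixel_pitch_um):
    """GSD (meters per pixel) for a nadir-pointing camera:
    altitude * physical pixel size / focal length (a simplification
    that ignores off-nadir viewing and terrain relief)."""
    return altitude_agl_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

# Hypothetical FMV sensor: 1,000 m AGL, 50 mm lens, 5 micron pixels.
gsd = ground_sample_distance(1000.0, 50.0, 5.0)  # 0.1 m per pixel
```

Halving the altitude or doubling the focal length halves the GSD, which is why FMV platforms trade field of view against resolution when loitering over a target.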
The most common sensors comprising an FMV system are an electro-optical (EO) digital video camera and an infrared video camera. The EO sensor is simply a panchromatic or color digital video camera using daylight as the illumination source and recording data in the visible (red to blue) part of the electromagnetic spectrum.
The infrared video (IR) camera acquires thermal imagery, which is useful for detecting thermally emissive objects (e.g. people, running vehicles, etc.) or for collecting imagery at night when there is no ambient light available for the EO camera. The IR cameras can be set to collect "white hot," where the hottest objects in the scene are depicted with high (light) grayscale values, or "black hot," where the hottest objects in the scene are depicted with low (dark) grayscale values. Choice of hot-white or hot-black would be made by those performing interpretation of the imagery, depending on what is of particular interest in the scene.
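The white-hot versus black-hot choice is just a polarity inversion of the grayscale display. A minimal sketch with an invented 8-bit thermal frame:

```python
import numpy as np

# Hypothetical 8-bit thermal frame in "white hot" polarity
# (hotter objects -> higher grayscale values).
white_hot = np.array([[10, 200],
                      [128, 255]], dtype=np.uint8)

def to_black_hot(frame):
    """Invert display polarity so the hottest objects appear darkest."""
    return 255 - frame

black_hot = to_black_hot(white_hot)  # the hottest pixel (255) becomes 0
```

An interpreter tracking a warm vehicle against warm pavement, for example, might toggle polarity until the object of interest stands out best.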
FMV sensor packages are compact, portable units that can be quickly installed on a wide variety of airborne vehicles, manned and unmanned. Specifications follow for many of the primary motion imagery platforms, both manned and unmanned. Each of these is primarily focused on intelligence gathering activities where extended loitering over areas of interest is of key importance to operations. Around the world, these aircraft can also be used to collect imagery during dire environmental emergencies.
There are differences between classical remote sensing and FMV, and in the skill sets required of analysts in these respective domains. In classical image analysis, success depends on the ability of the analyst to reliably identify specific objects of interest based on shape, texture, or radiometric signature. In FMV, success is less dependent on object identification and largely achieved through the ability of the analyst to work through shortcomings in the system architecture without losing the tempo required to maintain real-time operations. Whereas the classical image analyst may be the type of person who is very detail-oriented, thorough, and focused in a specific niche of expertise, the FMV analyst must be an interactive integrator of many sources of intelligence, capable of acting and making critical decisions without the aid of extensive research.
The power of FMV is brought to bear when the team controlling acquisition and exploitation of real-time video know precisely where and when to look at a target of interest. Discovering a target is not the goal of FMV. The goal is to follow a target in order to ascertain patterns of behavior which can then be used for a tactical advantage. In classical remote sensing, on the other hand, previously unknown or unsuspected information can often be discovered by detecting changes in a region of interest over time. These two disciplines can potentially complement each other, but the skills, training, and technology needed to support them are clearly quite different.
Please view my second video (6:01) lecture concerning GEOINT data sources and collection strategies.
Any type of lawfully and ethically collected geospatial information from publicly available sources is considered to be open source material. Open source is contrasted with closed source material that is not available to the public. There is an increasing reliance on the open source collection of information.
One of the attractions of open source information is the perception that it is easily collected with no accountability. This perception can be incorrect; in some countries, the information must be collected for a legitimate purpose. European countries have incorporated the "European Convention on Human Rights" into their legislation. A search that engages Article 8, "private life" issues, must meet the threshold for interference as set out in the Convention. Article 8 protects the private life of individuals against arbitrary interference by public authorities and private organizations such as the media. Article 8 is a qualified right, so in certain circumstances public authorities can interfere with private and family life to prevent disorder or crime, protect health or morals, or protect the rights and freedoms of others. However, such interference must be in accordance with the law and necessary to protect national security, public safety, or the well-being of the country.
Cyber can be an important source for open source data. When viewed with an eye to intelligence analysis, social media, and more broadly, cyber transactions between individuals and groups, describe past and current events and help to anticipate future events. These transactions can be aggregated into trends, models and conditions that describe who is involved and where and when an event will be or is happening. While much of the information disseminated in social media is not geographic information per se, such social media transactions contain massive amounts of geographic information in the “from” and “to” communication nodes and possible geographic references in the content. Significantly, such cyber activities are intrinsically self-documenting and provide spatial and temporal information to enable analysts to focus on point events, group behaviors, or larger trends. The result is that cyber activities create a vast amount of useful data about an individual or group in the context of local, regional and global activities. Given the growing impact of the technology and the rich information content, their importance has and will likely continue to grow in value to the intelligence community.
Crowdsourced geospatial data (CGD) is an emerging trend that is influencing future methods for geospatial data acquisition. CGD involves the participation of untrained individuals with a high degree of interest in geospatial technology. Working collectively, these individuals collect, edit, and produce datasets. Crowdsourced geospatial data production is typically an open, lightly-controlled process with few constraints, specifications, or quality assurance processes. This contrasts with the highly-controlled geospatial data production practices of national mapping agencies and businesses. Adoption of CGD and production methods has been a concern, especially to government organizations, due to quality concerns related to differences in production methods.
You might enjoy reading: "Frustrating hunt for Genghis Khan’s long-lost tomb just got a whole lot easier." [27]
After a period of initial skepticism, government agencies are now incorporating CGD. There are three main methods, which are:
An important emerging area for hybrid CGD projects is emergency management aided by volunteers and by CGD. One example is the organizations fighting Ebola in the three hardest-hit countries—Sierra Leone, Guinea, and Liberia—which need maps to help aid workers get around the country and do the difficult job of checking village by village for victims of the disease. The UN, Red Cross, and Doctors Without Borders have turned to OpenStreetMap (OSM) for their map data. OSM's crowdsourced mapping project brings together mappers on the ground using GPS devices with mapping capabilities.
Explore OpenStreetMap (OSM) [28]. OSM is a collaborative project to create a free, editable map of the world. Two major driving forces behind the establishment and growth of OSM have been restrictions on use or availability of map information across much of the world, and the advent of inexpensive portable satellite navigation devices.
Created by Steve Coast in the UK in 2004, it was inspired by the success of Wikipedia and the preponderance of proprietary map data in the UK and elsewhere. Since then, it has grown to over 1.6 million registered users, who can collect data using manual survey, GPS devices, aerial photography, and other free sources. This crowdsourced data is then made available under the Open Database License. The OpenStreetMap Foundation, a non-profit organization registered in England, supports the site.
Rather than the map itself, the data generated by the OpenStreetMap project is considered its primary output. This data is then available for use in both traditional applications, like its usage by Craigslist, Geocaching, MapQuest Open, JMP statistical software, and Foursquare to replace Google Maps, and more unusual roles, like replacing default data included with GPS receivers. This data has been favorably compared with proprietary data sources, though data quality varies worldwide.
See the West African Ebola Response. Mapping efforts in support of the relief operation are still ongoing, and new contributors are always welcome. Getting involved is simple: go to The HOT Tasking Manager [29] and check out an area to map from one of the Ebola-related tasks. Currently, tasks are open for Bo, Sierra Leone [30] and Panguma, Sierra Leone [31].
GEOINT data are collected both openly and covertly (secretly). Data collected covertly may be considered closed source. Some intelligence collection must remain secret because its revelation could jeopardize the individuals involved.
Closed source data is government or private data not available through open inquiry. In simple terms, it is closed source data if the data is not meant to be openly available to the public. Closed source data, because of its origin, is often considered more accurate and reliable.
Closed source data includes material that a government denotes "classified" in order to restrict public access so as to protect confidentiality, integrity, or availability. Access to government "classified data" is typically restricted by law to particular trusted individuals. An unauthorized disclosure can result in administrative or criminal penalties. A formal security clearance is often required to handle or access classified data. Classified data are typically marked with a level of sensitivity - e.g. restricted, confidential, secret, or top secret.
Closed source typically includes material such as proprietary business information, law enforcement data, educational records, banking records, and medical records. In general, closed source data are obtained or derived from sources that:
Proprietary describes the level of confidentiality given by the owner. The term proprietary information is often used interchangeably with the term trade secret. Generally, data termed by the owner as "proprietary" limits who can view it or know about its contents. Examples of proprietary information include:
There is no standard used by businesses for determining what is proprietary. The nature of what is held as proprietary varies by industry and individual business practice. If the information does not seem to be readily available, it may be considered proprietary and not for public use.
In the United States, the US Economic Espionage Act of 1996 addresses industrial espionage or commercial spying. This act imposes severe penalties for stealing trade secrets. An owner of a trade secret seeking to protect proprietary information must derive an economic value from not having this information publicly known and must take steps to maintain the secrecy of the information for any legal recourse if sensitive information is disclosed. Businesses often require employees to sign non-competition and proprietary information agreements that restrict the information employees can disclose during their employment or after leaving the company. Many businesses limit employee access to computer files, maintain secure areas where sensitive information is stored, and control visitor access.
Many countries deem the records they keep on an individual to be private. In some countries it might be a criminal offense to ask for or disclose any personal information to an unauthorized person.
Tomnod [32] is an example of a crowdsourcing platform that is used to extract GEOINT data from satellite imagery. DigitalGlobe’s constellation of satellites captures high-resolution images of an area the size of India every day. Tomnod expedites the analysis of this huge amount of satellite imagery by engaging a large public crowd. For example, millions of people took part in a Tomnod campaign to search for signs of the missing Malaysian Airlines flight MH370 [33].
Tomnod responds to global events like natural disasters, security incidents, search and rescue missions, or mapping remote populations. Tomnod loads current satellite images onto their website where they can be explored by a global community of volunteer image taggers. Each volunteer scans a subset of the imagery, pixel by pixel, and places a “tag” over objects of interest. Within hours, the crowd can cover thousands of square kilometers many times over. By gathering multiple, independent views of every location, crowdsourced consensus begins to emerge. Even though most individuals are novice imagery interpreters, the “wisdom of the crowd” converges on the locations that are most important. Using a statistical geospatial algorithm, this consensus can be extracted and used to focus the efforts of expert analysts (known as “tipping and cueing”) or provided to responders on the ground.
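Tomnod's actual geospatial algorithm is not published, so the sketch below is only a stand-in for the consensus idea: treat each volunteer tag as a vote on a grid cell and keep cells where independent volunteers agree some minimum number of times. The tags and threshold are invented for illustration:

```python
from collections import Counter

# Hypothetical volunteer tags: (volunteer_id, grid_cell, label).
tags = [
    ("v1", (10, 42), "damaged building"),
    ("v2", (10, 42), "damaged building"),
    ("v3", (10, 42), "shadow"),
    ("v1", (11, 42), "damaged building"),
]

def consensus(tags, min_votes=2):
    """Return grid cells where a label received at least `min_votes`
    independent tags - a simple majority-style stand-in for the
    statistical consensus described in the text."""
    votes = Counter((cell, label) for _, cell, label in tags)
    return {cell: label
            for (cell, label), n in votes.items() if n >= min_votes}

hits = consensus(tags)  # {(10, 42): 'damaged building'}
```

Cells passing the threshold would then be forwarded to expert analysts (the "tipping and cueing" step), while isolated single tags are discarded as likely noise.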
See how Tomnod helped to track the damage from the Boles Fire near Weed, California [35]. The Boles Fire started on September 15, 2014, southeast of Weed, California. The wind-driven fire quickly moved into town, destroying 100 homes and forcing 1,500 people to evacuate. DigitalGlobe's satellites collect imagery in the infrared spectrum in addition to the red/green/blue colors visible to the human eye. In this Tomnod campaign, the infrared imagery is "false-colored," which highlights the burned areas in black while areas of healthy vegetation appear red.
Use the following link to access the Tomnod Boles Fire near Weed, California site [36].
For this week's discussion I want to focus on Crowdsourced Geospatial Data (CGD). Here are a few prompts for this week's discussion:
I am always skeptical of crowdsourced data or, indeed, any data. As a geographer and remote sensor whose focus is enumerating displaced populations, I have to be. Skepticism is part of my job. All data contain error, so best to acknowledge it and decide what that error means. There is still a lot of uncertainty around these types of volunteered geographic information [37]; specifically, questions over the positional accuracy, precision, and validity of these data among a wide variety of other issues [38]. These quantitative issues are important because the general assumption is that these data will be operationalized somehow and it is, therefore, [39] imperative that they add value to already confusing situations if this enterprise is to be taken seriously in an operational sense [40]. The good news is that research so far shows that these “asserted” data are not – a priori – necessarily any worse than “authoritative” data and can be quite good due to the greater number of individuals to correct error [41].
Source: http://blog.standbytaskforce.com/tag/tomnod/ [42].
Head over to the dedicated discussion forum [43] to talk about these issues.
This lesson introduced high-level GEOINT data source and collection strategies. The major focus of this lesson is that GEOINT source and collection strategies are about getting enough information to answer an intelligence question. We explored how GEOINT data is collected from a variety of sources using various collection strategies, none of which at a conceptual level are unique to GEOINT. Sources are the means or systems used to observe and record data. The collection strategy is an overarching approach to data collection. Discontinuous collection is defined as not collecting a full record of activity during a time period. Persistent collection was introduced as a strategy that emphasizes the ability to linger for a period of time to detect, locate, characterize, identify, track, target, and possibly provide near- or real-time data. We explored how GEOINT data can be collected openly and covertly. Data collected openly and maintained transparently is considered open source. Data collected covertly and maintained in secret is considered closed source. We discussed how closed source data is not unique to GEOINT.
Don't forget to complete the Lesson 3 Quiz [44]!
References
Department of Defense dictionary of military and associated terms: Joint publication 1-02. (2002). Washington, D.C.: Joint Chiefs of Staff.
Priorities for GEOINT research at the National Geospatial-Intelligence Agency. (2006). Washington, D.C.: National Academies Press.
Links
[1] http://www.gpo.gov/fdsys/pkg/GPO-WMD/pdf/GPO-WMD.pdf
[2] http://www.dtic.mil/doctrine/new_pubs/jp1_02.pdf
[3] https://www.e-education.psu.edu/geog480/
[4] http://creativecommons.org/licenses/by-nc-sa/3.0/us/
[5] http://www.wpafb.af.mil/shared/media/document/AFD-080820-005.pdf
[6] http://www.merriam-webster.com/dictionary/spaceborne
[7] http://www.nro.gov
[8] https://www.digitalglobe.com
[9] http://www.merriam-webster.com/dictionary/airborne
[10] http://www.merriam-webster.com/dictionary/terrestrial
[11] http://en.wikipedia.org/wiki/WorldView-3
[12] http://www.asprs.org/10-Year-Industry-Forecast/Ten-Year-Industry-Forecast.html
[13] http://support.esri.com/en/knowledgebase/GISDictionary/term/off-nadir
[14] http://www.classzone.com/books/earth_science/terc/content/investigations/esu101/esu101page03.cfm
[15] http://en.wikipedia.org/wiki/Geostationary_orbit
[16] http://en.wikipedia.org/wiki/Sun-synchronous_orbit
[17] https://lta.cr.usgs.gov/NAPP
[18] http://www.fsa.usda.gov/FSA/apfoapp?area=home&subject=prog&topic=nai
[19] http://en.wikipedia.org/wiki/View-master
[20] http://en.wikipedia.org/wiki/Parallax
[21] http://www.bing.com/community/site_blogs/b/maps/archive/2011/06/27/bing-maps-unveils-exclusive-high-res-imagery-with-global-ortho-project.aspx
[22] http://en.wikipedia.org/wiki/Georeference
[23] https://www.e-education.psu.edu/geog597g
[24] http://open.ems.psu.edu/
[25] http://www.navalaviationmuseum.org/
[26] http://news.it.psu.edu/article/penn-state-crop-educator-explores-drone-driven-crop-management
[27] http://www.washingtonpost.com/news/morning-mix/wp/2015/01/08/the-frustrating-hunt-for-genghis-kahns-long-lost-tomb-just-got-a-whole-lot-easier/
[28] http://www.openstreetmap.org/
[29] http://tasks.hotosm.org/
[30] http://tasks.hotosm.org/project/605
[31] http://tasks.hotosm.org/project/586
[32] http://www.tomnod.com/
[33] http://www.theguardian.com/world/2014/mar/14/tomnod-online-search-malaysian-airlines-flight-mh370
[34] https://www.e-education.psu.edu/emsc100s/sites/www.e-education.psu.edu.emsc100s/files/images_textversions/GeoIntMOOCL3_LD.html
[35] http://chimes.biola.edu/story/2014/sep/28/boles-fire-devastates-community/
[36] http://www.tomnod.com/campaign/bolesfireca2014/map/2f2xfyo
[37] http://www.esri.com/news/arcwatch/1208/goodchild-talks.html
[38] http://www.springerlink.com/content/2414838775l810tr/
[39] http://mobileactive.org/how-useful-humanitarian-crowdsourcing
[40] http://www.crowdsourcing.org/document/if-all-you-have-is-a-hammer---how-useful-is-humanitarian-crowdsourcing/3533
[41] http://povesham.wordpress.com/2011/01/10/how-many-volunteers-does-it-take-to-map-an-area-well-the-validity-of-linus-law-to-volunteered-geographic-information/
[42] http://blog.standbytaskforce.com/tag/tomnod/
[43] https://class.coursera.org/geoint-001/forum
[44] https://class.coursera.org/geoint-001/quiz