Welcome to Lesson 1 of Geospatial Intelligence & the Geospatial Revolution!
We've identified five principles of Geospatial Intelligence that will be addressed throughout this course. These principles are:
This lesson introduces Geospatial Intelligence (GEOINT) and the broad types of problems it can help you solve. I will use the term GEOINT as an abbreviation of geospatial intelligence throughout most of this course. GEOINT is defined by the intersection of the intelligence tradecraft, Geographic Information Science (GIScience), and Geographic Information Technology (GIT). As a discipline, GEOINT provides decision makers with insights about the ways in which people organize and arrange their activities on Earth's surface. These insights can prevent surprises, leverage emerging opportunities, counteract threats, or provide time to adapt to a changing situation. GEOINT is not just about defense applications; it provides decision makers with actionable knowledge in business, humanitarian efforts, and law enforcement. We will also explore the ethical questions that arise because Geographic Information Technologies are inherently surveillance technologies.
Throughout this lesson, we will demonstrate the concepts of GEOINT. By the end of this lesson, you will be able to:
Before we begin to explore GEOINT, let's examine the Geospatial Revolution by watching Episode One of the Geospatial Revolution (14 minutes) video series.
If you are unable to access the video above, try going directly to the Geospatial Revolution site [1]
As the video illustrates, in a few short years we have gone from mountains of hardcopy maps to amazing automated systems that provide insights about human use of geography. This class is not about the Geospatial Revolution per se, but about how the revolution changed the way we think about, develop, and use GEOINT. Seamless layers of satellites, surveillance, and location-based technologies create a knowledge base vital to the interconnected global community. The Geospatial Revolution video describes the history, applications, related privacy issues, and impact of location-based technologies that have transformed geospatial intelligence.
Let's explore what GEOINT is in the following video lecture (9 minutes).
Let's create two ad hoc maps by dropping pin markers to designate the following:
For the first part of the assignment, you are asked to drop a pin marker on the location of your home using ArcGIS Online. After you add your pin, you will see a popup box with three fields to populate with your age, gender, and why you are taking this course (Why GEOINT). Click on the map below to launch ArcGIS Online. The instructions are contained on the right side of the map. You can change the basemap to an imagery background in the "Basemap Gallery" to help accurately locate your house before you drop the pin.
For the next part of the assignment, I am going to ask you to anticipate something based on your experience, knowledge, or just a gut feeling. Think carefully about this. Below the map, I have provided some guidelines you should consider before placing your pin on the map. When you are ready, click on the map below to launch ArcGIS Online and drop a pin marker on the location where you predict the next international disaster will occur. A pop-up will appear for you to select the country you live in. I will refer back to this data in Lesson 4.
Your anticipated disaster must fulfill one or more of the following criteria:
The cause of the anticipated disaster should fall into one of the following subgroups:
Disaster Subgroup | Definition | Disaster Main Type |
---|---|---|
Geophysical [5] | Events originating from solid earth | Earthquake, Volcano, Mass Movement (dry) |
Meteorological [6] | Events caused by short-lived/small to meso scale atmospheric processes (in the spectrum from minutes to days) | Storm |
Hydrological [7] | Events caused by deviations in the normal water cycle and/or overflow of bodies of water caused by wind set-up | Flood, Mass Movement (wet) |
Climatological [8] | Events caused by long-lived/meso to macro scale processes (in the spectrum from intra-seasonal to multi-decadal climate variability) | Extreme Temperature, Drought, Wildfire |
Biological [9] | Disaster caused by the exposure of living organisms to germs and toxic substances | Epidemic, Insect infestation, Animal Stampede |
You might wish to view the Emergency Events Database (EM-DAT). EM-DAT was created with the initial support of the World Health Organization and the Belgian Government. The main objective of the database is to serve the purposes of humanitarian action at national and international levels. Use the following link to access the Emergency Events Database [10] directly.
Let's briefly discuss a few technical notes to ensure your success using ArcGIS Online in our course:
Here are a few resources that might be helpful:
The ArcGIS Online capabilities used here were developed by the absolutely amazing Joseph Kerski [14], Geographer and Esri Education Manager.
The United States shaped the geospatial intelligence [15] discipline through the creation of its intelligence organizations. The NIMA Act of 1996 [16], which established the National Imagery and Mapping Agency [17], and the subsequently amended language in the 2003 Defense Authorization Act [18] codified the mission of the National Geospatial-Intelligence Agency (NGA) [19]. This resulted in the integration of multiple sources of information, intelligence, and tradecrafts into NIMA, which subsequently became NGA. Then-Director James Clapper [20] (2001-2006) designated this discipline Geospatial Intelligence, or GEOINT [21]. It is telling that Director Clapper used a hyphen when the National Geospatial-Intelligence Agency was formed: what might have been the "NGIA" is referred to today as the NGA, giving it seeming parity with its three-letter counterparts, the FBI [22], CIA [23], DIA [24], NSA [25], and NRO [26]. The naming appears to have been a public policy decision by agency officials, so it is not unreasonable to suggest that politics played a role not only in the name, but also in the vocation and discipline that emerged as Geospatial Intelligence, or, as abbreviated, GEOINT. The following quote from a former three-decade agency employee illustrates the nature of the change:
I was at work the day he [Director Clapper] declared we are NGA, not NIMA. We all looked at each other and said "now what the hell is he doing?" Clapper was, without a doubt, the most prolific reorganizer I ever worked under, and we figured it was just an excuse to reorganize yet again. Everyone ran all around, including my boss, waving their arms "what does geospatial mean? ... As we found out later, it was really the beginning of the true integration of maps, charts, geodesy, imagery and intelligence that has evolved into what NGA is today.
Significantly, the terms Geospatial Intelligence and GEOINT have become internationally recognized, as exemplified by their use in events such as the United States Geospatial Intelligence Foundation's (USGIF) [27] GEOINT Symposium, which has roots back to 2003. Why do I mention this history?
What is Geospatial Intelligence, or GEOINT? Let's begin examining this question with a video (NGA Power of GEOINT, 6:20 [28]) produced by the US National Geospatial-Intelligence Agency (NGA). [29]
So, what is unique about this discipline that distinguishes it from other branches of geography and geographic analysis? Let's begin by defining the individual terms Geospatial and Intelligence.
Let me expand upon the word geospatial. While the words geospatial, geographic, and spatial are often used interchangeably to mean similar things, the reasoning behind the linguistic blend forming geospatial is that spatial alone is too generic and geographic is too related to the particular discipline of geographic intelligence [30]—one of the oldest forms of military intelligence.
Let's also look at the word intelligence. Intelligence provides a “decision advantage” intended to prevent surprise, capitalize on emerging opportunities, neutralize threats, or provide time to adapt to a changing situation. To the average person, intelligence is often associated with secrets and spying. However, according to noted author, academic, and experienced national security expert, Mark Lowenthal [31], viewing intelligence as primarily secrets misses the important point that intelligence is ultimately information that meets the needs of a decision maker. Intelligence is information that has been collected, processed, narrowed, and offered to meet those needs, which could apply to any domain. Lowenthal also asserts a central notion that intelligence is not about truth, but is more accurately thought of as a “proximate reality.” Intelligence analysts do their best to arrive at an accurate approximation, but they are rarely certain that their best analytic insights are true when completing an analysis. So, semantically, GEOINT provides insights to a decision maker about how humans relate to the Earth.
The official United States Government definition of Geospatial Intelligence is stated in U.S. Code Title 10, §467 [32] as:
"The term 'geospatial intelligence' means the exploitation and analysis of imagery and geospatial information to describe, assess, and visually depict physical features and geographically referenced activities on the earth. Geospatial intelligence consists of imagery, imagery intelligence, and geospatial information."
It is not unreasonable to say that this definition, written in 2003, was crafted to inform how NGA operated with the other US intelligence agencies, but was not intended as the functional definition of the discipline. I might note that there is no hyphen between the words geospatial and intelligence.
GEOINT is the commonly used acronym formed from the initial components of the words "Geospatial" and "Intelligence." It has been said that the 2003 renaming of NIMA to NGA recognized the emergence of geospatial information as an intelligence source [33] in its own right. In the US, the term GEOINT is used to connote a source of intelligence like HUMINT [34] or SIGINT [35]. The national differences notwithstanding, GEOINT is different from the other sources (e.g., SIGINT) and is inherently multi-source. This is to say, it integrates and enriches information collected by the other INTs into a space-time context.
The narrow definition of GEOINT in the U.S. Code does the discipline a disservice, because geospatial intelligence is broader than the statute allows. While Geospatial Intelligence is essential in the traditional intelligence areas of deterring war, resolving conflict, and promoting peace, GEOINT is also important in other areas, such as supporting the commercial sector and civil authorities. An emerging definition of Geospatial Intelligence, vastly different from the U.S. statutory definition, was offered by Bacastow and Bellafiore in 2009. The emerging definition is:
Geospatial intelligence is actionable knowledge, a process, and a profession. It is the ability to describe, understand, and interpret so as to anticipate the human impact of an event or action within a place-time environment. It is also the ability to identify, collect, store, and manipulate data to create geospatial knowledge through critical thinking, geospatial reasoning, and analytical techniques. Finally, it is the ability to present knowledge in a way that is appropriate to the decision-making environment.
In the above definition, I want to highlight that GEOINT's process includes Geographic Information Science (GIScience) and Geographic Information Technologies (GIT) when creating geospatial knowledge. Recall that GIScience and GIT are also referred to collectively as GIS&T [36]. I want to be absolutely clear that GIScience and GIT include the US Title 10 elements of imagery and imagery analysis. In fact, GIScience and GIT (collectively GIS&T) are promoting an integration of the science and technologies associated with remote sensing, geodesy, surveying, photogrammetry, image processing, and many more disciplines including, but not limited to, cartography and the cognitive sciences.
Central to this emerging definition is the notion that the focus of GEOINT is understanding how people organize and arrange their activities on Earth's surface. For all the emphasis in GEOINT on data and technologies, people remain GEOINT's most precious analytic resource. The analyst describes, understands, and interprets so as to anticipate the human impact of an event or action. The analyst's work involves the analysis of place [37] and time using geographic information sciences (GIScience [38]), geographic technologies (e.g. satellites [39], UAS [40], UAV [41], GPS [42], GIS [43], and cyber [44]) and a unique tradecraft [45].
Please view Chapter One of Episode Three in the WPSU Geospatial Revolution [1] series. This video examines how geospatial and location-based technologies influence warfare, diplomacy, police protection and privacy issues.
If you are unable to access the video above, try going directly to the Geospatial Revolution site [1]. If YouTube is blocked in your country, you may be able to see this video on Vimeo [46].
Optionally, please feel free to view the following videos that illustrate how GEOINT's principles apply to deterring war, resolving conflict, promoting peace, defining business competition, or supporting civil authorities.
Rob Williams, Penn State, developed the following video as part of his Capstone for the GEOINT Certificate. It involves a business problem that has had his personal and professional interest over the years.
See how many of GEOINT's technologies and principles apply in other realms. Louis Liebenberg, a scientist turned tracker, is revolutionizing conservation and wildlife management using GEOINT-like technologies and techniques. His efforts are documented in The CyberTracker Story [48] video below (8:18).
Do you have a question related to the above examples and not directly associated with this lesson? Do you just want to express your thoughts about some aspect of GEOINT? A team of GEOINT practitioners will address your posts made to the GeoINTsights Forum [49]!
At this point you may be asking, "How is GEOINT different as a discipline and unique from other geospatial analytic activities?" GEOINT is a sub-discipline of geography and is unique from other forms of geospatial analysis because of its tradecraft. Specifically, GEOINT delivers insights gained from place and time for a decision advantage by integrating Geographic Information Science (GIScience), Geographic Information Technology (GIT) [38], and its unique tradecraft. Let's examine the highlighted terms:
Place is fundamental to geography and perhaps the most important concept in GEOINT. At first glance, location and place might seem to be similar terms; however, places have physical and human attributes that make them what they are. Physical attributes may include a description of such things as the mountains, rivers, beaches, and topography of a place. Human characteristics may include the human-designed cultural features of a place, from land use and architecture, to forms of livelihood and religion, to food and folkways, to transportation and communication networks. Place emphasizes the understanding of both of these factors and their integration. To illustrate this, you might explore the TED Talk by Daniele Quercia that explains "happy maps [50]," which take into account not only the route you want to take, but also how you want to feel along the way. The following is from an article published by Dr. George Van Otten in the April-June 2014 Issue of the Military Intelligence Professional Bulletin [51].
Places have unique attributes and characteristics that give them identifiable personalities. Therefore, they are constantly evolving in response to a multiplicity of environmental and human influences. Because places are in a constant state of change, sometimes for the better and sometimes for the worse, they greatly influence the lives of those who live in or near them. Places are the settings in which people experience life, develop relationships, and form their own unique identities. Human personalities do not form in a vacuum. Instead, they are a product of biology, family structure, and the nature of the places in which individuals live.
In addition to the concrete or absolute space in which places are situated, there are also places that are products of human hearts and minds. In fact, it is common to hear elderly people describe the places where they lived, loved, and worked many years ago, even though these places (as they once were) are no longer part of the modern landscape. Moreover, people are often emotionally tied to specific places. Consider the emotional attachment of most Americans to the site of the World Trade Center in New York City, or the powerful emotional and cultural symbolism associated with the Vietnam War Memorial in Washington, D.C.
Places do not usually elicit the same reactions from everyone. While some people think of bucolic settings as ideal, others see them as primitive and uncivilized. Conversely, many see great beauty in the skylines of great cities, while others consider them to be blights on the landscape.
In some places, people are resilient and open to change, while in others, people are tradition-bound and resist even the mildest cultural, social, economic or political changes. Moreover, such resistance sometimes results in conflict, violence, and even war. Understanding the degree to which a people are committed to preserving the status quo relative to the places in which they live should be a fundamental part of the IPB process [52], because conflict (even low intensity conflict), nation building efforts, and peace-keeping missions all involve changing the nature of the places in which they occur.
Although places are unique expressions of human occupancy in time and space, most are also interdependent. This is because each place tends to fill a specialized role relative to the greater region in which it exists. Some places focus on primary sector activities including agriculture, mining, logging, and commercial fishing, while others serve as centers of commerce, processing, government, and/or manufacturing. All of these places must regularly interact with each other to survive. To fully understand the character of a given place, geographers must be able to identify these interdependencies, while at the same time keeping in mind the distinctive qualities that give specific places their unique personalities. In order to fully comprehend the long-range implications of operational plans in a contested area, professional military analysts must understand the importance of interdependencies between communities, nations, and regions. History is replete with examples of military actions that have resulted in far-reaching unintended consequences. For example, while closing a major regional transportation artery might bring short-term positive results to a specific place, it may also create great regional chaos for allies who depend on that network for vital shipments of food, fiber, and energy.
Places are building blocks of analysis in GEOINT, keys to making sense of the landscape, stages for events, and testimonial to the fact that humans require space to live, work, play, and flourish. People create distinctive places according to their knowledge, technology, and needs. Places are involved in important decisions—personal, corporate, and governmental. Places exemplify the principle events of history. Understanding a place's history, variety, and complexity, and how that place may have shaped a human's life and experiences, are keys to cultural understanding.
Ultimately, in GEOINT, we are concerned with understanding why places, and the people in those places, are located where they are and how they interact. To answer this question, we must be comfortable with the underlying concepts and theories of the spatial distribution of a particular phenomenon. The spatial distribution can be a human phenomenon, such as population or religion. It can also be some relationship between society and nature, such as potential deaths from natural disasters, which reflect both topography and the socioeconomic processes that allocate particular kinds of people to particular places and construction types.
In general terms, geospatial analysis and GEOINT can be considered the formal quantitative study of spatial linkages that manifest themselves in space. Spatial linkages are explained using theories and observations such as the First Law of Geography [53]. According to Waldo Tobler [54], "Everything is related to everything else, but near things are more related than distant things." This law is related to the law of demand, in that interactions between places are inversely proportional to the cost of travel, much as the probability of purchasing a good is inversely proportional to its cost. Within the GEOINT community, Tobler's First Law of Geography is regarded as fundamental and a key to analytic insights.
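Tobler's distance-decay idea is often formalized as a gravity model, in which the interaction between two places is proportional to the product of their sizes divided by a power of the distance between them. The following sketch is purely illustrative; the place names, populations, coordinates, and decay exponent are invented for the example.

```python
import math

# Hypothetical places: (name, population, x, y), coordinates in arbitrary units.
places = [
    ("A", 500_000, 0.0, 0.0),
    ("B", 200_000, 3.0, 4.0),    # 5 units from A
    ("C", 200_000, 30.0, 40.0),  # 50 units from A
]

def gravity_interaction(p1, p2, beta=2.0):
    """Gravity-model score: interaction falls off with distance**beta."""
    _, pop1, x1, y1 = p1
    _, pop2, x2, y2 = p2
    d = math.hypot(x2 - x1, y2 - y1)
    return (pop1 * pop2) / d**beta

# Near things are more related than distant things: B and C have the
# same population, but B is 10x closer to A, so with beta=2 the A-B
# interaction score is 100x the A-C score.
ab = gravity_interaction(places[0], places[1])
ac = gravity_interaction(places[0], places[2])
print(ab / ac)  # -> 100.0
```

The exponent `beta` controls how sharply interaction decays with distance; calibrating it against observed flows (trade, migration, traffic) is itself an analytic task.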
Geography, and hence GEOINT, approaches the explanation of spatial phenomena through change in geographic features over time. For example, before May of 2011, an estimated 300 Syrian refugees had crossed the Turkish border to avoid war; by 2014, the number of Syrian refugees in Turkey had grown to 400,000. Space and time are inexorably linked. Over forty years ago, a Swedish geographer, Torsten Hägerstrand [55], illustrated how human spatial activity is often governed by time limitations. He identified three categories of limitations: capability constraints, coupling constraints, and authority constraints.
Hägerstrand's concept of space-time is powerful. Hägerstrand's space-time model [56] provides a framework for understanding human activity and provides a theoretical foundation for intelligence concepts such as Activity-Based Intelligence (ABI) [57].
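Hägerstrand's capability constraint can be made concrete with a simple space-time feasibility check: given a time budget and a maximum travel speed, a location is only reachable if the round trip (plus any time spent there) fits within the budget. This is a minimal sketch of that idea; the locations, speed, and time values are hypothetical.

```python
import math

def reachable(origin, target, speed, time_budget, dwell=0.0):
    """Capability constraint: can we travel origin -> target -> origin,
    spend `dwell` hours at the target, and still fit the time budget?"""
    d = math.hypot(target[0] - origin[0], target[1] - origin[1])
    round_trip_time = 2 * d / speed
    return round_trip_time + dwell <= time_budget

home = (0.0, 0.0)
cafe = (12.0, 16.0)  # 20 km from home

# With 2 hours free and a travel speed of 30 km/h, the cafe is
# reachable for a quick stop, but not if we want to stay an hour.
print(reachable(home, cafe, speed=30.0, time_budget=2.0))             # -> True
print(reachable(home, cafe, speed=30.0, time_budget=2.0, dwell=1.0))  # -> False
```

The set of all points passing this test traces out Hägerstrand's space-time prism, the same construct that underlies ABI questions such as "could this actor have been at both locations in the available time?"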
Please recall that geographic information science and geographic information technology are referred to collectively as GIS&T. I will address each of the concepts separately here.
Geographic information science (GIScience or GISci) includes scientific research areas of geographic information systems (GIS), cartography, remote sensing, photogrammetry, and surveying. GIScience addresses fundamental issues in the use of digital technology to handle geographic information about places, activities, and phenomena on and near the surface of the Earth that are stored in maps or images. GIScience includes fundamental questions of data structures, analysis, accuracy, meaning, cognition, and visualization to name a few. As such, GIScience overlaps with many traditional disciplines such as earth science, mathematics, computer science, physics, cognitive science, and ethics. While GIScience is not central to any of these disciplines, researchers from many different backgrounds work together on particular sets of interrelated problems.
I've gotten the important question: why are imagery and imagery analysis not included in this list as separate items, since remote sensing, remote sensing technologies, and image analysis are important to capturing, creating, storing, managing, querying, displaying, and analyzing petabytes of accurate, reliable, timely, and relevant GEOINT data? I suggest that remote sensing and imagery analysis are a key and growing part of GIScience and Geographic Information Technologies. What are your thoughts?
The term "geographic information systems" (GIS) has been used to describe the hardware, software, geographic data, and people that enable geospatial information processes for making decisions. More recently, GIS, together with global positioning systems (GPS), remote sensing techniques, and other spatially related tools for decision making, comprise a larger array of tools that can be grouped together under the more complete term of "geographic information technologies" (GIT).
Tradecraft is what makes GEOINT unique and is addressed in greater detail in Lesson 4. We define the GEOINT tradecraft as encompassing the unique organizational sources and methods [58] for obtaining GEOINT data and making sense of it to support the decision maker. GEOINT's methods comprise the tools to organize geospatial data and the reasoning techniques for rendering judgments, insights, and forecasts about human activities and intentions. The skills of detecting geospatial deception [59] and, where required, maintaining secrecy of geospatial sources and methods makes GEOINT unique. I fully appreciate that the word secrecy is provocative. Secrecy, which is often contrasted with transparency as an ideal, has negative connotations and is often associated with spying [60] and espionage [61]. However, in a real way, total transparency is unnatural and seldom occurs. Humans conceal aspects of their lives from others due to fear of inappropriate use of the information, embarrassment, retribution, denunciation, harassment, or loss of employment.
There are competing views on whether GEOINT is an art or a science. This controversy mirrors a long-standing debate in the intelligence community over whether intelligence analysis is based on subjective, intuitive judgment (an art) or on systematic analytic methods (a science). One school holds that intuition, experience, and subjective judgment dominate; here, GEOINT is an art. Another school holds that structured data [62] and computers are most relevant; here, GEOINT is science-like. While this controversy is somewhat academic, it does affect which tasks can appropriately be automated and how the workforce is trained and educated. To help understand these points of view, I will define the terms using the Merriam-Webster Collegiate Dictionary, tenth edition, as:
GEOINT has the qualities of both an art and a science, with no certain dividing line between the two. GEOINT's automated processes are not rich enough in underlying geographical knowledge to accurately reflect reality. That is to say, for something to be automatable, how it works must be described (modeled) and the facts (data) quantified. Since a model is a simplified view of reality, it represents a limited set of rules that allows analysts to work out an answer if they have certain information. Quantifiability of the information is important because unquantifiable inputs (data) cannot be tested, and thus unquantifiable results can neither be duplicated nor contradicted. The table below categorizes the broad types of GEOINT problem-solving processes.
 | Model Certainty: Low | Model Certainty: High |
---|---|---|
Data Certainty: High | Model Building | Puzzle Solving |
Data Certainty: Low | Mystery Solving | Data Foraging |
Puzzle Solving (upper right quadrant) describes analysis in which there is good knowledge of both the data and the models surrounding an output. This is, perhaps, the ideal: analysts understand the problem and can take the data into account. Fixed-in-advance standard procedures typically play an important role in such geospatial analysis. However, many analytic tasks in geospatial intelligence fall outside this quadrant. Consider the Data Foraging (lower right) quadrant, in which there is agreement on models but a lack of certain data. "Foraging" for the data plays an important role in problem solving.
Model Building (upper left quadrant) is the opposite. In this analytic environment, there is seemingly sufficient data, but disagreement on the model. Analysis is characterized by analysts struggling for their model to prevail, with decisions emerging from that struggle through bargaining, accommodation, and consensus, as well as controversy. GEOINT as Mystery Solving (lower left quadrant) is the most open to debate. Under these conditions, science and technology tools have significantly less relevance, and, as a consequence, the analytic process is experience-based. In the end, this is the framing of complex questions, often called wicked problems [63]. Such problems can only be framed, not solved, and thus the logic of argument and analysis is important.
Is GEOINT an art or a science? GEOINT's problems fall into any of the four quadrants; you, the individual doing the analysis, need to understand the problem and the nature of the problem-solving process.
To some, the notion of GEOINT ethics is an oxymoron, especially in light of the secrecy associated with intelligence work. There is a love-hate relationship with intelligence agencies and what they do. We celebrate the intelligence work of Bletchley Park [64] and fictional characters like James Bond, but, as James Olson points out, people are more likely to believe intelligence agencies pervert national values rather than protect them (Olson, 2006) and want nothing to do with the profession. Unfortunately, this sentiment misses an important point: GEOINT is here to stay and, moreover, is exploding in the largely unregulated business community. Given GEOINT's largely unseen growth within the commercial sector, the profession needs a code of ethics more than ever. If for no other reason, a code of ethics is important to GEOINT because it recognizes an obligation of the profession to society that transcends self-interest.
Consider that we share our location via our phones, Twitter, and photos that are taken with our cameras. While these geospatial traces are generally benign, there is concern especially with respect to government collection and use of this geospatial trace information. Interestingly, commercial collection, use, and distribution (resale) of these personal geospatial traces likely exceeds that of government. The World Privacy Forum, a research and advocacy organization, estimates that there are about 4,000 data brokers. These companies collect and resell data from sources including consumer health websites, lenders, online surveys, warranty registrations, Internet sweepstakes, loyalty card data from retailers, charities’ donor lists, magazine subscription lists, and information from public records (The New York Times Op-Ed: "The Dark Market for Personal Data [65]"). Some countries are exploring measures to protect against location technology abuses.
Jeremy Crampton (2003), a well-recognized academic in the realm of ethics and geography, noted that an "increasingly significant component of security discourse comes from a spatial or geographic standpoint" and that the "issue of security is often contrasted against issues of privacy or civil rights." This is because technologies such as the Global Positioning System (GPS), satellite remote sensing, and geographic information systems (GIS) are inherently surveillance technologies. The data they produce may be used to invade the privacy of individuals and groups. Data gathered using geospatial technologies are used to make policy and personal decisions. Erroneous, inadequately documented, or inappropriate data can have grave consequences for individuals and society. Geospatial technologies have the potential to exacerbate inequities in society, insofar as large organizations enjoy greater access to technology, data, and technological expertise than smaller organizations and individuals (Ethics Education for Geospatial Professionals - About [66]).
In the following video, Donald R. Shemanski [67] of Penn State's College of Information Sciences and Technology (IST) addresses the growing concern with respect to US law.
Ethics involves systematizing, defending, and recommending concepts of right and wrong conduct. In everyday life we follow familiar moral rules: be honest, don't steal, don't harm others. But GEOINT often involves activities in gray areas of moral thought. The intelligence professional has long faced the moral dilemma of balancing national security interests against other societal virtues. An example is the case study Mapping Muslim Neighborhoods [68]. Today, GEOINT adds the moral dilemma of balancing privacy against the convenience of features such as "Location Services" on your smartphone.
The Society of Competitive Intelligence Professionals (SCIP) has one of the simplest codes of ethics. The Illinois Institute of Technology maintains a collection of ethical codes from many sources at its Codes of Ethics website [69]. The SCIP Code of Ethics is:
Many governments have special laws for their spies that grant them immunity from laws that apply to ordinary citizens. Some of those special laws can be found in the public domain, but many are secret.
Michael Davis (Davis, Michael (1999) Ethics and the University, New York: Routledge, p. 166-167) proposes a seven-step guide to help think through some complex ethical case studies. A key feature of Davis' approach is his emphasis on identifying multiple (more than two) options for responding to ethical challenges. Another is the series of tests presented in Step 5.
Seven-step guide to ethical decision-making (Davis 1999)
In his book Profession, Code, and Ethics (2002) Davis makes the point that people who claim to be members of a profession have special ethical obligations beyond what the law and ordinary morality require. Those special obligations are stated in the codes of ethics that many professions, firms, and government agencies publish. The intelligence professional is expected to have an understanding of, and experience in thinking about, moral and ethical problems. It may well be that the most significant aspect of the preparation of the intelligence professional is the informed judgment that enables them to make discriminating moral choices.
For this week's discussion, I want to focus on ethics and specifically on the "pin" of your home that I asked for. Before you head over to the dedicated Lesson 1 Discussion Forum [49] to talk about the issue, here are a few prompts for this week's discussion:
You might then consider that The Society of Competitive Intelligence Professionals' (SCIP) Code of Ethics is:
Decisions about making or keeping geospatial information private might consider the following:
Use such tests as the following:
Participation in discussions is worth 10% of your overall grade in the class. To earn the full 10%, you need to make a total of 10 original posts or comments by the end of the class. If I were you, I'd shoot for posting twice each week. Read the Grading Policy [70] page for more details.
This lesson addressed the principle that GEOINT concerns insights about human activity. GEOINT is not just about defense applications; it provides decision makers with actionable knowledge in business, humanitarian efforts, and law enforcement. I discussed GEOINT as an intelligence discipline and the broad types of problems it can solve. We explored the moral dilemma of balancing geospatial privacy against the convenience of features such as "Location Services." GEOINT has been defined by the intersection of the intelligence tradecraft, Geographic Science (GIScience), and Geographic Information Technology (GIT). As a discipline, GEOINT provides decision makers with insights about the ways in which people organize and arrange their activities on Earth's surface. These insights can prevent surprises, leverage emerging opportunities, counteract threats, or provide time to adapt to a changing situation.
Bibliography
Bacastow, T. S., & Bellafiore, D. J. (2009). "Redefining geospatial intelligence." American Intelligence Journal, pp. 38-40.
Bell, P., Coyne, J., & Merrington, S. (2013). "Exploring ethics in intelligence and the role of leadership." International Journal of Business and Commerce, 2(10), pp. 27-37.
Crampton, J. (2003). "Cartographic rationality and the politics of geosurveillance and security." Cartography & GIS, 30(2), pp. 131-144. Reprinted in The Map Reader: Theories of Mapping Practice and Cartographic Representation (2011), M. Dodge, R. Kitchin, & C. Perkins (Eds.), John Wiley, pp. 440-447.
Emergency Events Database EM-DAT, http://www.emdat.be/ [10].
Olson, J. M. (2006). Fair Play: The Moral Dilemmas of Spying. Washington, D.C.: Potomac Books.
United States Code, 2006 Edition, Title 10 - Armed Forces.
GEOINT data has been said to be "any data used to create GEOINT." Geospatial information, imagery, and imagery intelligence are the primary sources from which GEOINT data are extracted. There is, however, a more foundational perspective: these data are records of the human and physical qualities that make a location a unique place. Signs of human activity and measures of the land become GEOINT data. Traditionally, data are thought of as fixed statements of "what is"; GEOINT data, however, are part of a conversation driven by the need to explain what the data record in the context of a place. GEOINT data are chunks of information extracted from other intelligence and geographic products. Using these data requires knowledge of the data themselves and of how spatial features might be organized on the landscape. That understanding of spatial organization allows us to frame patterns of human activity and make predictions. GEOINT data fitted into a frame also guide the search for additional information and ultimately support a plausible story or map that accounts for what the data show.
This lesson addresses the fundamental that the analyst works to complete the picture provided by geospatial intelligence data. Throughout this lesson, we will demonstrate the concepts of Geospatial Intelligence. By the end of this lesson, you will be able to:
Please view my video lecture discussing the concepts of GEOINT data.
I begin this lesson with a brief introduction of the concepts of locating points on the Earth, scale, and the representation of geographic features in a digital form. With these fundamentals, the lesson then discusses GEOINT data.
Essential to creating geospatial data is locating something on the Earth. The following discussion of coordinate systems was drawn from the Penn State course, GEOG 482, The Nature of Geographic Information [71].
As you might know, locations on the Earth's surface are measured and represented in terms of coordinates. A coordinate is a set of two or more numbers that specifies the position of a point, line, or other geometric figure in relation to some reference system. The simplest system of this kind is a Cartesian coordinate system (named for the 17th century mathematician and philosopher René Descartes). A Cartesian coordinate system is simply a grid formed by juxtaposing two measurement scales—one horizontal (x) and one vertical (y). The point at which both x and y equal zero is called the origin of the coordinate system. In Figure 2.1, above, the origin (0,0) is located at the center of the grid. All other positions are specified relative to the origin. The coordinate of the upper right-hand corner of the grid is (6,3). The lower left-hand corner is (-6,-3).
Cartesian and other two-dimensional (plane) coordinate systems are handy due to their simplicity. For obvious reasons, however, they are not perfectly suited to specifying geospatial positions. The geographic coordinate system is designed specifically to define positions on the Earth's roughly spherical surface. Instead of the two linear measurement scales, x and y, the geographic coordinate system juxtaposes two curved measurement scales: longitude and latitude.
Longitude specifies positions east and west as the angle between the prime meridian and a second meridian that intersects the point of interest. Longitude ranges from +180° (or 180° E) to -180° (or 180° W). 180° East and West longitude together form the International Date Line.
Latitude specifies positions north and south in terms of the angle subtended at the center of the Earth between two imaginary lines, one that intersects the equator and another that intersects the point of interest. Latitude ranges from +90° (or 90° N) at the North pole to -90° (or 90° S) at the South pole. A line of latitude is also known as a parallel.
At higher latitudes, the length of parallels decreases to zero at 90° North and South. Lines of longitude are not parallel, but converge toward the poles. Thus, while a degree of longitude at the equator is equal to a distance of about 111 kilometers, that distance decreases to zero at the poles.
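The shrinking of parallels toward the poles can be sketched with a quick calculation. This is a rough approximation that treats the Earth as a sphere and uses the 111-kilometer figure from the text; the constant and function name are illustrative, not from any standard library.

```python
import math

# Approximate east-west length of one degree of longitude.
# At the equator a degree spans about 111 km; the cosine of the
# latitude scales that length down to zero at the poles.
KM_PER_DEGREE_AT_EQUATOR = 111.0

def degree_of_longitude_km(latitude_deg):
    """Return the approximate length, in km, of one degree of
    longitude at the given latitude (in degrees)."""
    return KM_PER_DEGREE_AT_EQUATOR * math.cos(math.radians(latitude_deg))

print(round(degree_of_longitude_km(0), 1))    # equator: 111.0 km
print(round(degree_of_longitude_km(60), 1))   # 60° N: 55.5 km
print(round(degree_of_longitude_km(90), 1))   # pole: 0.0 km
```

At 60° latitude, a degree of longitude is only half as long as at the equator, which is why the grid cells of the graticule narrow so visibly on a globe.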
We said that all of our positions on the Earth are specified relative to the origin. So, how is the origin established? A geodetic datum is a spatial reference system that describes the shape and size of the Earth and establishes an origin for coordinate systems. Two main types of datums include horizontal datums and vertical datums. Horizontal datums are used to describe what we typically think of as x and y coordinates. Vertical datums describe position in the vertical direction and are often based on height above sea level.
People have created hundreds of datums that are in use around the world today. The main reason that people have developed different datums for different places is so that they can choose an ellipsoid that best matches the shape of the Earth at an area of local interest (usually a country). There are two main types of datums: local datums and geocentric datums. In local datums, a point of the ellipsoid is matched to a point on the Earth’s surface (e.g., the North American Datum of 1927 intersects the surface of the Earth at Meades Ranch in Kansas, while the Australian Geodetic Datum intersects with the Johnston Geodetic Station in the Northern Territory). Geocentric datums, on the other hand, are based on the Earth’s center of mass. Our knowledge of where that center of mass is located has improved with modern satellite data. Many countries are now shifting to geocentric datums because GPS measurements are based on a geocentric datum. This switch avoids the need for transforming GPS-collected data from one coordinate system to another. It is important to understand which datum was used when your data were created, because the position of features may be different depending on which datum was used. In some cases, there may be a positional discrepancy of up to one kilometer! These shifts are especially important in large-scale mapping applications, as these discrepancies will be much larger than any projection-induced error.
You hear the word "scale" often when you work around people who produce or use geographic information. If you listen closely, you'll notice that the term has several different meanings, depending on the context in which it is used. You'll hear talk about the scales of geographic phenomena and about the scales at which phenomena are represented on maps and aerial imagery. You may even hear the word used as a verb, as in "scaling a map" or "downscaling." The goal of this section is to help you learn to tell these different meanings apart and to be able to use concepts of scale to help make sense of geographic data.
Often "scale" is used as a synonym for "scope" or "extent." For example, the title of an international research project called The Large Scale Biosphere-Atmosphere Experiment in Amazonia (1999) uses the term "large scale" to describe a comprehensive study of environmental systems operating across a large region. This usage is common not only among environmental scientists and activists, but also among economists, politicians, and the press. Those of us who specialize in geographic information usually use the word "scale" differently, however.
When people who work with maps and aerial images use the word "scale," they usually are talking about the sizes of things that appear on a map or aerial photo relative to the actual sizes of those things on the ground.
Map scale is the proportion between a distance on a map (Dm) and a corresponding distance on the ground (Dg): (Dm / Dg).
By convention, the proportion is expressed as a "representative fraction" in which map distance (Dm) is reduced to 1. The proportion, or ratio, is also typically expressed in the form 1 : Dg rather than 1 / Dg.
The representative fraction 1:100,000, for example, means that a section of road that measures 1 unit in length on a map stands for a section of road on the ground that is 100,000 units long.
If we were to change the scale of the map such that the length of the section of road on the map was reduced to, say, 0.1 units in length, we would have created a smaller-scale map whose representative fraction is 0.1:100,000, or 1:1,000,000. When we talk about large- and small-scale maps and geographic data, then, we are talking about the relative sizes and levels of detail of the features represented in the data. In general, the larger the map scale, the more detail is shown. This tendency is illustrated below in Figure 2.5.
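The arithmetic of representative fractions above can be captured in a few lines. This is a minimal sketch; the function name is ours, and the distances are assumed to be in the same units, as the convention requires.

```python
# Map scale as a representative fraction: Dm / Dg, conventionally
# written 1 : Dg with the map distance reduced to 1.
def representative_fraction(map_distance, ground_distance):
    """Both distances in the same units; returns the 1 : Dg string."""
    return f"1:{round(ground_distance / map_distance):,}"

# One map unit standing for 100,000 ground units:
print(representative_fraction(1, 100_000))      # 1:100,000
# Shrinking the map so the same road measures 0.1 units yields a
# smaller-scale map (the 0.1:100,000 = 1:1,000,000 example above):
print(representative_fraction(0.1, 100_000))    # 1:1,000,000
```

Note the direction of the comparison: a larger denominator Dg means a *smaller* scale, which is why 1:1,000,000 is a smaller-scale map than 1:100,000.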
One of the defining characteristics of topographic maps is that scale is consistent across each map and within each map series. This isn't true for aerial imagery, however, except for images that have been orthorectified. Large-scale maps are typically derived from aerial imagery. One of the challenges associated with using aerial photos as sources of map data is that the scale of an aerial image varies from place to place as a function of the elevation of the terrain shown in the scene. Assuming that the aircraft carrying the camera maintains a constant flying height (which pilots of such aircraft try very hard to do), the distance between the camera and the ground varies along each flight path. This causes aerial photo scale to be larger where the terrain is higher and smaller where the terrain is lower. An "orthorectified" image is one in which variations in scale caused by variations in terrain elevation (among other effects) have been removed.
You can calculate the average scale of an unrectified aerial photo by solving the equation Sp = f / (H-havg), where f is the focal length of the camera, H is the flying height of the aircraft above mean sea level, and havg is the average elevation of the terrain. You can also calculate aerial photo scale at a particular point by solving the equation Sp = f / (H-h), where f is the focal length of the camera, H is the flying height of the aircraft above mean sea level, and h is the elevation of the terrain at a given point.
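The point-scale equation Sp = f / (H - h) can be worked through numerically. The 152 mm focal length and the flying height and elevation below are illustrative values we chose for the sketch, not figures from the text.

```python
def photo_scale(focal_length_m, flying_height_m, terrain_elevation_m):
    """Sp = f / (H - h): scale of an unrectified aerial photo at a
    point of terrain elevation h, for a camera of focal length f
    flown at height H above mean sea level (all in metres)."""
    return focal_length_m / (flying_height_m - terrain_elevation_m)

# A 152 mm camera flown at 3,000 m over terrain at 500 m elevation:
f, H = 0.152, 3000.0
sp = photo_scale(f, H, 500.0)
print(f"1:{round(1 / sp):,}")                 # about 1:16,447

# Higher terrain sits closer to the camera, so the scale is larger:
print(photo_scale(f, H, 1000.0) > sp)         # True
```

The second print confirms the relationship described in the text: where the terrain is higher, the camera-to-ground distance (H - h) shrinks, and the photo scale grows.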
Another way to express map scale is with a graphic (or "bar") scale. Unlike representative fractions, graphic scales remain true when maps are shrunk or magnified.
If they include a scale at all, most maps include a bar scale like the one shown above left (Figure 2.6). Some also express map scale as a representative fraction. Either way, the implication is that scale is uniform across the map. In fact, except for maps that show only very small areas, scale varies across every map. As you probably know, this follows from the fact that positions on the nearly-spherical Earth must be transformed to positions on two-dimensional sheets of paper. Systematic transformations of this kind are called map projections. As we will discuss in greater depth later in this chapter, all map projections are accompanied by deformation of features in some or all areas of the map. This deformation causes map scale to vary across the map. Representative fractions may, therefore, specify map scale along a line at which deformation is minimal (nominal scale). Bar scales denote only the nominal or average map scale. Variable scales, like the one illustrated above right, show how scale varies, in this case by latitude, due to deformation caused by map projection.
The term "scale" is sometimes used as a verb. To scale a map is to reproduce it at a different size. For instance, if you photographically reduce a 1:100,000-scale map to 50 percent of its original width and height, the result would be one-quarter the area of the original. Obviously, the map scale of the reduction would be smaller too: 1/2 x 1/100,000 = 1/200,000.
Because of the inaccuracies inherent in all geographic data, particularly in small-scale maps, scrupulous geographic information specialists avoid enlarging source maps. To do so is to exaggerate generalizations and errors. The original map used to illustrate areas in Pennsylvania disqualified from consideration for low-level radioactive waste storage shown below (Figure 2.7), for instance, was printed with the statement "Because of map scale and printing considerations, it is not appropriate to enlarge or otherwise enhance the features on this map."
Some or all of the above content is used with permission from GEOG 482—The Nature of Geographic Information [74], Penn State's College of Earth and Mineral Sciences and licensed for use under CC BY 3.0 [75].
Digital geospatial data are encoded as alphanumeric symbols that represent locations and attributes of locations measured at or near Earth's surface. No geographic data set represents every possible location, of course. The Earth is too big, and the number of unique locations is too great. In much the same way that public opinion is measured through polls, geographic data are constructed by measuring representative samples of locations. Just as serious opinion polls are based on sound principles of statistical sampling, geographic data represent reality by measuring carefully chosen samples of locations. Vector and raster data are, in essence, two distinct sampling strategies.
The vector approach involves sampling locations at intervals along the length of linear entities (like roads), or around the perimeter of areal entities (like property parcels). When they are connected by lines, the sampled points form line features and polygon features that approximate the shapes of their real-world counterparts.
The aerial photograph in Figure 2.8 shows two entities, a reservoir and a highway. The graphic in Figure 2.8 illustrates how the entities might be represented with vector data. The small squares are nodes—point locations specified by latitude and longitude coordinates. Line segments connect nodes to form line features. In this case, the line feature colored red represents the highway. Series of line segments that begin and end at the same node form polygon features. In this case, two polygons (filled with blue) represent the reservoir.
The vector data model reflects the way surveyors measure locations at intervals as they traverse a property boundary. Computer-aided drafting (CAD) software used by surveyors, engineers, and others stores data in vector form. CAD operators encode the locations and extents of entities by tracing maps mounted on electronic drafting tables, or by key-entering location coordinates, angles, and distances. Instead of graphic features, CAD data consist of digital features, each of which is composed of a set of point locations.
The vector strategy is well suited to mapping entities with well-defined edges, such as highways or pipelines or property parcels. Many of the features shown on paper maps, including contour lines, transportation routes, and political boundaries, can be represented effectively in digital form using the vector data model.
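The vector representation of the reservoir and highway can be sketched as plain coordinate lists. The coordinates below are made-up (longitude, latitude) pairs, and the helper function is ours; the point is only that a line feature is an open chain of nodes while a polygon closes on its starting node.

```python
# Minimal vector sketch: features as sequences of coordinate nodes.
# The highway is a line feature (an open chain of nodes); the
# reservoir shore is a polygon (a chain that closes on itself).
highway = [(-77.90, 40.80), (-77.88, 40.81), (-77.85, 40.81)]
reservoir = [(-77.87, 40.79), (-77.86, 40.79), (-77.86, 40.78),
             (-77.87, 40.78), (-77.87, 40.79)]  # first node repeated

def is_polygon(nodes):
    """A series of line segments that begins and ends at the same node."""
    return len(nodes) >= 4 and nodes[0] == nodes[-1]

print(is_polygon(highway))    # False: an open line feature
print(is_polygon(reservoir))  # True: a closed polygon feature
```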
The raster approach involves sampling attributes at fixed intervals. Each sample represents one cell in a checkerboard-shaped grid. Figure 2.9 illustrates a raster representation of the same reservoir and highway as shown in the vector representation. The area covered by the aerial photograph has been divided into a grid. Every grid cell that overlaps one of the two selected entities is encoded with an attribute that associates it with the entity it represents. Actual raster data would not consist of a picture of red and blue grid cells, of course; they would consist of a list of numbers, one number for each grid cell, each number representing an entity. For example, grid cells that represent the highway might be coded with the number "1" and grid cells representing the reservoir might be coded with the number "2."
The raster strategy is a smart choice for representing phenomena that lack clear-cut boundaries, such as terrain elevation, vegetation, and precipitation. Digital airborne imaging systems, which are replacing photographic cameras as primary sources of detailed geographic data, produce raster data by scanning the Earth's surface pixel by pixel and row by row.
Both the vector and raster approaches accomplish the same thing—they allow us to caricature the Earth's surface with a limited number of locations. What distinguishes the two is the sampling strategies they embody. The vector approach is like creating a picture of a landscape with shards of stained glass cut to various shapes and sizes. The raster approach, by contrast, is more like creating a mosaic with tiles of uniform size. Neither is well suited to all applications, however. Several variations on the vector and raster themes are in use for specialized applications, and the development of new object-oriented approaches is underway.
Note: Some or all of the above content is used with permission from GEOG 482—The Nature of Geographic Information [77], Penn State's College of Earth and Mineral Sciences and licensed for use under CC BY 3.0 [75].
The NGA doctrine [78] says that GEOINT data "is any data used to create GEOINT." The doctrine goes on to say that geospatial information, imagery, and imagery intelligence are the main sources of data for GEOINT. We take a more general view: GEOINT data are data and information collected to understand the human and physical qualities of a location. So, what is GEOINT data? GEOINT data, including geospatial information, imagery, and imagery intelligence, fundamentally consists of:
Table 2.1 illustrates these general data types and the relation between them:
| | Data Organization: Structured Geospatial Data | Data Organization: Unstructured Geospatial Data |
|---|---|---|
| Data Content: Physical Geography | Example: Digital imagery of a land feature | Example: Report describing facts about a land feature |
| Data Content: Human Geography | Example: Geospatial data of incidences of bacterial infections | Example: Scholarly article describing facts associated with bacterial infections at a location |
Structured GEOINT Data are geospatial data organized to be immediately usable by technologies, such as Geographic Information Systems (GIS). A formal definition is:
Structured geospatial data is information about locations and shapes of geographic features and the relationships between them. It is usually stored as coordinates [79] and topology [80] with a high degree of organization to be readily searchable.
What often goes unstated and unappreciated is a broad class of data known as “unstructured geospatial data,” a catchall term because much of the data included under that term actually has elements of structure. Email, for instance, may contain a street address, senders, times, and the like. A definition for unstructured geospatial data is:
Unstructured geospatial data refers to geographic information that either does not have a predefined data model or is not organized in a predefined manner. Unstructured geospatial data may be text containing geographic information such as street addresses and site descriptions. Unstructured data is not readily searchable.
While unstructured geospatial data may be organized into a digital file, the data are still "unstructured" because they cannot be easily accessed for mapping. However, unstructured data are extremely important when completing an analysis and help to complete the partial picture provided by heavily structured data. Some suggest that between 50 and 80 percent of the data in an organization is unstructured. If this is correct, then most of the data that an analyst might encounter is unstructured. However, extracting geographic features from unstructured data into defined fields for analysis poses a challenge. To illustrate, I will compare three common datasets encountered in developing GEOINT:
The vector data files and satellite image represent structured data. With the vector file, we can use a GIS to geocode a list of street addresses, plot these as points on the satellite image, and determine the extent of vegetation around each house. The report of construction types, however, is unstructured geospatial data. We cannot immediately import it into a GIS and map the building construction types on the street map and satellite image.
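Turning such a report into mappable data means pulling address-like strings out of free text so they can be geocoded. The snippet below is a deliberately simplistic sketch: the sample report text and the regular expression are ours, and the pattern only matches plain "number + street name" forms; real unstructured reports demand far more robust extraction.

```python
import re

# Hypothetical construction report (unstructured geospatial data):
report = ("The house at 123 Oak Street is wood-frame construction; "
          "the warehouse at 45 Mill Road is steel.")

# Toy pattern for simple "number + capitalized name + street type"
# addresses -- the structured fragments hiding in the free text.
address_pattern = re.compile(r"\b\d+\s+[A-Z][a-z]+\s+(?:Street|Road|Avenue)\b")
addresses = address_pattern.findall(report)
print(addresses)   # ['123 Oak Street', '45 Mill Road']
```

Once extracted, each address could be geocoded into coordinates and joined to the structured vector data, which is exactly the structuring step the paragraph above describes as a challenge.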
GEOINT Data can be divided into two other major data content categories of physical and human geospatial data. Physical geospatial data is a record of the spatial characteristics of the various natural phenomena associated with the Earth's hydrosphere, biosphere, atmosphere, and lithosphere. Examples are:
Human geospatial data record the imprint of human activity on Earth. Examples include:
The World-Wide Human Geography Data Working Group (WWHGD WG [82]) is a voluntary partnership focused on making appropriate human geography data available to promote human security. Such data help us understand why people do what they do, and where, during different times and in different places, and inform human security, humanitarian assistance, disaster relief, emergency preparedness, and response and recovery efforts globally. The WWHGD Working Group has catalogued more than 1,200 data sources and links. This global mapping community has more than 1,500 members, with participating organizations representing the Department of Defense (DoD), civil agencies, academia, non-governmental organizations (NGOs), international organizations, and private corporations.
Commercial firms also amass human geography data and transform it into a structured form for easy access and use by Geographic Information Systems. Their products are made possible by the fact that the original data exist in digital form, and because the companies have developed systems that enable them to structure the data efficiently.
Try out the demo of what Claritas used to call the "You Are Where You Live" tool. The Nielsen Company has acquired Claritas, and the tool is now called "MyBestSegments." Use the following link to access the My Best Segments - ZIP Code Look-up [83] page. Unfortunately, this tool only works for locations within the United States; if you don't live in the United States, consider entering a ZIP code for a town or city you are familiar with, or try Penn State's ZIP code: 16802.
If you do live in the US, enter the ZIP code for your home town and then enter the security code provided on the page and click the Submit button. You will see a list of lifestyle segments listed on the left. Click on some of the lifestyle segment names to see if they seem accurate for the community you selected.
Does the market segmentation match your expectations?
The key point is that human, physical, structured, and unstructured data are used together to create a picture of place. The use of human geography data and imagery in the response to Ebola is an example [84]. Analysts utilize this data to better understand where infrastructure is located, where the disease has the greatest risk of transmission, and what populations are most at risk.
I know this will sound a little strange, but GEOINT data are not really data in the strict sense of the term. GEOINT data are chunks of information extracted from other intelligence and geographic products such as imagery and maps. Let's examine this statement by first viewing a few definitions, and then we'll explore it through an analogy of baking a cake. The definitions of data, information, and insights are:
Now for our cake baking analogy:
A chef has a request to bake a carrot cake. The carrots were previously picked from the ground, cleaned, packaged, and transported to the grocery store. The carrots are purchased by a chef at the store to be shredded, mixed with other ingredients, and baked into the cake. The carrots could also be sliced for a salad. It is the chef's knowledge, skill, and wisdom that determines how well the ingredients are made into a cake.
The analogy is that GEOINT data are previously collected (picked), have inconsistencies removed (cleaned), are structured (packaged), and distributed (transported). The analyst acquires the data from a warehouse (the grocery store), subsets the data (shreds the carrots), and combines (mixes) it with other information to create insights (the cake). The same data might be used for other analytic purposes (making a salad). It is the analyst's (baker's) knowledge, skill, and wisdom that determines how well the information is converted into insights.
Satellite imagery is one of the key ingredients of GEOINT. A satellite image is processed data. A satellite image is made up of raw data (thousands of pixels) that are arranged in a particular way. This raw data is processed, organized, structured, and presented so as to make it useful as a satellite image. Importantly, this satellite imagery can then be used to create other data. For example, the height of a particular building extracted from the imagery is new data.
Mapmaking uses data in the creation of information and has been an integral part of human history for possibly up to 8,000 years. Maps and imagery are representations of the Earth. Before remote sensing technologies, if we wanted to know something about a location on the Earth, we would have had to visit the location. In those days, our knowledge was limited by our direct experiences. Information, such as maps and satellite imagery, allows us to expand our insights beyond the range of our experiences. Information can be shared and used by others at different times and places.
Primary data is raw information collected for a specific purpose. For example, the direct measurement of a building's height would be primary data. Secondary data are extracted from information developed by others. The advantage of primary data is the opportunity to tailor it to our need; we "know" the data. The disadvantage of collecting primary data is that it is costly and time consuming. The main advantages of secondary data are that they can be quicker and less expensive to obtain, and it is easier to examine information, such as imagery, collected over a long period of time to identify changes. However, the information may be outdated, inaccurate, or too vague. That is, we might not "know" the data well enough to fully articulate our insights. Figure 2.10 below illustrates this data-information-insights process.
People create data as a means to help understand how natural and human systems work. Such systems can be hard to analyze because they're made up of many interacting phenomena that are often difficult to observe directly, plus they tend to change over time. We attempt to make systems and phenomena easier to study by measuring their characteristics at certain times. We measure selectively because it's not practical to measure everything, everywhere, all the time. How accurately data reflect the phenomena they represent depends upon how, when, where, and what aspects of the phenomena were measured. Thus, all measurements contain a certain amount of error that the wisdom of the analyst must take into account.
Please view my second lecture video discussing the concepts of GEOINT data.
Data without the understanding brought by geospatial thinking and reasoning are really just meaningless symbols. This is because GEOINT data are separated from knowledge of the place, that is to say, the physical and human aspects of the location the data represent. GEOINT data are fragmented and incomplete, and we use the crafts of geospatial thinking and reasoning to help make them whole. Reginald Golledge makes this point in his Thinking Spatially [88] article in Directions Magazine:
Our knowledge about a geographic area is never perfect, but we still make effective decisions in that area because we use mental processes of perceptual closure (interpolation), or overlay (aggregation), or dissolve (disaggregation), and summarization. When we start to get overwhelmed with detail, we spatially classify (as by proximity) or cluster (as in "next to") so as to collapse lots of separate bits of information into meaningful "clumps" or "chunks." Sometimes, we make gross classifications ("all cats are gray in the night," or "all these trees are the same" when looking at a eucalyptus forest). We mentally cluster food stores, clothing stores, bars, beaches, and other phenomena into largely undefined generic classes and then give place-specific identifiers to single out particular members (e.g., "Albertson's is the supermarket that has fresh Maine lobsters;" "Google's is the beach with the bad undertow"). And we all realize that geographic data can be perceived at a variety of scales. We might use the same thought processes to reason about a colony of ants or bees as we do to think about people's activities in cities. We may use the same concepts when looking at our neighborhood as we would when studying San Francisco or Sydney (Australia). And, often, we use the vaguest of principles to guess about where things might be found (e.g., from trying to find a missing glove to searching for a bus stop in an unfamiliar area).
Thinking Spatially [88] - Directions Magazine, January 12, 2003
The human craft of bringing meaning to the models and data is called techne. Techne is a term derived from Greek that means "craftsmanship, craft, or art." It might be termed GEOINT data techne. Much of it is implicit and ambiguous, and is acquired largely by experience. The geospatial analyst uses techne when handling the data and forming judgments about a place. GEOINT data techne includes:
Models, both mental and otherwise, play a critical role in bridging the gap between data, information, and insights. They are an idealized representation of how to use data to solve problems. Recalling the table from Lesson 1, let's further explore this relationship between data, models, and insights.
| | Model Certainty: Low | Model Certainty: High |
|---|---|---|
| Data Certainty: High | Model Building | Puzzle Solving |
| Data Certainty: Low | Mystery Solving | Data Foraging |
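The table can be read as a simple lookup from certainty levels to an analytic mode. A minimal sketch in Python (the quadrant names come from the table; the function itself is purely illustrative):

```python
# Map data/model certainty to the analytic mode named in the table above.
# The quadrant names are from the table; the function is illustrative.

def analytic_mode(data_certainty: str, model_certainty: str) -> str:
    """Return the analytic mode for a (data, model) certainty pair."""
    quadrants = {
        ("high", "low"): "Model Building",
        ("high", "high"): "Puzzle Solving",
        ("low", "low"): "Mystery Solving",
        ("low", "high"): "Data Foraging",
    }
    return quadrants[(data_certainty.lower(), model_certainty.lower())]

print(analytic_mode("high", "high"))  # Puzzle Solving
print(analytic_mode("low", "low"))    # Mystery Solving
```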
Models represent our understanding of how the world works. We construct models for:
Models also give us the ability to overcome incomplete data by mentally filling in gaps, making an intuitive leap with only the sparsest of data. This is a sophisticated form of geospatial reasoning. Expertise in geospatial reasoning increases with experience because as we learn or experience additional models, our mind expands to accommodate them.
You can "preload" your mental models with typical understandings of place; these include conceptual models of how a place is organized and works. For example, the number of people in a surrounding community helps explain why individuals use a store at a particular location. Theories of spatial organization can be a shortcut in our attempts to model an unfamiliar pattern. A few important geographic models are:
Mental models are a normal, everyday human activity essential for geographic problem solving. To make the point, you will predict part of a map by literally "using the data you see to predict the data beyond what is given."
Predict the missing part of a map by using the data you see and models you assume to predict the data beyond what is given. Using a pen and paper, sketch the missing half of the map using the symbols for roads and mountains. Remember that you will have to go beyond the information that is given. As you complete this exercise, think about the following:
You can compare your results with this solution map [94] (link to Coursera).
A word of caution is in order. While models provide a useful way of understanding the world around us, blind adherence to a model can be disastrous. When we close our mind to disconfirming evidence and the possibility of alternative outcomes, we fail to see the weaknesses of our model and we will fail. History is replete with examples of people adhering stubbornly to an outdated paradigm (model) despite overwhelming evidence that a new way of thinking (a new model) is necessary.
Models and theories provide the basis for a structure called a frame. A frame helps us understand the world and may be influenced by a model or theory. For example, an individual's day-to-day interactions with family and associates are termed their "Pattern of Life": the family goes to work at a particular time using a particular route. Central Place Theory may help form a narrative that explains the Pattern of Life; here, the theory suggests that our family may travel to larger settlements for services not available in smaller settlements. Models and theories may also direct our attention toward the information we seek.
In summary, we frame things to make sense of what we see in the real world and to fill in missing data. The frame may be influenced by an existing model or theory. A frame can take the following forms:
Insights result from a narrative that accounts for what the data reveals within the frame; this human process of explanation is termed sensemaking. There are three primary outcomes for GEOINT sensemaking:
As you can see, GEOINT data is a key part of a discourse to make sense of indefinite and ambiguous situations. Fitting the GEOINT data to the frame involves cognitive work to understand the relationships among data and sequence of events.
GEOINT addresses questions that technology alone is not particularly good at answering. These are explanatory questions that help you make sense of what you see. Examples are:
In addition, GEOINT is often concerned with predictive questions such as what will happen at this location if this happens at that location? In general, GEOINT data and software packages cannot be expected to provide clear-cut answers to explanatory and predictive questions right out of the box. Typically, analysts must turn to specialized statistical packages and simulation routines. Information produced by these analytical tools may then be re-introduced into the GEOINT database. It is always important to keep in mind that decision support tools are not substitutes for human experience and judgment.
Most of us are interested in data only to the extent that they can be used to help understand the world around us and to make better decisions. Analytic processes vary widely from one organization to another. In general, however, the first steps in making an analysis are to articulate the questions that need to be answered and to gather and organize the data needed to answer them. The following are examples of the kinds of questions GEOINT asks about how space is organized.
Questions concerning individual geographic entities. Such questions include:
Questions concerning multiple geographic entities. Such questions include:
The ability to identify and make sense of the patterns we see is a critical skill in GEOINT. Maps are a fundamental source of information about spatial patterns. I will teach you a beginning approach to pattern sensemaking. After a while, the approach becomes natural. I caution you not to treat this as a recipe to be followed in rote fashion; it is a guide to start you on the sensemaking process. There are two general steps in the process:
Step 1: Referring to the above questions, describe what you see to answer "what is the pattern and where is the pattern?" This is done by describing the major parts or components to the pattern as:
Step 2: Explanation of the pattern to answer how and why the pattern occurred. Explain the pattern by:
Apply the factors in order to explain the components of the pattern that you described in the first step. Remember that:
Before we try to make sense of a pattern, let's practice.
Practice Question A:
Examine the map of earthquakes in Figure 2.12. (You can click on the map to open the ArcGIS Online earthquake webmap.) The earthquake pattern is obviously geological, but why does it occur? You can find an answer here [95].
Practice Question B:
What causes the global pattern of malaria cases in Figure 2.13? (You can click on the map to open the ArcGIS Online malaria webmap.) Click here for an answer to Question B [97] (link to Coursera).
In this week's discussion, I'm focusing on the craft of making sense of patterns. This involves techne, which is acquired largely by experience: knowledge that allows the use of data in a given situation, an understanding of people, things, events, and situations, and the ability to apply perception and judgment.
You might not have had the knowledge, experience, or understanding to correctly answer the practice questions in L2.11. This was intentional and to make the point that knowledge and experience are necessary. Let's now examine a pattern with which you have some background knowledge. Figure 2.14 is a map of the students in this MOOC. You can click on the map to open the ArcGIS Online student location webmap. Post a short paragraph to the Lesson 2 Discussion Forum explaining:
Credits
The ArcGIS Online capabilities used here were developed by Joseph Kerski [14], Esri Education Manager.
This lesson focused on how the analyst's craft completes the picture provided by GEOINT data. GEOINT data are an incomplete record of the human and physical qualities that uniquely create a place; they are chunks of information extracted from other intelligence and geographic products. Using these data requires knowledge of the data themselves and of how geospatial features might be organized on the landscape. Applying the analyst's craft to the data allows us to frame patterns of human activity, develop insights, and make predictions.
Don't forget to complete the Lesson 2 Quiz!
Welcome to Lesson 3! This lesson introduces GEOINT Data Sources and Collection Strategies and focuses on the third GEOINT principle that geospatial intelligence source and collection strategies are about getting enough information to answer a question about place and time. GEOINT data is collected from a variety of sources using various collection strategies. In general, "sources" are the means or systems used to observe and record data. The term "collection strategy" is used to convey a high-level approach to GEOINT data collection. The nature of access to the data is important and determines the source. In general, GEOINT data can be collected openly or covertly (secretly). Data collected openly and maintained transparently is considered "open source." Data collected covertly and maintained in secret is considered "closed source."
Throughout this lesson, we will address the concepts of GEOINT data sources and collection strategies. By the end of this lesson, you will be able to:
Please view my video lecture (7:56) concerning GEOINT data sources and collection strategies.
The purpose of GEOINT is to supply decision makers with timely geospatial insights that allow for informed, knowledgeable decision making. In order to fulfill this purpose, we prioritize the intelligence requirements of the decision maker. These requirements define the mission, functions, and structure of the GEOINT; they also should drive GEOINT data collection and analysis. The Intelligence Cycle, depicted in Figure 3.1, starts with requirements. The requirements are sorted and prioritized, and are then used to drive the collection activities. Once information has been collected, it is initially evaluated, processed, and reported to consumers. The cycle is then repeated until the intelligence requirements have been satisfied.
Based on the requirements, collection systems are given specific tasks to execute. Requirements are often difficult to define because they require a focus on long-term needs while most decision makers do not know what information they want until they actually need it.
Collection includes acquiring information and providing that information for processing and production. Collection management is the formal process within an intelligence organization of converting intelligence requirements into collection requirements, establishing priorities, tasking, coordinating with the collection sources, monitoring results, and re-tasking as required. The collection process encompasses the management of various activities including developing collection guidelines that ensure the best use of resources. There are four management criteria:
A collection strategy seeks to determine if sources can satisfy the requirements. Three key goals of a collection strategy are to:
Detecting deception requires information from a variety of sources so that one source can be verified against another. Multiple collection sources enable collection managers to cross-cue between different sources. Collection may also require redundancy so that the loss or failure of one source can be compensated for by another.
GEOINT data are collected by computer systems, automated sensors, and humans. In general, "intelligence sources [100]" are the means used to observe and record information relating to the condition, situation, or activities of a targeted location, organization, or individual. GEOINT data sources have traditionally included imagery and geospatial data. While imagery is still the dominant and most important source, GEOINT is evolving to integrate forms of intelligence and information beyond the traditional sources of geospatial information. This lesson recognizes this transition and structures the discussion in broad terms. The collection strategy addresses the approach pursued for GEOINT data collection, which I divide into a continuum of strategies anchored by the categories of "persistent collection" and "discontinuous collection."
GEOINT data sources can be categorized along two dimensions, collection strategy (discontinuous to persistent) and source access (open to closed), as shown in Table 3.1 below:
| | Collection Strategy: Discontinuous | Collection Strategy: Persistent |
|---|---|---|
| Source Access: Open | Example: The home locations of the students taking this course. | Example: A continuously monitored video camera along a street. |
| Source Access: Closed | Example: The Coca-Cola Company's Coca-Cola recipe. | Example: A UAV equipped with a camera tracking a military target for 24 hours. |
A strategy is a plan to achieve goals; it provides direction and scope for an effort. In this lesson, the term "collection strategy" refers to a way of pursuing GEOINT data collection. A collection strategy is important because the resources available to achieve goals are usually limited. We have divided the continuum of strategies into two categories, "persistent collection" and "discontinuous collection." The collection strategy determines the frequency at which data are captured for a specific place on the Earth. This frequency is termed temporal resolution. The more frequently data are captured by a particular sensor, the finer the temporal resolution of that sensor. Temporal resolution is relevant when using imagery or elevation datasets captured successively over time to detect changes to the landscape.
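Temporal resolution can be made concrete by computing the interval between successive acquisitions of the same scene. A small sketch, with acquisition dates invented for illustration:

```python
# Temporal resolution is the revisit interval for a given spot on the
# ground. Given acquisition dates for one location, compute the interval
# in days between successive passes. Dates below are hypothetical.
from datetime import date

def revisit_days(acquisitions: list) -> list:
    """Days elapsed between successive acquisitions of the same scene."""
    ordered = sorted(acquisitions)
    return [(b - a).days for a, b in zip(ordered, ordered[1:])]

# Hypothetical 16-day repeat cycle, like a mid-resolution mapping satellite.
passes = [date(2014, 5, 1), date(2014, 5, 17), date(2014, 6, 2)]
print(revisit_days(passes))  # [16, 16]
```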
Discontinuous collection is not a full record of activity during a time period. It also might be termed non-persistent. Discontinuous collection is not a "permanent stare" at a target from one or more systems. Such a "permanent stare" might not be available, technically feasible, or an efficient use of resources. The practice of discontinuous data collection currently dominates the discipline.
Traditional remote sensing is an example of discontinuous source strategy. Here, sensors are mounted on board aircraft or spacecraft orbiting the earth. At present, there are several remote sensing satellites providing imagery for research and operational applications. Spaceborne remote sensing provides repetitive coverage of an area at a relatively low cost per unit area.
Discontinuous collection can also occur when a device sporadically provides data. For example, a roaming cellphone locates itself with signals from multiple antennas or through its GPS. There are numerous other examples of discontinuous GEOINT data. For instance, toll highway devices record your point of entry and point of exit to compute a toll. Twitter is another example. Tweets can be geotagged to record information about the location of the device that created the tweet. Images taken with a phone or a GPS-enabled camera can also contain location data.
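Geotagged images like those mentioned above typically store location as degrees, minutes, and seconds plus a hemisphere reference (N/S/E/W), following the EXIF GPS convention. A sketch of the conversion to decimal degrees; the sample coordinates are hypothetical:

```python
# Convert a degrees/minutes/seconds coordinate (as stored in EXIF GPS
# tags) to signed decimal degrees. Sample values below are made up
# for illustration.

def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert a DMS coordinate with hemisphere ref to decimal degrees."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

# Hypothetical EXIF GPS values: 40° 47' 31.2" N, 77° 51' 36.0" W
lat = dms_to_decimal(40, 47, 31.2, "N")
lon = dms_to_decimal(77, 51, 36.0, "W")
print(round(lat, 4), round(lon, 4))  # 40.792 -77.86
```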
Persistent collection is defined as a strategy that emphasizes the ability to linger for a period of time to detect, locate, characterize, identify, track, target, and possibly provide near- or real-time data. Persistent collection facilitates the collection and prediction of human behavior, for example, pattern of life. The goal is to achieve near-perfect knowledge by increasing the rate of data collection and, therefore, understanding about the target. This enables a faster decision cycle from more detailed information. The collection goal is that a target will be unable to evade information collection. The purpose is to improve decision making while reducing risk. In different domains, Persistent Collection is called Persistent Surveillance, Intelligence, Surveillance, Reconnaissance (ISR), Persistent Stare, or Pervasive Knowledge. Unmanned Aerial Vehicles (UAVs) are frequently associated with persistent collection.
The collection, processing, and exploitation (analysis) of imagery in GEOINT is more complex than what can be presented completely in this brief summary. Having said this, the goal of this section is only to provide you with an overview of a few of the critical factors to consider when choosing an appropriate remote sensing platform and sensor. Much of the following content related to remote sensing was developed for GEOG 480 - Exploring Imagery and Elevation Data in GIS Applications [101] by Penn State's College of Earth and Mineral Sciences. It is licensed for use under CC BY 3.0 [75].
Remote sensing is a cornerstone of GEOINT. Remote sensing—a major driver of Geographic Information Science (GIScience) and Geographic Information Technology (GIT)—is the acquisition of information about an object or phenomenon without making physical contact with the object. Remote sensing is in direct contrast to direct onsite observation and makes possible data collection over large, dangerous, and inaccessible areas. As such, it is an essential tool in intelligence work. In GEOINT, remote sensing historically referred to the use of satellite and airborne sensors to detect and classify objects related to human activity on Earth. However, this definition has been expanding to include newer types of collection systems (LiDAR, RADAR) as well as terrestrial collection systems.
The history of remote sensing as a governmental activity, a commercial industry, and an academic field provides a perspective on development of the technology and emergence of remote sensing as a key technology in GEOINT. Accounts of remote sensing history generally begin in the 1800s, following the development of photography. Many of the early advancements of remote sensing can be tied to military applications, which continue to drive most of remote sensing technology development even today. However, after World War II, the use of remote sensing in science and infrastructure extended its reach into many areas of academia and civilian life. The recent revolution of geospatial technology and applications, linked to the explosion of computing and internet access, brings remote sensing technology and applications into the everyday lives of most people on the planet. It is important to understand the motivation behind technology development and to see how technology contributes to the broader societal, political, and economic framework of geospatial systems, science, and intelligence, be the application military, business, social, or environmental intelligence.
Remote sensing in GEOINT is evolving to include multiple layers of integrated sensing systems [102]—some of these are not what one would think of as the traditional systems found on space or airplane platforms. The layers of sensors are characterized by an integration of sensors, infrastructure, and exploitation capabilities. Such layered sensing provides decision makers with redundant, timely, trusted, and relevant information. Layered sensing might be divided by the vertical dimension:
An underwater layer can be added to this multi-dimensional view. While the underwater sensors are different, the applications of underwater sensors are similar to terrestrial sensors. Industrial applications are related to oil or mineral extraction, pipelines, and commercial fisheries. Military and homeland security applications include autonomous vehicles, securing port facilities, and de-mining.
Remote sensing and the exploitation of remotely sensed data are human activities aided by technology. Misinterpretation and ill-informed decision making can easily occur if the individuals involved do not understand the operating principles of the remote sensing system used to create the data, which is in turn used to derive information. Humans select the remote sensing system to collect the data, specify the various resolutions of the remote sensor data, calibrate the sensor, select the platform that will deliver the sensor, determine when the data will be collected, and specify how the data are processed.
In GEOINT, image analysis is the act of examining images for the purpose of identifying objects and judging their significance. John Jensen (2007) describes factors that distinguish a superior image analyst. He says, "It is a fact that some image analysts are superior to other image analysts because they: 1) understand the scientific principles better, 2) are more widely traveled and have seen many landscape objects and geographic areas, and/or 3) they can synthesize scientific principles and real-world knowledge to reach logical and correct conclusions."
The nature of the information extraction in remote sensing can be divided into general activities using spectral, spatial, and temporal information. These are:
Such information extraction can be done by human or computer methods. However, the two typically supplement each other and often offer better results when used together. For example, in disaster management, a computer may detect damage from a hurricane, which a human then recognizes as a specific type of damage to the property. A combination of human and computer techniques is frequently used because human interpretation is time consuming and expensive but necessary for accuracy.
From a technology perspective, the simplest way to extract information from remotely sensed data is human interpretation. However, significant training and experience are needed to produce a skilled image interpreter. A well-trained image analyst uses many of these elements without really thinking about them. The beginner may not only have to force himself or herself to consciously evaluate an unknown object with respect to these elements, but also analyze its significance in relation to the other objects or phenomena in the photo or image. Eight elements of image interpretation employed by human image interpreters are:
The results of image interpretation are most often delivered as a set of attributed points, lines, and/or polygons in any one of a variety of CAD or GIS data formats. The classification scheme or interpretation criteria must be agreed upon with the end user before the analysis begins.
Most remote sensing instruments measure the same thing—electromagnetic radiation. Electromagnetic radiation is a form of energy emitted by all matter above absolute zero temperature (0 Kelvin or -273° Celsius). X-rays, ultraviolet rays, visible light, infrared light, heat, microwaves, and radio and television waves are all examples of electromagnetic energy. If you have studied an engineering or physical science discipline, much of this may be familiar to you. Electromagnetic energy is described in terms of:
Frequency and wavelength are inversely related. This is important because some fields choose to represent system performance using wavelength, and some using frequency.
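Because the two are linked by the speed of light (c = λν), converting between wavelength and frequency is a one-line calculation. A quick sketch, checked against the red limit listed in Table 3.2:

```python
# Frequency and wavelength are inversely related through the speed of
# light: c = wavelength * frequency. Checked against the red limit in
# Table 3.2 (0.70 micrometers, roughly 4.29e14 Hz).

C = 299_792_458.0  # speed of light in a vacuum, m/s

def wavelength_to_frequency(wavelength_m: float) -> float:
    """Frequency (Hz) corresponding to a wavelength in meters."""
    return C / wavelength_m

def frequency_to_wavelength(frequency_hz: float) -> float:
    """Wavelength (m) corresponding to a frequency in hertz."""
    return C / frequency_hz

f = wavelength_to_frequency(0.70e-6)  # red limit, 0.70 micrometers
print(f / 1e14)                       # ~4.28 (the table rounds to 4.29)
```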
The visible and infrared portions of the electromagnetic spectrum are the most important for the types of remote sensing discussed in this lesson. Table 3.2 illustrates the relationship between named colors and wavelength/frequency bands.
Wavelength Descriptions

| Color | Angstrom (Å) | Nanometer (nm) | Micrometer (µm) | Frequency (Hz × 10¹⁴) |
|---|---|---|---|---|
| Ultraviolet, sw | 2,537 | 254 | 0.254 | 11.82 |
| Ultraviolet, lw | 3,660 | 366 | 0.366 | 8.19 |
| Violet (limit) | 4,000 | 400 | 0.40 | 7.50 |
| Blue | 4,500 | 450 | 0.45 | 6.66 |
| Green | 5,000 | 500 | 0.50 | 6.00 |
| Green | 5,500 | 550 | 0.55 | 5.45 |
| Yellow | 5,800 | 580 | 0.58 | 5.17 |
| Orange | 6,000 | 600 | 0.60 | 5.00 |
| Red | 6,500 | 650 | 0.65 | 4.62 |
| Red (limit) | 7,000 | 700 | 0.70 | 4.29 |
| Infrared, near | 10,000 | 1,000 | 1.0 | 3.00 |
| Infrared, far | 300,000 | 30,000 | 30.00 | 0.10 |
Understanding the interactions of electromagnetic energy with the atmosphere and the Earth's surface is critical to the interpretation and analysis of remotely sensed imagery. Radiation is scattered, refracted, and absorbed by the atmosphere, and these effects must be accounted for and corrected in order to determine what is happening at the ground. The Earth's surface can reflect, absorb, transmit, and emit electromagnetic energy, and in fact is doing all of these at the same time, in varying fractions across the entire spectrum, as a function of wavelength. The spectral signature that is recorded for each pixel in a remotely sensed image is unique based on the characteristics of the target surface and the effects of the intervening atmosphere. In remote sensing analysis, similarities and differences among the spectral signatures of individual pixels are used to establish a set of more general classes that describe the landscape or help identify objects of particular interest in a scene.
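The classification step described above can be sketched as a nearest-signature rule: assign each pixel to the class whose reference spectral signature is closest. The class signatures and pixel values below are invented for illustration; real classifiers use many more bands and training samples:

```python
# Assign a pixel to the class whose reference spectral signature is
# nearest (Euclidean distance). Signatures and pixel values are
# hypothetical mean reflectances in (green, red, near-infrared) bands.
import math

def classify_pixel(pixel, signatures):
    """Return the class name whose signature is closest to the pixel."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(signatures, key=lambda name: dist(pixel, signatures[name]))

signatures = {
    "water":      (0.04, 0.03, 0.01),
    "vegetation": (0.10, 0.06, 0.45),
    "bare soil":  (0.18, 0.22, 0.30),
}
print(classify_pixel((0.09, 0.05, 0.40), signatures))  # vegetation
```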
The graph above shows the relative amounts of electromagnetic energy emitted by the sun and the Earth across the range of wavelengths called the electromagnetic spectrum. Values along the horizontal axis of the graph range from very short wavelengths (ten millionths of a meter) to long wavelengths (meters). Note that the horizontal axis is logarithmically scaled, so that each increment represents a ten-fold increase in wavelength. The axis has been interrupted three times at the long wave end of the scale to make the diagram compact enough to fit on your screen. The vertical axis of the graph represents the magnitude of radiation emitted at each wavelength.
Hotter objects radiate more electromagnetic energy than cooler objects. Hotter objects also radiate energy at shorter wavelengths than cooler objects. Thus, as the graph shows, the sun emits more energy than the Earth, and the sun's radiation peaks at shorter wavelengths. The portion of the electromagnetic spectrum at the peak of the Sun's radiation is called the visible band because the human visual perception system is sensitive to those wavelengths. Human vision is a powerful means of sensing electromagnetic energy within the visual band. Remote sensing technologies extend our ability to sense electromagnetic energy beyond the visible band, allowing us to see the Earth's surface in new ways, which, in turn, reveals patterns that are normally invisible.
The graph above names several regions of the electromagnetic spectrum. Remote sensing systems have been developed to measure reflected or emitted energy at various wavelengths for different purposes. This section highlights systems designed to record radiation in the bands commonly used for land use and land cover mapping: the visible, infrared, and microwave bands.
At certain wavelengths, the atmosphere poses an obstacle to satellite remote sensing by absorbing electromagnetic energy. Sensing systems are therefore designed to measure wavelengths within the windows where the transmissivity of the atmosphere is greatest.
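The observation above that hotter objects radiate at shorter wavelengths is made quantitative by Wien's displacement law, λ_peak = b / T with b ≈ 2.898 × 10⁻³ m·K. A sketch using round temperature figures for the Sun and Earth (standard approximations, not values from the text):

```python
# Wien's displacement law: the peak emission wavelength of a blackbody
# is inversely proportional to its temperature. Temperatures below are
# typical round figures for the Sun and Earth, used for illustration.

B_WIEN = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_um(temperature_k: float) -> float:
    """Peak emission wavelength (micrometers) for a blackbody at T kelvin."""
    return B_WIEN / temperature_k * 1e6

print(round(peak_wavelength_um(5800), 2))  # Sun: ~0.5 µm (visible band)
print(round(peak_wavelength_um(288), 1))   # Earth: ~10.1 µm (thermal infrared)
```

This matches the text: the Sun's emission peaks in the visible band, while the cooler Earth's peaks in the thermal infrared.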
Remote sensing can be done from space (using satellite platforms), from the air (using aircraft platforms), and from the ground (using static and vehicle-based systems). The same type of sensor, such as a multispectral digital frame camera, may be deployed on all three types of platforms for different applications. Each type of platform has unique advantages and disadvantages in terms of spatial coverage, access, and flexibility.
Since the launch of the first remote sensing satellites, satellite-based mapping has grown steadily. Interestingly enough, even as more satellites are launched, the demand for data acquired from airborne platforms continues to grow. The historic and growth trends for both airborne and spaceborne remote sensing are well documented in the ASPRS Ten-Year Industry Forecast [108]. The well-versed geospatial intelligence professional should be able to discuss the advantages and disadvantages of each type of platform and to recommend the appropriate data acquisition platform for a particular application and problem set. While the number of satellite platforms is quite low compared to the number of airborne platforms, the optical capabilities of satellite imaging sensors are approaching those of airborne digital cameras. However, there will always be important differences, strictly related to characteristics of the platform, in the effectiveness of satellites and aircraft for acquiring remote sensing data.
Since the 1967 inception of the Earth Resources Technology Satellite (ERTS) program (later renamed Landsat), mid-resolution spaceborne sensors have provided the vast majority of multispectral datasets to image analysts studying land use/land cover change, vegetation and agricultural production trends and cycles, water and environmental quality, soils, geology, and other earth resource and science problems. Landsat has been one of the most important sources of mid-resolution multispectral data globally.
The French SPOT satellites have been another important source of high-quality, mid-resolution multispectral data. The imagery is sold commercially and is significantly more expensive than Landsat. SPOT can also collect stereo pairs; images in the pair are captured on successive days by the same satellite viewing off-nadir [109]. Collection of stereo pairs requires special control of the satellite; therefore, the availability of stereo imagery is limited. Both traditional photogrammetric terrain extraction techniques and automatic correlation can be used to create topographic data in inaccessible areas of the world, especially where a digital surface model may be an acceptable alternative to a bare-earth elevation model.
DigitalGlobe, a commercial company, collects high-resolution multispectral imagery, which is sold commercially to users throughout the world. US Department of Defense users and partners have access to these datasets through commercial procurement contracts; therefore, these satellites are quickly becoming a critical source of multispectral imagery for the geospatial intelligence community. Bear in mind that the trade-off for high spatial resolution is limited geographic coverage. For vast areas, it is difficult to obtain seamless, cloud-free, high-resolution multispectral imagery within a single season or at the particular moment of the phenological cycle of interest to the researcher.
One obvious advantage satellites have over aircraft is global accessibility; there are numerous governmental restrictions that deny access to airspace over sensitive areas or over foreign countries. Satellite orbits are not subject to these restrictions, although there may well be legal agreements to limit distribution of imagery over particular areas.
The design of a sensor destined for a satellite platform begins many years before launch and cannot be easily changed to reflect advances in technology that may evolve during the interim period. While all systems are rigorously tested before launch, there is always the possibility that one or more will fail after the spacecraft reaches orbit. The sensor could be working perfectly, but a component of the spacecraft bus (attitude determination system, power subsystem, temperature control system, or communications system) could fail, rendering a very expensive sensor effectively useless. The financial risk involved in building and operating a satellite sensor and platform is considerable, presenting a significant obstacle to the commercialization of space-based remote sensing.
Satellites are placed at various heights and orbits to achieve desired coverage of the Earth's surface [110]. When the orbital speed exactly matches that of the Earth's rotation, the satellite stays above the same point at all times, in a geostationary [111] orbit. This is useful for communications and weather monitoring satellites. Satellite platforms for electro-optical (E/O) imaging systems are usually placed in a sun-synchronous [112], low-earth orbit (LEO) so that images of a given place are always acquired at the same local time (Figure 3.9). The revisit time for a particular location is a function of the individual platform and sensor, but generally it is on the order of several days to several weeks. While orbits are optimized for time of day, the satellite track may not always coincide with cloud-free conditions or specific vegetation conditions of interest to the end-user of the imagery. Therefore, it is not a given that usable imagery will be collected on every sensor pass over a given site.
Aircraft often have a definite advantage because of their flexibility. They can be deployed wherever and whenever weather conditions are favorable. Clouds often appear and dissipate over a target over a period of several hours during a given day. Aircraft on site can respond at a moment's notice to take advantage of clear conditions, while satellites are locked into a schedule dictated by orbital parameters. Aircraft can also be deployed in small or large numbers, making it possible to collect imagery seamlessly over an entire county or state in a matter of days or weeks simply by having many planes in the air at the same time.
Aircraft platforms range from the very small, slow, and low flying (Figure 3.10), to twin-engine turboprops and small jets capable of flying at altitudes up to 35,000 feet. Unmanned aerial vehicles (UAVs) are becoming increasingly important, particularly in military and emergency response applications, both international and domestic. Flying height, airspeed, and range are critical factors in choosing an appropriate remote sensing platform. Modifications to the fuselage and power system to accommodate a remote sensing instrument and data storage system are often far more expensive than the cost of the aircraft itself. While the planes themselves are fairly common, choosing the right aircraft to invest in requires a firm understanding of the applications for which that aircraft is likely to be used over its lifetime.
The scale and footprint of an aerial image is determined by the distance of the sensor from the ground; this distance is commonly referred to as the altitude above the mean terrain (AMT). The operating ceiling for an aircraft is defined in terms of altitude above mean sea level. It is important to remember this distinction when planning for a project in mountainous terrain. For example, the National Aerial Photography Program [113] (NAPP) and the National Agricultural Imagery Program [114] (NAIP) both call for imagery to be acquired from 20,000 feet AMT. In the western United States, this often requires flying much higher than 20,000 feet above mean sea level. A pressurized platform such as the Cessna Conquest (Figure 3.11) would be suitable for meeting these requirements.
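The AMT-versus-MSL distinction above amounts to simple arithmetic; a minimal sketch follows, where the 9,000-foot mean terrain elevation is a hypothetical figure for a western-US project, not one from the text.

```python
def required_msl_altitude(amt_ft, mean_terrain_elev_ft):
    """Flying height above mean sea level needed to achieve a
    target altitude above the mean terrain (AMT)."""
    return amt_ft + mean_terrain_elev_ft

# NAPP/NAIP-style 20,000 ft AMT over hypothetical terrain averaging
# 9,000 ft above sea level: the aircraft must fly well above 20,000 ft MSL,
# hence the need for a pressurized platform.
print(required_msl_altitude(20_000, 9_000))  # 29000
```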
With airborne systems, the flying height is determined on a project-by-project basis depending on the requirements for spatial resolution, GSD, and accuracy. The altitude of a satellite platform is fixed by the orbital considerations described above; scale and resolution of the imagery are determined by the sensor design. Medium resolution satellites, such as Landsat, and high-resolution satellites, such as GeoEye, orbit at nearly the same altitude, but collect imagery at very different ground sample distance (GSD).
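The relationship between flying height and GSD follows from similar triangles: GSD equals the detector pixel pitch scaled by the ratio of flying height to focal length. A minimal sketch, with hypothetical sensor numbers (6-micron pixels, 100 mm lens) chosen only for illustration:

```python
def ground_sample_distance(pixel_pitch_m, flying_height_m, focal_length_m):
    """GSD: the ground footprint of one detector element,
    from the similar-triangles relationship GSD = p * H / f."""
    return pixel_pitch_m * flying_height_m / focal_length_m

# Hypothetical airborne camera: 6-micron pixels, 100 mm lens, flown at 2,000 m
gsd = ground_sample_distance(6e-6, 2000, 0.100)
print(round(gsd, 3))  # 0.12 (metres)
```

The same formula explains the text's point about satellites: at a fixed orbital altitude, GSD is driven entirely by the sensor design (pixel pitch and focal length).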
Terrestrial sensors include seismic, acoustic, magnetic, and pyroelectric transducers, and optical and passive imaging sensors to detect the presence of persons or vehicles. Terrestrial sensors may be part of a wireless sensor network (WSN) of spatially distributed sensors intended to monitor conditions and to report the data through a network to a main location. Think about all of the electronic sensors surrounding you right now. There are GPS sensors and motion detectors in your smartphone. Terrestrial sensors have become abundant because they keep getting smaller and cheaper, and network connectivity has increased. With new microelectronics design, a microchip that costs less than a dollar can now link an array of sensors to a low-power wireless communications network.
As you can see, there are innumerable types of platforms upon which to deploy an instrument. Satellites and aircraft collect the majority of base map data and imagery; the sensors typically deployed on these platforms include film and digital cameras, light-detection and ranging (lidar) systems, synthetic aperture radar (SAR) systems, and multispectral and hyperspectral scanners. Many of these instruments can also be mounted on land-based platforms, such as vans, trucks, tractors, and tanks. In the future, it is likely that a significant percentage of GIS and mapping data will originate from land-based sources.
You will be introduced to three types of optical sensors: airborne film mapping cameras, airborne digital mapping cameras, and satellite imaging. Each has particular characteristics, advantages, and disadvantages, but the principles of image acquisition and processing are largely the same regardless of the sensor type.
The size, or scale, of objects in a remotely sensed image varies with terrain elevation and with the tilt of the sensor with respect to the ground, as shown in Figure 3.12. Accurate measurements cannot be made from an image without rectification, the process of removing tilt and relief displacement. In order to use a rectified image as a map, it must also be georeferenced to a ground coordinate system.
If remotely sensed images are acquired such that there is overlap between them, then objects can be seen from multiple perspectives, creating a stereoscopic view, or stereomodel. A familiar application of this principle is the View-Master [115] toy many of us played with as children. The apparent shift of an object against a background due to a change in the observer's position is called parallax [116]. Following the same principle as depth perception in human binocular vision, heights of objects and distances between them can be measured precisely from the degree of parallax in image space, provided the overlapping photos can be properly oriented with respect to each other; in other words, if the relative orientation is known (Figure 3.13). (Before corresponding points in images taken with two cameras can be used to recover distances to objects in a scene, one has to determine the position and orientation of one camera relative to the other. This is the classic photogrammetric problem of relative orientation, central to the interpretation of binocular stereo information.)
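The standard parallax height equation, h = H·dp / (P + dp), relates flying height above ground H, the absolute parallax P at an object's base, and the measured differential parallax dp. A small sketch with hypothetical measurements:

```python
def height_from_parallax(flying_height, abs_parallax_base, diff_parallax):
    """Object height from differential parallax on a properly
    oriented stereo pair: h = H * dp / (P + dp)."""
    return flying_height * diff_parallax / (abs_parallax_base + diff_parallax)

# Hypothetical stereo pair: 1,500 m above ground, absolute parallax 88.0 mm
# at the base of a tower, 2.0 mm of differential parallax to its top
h = height_from_parallax(1500, 88.0, 2.0)
print(round(h, 1))  # 33.3 (metres)
```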
Airborne film cameras have been in use for decades. Black and white (panchromatic), natural color, and false color infrared aerial film can be chosen based on the intended use of the imagery; panchromatic provides the sharpest detail for precision mapping; natural color is the most popular for interpretation and general viewing; false color infrared is used for environmental applications. High-precision manufacturing of camera elements such as lens, body, and focal plane; rigorous camera calibration techniques; and continuous improvements in electronic controls have resulted in a mature technology capable of producing stable, geometrically well-defined, high-accuracy image products. Lens distortion can be measured precisely and modeled; image motion compensation mechanisms remove the blur caused by aircraft motion during exposure. Aerial film is developed using chemical processes and then scanned at resolutions as high as 3,000 dots per inch. In today's photogrammetric production environment, virtually all aerotriangulation, elevation, and feature extraction are performed in an all-digital work flow.
Airborne digital mapping cameras have evolved over the past few years from prototype designs to mass-produced operationally stable systems. In many aspects, they provide superior performance to film cameras, dramatically reducing production time with increased spectral and radiometric resolution. Detail in shadows can be seen and mapped more accurately. Panchromatic, red, green, blue, and infrared bands are captured simultaneously so that multiple image products can be made from a single acquisition (Figure 3.14).
High-resolution satellite imagery is now available from a number of commercial sources, both foreign and domestic. The federal government regulates the minimum allowable GSD for commercial distribution, based largely on national security concerns; 0.6-meter GSD is currently available, and higher-resolution sensors are being planned for the near future (McGlone, 2007). The image sensors are based on a linear push-broom design. Each sensor model is unique and contains proprietary design information; therefore, the sensor models are not distributed to commercial purchasers or users of the data. Through commercial contracts, these satellites provide imagery to NGA in support of geospatial intelligence activities around the globe.
As digital aerial photography has matured, it has become integrated into many consumer-level, web-based applications, such as Google Earth and numerous navigation and routing packages. Microsoft has recently deployed a large number of aerial survey planes equipped with the Vexcel UltraCam sensor in an ambitious Global Ortho [117] program. Their goal is to provide very high resolution color imagery over the entire land surface of the Earth, made publicly available through the Bing Maps platform.
Until very recently, spaceborne sensors produced the majority of multispectral data. Commercial data providers (SPOT, DigitalGlobe, and others) license imagery to end-users for a fee, with limits on further distribution. The origins of commercial multispectral remote sensing can be traced to interpretation of natural color and color infrared (CIR) aerial photography in the early 20th century. CIR film was developed during World War II as an aid in camouflage detection (Jensen, 2007). It also proved to be of significant value in locating and monitoring the condition of vegetation. Healthy green vegetation shows up in shades of red; deep, clear water appears dark or almost black; concrete and gravel appear in shades of grey. CIR photography captured under the USGS National Aerial Photography Program [113] was manually interpreted to produce National Wetlands Inventory (NWI) maps for much of the United States. While film is quickly being replaced by direct digital acquisition, most digital aerial cameras today are designed to replicate these familiar natural color or color-infrared multispectral images.
Computer monitors are designed to simultaneously display three color bands. Natural color image data comprises red, green, and blue bands. Color infrared data comprises infrared, red, and green bands. For multispectral data containing more than three spectral bands, the user must choose a subset of three bands to display at any given time, and furthermore must map those three bands to the computer display in such a way as to render an interpretable image.
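The band-to-display mapping can be sketched in a few lines of NumPy. In this illustration the band ordering and the random pixel values are stand-ins for a real four-band image:

```python
import numpy as np

# Hypothetical 4-band image array: bands ordered blue, green, red, NIR
bands = {"blue": 0, "green": 1, "red": 2, "nir": 3}
img = np.random.randint(0, 256, size=(4, 100, 100), dtype=np.uint8)

# Natural color composite: red, green, blue -> display R, G, B channels
natural = np.stack(
    [img[bands["red"]], img[bands["green"]], img[bands["blue"]]], axis=-1)

# Color-infrared composite: NIR, red, green -> display R, G, B channels
# (healthy vegetation renders in shades of red, as with CIR film)
cir = np.stack(
    [img[bands["nir"]], img[bands["red"]], img[bands["green"]]], axis=-1)

print(natural.shape, cir.shape)  # (100, 100, 3) (100, 100, 3)
```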
Simple visual interpretation can be quite useful for general situational awareness and decision making. Additional preparation and processing are often required for more complex analysis. If the end-user application requires the overlay of multiple remotely sensed images or detailed geospatial data, such as road centerlines or building outlines, georeferencing must be performed. If spectral information is to be used to classify pixels or areas in the image based on their content, then effects of the atmosphere must be accounted for. To detect change between multiple images, both georeferencing and atmospheric correction of all individual images may be required.
Digital images are clearly very useful—a picture is worth a thousand words—in many applications; however, their usefulness is greatly enhanced when the image is accurately georeferenced [118]. The ability to locate objects and make measurements makes almost every remotely sensed image far more useful. Georeferencing must be accomplished using either some form of technology (such as GPS) or some method (such as warping to known control points or more rigorous aerotriangulation). Geometric distortions due to the sensor optics, atmosphere and earth curvature, perspective, and terrain displacement must all be taken into account. Furthermore, a reference system must be established in order to assign real-world coordinates to pixels or features in the image. Georeferencing is relatively simple in concept, but quickly becomes more complex in practice due to the intricacies of both technology and coordinate systems.
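Warping to known control points is often done with a least-squares affine fit. Below is a minimal sketch using hypothetical pixel/ground control pairs; note that a plain affine model ignores terrain displacement, which the more rigorous methods described here must handle.

```python
import numpy as np

def fit_affine(image_xy, ground_xy):
    """Least-squares affine transform mapping image (col, row) to
    ground (E, N). Requires at least 3 non-collinear control points.
    Returns (a, b, c, d, e, f) with E = a*x + b*y + c, N = d*x + e*y + f."""
    n = len(image_xy)
    A = np.zeros((2 * n, 6))
    rhs = np.zeros(2 * n)
    for i, ((x, y), (E, N)) in enumerate(zip(image_xy, ground_xy)):
        A[2 * i] = [x, y, 1, 0, 0, 0]
        A[2 * i + 1] = [0, 0, 0, x, y, 1]
        rhs[2 * i], rhs[2 * i + 1] = E, N
    params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return params

# Hypothetical control points: image pixels matched to ground coordinates
img_pts = [(0, 0), (1000, 0), (0, 1000), (1000, 1000)]
gnd_pts = [(500000, 4500000), (501000, 4500000),
           (500000, 4499000), (501000, 4499000)]
p = fit_affine(img_pts, gnd_pts)

# Apply the fitted transform to a new pixel at the image center
E = p[0] * 500 + p[1] * 500 + p[2]
N = p[3] * 500 + p[4] * 500 + p[5]
print(round(E), round(N))  # 500500 4499500
```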
Georeferencing an analog or digital photograph is dependent on the interior geometry of the sensor as well as the spatial relationship between the sensor platform and the ground. The single vertical aerial photograph is the simplest case; we can use the internal camera model and six parameters of exterior orientation (X, Y, Z, roll, pitch, and yaw) to extrapolate a ground coordinate for each identifiable point in the image. We can either compute the exterior orientation parameters from a minimum of three ground control points using space resection equations, or we can use direct measurements of the exterior orientation parameters obtained from GPS and IMU.
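The extrapolation from exterior orientation to ground coordinates rests on the collinearity equations. The sketch below projects a ground point into an idealized vertical photo; the 153 mm focal length and all coordinates are hypothetical, and a full space resection would solve the inverse problem from control points.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Sequential rotations (roll, pitch, yaw) as used in the
    photogrammetric collinearity model; angles in radians."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(ground_pt, camera_pos, angles, focal_len):
    """Collinearity equations: image (x, y) of a ground point for a
    camera with known exterior orientation (X, Y, Z, omega, phi, kappa)."""
    M = rotation_matrix(*angles).T
    d = M @ (np.asarray(ground_pt) - np.asarray(camera_pos))
    x = -focal_len * d[0] / d[2]
    y = -focal_len * d[1] / d[2]
    return x, y

# Hypothetical vertical photo: camera 1,500 m above the origin,
# ground point offset 100 m east, 153 mm focal length
x, y = project((100.0, 0.0, 0.0), (0.0, 0.0, 1500.0), (0, 0, 0), 0.153)
print(round(x, 5), round(y, 5))  # 0.0102 0.0
```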
Direct georeferencing solves a large part of the image rectification problem, but not all of it. We can only extrapolate an accurate coordinate on the ground when we actually know where the ground is in relationship to the sensor and platform. We need some way to control the scale of the image. Either we need stereo pairs to generate intersecting light rays, or we need some known points on the ground. A georeferenced satellite image can be orthorectified if an appropriate elevation model is available. The effects of relief displacement are often less pronounced in satellite imagery than in aerial photography, due to the great distance between the sensor and the ground. It is not uncommon for scientists and image analysts to make use of satellite imagery that has been registered or rectified, but not orthorectified. If one is attempting to identify objects or detect change, the additional effort and expense of orthorectification may not be necessary. If precise distance or area measurements are to be made, or if the analysis results are to be used in further GIS analysis, then orthorectification may be important. It is important for the analyst to be aware of the effects of each form of georeferencing on the spatial accuracy of his/her analysis results and the implications of this spatial accuracy in the decision-making process.
The degree of accuracy and rigor required for the georeferencing depends on the desired accuracy of the result. More error can be tolerated in an image backdrop intended for visual interpretation, where a human interpreter can use judgment to work around some geographic misalignments. If the intent is to use automated processing to intersect, combine, or subtract one data layer from others using mathematical algorithms, then the spatial overlay must be much more accurate in order to produce meaningful results. Higher accuracy is achieved only with better ground control, accurate elevation data, and thorough quality assurance. Most remotely-sensed data is delivered with some level of georeferencing information, which locates the image in a ground coordinate system. There are generally three levels of georeferencing, each corresponding to a different geometric accuracy.

Level 1: uses positioning information obtained directly from the sensor and platform to roughly geo-locate the remotely-sensed scene on the ground. This level of georeferencing is sufficient to provide geographic context and support visual interpretation of the data. It is often not accurate enough to support robust image or GIS analysis that requires combining the remotely-sensed dataset with other layers.

Level 2: uses a Digital Elevation Model (DEM) to remove relief displacement caused by variation in the height of the terrain. This improves the relative spatial accuracy of the data; distances measured between points within the geo-corrected image will be more accurate, particularly in scenes containing significant elevation changes. The DEM is usually obtained from another source, and the spatial accuracy of the Level 2 image will depend on the accuracy of the DEM.

Level 3: uses a DEM and ground control points to most accurately georeference the image on the ground.
In addition to the DEM, ground control points must be obtained from another source, and the accuracy of the Level 3 image will depend on the accuracy of the ground control points. Level 3 processing is usually required in order to provide the most accurate overlays of remotely-sensed data sets and other relevant GIS data.
If the end-user application intends to make use of spectral information contained in the image pixels to identify and separate different types of material or surfaces based on sample spectral libraries, then contributions to those pixel values made by the atmosphere must be removed. Atmospheric correction is a complex process utilizing control measurements, information about the atmospheric content, and assumptions about the uniformity of the atmosphere across the project area. The process is automated, but requires sophisticated software, highly skilled technicians, and, again, time. Furthermore, atmospheric correction parameters used on one dataset cannot be summarily applied to a dataset collected on another day.
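As a taste of what atmospheric correction involves, dark-object subtraction is one of the simplest published techniques (not the full process described above): it assumes the darkest pixel in a band should have near-zero reflectance and treats its value as additive haze to be removed.

```python
import numpy as np

def dark_object_subtraction(band):
    """Crude first-order haze removal: subtract the band's minimum
    value from every pixel, clipping at zero. Rigorous atmospheric
    correction (radiative transfer modeling) is far more involved."""
    haze = int(band.min())
    corrected = np.clip(band.astype(np.int32) - haze, 0, None)
    return corrected.astype(band.dtype)

# Tiny hypothetical band: the darkest value (12) is removed everywhere
band = np.array([[12, 40], [12, 200]], dtype=np.uint8)
print(dark_object_subtraction(band))
```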
A drone is an unmanned aerial vehicle (UAV) and also referred to as an unpiloted aerial vehicle (UPV) or a remotely piloted aircraft (RPA). The following discussion was developed as part of GEOG 597G, Geospatial Applications for Unmanned Aerial System (UAS) [119] by Penn State's College of Earth and Mineral Sciences [120] and is licensed under CC BY 3.0 [75].
Here you will learn about the history of UAS development and its introduction to civilian and military applications. The history of flying objects, or the unmanned aerial vehicle in its rudimentary forms, extends way back to ancient civilizations. The Chinese, around 200 AD, used paper balloons (equipped with oil lamps to heat the air) to fly over their enemies after dark, which caused fear among the enemy soldiers who believed that there was divine power involved in the flight.
The idea of unmanned aerial objects came long before manned flights. This was for the obvious reason of removing the risk of loss of life in conjunction with these experimental objects. In modern times, the idea of unmanned flying objects developed to mean flying aerial vehicles, or aircraft without pilots on board. Thanks to advancements in technology, the maneuvering and control of piloted flight can be sufficiently mimicked. Names like aerial torpedo, radio controlled vehicle, remotely piloted vehicle (RPV), remote controlled vehicle, autonomous controlled vehicle, pilotless vehicle, unmanned aerial vehicle (UAV), unmanned aircraft system (UAS), and drone are names that may be used to describe a flying object or machine without a pilot on board.
The main challenge that faced early aerospace pioneers of piloted and pilotless airplanes alike was the issue of controlling flight once the flying object was up in the air. The Wright Brothers (1903), and at about the same time, Dr. Samuel Pierpont Langley, taught the aviation world a lot about the secrets of controlled flight. Afterwards, the war machine of WWI put intense pressure on inventors and scientists to come up with innovations in all aspects of flight design including power plants, fuselage structures, lifting wing configurations and control surface arrangements. By the time WWI ended, modern day aviation had been born.
In late 1916, the US Navy funded the Sperry Gyroscope Company (later named the Sperry Corporation) to develop an unmanned torpedo that could fly a guided distance of 1,000 yards and detonate its warhead close enough to an enemy warship. Almost two years later, on March 6, 1918, after a series of failures, Sperry's efforts succeeded in launching an unmanned torpedo that flew a 1,000-yard course in stable guided flight. It dived onto its target at the desired time and place, and was later recovered and landed. With this successful flight, the world's first unmanned aircraft system, the Curtiss N-9, was born.
In the late 1930s, the U.S. Navy returned to the development of drones. This was highlighted by the Naval Research Lab's development of the Curtiss N2C-2 drone (see Figure 3.15). The 2,500-lb. biplane was instrumental in testing the accuracy and efficiency of the Navy's anti-aircraft defense system.
Penn State Extension is testing drones to determine possible uses of drones for such purposes as observing pest control and fertilizer application patterns. Monitoring these types of things from the air can help growers with crop management decisions. Click on the image below to watch a short video (:36) about using drones in crop management.
If you are interested in reading the accompanying story, you can use the following link to access the "Penn State crop educator explores drone-driven crop management" [122] article.
The way a pilotless aircraft is controlled determines its categorization. In general, there are three main names for pilotless aircraft:
Whether it is named a UAV, an RPV, or a drone, at a minimum, the pilotless aircraft should include the following elements:
Naming the different missions for UAVs is a difficult task, as there are so many possibilities and there have never been enough systems in use to explore all the possibilities. However, the two main classifications for UAV missions are the following:
As of today, civilian missions include various applications such as:
Military and civilian missions of UAVs overlap in many areas. Both use UAVs for reconnaissance and surveillance. In addition, both use the UAV as a stationary platform over a point on the ground from which to perform many of the functions of communications or remote sensing satellites at a fraction of the cost.
The following content was derived from the Foundations of Geographic Information and Spatial Analysis Boot Camp (no link is currently available) by Penn State's College of Earth and Mineral Sciences [120] and is licensed under CC BY 3.0 [75].
As I said on the previous page, both military and civilian missions use Drones or UAVs for reconnaissance and surveillance. These systems often deliver real-time video. Real-time video capabilities, referred to as "motion imagery" or "full motion video" (FMV) are expanding the role of remote sensing and GEOINT in high-tempo military operations and civilian applications. The conventional sensors and platforms presented in previous sections provide critically important, spatially accurate, base map layers for geographic information systems and traditional image analysis. FMV sensors and platforms open the door for persistent surveillance of pinpointed targets on the ground, tracking them as they move and fusing intelligence from other sources to support immediate action. FMV presents significant new challenges to geospatial infrastructure—hardware, software, and analysts.
According to the Motion Imagery Standards Board (MISB), a motion imagery system is "any imaging system that collects at a rate of one frame per second (1 Hz) or faster, over a common field of regard." While MISB makes no formal distinction between motion imagery and full motion video, FMV is generally regarded as "that subset of motion imagery at television-like frame rates (24 - 60 Hz)."
The key phrase "persistent surveillance" provides the important distinction between FMV and traditional analysis of discontinuous imagery. It connotes "constant stare," the ability to watch a point on the ground or to follow a moving target for a long period of time, without interruption. Contrast this need with the typical field of view and revisit time for traditional airborne or spaceborne remote sensing systems and you will begin to appreciate the paradigm shift in geospatial intelligence being stimulated by FMV technology. Traditional imagery analysis tends to be feature-based, focusing on structural features of buildings or identification of known objects of interest, such as tank formations and fleets of ships or aircraft. FMV, on the other hand, is activity-based, focusing on capturing the movements of individual people and vehicles or the "patterns of life" exhibited by small groups (Copeland, 2009).
FMV technologies provide the capability to monitor high-interest activities, including "tracking moving, fleeting, and emerging targets as well as observation of rapidly developing events." (ASPRS, 2009) The phrase "find, fix, and finish" neatly describes the tactical advantage imparted to those who possess this powerful intelligence tool.
Terms commonly used when performing or discussing persistent surveillance are defined by Copeland (2009) as:
Digital video cameras, infrared, multispectral, and hyperspectral systems can all be adapted for an FMV application. Because these systems are primarily intended for human interpretation in surveillance applications, where limiting the size of the dataset is needed to facilitate real-time streaming and processing, they tend to have lower spatial resolution and smaller fields of view than their traditional remote sensing counterparts. As with conventional systems, the instantaneous field of view and scale of the resulting imagery will be determined by the operational altitude above ground level (AGL) and the focal length of the camera lens.
The most common sensors comprising an FMV system are an electro-optical (EO) digital video camera and an infrared video camera. The EO sensor is simply either a panchromatic or a color digital video camera using daylight as the illumination source and recording data in the visible (red to blue) part of the electromagnetic spectrum.
The infrared video (IR) camera acquires thermal imagery, which is useful for detecting thermally emissive objects (e.g., people, running vehicles, etc.) or for collecting imagery at night when there is no ambient light available for the EO camera. The IR cameras can be set to collect "white hot," where the hottest objects in the scene are depicted with high (light) grayscale values, or "black hot," where the hottest objects in the scene are depicted with low (dark) grayscale values. The choice of white hot or black hot would be made by those performing interpretation of the imagery, depending on what is of particular interest in the scene.
FMV sensor packages are compact and portable and can be quickly installed on a wide variety of airborne vehicles, manned and unmanned. Specifications follow for many of the primary motion imagery platforms, both manned and unmanned. Each of these is primarily focused on intelligence-gathering activities where extended loitering over areas of interest is of key importance to operations. Around the world, these aircraft can also be used to collect imagery of dire environmental emergencies.
There are differences between classical remote sensing and FMV, and the skill set required for an analyst in these respective domains. In classical image analysis, success depends on the ability of the analyst to reliably identify specific objects of interest based on shape, texture, or radiometric signature. In FMV, success is less dependent on object identification and largely achieved through the ability of the analyst to work through shortcomings in the system architecture without losing the tempo required to maintain real-time operations. Whereas the classical image analyst may be the type of person who is very detail-oriented, thorough, and focused in a specific niche of expertise, the FMV analyst must be an interactive integrator of many sources of intelligence, capable of acting and making critical decisions without the aid of extensive research.
The power of FMV is brought to bear when the team controlling acquisition and exploitation of real-time video knows precisely where and when to look at a target of interest. Discovering a target is not the goal of FMV. The goal is to follow a target in order to ascertain patterns of behavior which can then be used for a tactical advantage. In classical remote sensing, on the other hand, previously unknown or unsuspected information can often be discovered by detecting changes in a region of interest over time. These two disciplines can potentially complement each other, but the skills, training, and technology needed to support them are clearly quite different.
Please view my second video (6:01) lecture concerning GEOINT data sources and collection strategies.
Any type of lawfully and ethically collected geospatial information from publicly available sources is considered to be open source material. Open source is contrasted with closed source material that is not available to the public. There is an increasing reliance on the open source collection of information.
One of the attractions of open source information is the perception that it is easily collected with no accountability. This can be incorrect, and in some countries the information must be collected for a legitimate purpose. European countries have incorporated the "European Convention on Human Rights" into their legislation. A search that engages Article 8, "private life" issues, must meet the threshold for interference as set out in the Convention. Article 8 protects the private life of individuals against arbitrary interference by public authorities and private organizations such as the media. Article 8 is a qualified right, so in certain circumstances public authorities can interfere with the private and family life to prevent disorder or crime, protect health or morals, or to protect the rights and freedoms of others. However, such interference must be in accordance with the law and necessary to protect national security, public safety, or the well-being of the country.
Cyber can be an important source of open source data. When viewed with an eye to intelligence analysis, social media, and more broadly, cyber transactions between individuals and groups, describe past and current events and help to anticipate future events. These transactions can be aggregated into trends, models, and conditions that describe who is involved and where and when an event will be or is happening. While much of the information disseminated in social media is not geographic information per se, such social media transactions contain massive amounts of geographic information in the "from" and "to" communication nodes and possible geographic references in the content. Significantly, such cyber activities are intrinsically self-documenting and provide spatial and temporal information to enable analysts to focus on point events, group behaviors, or larger trends. The result is that cyber activities create a vast amount of useful data about an individual or group in the context of local, regional, and global activities. Given the growing impact of the technology and the rich information content, the importance of these sources has grown and will likely continue to grow in value to the intelligence community.
Crowdsourced geospatial data (CGD) is an emerging trend that is influencing future methods for geospatial data acquisition. CGD involves the participation of untrained individuals with a high degree of interest in geospatial technology. Working collectively, these individuals collect, edit, and produce datasets. Crowdsourced geospatial data production is typically an open, lightly-controlled process with few constraints, specifications, or quality assurance processes. This contrasts with the highly-controlled geospatial data production practices of national mapping agencies and businesses. Adoption of CGD has been slow, especially among government organizations, because these differences in production methods raise concerns about data quality.
You might enjoy reading: "Frustrating hunt for Genghis Khan’s long-lost tomb just got a whole lot easier." [123]
After a period of initial skepticism, government agencies are now incorporating CGD. There are three main methods, which are:
An important emerging application of hybrid CGD projects is emergency management aided by volunteers. An example is the organizations fighting Ebola in the three hardest-hit countries—Sierra Leone, Guinea, and Liberia—which need maps to help aid workers get around the country and do the difficult job of checking village by village for victims of the disease. The UN, Red Cross, and Doctors Without Borders have turned to OpenStreetMap (OSM) for their map data. OSM's crowdsourced mapping project brings together mappers on the ground using GPS devices with mapping capabilities.
Explore OpenStreetMap (OSM) [124]. OSM is a collaborative project to create a free, editable map of the world. Two major driving forces behind the establishment and growth of OSM have been restrictions on use or availability of map information across much of the world, and the advent of inexpensive portable satellite navigation devices.
Created by Steve Coast in the UK in 2004, it was inspired by the success of Wikipedia and the preponderance of proprietary map data in the UK and elsewhere. Since then, it has grown to over 1.6 million registered users, who can collect data using manual survey, GPS devices, aerial photography, and other free sources. This crowdsourced data is then made available under the Open Database License. The OpenStreetMap Foundation, a non-profit organization registered in England, supports the site.
Rather than the map itself, the data generated by the OpenStreetMap project is considered its primary output. This data is then available for use in both traditional applications, like its usage by Craigslist, Geocaching, MapQuest Open, JMP statistical software, and Foursquare to replace Google Maps, and more unusual roles, like replacing default data included with GPS receivers. This data has been favorably compared with proprietary data sources, though data quality varies worldwide.
See the West African Ebola Response. Mapping efforts in support of the relief operation are still ongoing, and new contributors are always welcome. Getting involved is simple: go to The HOT Tasking Manager [125] and check out an area to map from one of the Ebola-related tasks. Currently, tasks are open for Bo, Sierra Leone [126] and Panguma, Sierra Leone [127].
GEOINT data are collected openly and covertly (secretly). Data collected covertly may be considered closed source. Some intelligence collection must remain secret because its revelation could jeopardize the individuals involved.
Closed source data is government or private data that is not available through open inquiry. In simple terms, data is closed source if it is not meant to be openly available to the public. Because of its origin, closed source data is often considered more accurate and reliable.
Closed source data includes material that a government denotes "classified" in order to restrict public access so as to protect confidentiality, integrity, or availability. Access to government "classified data" is typically restricted by law to particular trusted individuals. An unauthorized disclosure can result in administrative or criminal penalties. A formal security clearance is often required to handle or access classified data. Classified data are typically marked with a level of sensitivity - e.g. restricted, confidential, secret, or top secret.
Closed source typically includes material such as proprietary business information, law enforcement data, educational records, banking records, and medical records. In general, closed source data are obtained or derived from sources that:
Proprietary describes the level of confidentiality given by the owner. The term proprietary information is often used interchangeably with the term trade secret. Generally, data termed by the owner as "proprietary" limits who can view it or know about its contents. Examples of proprietary information include:
There is no standard used by businesses for determining what is proprietary. The nature of what is held as proprietary varies by industry and individual business practice. If the information does not seem to be readily available, it may be considered proprietary and not for public use.
In the United States, the US Economic Espionage Act of 1996 addresses industrial espionage or commercial spying. This act imposes severe penalties for stealing trade secrets. An owner of a trade secret seeking to protect proprietary information must derive an economic value from not having this information publicly known and must take steps to maintain the secrecy of the information for any legal recourse if sensitive information is disclosed. Businesses often require employees to sign non-competition and proprietary information agreements that restrict the information employees can disclose during their employment or after leaving the company. Many businesses limit employee access to computer files, maintain secure areas where sensitive information is stored, and control visitor access.
Many countries deem the records they keep on an individual to be private. In some countries it might be a criminal offense to ask for or disclose any personal information to an unauthorized person.
Tomnod [128] is an example of a crowdsourcing platform that is used to extract GEOINT data from satellite imagery. DigitalGlobe’s constellation of satellites captures high-resolution images of an area the size of India every day. Tomnod expedites the analysis of this huge amount of satellite imagery by engaging a large public crowd. For example, millions of people took part in a Tomnod campaign to search for signs of the missing Malaysian Airlines flight MH370 [129].
Tomnod responds to global events like natural disasters, security incidents, search and rescue missions, or mapping remote populations. Tomnod loads current satellite images onto their website where they can be explored by a global community of volunteer image taggers. Each volunteer scans a subset of the imagery, pixel by pixel, and places a “tag” over objects of interest. Within hours, the crowd can cover thousands of square kilometers many times over. By gathering multiple, independent views of every location, crowdsourced consensus begins to emerge. Even though most individuals are novice imagery interpreters, the “wisdom of the crowd” converges on the locations that are most important. Using a statistical geospatial algorithm, this consensus can be extracted and used to focus the efforts of expert analysts (known as “tipping and cueing”) or provided to responders on the ground.
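The consensus step can be sketched simply. The grid-cell voting below is an illustrative stand-in for Tomnod's actual algorithm, which is not public; the volunteer tags, cell size, and threshold are all hypothetical.

```python
from collections import defaultdict

def consensus_cells(tags, cell_size=0.001, min_taggers=3):
    """Snap each volunteer's tag to a grid cell and keep cells where
    enough INDEPENDENT taggers agree -- a crude 'wisdom of the crowd'."""
    cells = defaultdict(set)          # cell -> distinct user ids
    for user, lat, lon in tags:
        cell = (round(lat / cell_size), round(lon / cell_size))
        cells[cell].add(user)
    return {c: len(u) for c, u in cells.items() if len(u) >= min_taggers}

# Hypothetical tags: three volunteers cluster on one spot, one is alone.
tags = [("ana", 41.4050, -122.3840), ("bo", 41.4052, -122.3841),
        ("cy", 41.4049, -122.3838), ("dee", 41.5000, -122.1000)]
hits = consensus_cells(tags)          # one consensus cell, 3 taggers
```

Counting distinct users per cell, rather than raw tags, is what makes the views independent: one enthusiastic tagger cannot manufacture consensus on their own.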
See how Tomnod helped to track the damage from the Boles Fire near Weed, California [131]. The Boles Fire started on September 15, 2014, southeast of Weed, California. The wind-driven fire quickly moved into town, destroying 100 homes and forcing 1,500 people to evacuate. DigitalGlobe's satellites collect imagery in the infrared spectrum in addition to the red/green/blue colors visible to the human eye. In this Tomnod campaign, the images are “false-colored” using that infrared imagery, which highlights the burned areas in black while areas of healthy vegetation appear red.
Use the following link to access the Tomnod Boles Fire near Weed, California site [132].
For this week's discussion I want to focus on Crowdsourced Geospatial Data (CGD). Here are a few prompts for this week's discussion:
I am always skeptical of crowdsourced data or, indeed, any data. As a geographer and remote sensor whose focus is enumerating displaced populations, I have to be. Skepticism is part of my job. All data contain error, so it is best to acknowledge it and decide what that error means. There is still a lot of uncertainty around these types of volunteered geographic information [133]; specifically, questions over the positional accuracy, precision, and validity of these data, among a wide variety of other issues [134]. These quantitative issues are important because the general assumption is that these data will be operationalized somehow and it is, therefore, [135] imperative that they add value to already confusing situations if this enterprise is to be taken seriously in an operational sense [136]. The good news is that research so far shows that these “asserted” data are not, a priori, necessarily any worse than “authoritative” data and can be quite good because of the greater number of individuals available to correct errors [137].
Source: http://blog.standbytaskforce.com/tag/tomnod/ [138].
Head over to the dedicated discussion forum [49] to talk about these issues.
This lesson introduced high-level GEOINT data source and collection strategies. The major focus of this lesson is that GEOINT source and collection strategies are about getting enough information to answer an intelligence question. We explored how GEOINT data is collected from a variety of sources using various collection strategies, none of which at a conceptual level are unique to GEOINT. Sources are the means or systems used to observe and record data. The collection strategy is an overarching approach to data collection. Discontinuous collection is defined as not collecting a full record of activity during a time period. Persistent collection was introduced as a strategy that emphasizes the ability to linger for a period of time to detect, locate, characterize, identify, track, target, and possibly provide near- or real-time data. We explored how GEOINT data can be collected openly and covertly. Data collected openly and maintained transparently is considered open source. Data collected covertly and maintained in secret is considered closed source. We discussed how closed source data is not unique to GEOINT.
Don't forget to complete the Lesson 3 Quiz [139]!
Department of Defense dictionary of military and associated terms joint publication 1-02. (2002). Washington, D.C.: Joint Chiefs of Staff.
Priorities for GEOINT research at the National Geospatial-Intelligence Agency. (2006). Washington, D.C.: National Academies Press.
Please view my video lecture (7:58) concerning the GEOINT tradecraft.
So what is the GEOINT tradecraft when examined broadly, beyond its implementation by any single nation? Tradecraft is how GEOINT gets done; it includes the principles, tools, and standards of rigor for an analyst's thinking. An interesting early example of clandestine GEOINT tradecraft is the artwork of Robert Baden-Powell [140], who served as a military intelligence officer in the British Army during the 1890s. Baden-Powell sketched enemy fortifications by pretending to be an entomologist making detailed sketches of butterflies and leaves that, on close scrutiny, were revealed to be maps of gun emplacements or trenches.
Current-day GEOINT tradecraft is a merging of science and technology with the conventional notion of tradecraft. Historically, tradecraft is defined as how an intelligence officer makes clandestine contact with an agent to obtain information, how the information is passed (or processed), and the tactics for preventing detection. In a more modern sense, tradecraft is also defined as an organization's sources and methods. GEOINT sources may include information obtained clandestinely by humans (like Baden-Powell), by technical collection systems (satellites), and from modern-day open sources (tweets). Methods pertain to the techniques used by analysts. An organization's geospatial sources and methods may be closely guarded so as not to give opponents the opportunity to know the capabilities and interests of an intelligence gathering organization. Unlike other forms of intelligence tradecraft, GEOINT's tradecraft is based on geospatial reasoning. Here, geospatial reasoning is reflective, skeptical, and analytic, implying that the successful application of the tradecraft can never be rote but must always involve the educated mind of the analyst in an active questioning and examination of assumptions, techniques, and data if it is to meet the rigorous standards of good intelligence work. In this course we use the following definition:
The GEOINT Tradecraft is unique and sometimes privileged organizational sources and methods for obtaining information of a place and making sense of the information to support the decision maker in understanding human activities and intentions. Methods comprise the technologic tools to organize geospatial data and the cognitive techniques used by the analyst to make sense of it when rendering judgments, insights, and forecasts.
As we said, the GEOINT tradecraft is how an organization carries out its work producing geospatial intelligence. This GEOINT organization might be a branch of government, a business, or a law enforcement agency. The implication is that tradecraft know-how is not unique to an intelligence community. In a commercial enterprise, manufacturing tradecraft would be the confidential business knowledge of how to manufacture a product. Frequently in business, this institutional knowledge is guarded from a competitor as a trade secret [142]. Critically, there is an interrelationship of humans and technologies that shapes the aspects of analysis, which is why GEOINT's tradecraft has been particularly influenced by the Geospatial Revolution. Geospatial intelligence's analytic craft has been shaped by Geographic Information Science (GIScience) and Geographic Technologies. The relationship might be illustrated as in Figure 4.2:
GEOINT tradecraft guides the collection of information about a place and the analysis in terms of human activities and intentions. Unlike geographic analysis as taught in academia, the GEOINT analyst frequently must contend with deceptive information and operate under conditions of secrecy.
Remotely sensed imagery, like the below DigitalGlobe image of Islamabad, Pakistan, is a central source of GEOINT data.
As you can imagine, there are circumstances where people want to deny successful analysis of such imagery. This is particularly true in a military context where we prevent enemies from performing reconnaissance by concealment, camouflage, and deception. However, it could be equally true for someone wanting to deny the ability to count the number of automobiles awaiting shipment in an auto factory's parking lot. A core skill of the GEOINT tradecraft is contending with deceptive information. Deception should not be mistaken for the error or misinformation you might deal with in a routine spatial analysis. Here deception is designed to gain an advantage. The idea of deceiving someone about geospatial features is not new and can occur on a grand scale. For example, after December 7, 1941, the Boeing aircraft factory was camouflaged as a village to hide it from Japanese aircraft attack. The plant had fake houses and trees over the factory.
Deception in the form of camouflage is common. In nature, protective coloration serves to protect some flora and fauna—either by making them difficult to see or by causing them to resemble something of little interest to predators. Deception in the form of camouflage allows an otherwise visible object to remain indiscernible from the surrounding environment. Avoiding being deceived requires the analyst to know the ways by which concealment or obscurity is attained. This know-how includes how the method is tailored to a particular observer and how the observer might make a false judgment about the camouflaged object.
Geospatial reasoning includes processes that support exploration, understanding, and sensemaking. Geospatial reasoning begins with the ability to use space as a context, or framework, to make sense of what we see. There are three spatial frames within which we can make the transition from what is observed to meaningful information; these are behavioral spaces, physical spaces, and cognitive spaces. In all cases, the definition of the space provides an interpretive context that gives meaning to the data.
Learning to think spatially is to consider objects in terms of their context. This is to say, the object's location in behavioral space, physical space, or cognitive space, to question why objects are located where they are, and to visualize relationships between and among these objects. The key skills of spatial thinking include having the ability to:
Please view my second video lecture (10:58) concerning the GEOINT tradecraft.
Geospatial analysis can be very difficult to do well. The difficulty is cognitive and most frequently not related to an individual's ability to use the tools that Geographic Information Technologies (GIT) provide. GEOINT analysis has two different approaches—the single hypothesis approach and the multiple (or competing) hypotheses approach. The first is most popular in academia; the second is popular in intelligence analysis. Here's a brief comparison:
Approaches | Description |
---|---|
Single Hypothesis | The natural human desire to reach an explanation can, and often does, lead the analyst to an interpretation based on a single hypothesis. Human nature is to trust the hypothesis, and the analyst is now blind to other possibilities. The early hypothesis becomes a tentative theory and then a ruling theory, and the analysis becomes focused on proving the ruling theory. The result is a blindness to evidence that disproves the ruling theory or supports an alternate explanation. Here, the results are left to the chance that the original tentative hypothesis was correct. A variation of this is to test a single working hypothesis for fact-finding that degenerates into a ruling theory. |
Multiple Hypotheses | The method of multiple hypotheses involves the development of several hypotheses that might explain the focus of the analysis. These hypotheses should be contradictory and compete so most will prove to be false. The development of multiple hypotheses prior to the analysis avoids the trap of the ruling hypothesis and thus makes it more likely that our analysis will lead to meaningful results. The major benefit is that the approach promotes greater thoroughness than an analysis directed toward one hypothesis. This leads to lines of inquiry that we might otherwise overlook, and thus to evidence and insights that might never have been encountered. |
Good geospatial analysis requires you to monitor your mental progress, make changes, and adapt the ways you are thinking. This is self-reflection, self-responsibility, and self-management. Richards Heuer addresses this in his work, The Psychology of Intelligence Analysis [145]. Heuer makes three important points relative to intelligence analysis:
The core of his argument is that even though every analyst sees the same piece of information, it is interpreted differently due to a variety of factors. In essence, one's perceptions are molded by factors that are out of human control. These cognitive patterns, or mindsets, are potentially good and bad. On the positive side, they tend to simplify information for the sake of comprehension, but they also bias interpretation. The key risks of mindsets are that:
He provides the following series of images to illustrate how poorly we are cognitively equipped to accurately interpret the world.
Question #1: What do you see in figure 4.7 below?
Question #2: Look at the drawing of the man in the upper right of Figure 4.8. Are the drawings all of men?
Question #3: What do you see in Figure 4.9—an old woman or a young woman?
Now look to see if you can reorganize the drawing to form a different image of a young woman, if your original perception was of an old woman, or of the old woman if you first perceived the young one.
According to Heuer, and as the above figures illustrate, mental models, mindsets, or cognitive patterns are essentially the analogous images by which people perceive information. Even though every analyst sees the same piece of information, it is interpreted differently due to a variety of factors. In essence, one's perceptions are shaped by a variety of factors that are completely out of the analyst's control. However, cognitive patterns are critical to allowing individuals to process what otherwise would be an incomprehensible volume of information. Yet, they can cause analysts to overlook, reject, or forget important incoming or missing information that is not in accordance with their assumptions and expectations. Ironically, experienced analysts may be more susceptible to these mindset problems as a result of their expertise and past success in using time-tested mental models. Since people observe the same information with inherent and differing biases, Richards Heuer believes an effective analysis method needs a few safeguards. The analysis method should:
Heuer advocates using Structured Analytic Techniques (SATs) as a means to overcome mindsets.
Most people solve geospatial problems intuitively by trial and error. Structured analytic techniques (SATs) are a "box of tools" to help the analyst mitigate cognitive limitations and pitfalls. Structured thinking in general, and structured geospatial thinking specifically, is at variance with the way in which the human mind is in the habit of working. Structured analysis is an approach to intelligence analysis, with the driving forces behind the use of these techniques being:
Taken alone, SATs do not constitute an analytic method for solving geospatial analytic problems. The most distinctive characteristic is that structured techniques help to decompose one's geospatial thinking in a manner that enables it to be reviewed, documented, and critiqued. "A Tradecraft Primer: Structured Analytic Techniques for Improving Intelligence Analysis [149]" (CIA, 2009) highlights a few of the key structured analytic techniques used in the private sector, academia, and the intelligence profession.
The US Intelligence Community began focusing on structured techniques because analytic failures led to the recognition that it had to do a better job of overcoming cognitive limitations and analytic pitfalls and of addressing the problems associated with mindsets. Structured analytic techniques help the mind think more rigorously about an analytic problem. In the geospatial realm, they ensure that our key geospatial assumptions, biases, and cognitive patterns are not just assumed but are examined. The use of these techniques later helps to review the geospatial analysis and identify the cause of any error.
Moreover, structured techniques provide a variety of tools to help reach a conclusion. Even if intuitive and scientific approaches provide the same degree of accuracy, structured techniques have value in that they can be easily used to balance the art and science of GEOINT. Heuer categorized structured techniques by how they help analysts overcome human cognitive limitations. Heuer's grouping is as follows:
Others have categorized techniques by their purpose: diagnostic techniques are primarily aimed at making analytic arguments, assumptions, or intelligence gaps more transparent; contrarian techniques explicitly challenge current thinking; and imaginative thinking techniques aim to develop alternative outcomes. In practice, many of the techniques combine these functions. These categories notwithstanding, analysts should select the technique that best accomplishes the specific task they set for themselves. The techniques do not guarantee analytic precision or accuracy of judgments, but they do improve the usefulness, sophistication, and credibility of intelligence assessments.
The term “sensemaking” describes an analytic process or method. Sherman Kent, who has been described as "the father of intelligence analysis," is often acknowledged as the first to propose an analytic method specifically for intelligence. The essence of Kent's method was understanding the problem, data collection, hypothesis generation, data evaluation, more data collection, and further hypothesis generation. Richards Heuer subsequently proposed an ordered eight-step model of “an ideal” analytic method, emphasizing early, deliberate generation of hypotheses prior to information acquisition:
Heuer’s technique has become known as Analysis of Competing Hypotheses (ACH). The technique entails identifying possible hypotheses by brainstorming, listing evidence for and against each, analyzing the evidence and then refining hypotheses, trying to disprove hypotheses, analyzing the sensitivity of critical evidence, reporting conclusions with the relative likelihood of all hypotheses, and identifying milestones that indicate events are taking an unexpected course. The use of brainstorming is critical: the quality of the hypotheses depends on the existing knowledge and experience of the analysts, since hypothesis generation occurs before additional information acquisition augments their knowledge of the problem.
While ACH is widely cited in the intelligence literature as a means for improving analysis, the primary advantage of ACH is a consistent approach for rejection or validation of many potential conclusions. According to David Moore [150], ACH makes it explicit that evidence may be consistent with more than one hypothesis. "Since the most likely hypothesis is deemed to be the one with the least evidence against it, honest consideration may reveal that an alternative explanation is as likely, or even more likely, than that which is favored. The synthesis of the evidence and the subsequent interpretations in light of multiple hypotheses is also more thorough than when no such formalized method is employed." The ACH matrix might look like:
Evidence | Hypothesis 1 | Hypothesis ..... |
---|---|---|
Evidence A | ||
Evidence B | ||
Evidence C | ||
........ |
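A matrix like the one above can be scored mechanically. The sketch below uses hypothetical hypotheses and evidence marks; it only illustrates ACH's core rule that the most likely hypothesis is the one with the least evidence against it.

```python
# Each cell marks evidence as Consistent ("C"), Inconsistent ("I"),
# or Not applicable ("N") with a hypothesis. All values are hypothetical.
matrix = {
    "Hypothesis 1": {"Evidence A": "C", "Evidence B": "I", "Evidence C": "I"},
    "Hypothesis 2": {"Evidence A": "C", "Evidence B": "C", "Evidence C": "N"},
}

def rank_hypotheses(matrix):
    """ACH retains the hypothesis with the LEAST evidence against it,
    so rank hypotheses by their count of inconsistent cells, ascending."""
    scores = {h: sum(mark == "I" for mark in cells.values())
              for h, cells in matrix.items()}
    return sorted(scores.items(), key=lambda item: item[1])

ranked = rank_hypotheses(matrix)   # Hypothesis 2 first: no inconsistencies
```

Note that consistent evidence is deliberately ignored in the score: as the text explains, evidence may be consistent with many hypotheses, so only disconfirming evidence discriminates between them.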
An excellent and simple explanation of the ACH approach is found in Structured Analysis of Competing Hypotheses: Improving a Tested Intelligence Methodology [151] by Kristan J. Wheaton and Diane E. McManis.
ACH and other problem-solving approaches are applied in three stages. The stages and their outputs are:
Stage | Description | Result |
---|---|---|
Stage 1: Problem Initiation | This stage develops the question. The question focuses on the nature of the geospatial and temporal patterns the analyst is seeking to identify and understand. Many new geospatial analysts struggle to translate the question into the context of spatial concepts. To overcome this, we stress the importance of understanding the general question. The question can be viewed as an active two-way discussion between the client requiring the information and the analyst supplying it. | A question that seeks to:
|
Stage 2: Information Foraging | The foraging actions of exploring for new information, narrowing the set of items that have been collected, and exploiting items in the narrowed set trade off against one another. Some analysts' work never departs from the foraging loop and simply consists of extracting information and repackaging it without much actual analysis. This stage recognizes that analysts tend to forage for data by beginning with a broad set of data and then narrowing it into successively smaller, higher-precision sets (Pirolli, 1999). | The data and information necessary for sensemaking. |
Stage 3: Sensemaking | This stage results in the development of a detailed analytic assessment. Sensemaking is the ability to make sense of an ambiguous situation; it is creating situational awareness and understanding in situations of high complexity or uncertainty in order to make decisions. It is "a motivated, continuous effort to understand connections (which can be among people, places, and events) in order to anticipate their trajectories and act effectively" (Klein, Moon, & Hoffman, 2006, "Making sense of sensemaking," IEEE Intelligent Systems, 21(4), 70-73). | An analytic result. |
It is difficult for an analyst to appreciate how various SATs and geospatial tools fit together. The following table summarizes this relationship:
Stage | Possible SAT | Example Geospatial Technology Operation |
---|---|---|
Stage 1: Problem Initiation |
|
|
Stage 2: Information Foraging |
|
|
Stage 3: Sensemaking |
|
|
The Analytic Standards of the Intelligence Community are the core of GEOINT's tradecraft. The following is based upon US Intelligence Community (IC) Directive Number 203, Analytic Standards, dated June 21, 2007 [152]. The standards articulate a commitment to meet the highest standards of integrity and rigorous analytic thinking, and serve as goals for analysts who strive for excellence in their analytic work. The analytic standards are characterized by:
Individual Standard | Description |
---|---|
Properly describes quality and reliability of underlying sources. | Analytic products should accurately characterize the information in the underlying sources and explain which information proved key to analytic judgments and why. Consistent with classification of the product, factors significantly affecting the weighting that the analysis gives to available, relevant information, such as denial and deception, source access, source motivations and bias, age and continued currency of information, or other factors affecting the quality and potential reliability of the information, should be included in the product. When appropriate, analytic products may identify a prospective information strategy to improve the reporting base when significant gaps exist. |
Properly caveats and expresses uncertainties or confidence in analytic judgments. | Analytic products should indicate both the level of confidence in analytic judgments and explain the basis for ascribing it. Sources of uncertainty—including information gaps and significant contrary reporting—should be noted and linked logically and consistently to confidence levels in judgments. As appropriate, products also should identify indicators that would enhance or reduce confidence or prompt revision of existing judgments. |
Properly distinguishes between underlying intelligence and analysts' assumptions and judgments. | All assumptions are clearly stated and defined. Assumptions deal with identifying underlying causes and/or behavior of systems, people, organizations, states, or conditions. Assumptions comprise the foundational premises on which the information and logical argumentation build to reach analytic conclusions. Assumptions may also span information gaps that would otherwise inhibit the analysis from reaching defensible judgments. Judgments are defined as logical inferences from the available information or the results of explicit tests of hypotheses. They comprise the conclusions of the analysis. Analytic products should explicitly identify the critical assumptions on which the analysis is based and explain the implications for judgments if those assumptions are incorrect. As appropriate, analytic products should identify indicators that would signal whether assumptions or judgments are more or less likely to be correct. |
Incorporates alternative analysis where appropriate. | Where appropriate, analytic products should identify and explain the strengths and weaknesses of alternative hypotheses, viewpoints, or outcomes in light of both available information and information gaps. Analytic products should explain how alternatives are linked to key assumptions and/or assess the probability of each alternative. To the extent possible, analysis should incorporate insights from the application of structured analytic technique(s) appropriate to the topic being analyzed and include discussion of key indicators that, if detected, would help clarify which alternative hypothesis, viewpoint, or outcome is more likely or is becoming more likely. |
Demonstrates relevance to the domain. | Analytic products should provide information and insight on issues relevant to the products' intended consumers and/or provide useful context, warning, or opportunity analysis. The information and insight may be particularly difficult to obtain without extensive expertise. To meet this standard fully, analytic products should examine and explicitly address direct or near-term implications of the information and judgments for the intended audience and/or for appropriate security interests, and, when possible, also examine longer-term implications or identify potential indirect or second-order effects. |
Uses logical argumentation. | Analytic presentation should facilitate clear understanding of the information and reasoning underlying analytic judgments. Key points should be effectively supported by information or, for more speculative warning or "think pieces," by coherent reasoning. Language and syntax should convey meaning unambiguously. Products should be internally consistent and acknowledge significant supporting and contrary information affecting key judgments. Graphics and images should be readily understandable and should illustrate, support, or summarize key information or analytic judgments. |
Exhibits consistency of analysis over time, or highlights changes and explains rationale. | Analytic products should deliver a key message that is either consistent with previous production on the topic from the same analytic element or, if the key analytic message has changed, highlight the change and explain its rationale and implications. |
Makes accurate judgments and assessments. | Analytic elements should apply expertise and logic to make the most accurate judgments and assessments possible given the information available to the analytic element and known information gaps. Where products are estimative, the analysis should anticipate and correctly characterize the impact and significance of key factors affecting outcomes or situations. Accuracy is sometimes difficult to establish and can only be evaluated retrospectively if necessary information is collected and available. |
If you recall, intelligence is not truth. Intelligence is an approximation of truth with some level of confidence. It is necessary for decision makers to know how confident their analysts are in the results of an analysis; however, analysts are often reluctant to state the uncertainties surrounding their work because they may not:
An often-cited example of the damage vague statements can do is the US President's Daily Brief (PDB) of August 6, 2001. For example:
The PDB briefing did not present the President with a clear estimate of Bin Laden’s likely activities in the coming months. Consider the different message to the decision maker if the above PDB were stated as, “Bin Laden implied...that his followers will probably ‘bring the fighting to America.’” U.S. Joint Publication 2-0, Doctrine for Intelligence Support to Joint Operations [153] suggests terms and expressions to communicate analytic judgments. The terms selected are based on three factors:
Each factor should be assessed independently and then in concert with the other factors to determine the confidence level. Confidence levels are stated as Low, Moderate, and High. Phrases such as "we judge" or "we assess" are used to call attention to a product's key assessment. Supporting assessments may use likelihood terms or expressions to distinguish them from assumptions or reporting. Table 4.5 shows the guidelines for likelihood terms and the confidence levels to which they correspond.
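To see how calibrated language works in practice, here is a toy sketch that maps a numeric probability estimate to a likelihood phrase. The phrases and the probability cut points are assumptions invented for illustration, not the terms defined in JP 2-0 or Table 4.5.

```python
# Toy illustration of calibrated likelihood language. The phrases and cut
# points below are assumptions for this sketch, NOT doctrine from JP 2-0.
def likelihood_term(p):
    """Map a probability estimate in [0, 1] to a hedged likelihood phrase."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    if p < 0.10:
        return "highly unlikely"
    if p < 0.40:
        return "unlikely"
    if p < 0.60:
        return "about as likely as not"
    if p < 0.90:
        return "likely"
    return "highly likely"
```

The point of any such mapping is consistency: "Bin Laden's followers will probably bring the fighting to America" carries far more decision-making weight than an unqualified statement, but only if "probably" means the same thing every time the analyst uses it.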
Table 4.5. Terms and expressions for likelihood, grouped by confidence level (Low, Moderate, High).
Scenario: For this assignment, you are a volunteer providing short-term GEOINT support to a Non-Governmental Organization (NGO).
Task: Using ACH (see L4.05 Sensemaking [154]), your task is to suggest the general location for a regional depot of critical emergency materials. Please note that I said general location—I do not intend for you to select the specific site for the facility. The depot is intended to provide relief support for possible regional disasters. Stocks in the depot's warehouse might include emergency food, various types of relief goods, mobile cooking facilities, rapid response equipment, medicines and medical kits.
Conditions: The location must meet, at a minimum, criteria 1-4. You may wish to add an additional criterion:
Given: ArcGIS Online with access to the following data:
You will perform the following:
Step 1: Using ArcGIS Online, explore the three identified locations listed below.
Step 2: Complete the matrix below (Table 4.7) using the locations identified in Step 1 and the geospatial information you can gather as evidence relative to the criteria. Using both the maps available in ArcGIS Online and other relevant information you can find on the Internet, evaluate the criteria against the locations. Work across the rows, one piece of evidence at a time, deciding whether the evidence is consistent (+), inconsistent (-), or not applicable (N/A) to the site being tested. This will help you determine whether the broad range of evidence supports or refutes selecting the site.
Step 3: Now, working down the columns of the matrix in Table 4.7, review each site to draw tentative conclusions about the relative suitability of each one. Select the site most consistent with the evidence.
Step 4: Post the selected location (Site A, B, or C) to the Lesson 4 Discussion Forum [156], along with the nature of the event [157] (Geophysical, Meteorological, Hydrological, Climatological, or Biological) this site would have the greatest likelihood of supporting, using the appropriate terms and expressions from Table 4.8.
Step 5: Comment on another student’s analysis in the discussion forum.
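The bookkeeping in Steps 2 and 3 can be sketched in a few lines of code: rate each piece of evidence against each site, then rank the sites by how much evidence contradicts them, since ACH weighs inconsistent evidence most heavily. The ratings below are invented placeholders, not an answer key for this assignment.

```python
# Sketch of the ACH matrix: each row maps one piece of evidence to a rating
# per candidate site: "+" consistent, "-" inconsistent, "n/a" not applicable.
# The ratings here are placeholders; fill in your own from the map evidence.
matrix = {
    "Criterion 1: probability of an event":   {"A": "+", "B": "+", "C": "-"},
    "Criterion 2: proximity to a population": {"A": "+", "B": "+", "C": "+"},
    "Criterion 3: transportation time":       {"A": "-", "B": "+", "C": "n/a"},
    "Criterion 4: safety and security":       {"A": "-", "B": "-", "C": "-"},
}

def inconsistency_scores(matrix):
    """Count '-' ratings per site; ACH favors the hypothesis with the fewest."""
    scores = {}
    for ratings in matrix.values():
        for site, mark in ratings.items():
            scores[site] = scores.get(site, 0) + (mark == "-")
    return scores

scores = inconsistency_scores(matrix)
best = min(scores, key=scores.get)  # the site least contradicted by the evidence
```

With these placeholder ratings, Sites A and C each collect two inconsistencies and Site B one, so Site B would be the tentative choice; substituting your own ratings can change the outcome.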
Evidence | Site A: Lagos, Nigeria | Site B: Dhaka, Bangladesh | Site C: Honolulu, Hawaii, USA |
---|---|---|---|
Evidence Related to Criterion 1: Probability of occurrence of an event. | | | |
Evidence Related to Criterion 2: Expected demand for supplies determined by the proximity to a population. | | | |
Evidence Related to Criterion 3: Minimal transportation time to get supplies to an event. | | | |
Evidence Related to Criterion 4: Regional safety and security of the depot. | | | |
Other criteria and evidence you included in this analysis. | | | |
Table 4.8. Terms and expressions for likelihood, grouped by confidence level (Low, Moderate, High).
Credits
The ArcGIS Online capabilities were developed by Joseph Kerski [14], Esri Education Manager.
For this week's discussion I want to focus on the analysis of selecting the emergency depot location completed in L4.07. Here are a few things to do and discuss:
Head over to the dedicated discussion forum here [49] to talk about these issues.
Participation in discussions is worth 10% of your overall grade in the class. To earn the full 10%, you need to make a total of 10 original posts or comments by the end of the class. If I were you, I'd shoot for posting twice each week. Read the Grading Policy [70] page for more details.
The forum and grading links resolve to Coursera.
This lesson demonstrated the concepts of tradecraft and focused on the art of intelligence analysis aided by technology. Tradecraft was defined as the "knowledge work" of the GEOINT professional. It is the work that produces geospatial intelligence. Tradecraft consists of the principles, tools, and standards of rigor for an analyst's thinking. Unlike other forms of geographic analysis, the GEOINT tradecraft may be required to operate in secrecy and contend with deliberately deceptive information. This tradecraft is a merging of the conventional notion of an intelligence tradecraft with the advances in geographic information science and technologies (GIS&T). Geospatial sensemaking creates the objective connection between a geospatial problem representation and geospatial evidence. Here one set of activities—information foraging—focuses on finding information, while another set of activities—sensemaking—focuses on giving meaning to the information.
Don't forget to complete the Lesson 4 Quiz!
Central Intelligence Agency (2009), A tradecraft primer: Structured analytic techniques for improving intelligence analysis. Washington, D.C.: Center for the Study of Intelligence.
Copeland, Nate. (2009), Booz Allen Hamilton, Personal Correspondence. 02 June 2009.
Heuer, R. (1999), Psychology of intelligence analysis. Washington, D.C.: Center for the Study of Intelligence, Central Intelligence Agency.
Moore, D. (2012), Sensemaking: A structure for an intelligence revolution (2nd ed.). Washington, DC: National Defense Intelligence College Press.
Urbánek, Jirí F., Jirí Barta, Josef Heretík, and Jaroslav Prucha (2010), Computerized aided camouflage. In Proceedings of the 10th WSEAS international conference on applied informatics and communications, and 3rd WSEAS international conference on Biomedical electronics and biomedical informatics (AIC'10/BEBI'10), Nikos E. Mastorakis, Valeri Mladenov, and Zoran Bojkovic (Eds.). World Scientific and Engineering Academy and Society (WSEAS), Stevens Point, Wisconsin, USA, 289-294.
Welcome to Lesson 5! This is the capstone assignment where you investigate a real-world problem using the techniques and tools introduced during your past coursework. In Lesson 1, we identified five principles of Geospatial Intelligence. These principles are:
In Lesson 5, you will apply the principles. As we discussed in the previous lessons, GEOINT is the art and science of place and time on Earth's surface. Its subject matter is the human phenomena that make up the world's places. The GEOINT professional studies and describes the changing pattern of places in words, maps, and graphics, explaining how patterns come to be, unraveling their meaning, and anticipating changes in patterns for decision makers. In the capstone work, you will experience how a GEOINT professional thinks about problems modeled on the real world.
Now is the time to apply what you have learned!
LEARNING OBJECTIVES
Throughout this lesson, we will demonstrate the concepts of Geospatial Intelligence. By the end of this lesson, you will be able to:
Please view my final video of the course (2 minutes).
View the National Geographic video about a Doctors Without Borders Ebola clinic [158] in Monrovia, the capital of Liberia. Read: "Liberia: Ebola clinic fills up within hours of opening" [159] about Monrovia's desperate shortage of available beds for those struck by the vicious Ebola virus. Sick and dying people have been sent to holding areas because the Ebola Treatment Unit beds in the city are full.
The following summary of the Ebola disease is from the US Centers for Disease Control and Prevention [160] and World Health Organization [161].
The Ebola virus causes an acute, serious illness which is often fatal if untreated. Ebola virus disease (EVD) first appeared in 1976 in 2 simultaneous outbreaks, one in Nzara, Sudan, and the other in Yambuku, Democratic Republic of Congo. The latter occurred in a village near the Ebola River, from which the disease takes its name.
The number of new EVD cases is growing faster than West African health officials can handle them. In the three hardest-hit countries, Guinea, Liberia, and Sierra Leone, new cases are outpacing the capacity of the Ebola-specific treatment centers, so additional treatment centers are being established to provide comprehensive care to people diagnosed with EVD.
The current outbreak in West Africa is the largest and most complex Ebola outbreak since the Ebola virus was first discovered in 1976. There have been more cases and deaths in this outbreak than all others combined. It has also spread between countries, starting in Guinea and then spreading across land borders to Sierra Leone and Liberia, by air (1 traveler only) to Nigeria, and by land (1 traveler) to Senegal.
The most severely affected countries, Guinea, Sierra Leone, and Liberia, have very weak health systems: they lack human and infrastructural resources and have only recently emerged from long periods of conflict and instability. On August 8, the WHO Director-General declared this outbreak a Public Health Emergency of International Concern.
It is thought that fruit bats are natural Ebola virus hosts. Ebola is introduced into the human population through close contact with the blood, secretions, organs, or other bodily fluids of infected animals such as chimpanzees, gorillas, fruit bats, monkeys, forest antelope, and porcupines found ill or dead or in the rainforest.
Ebola then spreads through human-to-human transmission via direct contact (through broken skin or mucous membranes) with the blood, secretions, organs or other bodily fluids of infected people, and with surfaces and materials (e.g. bedding, clothing) contaminated with these fluids. Health-care workers have frequently been infected while treating patients. This has occurred through close contact with patients when infection control precautions are not strictly practiced. Burial ceremonies in which mourners have direct contact with the body of the deceased person can also play a role in the transmission of Ebola.
Control relies on effective case management, surveillance and contact tracing, a good laboratory service, safe burials and social mobilization. Community engagement is key to successfully controlling outbreaks. Raising awareness of risk factors for Ebola infection and protective measures that individuals can take is an effective way to reduce human transmission. Risk reduction messaging focuses on several factors:
Story maps combine interactive maps and multimedia content into elegant and informative user experiences. They make it easier to harness the power of maps to tell a story. Story maps are designed for general, non-technical audiences. However, story maps can also serve highly specialized audiences. They can summarize issues for managers and decision makers. They can help departments or teams within organizations to communicate with their colleagues.
For your capstone and final assignment, you will select a location for a notional Ebola Treatment Unit in Monrovia, Liberia. Please note that our problem is hypothetical and bears no direct relationship to the location of planned clinics! The exercise uses data and a capability similar to the NGA Ebola Map site designed to assist Ebola relief in West Africa. Our parallel site is supported with Esri's ArcGIS Online and DigitalGlobe's Human Geography data, and is hosted on Amazon Web Services. Your peer-reviewed work:
You might want to review the explanation of the ACH approach found in Structured Analysis of Competing Hypotheses: Improving a Tested Intelligence Methodology [151] by Kristan J. Wheaton and Diane E. McManis. We are just using evidence and hypotheses that are geospatial in nature.
This assignment makes use of Coursera's Peer Assessment tool. Use the following link to access the Peer Assessment tool [163]. The key deliverables and due dates for this assignment are:
Date | Time | Activity | Your Task |
---|---|---|---|
9 February, 2015 | Lesson 5 opens 0001 UTC/GMT (12:01 AM) | Assignment Starts | |
14 February, 2015 | Due at 2359 UTC/GMT (11:59 PM) | Submission Deadline | After this time, you can no longer change your submission. If you have not clicked the "Submit" button by this time, your classmates will not see your submission, you will not receive an evaluation for this assignment, and you will not be permitted to evaluate your classmates' submissions. |
15 February, 2015 | Begins at 0001 UTC/GMT (12:01 AM) | Evaluation Starts | After this time, you will evaluate the work of five (5) of your peers. Finally, you will evaluate your own work. Optionally, you can choose to evaluate the work of even more of your classmates before the evaluation deadline passes. This is very helpful for the success of the course. You are an awesome person if you do this. Remember: as with the submission stage, this evaluation stage is required if you want your own assignment submission to be evaluated. |
17 February, 2015 | Due by 2359 UTC/GMT (11:59 PM) | Evaluation Deadline | After this time, you can no longer evaluate the work of your peers or your own. If you have not finished your assigned evaluation tasks by this time, your own assignment may not be evaluated and you may receive a grade penalty. |
17 February, 2015 | Due by 2359 UTC/GMT (11:59 PM) | Final Examination Deadline | Complete the Final Examination. |
There will be no exceptions or extensions to these deadlines. The Coursera Peer Assessment tool does not allow us to make individual-level exceptions.
The following is the hypothetical scenario: you are a volunteer providing short-term support to a Non-Governmental Organization (NGO) in Monrovia, Liberia. You have available the information and capabilities of the Ebola Map website below, which provides mapping capabilities, satellite imagery, and human geography data.
Using the Ebola Map and the proposed locations in Monrovia, Liberia, select where to establish the next Ebola Treatment Unit. You may optionally propose an alternate location in Monrovia but this is not required.
The coordinates in parentheses are longitude (X) and latitude (Y) expressed in decimal degrees that are suitable for ArcGIS Online.
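Because the sites are given in decimal degrees, you can sanity-check proximity reasoning (such as Criteria 3 and 5) with a quick great-circle distance calculation. This is an illustrative sketch using the standard haversine formula; the two coordinate pairs below are arbitrary points near Monrovia, not the assignment's candidate sites.

```python
# Great-circle distance between two (longitude, latitude) points given in
# decimal degrees, via the haversine formula on a sphere of radius 6371 km.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lon1, lat1, lon2, lat2):
    """Distance in kilometers between two points in decimal degrees."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    dlon, dlat = lon2 - lon1, lat2 - lat1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Arbitrary illustrative points in the Monrovia area, roughly 12 km apart.
d = haversine_km(-10.80, 6.30, -10.70, 6.35)
```

A distance like this is only a first approximation of transportation time; actual travel in Monrovia depends on the road network shown in the map layers.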
Step 1: Review the problem, background, criteria, and explore the technology and data. In our ArcGIS Online Ebola map [166], click on the "content" tab in the legend to display the full list of imagery and human geography data themes. Learn about the data. Develop a deep and rich mental picture of the place.
Step 2: Think about and apply a simple data collection strategy. Using ArcGIS Online and the data themes, forage for geospatial evidence to help in your evaluation of the hypothesized best sites in Table 5.1. Note the data you are lacking.
Step 3: Using the art of geospatial reasoning aided with ArcGIS Online's maps:
Step 4: Share and discuss your analysis in the Lesson 5 Discussion Forum [167]. This is an opportunity to vet your analysis before you finalize your submission for peer assessment.
Step 5: Using the Peer Assessment tool [163] (the link resolves to Coursera), post your selected location for the Ebola Treatment Unit using the appropriate terms and expressions provided in Tables 5.1 and 5.2. Discuss how you completed this geospatial analysis using ACH. Peer review your own submission and four other students' submissions (a total of 5 reviews) using the standards of analysis (Table 5.3). Please note the due date!
Evidence | Hypothesis: Site 1 is the best location. | Hypothesis: Site 2 is the best location. | Hypothesis: Site 3 is the best location. | Hypothesis (Optional): Site 4 is the best location. Specify the location. |
---|---|---|---|---|
Evidence Related to Criterion 1: The clinic is in an area known to have the Ebola disease present. | | | | |
Evidence Related to Criterion 2: The clinic is in an area not served by an existing Ebola facility. | | | | |
Evidence Related to Criterion 3: The clinic is in proximity to existing medical facilities. | | | | |
Evidence Related to Criterion 4: The clinic is located to serve a significant population. | | | | |
Evidence Related to Criterion 5: The clinic has access to transportation. | | | | |
Evidence Related to Criterion 6: The clinic is in a favorable physical environment at the location. | | | | |
Evidence Related to Criterion 7: The clinic is in a location safe for the patients and staff. | | | | |
Other evidence you included in the analysis. Add as many rows as needed. | | | | |

State the selected site using the appropriate terms and expressions from Table 5.2, below. Referring to the Lesson 4 discussion of Analysis of Competing Hypotheses (ACH), describe how you arrived at your conclusion.
Table 5.2. Terms and expressions for likelihood, grouped by confidence level (Low, Moderate, High).
Item | Analytic Standard |
---|---|
A | Properly describes quality and reliability of underlying sources. |
B | Properly caveats and expresses uncertainties or confidence in analytic judgments. |
C | Properly distinguishes between underlying intelligence and analysts' assumptions and judgments. |
D | Incorporates alternative analysis where appropriate. |
E | Demonstrates relevance to the domain. |
F | Uses logical argumentation. |
G | If appropriate, exhibits consistency of analysis over time, or highlights changes and explains rationale. |
H | Makes accurate judgments and assessments. |
This lab was developed by Todd Bacastow, Gary Parkhurst [168] (DigitalGlobe), and Joseph Kerski [14] (Esri). Esri and DigitalGlobe generously provided help creating this instance using ArcGIS Online, current satellite imagery, and human geography data as in the NGA Ebola Map.
For our final week's discussion, I want to focus on your geospatial analysis using the technique of Analysis of Competing Hypotheses (ACH). Separate from submitting your analysis for the peer assessment activity, this is your chance to share your work with the class and receive feedback. Here's what to do:
Go to the dedicated Lesson 5 Discussion Forum [49] to talk about these issues.
Participation in discussions is worth 10% of your overall grade in the class. To earn the full 10%, you need to make a total of 10 original posts or comments by the end of the class. If I were you, I'd shoot for posting twice each week. Read the Grading Policy [70] page for more details.
Links here go to Coursera.
The final exam for this class is due the last day of the course (check the syllabus and schedule [169] for the date!) at 11:59 PM (2359) UTC.
Use the following link to access the Quizzes and Exam page [139] where you will see the link to the Final Exam.
Question: I lost my Internet connection and the time expired, can you reset the exam for me?
Answer: Sorry, there is no way for me to do this on an individual basis in Coursera. In addition, there is no way I can manage the process of deciding who gets another try with thousands of students.
Question: What kinds of questions can I expect on the exam?
Answer: This exam features 30 questions and is cumulative for the course. All of the questions are drawn from the course lessons.
Question: Why are you setting a time limit and only letting us take it once?
Answer: This is a chance for me to understand how well you are retaining knowledge from the past weeks of the course. Leaving it wide open and offering multiple tries would not help me understand whether or not you've actually learned something that will carry on beyond this course.
Question: Why is the exam not longer (or shorter)?
Answer: I've provided more than two minutes for each question. That should be plenty of time for students who have completed the Lesson 1-4 quizzes and read the lesson content.
Provided by DigitalGlobe, Inc. in cooperation with Esri and NGA.
DigitalGlobe Landscape +Human [172] datasets are analyst-ready map layers that serve as a foundation for conducting geospatial analysis. The data conforms to the World-Wide Human Geography Data Working Group (WWHGD WG) [173] efforts to provide a basis for a deeper understanding of cultures, activities, and attitudes. Such datasets are especially useful in supporting crisis situations, such as the Ebola virus outbreak, in parts of the world that tend to lack publicly available map data for use with GIS tools such as ArcGIS Online.
The dataset used for this exercise to evaluate possible locations for Ebola treatment clinics in Monrovia, Liberia contains the following layers:
Links
[1] http://geospatialrevolution.psu.edu/episode3
[2] https://www.e-education.psu.edu/geointmooc/../lecture/view%3Flecture_id%3D7
[3] http://education.maps.arcgis.com/apps/Viewer/index.html?appid=d6a1a0b213b04172878ef7db395dfdc4
[4] http://education.maps.arcgis.com/apps/Viewer/index.html?appid=b1bc65ad8891425b923d57fb3d682822
[5] http://www.emdat.be/classification#Geophysical
[6] http://www.emdat.be/classification#Meteorological
[7] http://www.emdat.be/classification#Hydrological
[8] http://www.emdat.be/classification#Climatological
[9] http://www.emdat.be/classification#Biological
[10] http://www.emdat.be/
[11] http://video.esri.com/iframe/3558/000000/width/960/0/00:00:00
[12] http://video.arcgis.com/series/18/arcgis-online
[13] http://learn.arcgis.com
[14] http://www.josephkerski.com/
[15] http://en.wikipedia.org/wiki/Geospatial_intelligence
[16] http://fas.org/irp/doddir/dod/d5105_60.htm
[17] http://www.defense.gov/Releases/Release.aspx?ReleaseID=1055
[18] http://www.wifcon.com/dodauth03.htm
[19] https://www.nga.mil/Pages/default.aspx
[20] http://www.dni.gov/index.php/about/leadership/director-of-national-intelligence
[21] https://www.nga.mil/About/History/Pages/default.aspx
[22] http://www.fbi.gov
[23] https://www.cia.gov/index.html
[24] http://www.dia.mil
[25] https://www.nsa.gov
[26] http://www.nro.gov
[27] http://usgif.org
[28] http://www.youtube.com/watch?feature=player_embedded&v=IXL6nK58VB0
[29] https://www.youtube.com/watch?feature=player_embedded&v=IXL6nK58VB0
[30] https://www.cia.gov/library/center-for-the-study-of-intelligence/kent-csi/vol3no2/html/v03i2a03p_0001.htm
[31] http://en.wikipedia.org/wiki/Mark_M._Lowenthal
[32] http://www.gpo.gov/fdsys/pkg/USCODE-2006-title10/html/USCODE-2006-title10-subtitleA-partI-chap22-subchapIV-sec467.htm
[33] http://www.thefreedictionary.com/intelligence+source
[34] http://www.princeton.edu/~achaney/tmve/wiki100k/docs/HUMINT.html
[35] https://www.nsa.gov/sigint/
[36] http://www.aag.org/galleries/publications-files/GIST_Body_of_knowledge.pdf
[37] http://education.nationalgeographic.com/education/activity/characteristics-of-place/?ar_a=1
[38] http://www.ncgia.ucsb.edu/giscc/units/u002/
[39] https://www.digitalglobe.com/resources/satellite-information
[40] https://www.faa.gov/uas/
[41] http://en.wikipedia.org/wiki/Unmanned_aerial_vehicle
[42] http://en.wikipedia.org/wiki/Global_Positioning_System
[43] http://en.wikipedia.org/wiki/Geographic_information_system
[44] http://dictionary.reference.com/browse/cyber
[45] http://www.merriam-webster.com/dictionary/tradecraft
[46] http://vimeo.com/20421848
[47] https://www.e-education.psu.edu/geointmooc/../lecture/view%3Flecture_id%3D29
[48] http://cybertracker.org/tracking-in-the-cyber-age
[49] https://class.coursera.org/geoint-001/forum
[50] http://www.ted.com/talks/daniele_quercia_happy_maps
[51] http://fas.org/irp/agency/army/mipb/
[52] http://fas.org/irp/doddir/army/fm34-130.pdf
[53] https://www.e-education.psu.edu/geog586/node/2054
[54] http://en.wikipedia.org/wiki/Waldo_R._Tobler
[55] http://www.csiss.org/classics/content/29
[56] http://en.wikipedia.org/wiki/Time_geography
[57] http://www.defensenews.com/article/20130708/C4ISR02/307010020/Activity-Based-Intelligence-Uses-Metadata-Map-Adversary-Networks
[58] http://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780195375886.001.0001/oxfordhb-9780195375886-e-0004
[59] http://en.wikipedia.org/wiki/Deception
[60] http://www.oxforddictionaries.com/us/definition/american_english/spy
[61] https://www.mi5.gov.uk/home/the-threats/espionage/what-is-espionage.html
[62] http://www.webopedia.com/TERM/S/structured_data.html
[63] https://www.wickedproblems.com/1_wicked_problems.php
[64] http://www.bletchleypark.org.uk/content/hist/
[65] http://mobile.nytimes.com/2014/10/17/opinion/the-dark-market-for-personal-data.html?referrer=&_r=0
[66] https://www.e-education.psu.edu/research/projects/gisethics
[67] http://ist.psu.edu/directory/drs36
[68] https://www.e-education.psu.edu/sites/default/files/ethics/mapping_muslim_neighbors_case.pdf
[69] http://ethics.iit.edu/
[70] https://class.coursera.org/geoint-001/wiki/grading
[71] https://www.e-education.psu.edu/geog482fall2/node/1405
[72] https://www.e-education.psu.edu/natureofgeoinfo/sites/www.e-education.psu.edu.natureofgeoinfo/files/flash/coord_practice_geo_v22.swf
[73] http://www.adobe.com/shockwave/download/index.cgi?P1_Prod_Version=ShockwaveFlash
[74] https://www.e-education.psu.edu/natureofgeoinfo/
[75] http://creativecommons.org/licenses/by-nc-sa/3.0/us/
[76] https:=
[77] https://www.e-education.psu.edu/geog482fall2/
[78] https://www.fas.org/irp/agency/nga/doctrine.pdf
[79] http://support.esri.com/en/knowledgebase/GISDictionary/term/coordinate%20system
[80] http://support.esri.com/en/knowledgebase/GISDictionary/term/topology
[81] http://support.esri.com/en/knowledgebase/GISDictionary/term/orthorectification
[82] http://www.geoplatform.gov/wwhgd-home
[83] http://www.claritas.com/MyBestSegments/Default.jsp?ID=20
[84] http://investor.digitalglobe.com/phoenix.zhtml?c=70788&p=irol-newsArticle_print&ID=2004765
[85] http://www.oxforddictionaries.com/us/definition/american_english/data
[86] http://www.oxforddictionaries.com/us/definition/american_english/information
[87] http://www.oxforddictionaries.com/us/definition/american_english/insight
[88] http://www.directionsmag.com/articles/thinking-spatially/123985
[89] https://people.hofstra.edu/geotrans/eng/methods/ch5m1en.html
[90] http://www.csiss.org/classics/content/67
[91] http://www.csiss.org/classics/content/51
[92] http://www.csiss.org/classics/content/9
[93] http://siteresources.worldbank.org/DEC/Resources/84797-1251813753820/6415739-1251813951236/krugman.pdf
[94] https://class.coursera.org/geoint-001/wiki/L2_Try_This_Solution
[95] https://class.coursera.org/geoint-001/wiki/L2_Practice_Answer_A
[96] http://education.maps.arcgis.com/home/webmap/viewer.html?webmap=167402e476f84e2e95b52e2b19cd10f7
[97] https://class.coursera.org/geoint-001/wiki/L2_Practice_Answer_B
[98] http://education.maps.arcgis.com/home/webmap/viewer.html?webmap=9c2e94d9f4c14a23844c9e5b4ee5c8a1
[99] http://www.gpo.gov/fdsys/pkg/GPO-WMD/pdf/GPO-WMD.pdf
[100] http://www.dtic.mil/doctrine/new_pubs/jp1_02.pdf
[101] https://www.e-education.psu.edu/geog480/
[102] http://www.wpafb.af.mil/shared/media/document/AFD-080820-005.pdf
[103] http://www.merriam-webster.com/dictionary/spaceborne
[104] https://www.digitalglobe.com
[105] http://www.merriam-webster.com/dictionary/airborne
[106] http://www.merriam-webster.com/dictionary/terrestrial
[107] http://en.wikipedia.org/wiki/WorldView-3
[108] http://www.asprs.org/10-Year-Industry-Forecast/Ten-Year-Industry-Forecast.html
[109] http://support.esri.com/en/knowledgebase/GISDictionary/term/off-nadir
[110] http://www.classzone.com/books/earth_science/terc/content/investigations/esu101/esu101page03.cfm
[111] http://en.wikipedia.org/wiki/Geostationary_orbit
[112] http://en.wikipedia.org/wiki/Sun-synchronous_orbit
[113] https://lta.cr.usgs.gov/NAPP
[114] http://www.fsa.usda.gov/FSA/apfoapp?area=home&subject=prog&topic=nai
[115] http://en.wikipedia.org/wiki/View-master
[116] http://en.wikipedia.org/wiki/Parallax
[117] http://www.bing.com/community/site_blogs/b/maps/archive/2011/06/27/bing-maps-unveils-exclusive-high-res-imagery-with-global-ortho-project.aspx
[118] http://en.wikipedia.org/wiki/Georeference
[119] https://www.e-education.psu.edu/geog597g
[120] http://open.ems.psu.edu/
[121] http://www.navalaviationmuseum.org/
[122] http://news.it.psu.edu/article/penn-state-crop-educator-explores-drone-driven-crop-management
[123] http://www.washingtonpost.com/news/morning-mix/wp/2015/01/08/the-frustrating-hunt-for-genghis-kahns-long-lost-tomb-just-got-a-whole-lot-easier/
[124] http://www.openstreetmap.org/
[125] http://tasks.hotosm.org/
[126] http://tasks.hotosm.org/project/605
[127] http://tasks.hotosm.org/project/586
[128] http://www.tomnod.com/
[129] http://www.theguardian.com/world/2014/mar/14/tomnod-online-search-malaysian-airlines-flight-mh370
[130] https://www.e-education.psu.edu/emsc100s/sites/www.e-education.psu.edu.emsc100s/files/images_textversions/GeoIntMOOCL3_LD.html
[131] http://chimes.biola.edu/story/2014/sep/28/boles-fire-devastates-community/
[132] http://www.tomnod.com/campaign/bolesfireca2014/map/2f2xfyo
[133] http://www.esri.com/news/arcwatch/1208/goodchild-talks.html
[134] http://www.springerlink.com/content/2414838775l810tr/
[135] http://mobileactive.org/how-useful-humanitarian-crowdsourcing
[136] http://www.crowdsourcing.org/document/if-all-you-have-is-a-hammer---how-useful-is-humanitarian-crowdsourcing/3533
[137] http://povesham.wordpress.com/2011/01/10/how-many-volunteers-does-it-take-to-map-an-area-well-the-validity-of-linus-law-to-volunteered-geographic-information/
[138] http://blog.standbytaskforce.com/tag/tomnod/
[139] https://class.coursera.org/geoint-001/quiz
[140] http://publicdomainreview.org/2013/07/10/robert-baden-powells-entomological-intrigues/
[141] http://commons.wikimedia.org/wiki/File:Lt_Robert_Baden-Powell.jpg
[142] http://en.wikipedia.org/wiki/Trade_secret
[143] https://d396qusza40orc.cloudfront.net/geoint/images/screen_20060518154016_7ngaislamabad-20060518.jpg
[144] https://d396qusza40orc.cloudfront.net/geoint/images/boeingaircoplantno2_81.jpg
[145] https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/psychology-of-intelligence-analysis/PsychofIntelNew.pdf
[146] https://www.e-education.psu.edu/emsc100s/sites/www.e-education.psu.edu.emsc100s/files/images_textversions/L4_Q1_answer.html
[147] https://www.e-education.psu.edu/emsc100s/sites/www.e-education.psu.edu.emsc100s/files/images_textversions/L4_Q2_answer.html
[148] https://www.e-education.psu.edu/emsc100s/sites/www.e-education.psu.edu.emsc100s/files/images_textversions/L4_Q3_answer.html
[149] https://www.e-education.psu.edu/drupal6/files/sgam/TradecraftPrimer-apr09.pdf
[150] http://www.ni-u.edu/ni_press/pdf/Sensemaking.pdf
[151] http://www.mcmanis-monsalve.com/files/publications/intelligence-methodology-1-07-chido.pdf
[152] http://www.dni.gov/files/documents/ICD/ICD%20203%20Analytic%20Standards%20pdf-unclassified.pdf
[153] http://www.dtic.mil/doctrine/new_pubs/jp2_0.pdf
[154] https://www.e-education.psu.edu/emsc100s/node/837
[155] http://education.maps.arcgis.com/home/webmap/viewer.html?webmap=dd0cc4f22841427384eed20145e3c201
[156] https://class.coursera.org/geoint-001/forum/list?forum_id=10012
[157] http://www.emdat.be/explanatory-notes
[158] http://video.nationalgeographic.com/video/news/141016-inside-ebola-clinic-vin
[159] http://www.who.int/features/2014/liberia-ebola-clinic/en/
[160] http://www.cdc.gov/vhf/ebola/hcp/preparing-ebola-treatment-centers.html
[161] http://www.who.int/mediacentre/factsheets/fs103/en/
[162] http://techsupportuk.maps.arcgis.com/apps/MapJournal/?appid=deb9d5151d954de5b3933294c911b67f
[163] https://class.coursera.org/geoint-001/human_grading
[164] https://www.e-education.psu.edu/geointmooc/node/2039
[165] https://class.coursera.org/geoint-001/wiki/Human_Geography_Data
[166] http://education.maps.arcgis.com/home/webmap/viewer.html?webmap=684d5dab42824b3582cf224a6d5ba3c0
[167] https://class.coursera.org/geoint-001/forum/list?forum_id=3
[168] https://www.linkedin.com/pub/gary-parkhurst/41/504/84a
[169] https://class.coursera.org/geoint-001/wiki/syllabus
[170] http://healthmap.org/ebola/
[171] http://www.who.int/csr/disease/ebola/en/
[172] https://www.digitalglobe.com/products/human-landscape
[173] http://wwhgd.org/sites/default/files/OnePager.docx